No, it's the opposite. It means that the i9 consumes more than 2.3x the power used by the M2 Max.

So that would mean that the i9 single core consumes less than 2.3x the M2 Max's single core. I find that hard to believe.
It’s confusing at the moment. The i7-1360P gets around 10,000-11,000 GB6 points. The M2 gets around 10,000. So is it 50% faster (15,000) or 2x faster (20,000-22,000)?

So the Qualcomm CPU has 50% faster multithreaded performance than the base M2, but their graph says it consumes up to 50 W.
So that means it’s 50% faster than the M2 in multi-core while using 3.6x more energy.
How is that good?
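Putting the quoted numbers together gives a quick perf-per-watt sketch. All figures are the thread's estimates, not measurements; the ~14 W M2 figure is just what the claimed 3.6x ratio implies:

```python
# Back-of-the-envelope perf/W comparison using the thread's numbers.
# Assumptions: base M2 GB6 multi-core ~10,000 at ~14 W; Qualcomm claim
# of ~50% faster (~15,000) at up to 50 W. None of these are measured.
m2_score, m2_watts = 10_000, 14
qc_score, qc_watts = 15_000, 50

speedup = qc_score / m2_score                      # 1.50x
power_ratio = qc_watts / m2_watts                  # ~3.6x
rel_perf_per_watt = (qc_score / qc_watts) / (m2_score / m2_watts)

print(f"speedup:         {speedup:.2f}x")
print(f"power ratio:     {power_ratio:.2f}x")
print(f"relative perf/W: {rel_perf_per_watt:.2f}x")  # below 1 = worse efficiency
```

On these assumed numbers the chip delivers roughly 0.42x the M2's perf/W, which is the "how is that good?" point in numeric form.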
Even at these numbers, considering Apple, Google, and Microsoft are serving hundreds of millions of users each, the rate is a rounding error to them (I may have been working off of old data, my apologies).

The fees seem much higher and very confusing, as there seems to be more than one patent pool.
| | AVC (MPEG LA) | HEVC (MPEG LA) | HEVC (HEVC Advance) | HEVC (Velos estimate) | HEVC (Total estimate) |
|---|---|---|---|---|---|
| Number of WW patents | 3,704 | 4,417 | 3,321 | 3,200 | 10,938 |
| Handset royalty ($), highest rate | $0.20 | $0.20 | $0.65 | $0.75 | $1.60 |
| $ per 1,000 patents for handset | $0.05 | $0.05 | $0.20 | $0.23 | $0.27 |
| Handset cap | $10 million | $25 million | $30 million | Unknown | $55 million plus |
| Sample total royalty for 10 million units | $1.5 million | $2.0 million | $6.5 million | $7.5 million | $16.0 million |

What will TV cost you? Putting a price on HEVC licences
Changes in how you watch movies, stream TV and use video chat are on the way. These will fundamentally affect the economics of how content is delivered to you, as well as the way that the patents underpinning the enabling technology are licensed.
www.iam-media.com
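The "sample total royalty" row can be approximated with a simple units-times-rate model bounded by each pool's cap. Rates and caps come from the table; note the AVC figure of $1.5 million is lower than a flat 10M × $0.20, presumably due to tiered or exempt volumes, so this is only a sketch:

```python
# Approximate per-pool handset royalty: units * highest rate, bounded by
# the pool's cap. Rates/caps from the table; None = cap unknown (Velos).
def royalty(units, rate, cap):
    total = units * rate
    return min(total, cap) if cap is not None else total

pools = {
    "AVC / MPEG LA":    (0.20, 10e6),
    "HEVC / MPEG LA":   (0.20, 25e6),
    "HEVC Advance":     (0.65, 30e6),
    "Velos (estimate)": (0.75, None),
}
units = 10_000_000
for name, (rate, cap) in pools.items():
    print(f"{name}: ${royalty(units, rate, cap) / 1e6:.1f} million")
```

At 10 million units this reproduces the table's HEVC columns ($2.0M, $6.5M, $7.5M); the cap only bites at much higher volumes.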
It would be fun if they only released the Pro chips and we got the 27/30” iMac together with the MBPs. That would take everyone by surprise.

It is possible that the event will focus on prosumer chips. At this point I wouldn’t be surprised if M3 ships in spring.
I most definitely wouldn't want that. I have no actual info, just noting the rumors:
- surplus of M2 chips
- M3 waiting for N3E
I hope that's wrong and Apple M3s all the things.
Those cores can turbo from 3.8 GHz to 4.3 GHz.
You misread. It's not two clusters. It's two *cores* that can go to 4.3 GHz. And it's not designed with two special cores; they'll just pick the two most efficient cores. (This is what Intel and AMD do.)

What surprises me is the CPU count claimed by Qualcomm, i.e. 12 HP cores vs. the rumored 8P+4E. Yet the CPU cores are divided into 3 quad-core clusters, and only two clusters can turbo boost up to 4.3 GHz. Does that mean the last quad-core cluster gets less power?
If a 12P Oryon really only does 50% better than a 4P+4E M2, that's... pathetic. And if you take them at their word that single-core is substantially better than M2... that's even worse.
I think they've put out a lot of really confusing numbers and we won't really know what's what until we see some actual running systems (by which point M3 will likely have been out for 6-9 months).
But if the single- and multi-core scores are even close to accurate, it suggests that they have a really serious scaling problem, worse than Apple did with the M1 Ultra.
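One way to see the scaling concern is to divide the multi-core scores by a core count. The E-core weighting below is a pure assumption for illustration:

```python
# Implied per-core throughput if 12 Oryon P cores score ~1.5x a 4P+4E M2.
# Assumption: an M2 E core contributes ~0.3 of a P core in GB6 multi-core.
m2_mt = 10_000
oryon_mt = 1.5 * m2_mt                 # the claimed 50% advantage

m2_per_core = m2_mt / (4 + 4 * 0.3)    # per "P-core equivalent"
oryon_per_core = oryon_mt / 12

print(f"M2 per P-core equivalent: {m2_per_core:.0f}")
print(f"Oryon per core:           {oryon_per_core:.0f}")
print(f"ratio: {oryon_per_core / m2_per_core:.2f}")  # well under 1
```

Under this (crude) weighting each Oryon core delivers only about 0.65x of an M2 P-core equivalent, which is why a claimed single-core lead makes the multi-core number look worse, not better.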
Qualcomm showed a slide where they claim that their new chip is 50% faster than the M2 in multi-threaded tasks, which makes it about as fast as an M2 Pro/Max (according to Geekbench 6, which is the tool they apparently use).
I understand why they used the M2 as a reference if they can't beat the "pro" Apple SoC.
Still, that's not bad if their chip actually uses 30% less power than an M2 Max for that.
What surprises me is the CPU count claimed by Qualcomm, i.e. 12 HP cores vs. the rumored 8P+4E. Yet the CPU cores are divided into 3 quad-core clusters, and only two clusters can turbo boost up to 4.3 GHz. Does that mean the last quad-core cluster gets less power? As mentioned by AT:
"but it’s a safe assumption that each cluster is on its own power rail, so that unneeded clusters can be powered down when only a handful of cores are called for."
It is possible that Qualcomm might completely disable one main quad-core CPU cluster, thus lowering power to compete with the M3, which will most likely still use a 4P+4E configuration.
Pretty clever CPU design: instead of two different dies, they just design one die for two product categories. I am curious how big the die would be.
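If each cluster really does sit on its own rail, the one-die-two-products story is just arithmetic. A toy model, with invented per-cluster and uncore power figures:

```python
# Toy power model for per-cluster gating. The 12 W/cluster and 6 W uncore
# figures are invented for illustration; only the structure (a gated
# cluster contributes zero) reflects the quoted per-rail design.
CLUSTER_W = 12.0   # assumed power per active quad-core cluster
UNCORE_W = 6.0     # assumed fabric/SLC/memory-controller power

def package_power(active_clusters):
    return UNCORE_W + active_clusters * CLUSTER_W

for n in (3, 2, 1):
    print(f"{n} active cluster(s): {package_power(n):.0f} W")
```

The same die then spans a wide TDP range simply by fusing off or gating clusters, which is presumably the point of the three-cluster layout.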
Yes, of course. But the announced all-cores turbo is 3.8 GHz, and I am perhaps naively assuming that their benchmarks are from a system where they've engineered top-quality cooling. That's not *that* much slower, so the numbers are still lower than I would expect for 12 P cores.

I think this simply suggests that their "turbo boost" comes at a cost and that the clock speed of the CPU cores running a multi-core workload is significantly lower, probably around 3.2-3.4 GHz. Which would again be consistent with the "it's a server core" explanation. We should also keep in mind that GB6 uses cooperative multi-core benchmarks, which are more difficult to scale for everyone (Apple included).
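That 3.2-3.4 GHz guess can be reverse-engineered from the scores if you assume a single-core score and a scaling efficiency. Both inputs below are invented, so this only shows the shape of the argument, not real figures:

```python
# Solve for the implied all-core clock, assuming GB6 MT throughput scales
# linearly with frequency. st_score (2,900 at 4.3 GHz) and the 0.55
# cooperative-scaling factor are assumptions, not published numbers.
st_score, st_clock = 2_900, 4.3        # assumed single-core score / GHz
mt_score, cores = 15_000, 12           # ~1.5x base M2, 12 P cores
scaling = 0.55                         # assumed non-frequency scaling losses

per_core_mt = mt_score / (cores * scaling)
implied_clock = st_clock * per_core_mt / st_score
print(f"implied all-core clock: {implied_clock:.1f} GHz")
```

With these inputs the implied all-core clock lands around 3.4 GHz; different assumed scaling losses shift it, but well below the 3.8 GHz advertised turbo either way.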
Apropos of @deconstruct60's later comment, I wonder why they didn't throw a cluster of ARM's E cores in there, if they didn't have time to design their own? Presumably their NoC, SLC, etc. are different enough from ARM's that it wouldn't be easy to integrate. They may regret not doing the work (or designing their own) though. I guess we'll see once we've got real systems to test.
Hah, they don't get to make that choice. Look what happened to AMD with the first-gen Zens. They accommodate the Windows scheduler and like it, or possibly work with MS to make some improvements. They do NOT get to ignore it if they want results that aren't embarrassing.

Another factor might be that they don't want to deal with the Windows scheduler.
Except there just isn't that pot of money in Cable TV any more...
They seem well aware that their subscribers are mostly over 60 and dying off, and that the most sensible way to run the business is to extract every last dollar they can from it, but not bother to improve anything.
As far as I can tell (from my limited interaction with Spectrum about a year ago), they were still using MPEG-2 as their codec (so no need to buy new encoder hardware or provide new decoder boxes), and they seemed thoroughly uninterested in such modern features as 4K or HDR.
That doesn't particularly match reality. The overwhelming majority of cable TV operators are also internet service operators. Your notion is that it's currently "CableCARD boxes forever" at cable operators, and that isn't the case. Most are switching to TV over IP. All the major players have video on demand, which makes them do basically the same thing Netflix does, only with usually more commercials (which Netflix is also moving toward).
The old-style signalling for "two-way" pay-per-view is being thrown out in favor of more IP bandwidth.
Comcast and Spectrum are handing out Xumo boxes.
" ... One key difference in the way Mediacom customers will experience the Xumo Stream Box is that they will not purchase cable plans directly through the box. Instead, the Xumo Stream Box will be made available to Mediacom Xtream Internet customers, so it will be used purely as a streaming player by Mediacom users. Both Comcast and Charter customers can purchase pay-TV plans directly through the Xumo Stream Box, and use the device to watch their cable subscription. ..."
Another Broadband Provider to Offer Xumo Streaming Box as Mediacom Signs up for Comcast, Charter Joint Venture
The cable provider won’t offer any of its TV plans through the Xumo Stream Box initially, but it wouldn’t be surprising to see that capability added in the future.
thestreamable.com
And Spectrum has been selling "Choice" TV service with "no cable box" for years in various markets.
Dish runs both satellite offerings and Sling. DirecTV is the same (if it survives having been mismanaged by AT&T for a long time).
The cable signal bucket is largely just a balloon squeeze into many of the same folks, with a "streamed" distribution instead. The industry is still hooked on pushing high carriage fees and collecting them via "cable vendors" to generate most of the revenue. So just follow the money.
Largely because when you go from sending one signal to 100K endpoints to sending 100K individually ordered streams to separate endpoints, the bandwidth explosion and inefficiencies are relatively large. The compression gap between AV1 and H.266 is a "pissing in the wind" kind of difference. They need more bandwidth, period.
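The scale mismatch is easy to show: a codec generation saves tens of percent, while unicast multiplies by the audience. The bitrate and savings figures below are illustrative only:

```python
# Broadcast vs unicast aggregate bandwidth at illustrative numbers: one
# shared 8 Mb/s signal vs 100K individual streams, with an assumed 30%
# AV1/H.266-class codec improvement over the baseline.
viewers = 100_000
stream_mbps = 8.0          # assumed per-stream bitrate
codec_saving = 0.30        # assumed next-gen codec gain

broadcast = stream_mbps                       # one signal, shared by all
unicast = viewers * stream_mbps               # one stream per viewer
unicast_new_codec = unicast * (1 - codec_saving)

print(f"broadcast:          {broadcast:.0f} Mb/s")
print(f"unicast:            {unicast / 1e3:.0f} Gb/s")
print(f"unicast, new codec: {unicast_new_codec / 1e3:.0f} Gb/s")
```

Even with the better codec you are at roughly 560 Gb/s of aggregate delivery versus 8 Mb/s for broadcast; the codec gap really is noise next to the delivery model.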
Similar with ATSC 3.0: more compression is mainly leading to broadcasting more channels with better feedback and DRM control. Not really a big net savings in aggregate bandwidth consumed.