
Do you think the first benchmarks are correct?


  • Total voters
    314

fertilized-egg

macrumors 68020
Dec 18, 2009
2,109
57
Interesting figures from Anandtech:

Power Consumption - Mac Mini 2020 (M1)

                   Rise of the Tomb Raider (Enthusiast)   GFXBench Aztec (High)
Package Power      16.5 W                                 11.5 W
GPU Power          7 W                                    10 W
CPU Power          7.5 W                                  0.16 W
DRAM Power         1.5 W                                  0.75 W


So for Rise of the Tomb Raider, the M1 hit exactly 15W peak, ignoring the power consumed by the RAM. The GPU side is seemingly overlooked, but it looks to be pretty efficient.
 
  • Like
Reactions: bill-p

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
Yeah, so it looks like 15 - 18W is the TDP of the chip in the Mac Mini. The MacBook Pro likely runs the chip at that as well.

The MacBook Air running the chip at 10W and losing 15% of multi-core performance seems more plausible now.

It also means scaling from the MacBook Air to Pro is just a 15% gain in performance, but a whopping 50% increase in power consumption. That... honestly doesn't bode well for the chip in upcoming 16" MacBook Pro or iMac from what I can see.

Let's say power scaling is linear from this point on, and we gain 15% for every 5W. That's very dubious math, but... if I were to take that on face value, it means Apple needs to make the M1X roughly 40W in order for it to gain a 2x performance improvement. And Apple needs to make the GPU at least 3x faster to match the 5600M.
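
Just to spell that dubious math out, here's a quick back-of-the-envelope sketch in Python. The 15%-per-5W figure and the ~10W Air baseline are the assumptions from this post; nothing here is measured.

```python
# Naive linear-scaling sketch from the post above: +15% multi-core performance
# per extra 5 W, starting from the MacBook Air at ~10 W (relative perf = 1.0).
# Real silicon does not scale like this; it's only the face-value arithmetic.
def projected_perf(watts, base_watts=10.0, gain_per_step=0.15, step_watts=5.0):
    steps = (watts - base_watts) / step_watts
    return 1.0 + gain_per_step * steps

for watts in (10, 15, 25, 40):
    print(f"{watts:>2} W -> {projected_perf(watts):.2f}x")
# 40 W -> 1.90x, i.e. roughly the "M1X needs ~40 W for 2x" figure above.
```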

That's quite the uphill climb.
 

playtech1

macrumors 6502a
Oct 10, 2014
695
889
It also means scaling from the MacBook Air to Pro is just a 15% gain in performance, but a whopping 50% increase in power consumption. That... honestly doesn't bode well for the chip in upcoming 16" MacBook Pro or iMac from what I can see.

Let's say power scaling is linear from this point on, and we gain 15% for every 5W. That's very dubious math, but... if I were to take that on face value, it means Apple needs to make the M1X roughly 40W in order for it to gain a 2x performance improvement. And Apple needs to make the GPU at least 3x faster to match the 5600M.
I don't think it really works like this unless Apple were to simply take the M1 and overclock it. Instead I expect the higher end CPUs to have more cores but run at similar (perhaps slightly lower) clock speeds. It will consume more power, certainly, but it's not pushing the silicon harder in quite the same way - there will be more silicon so less need for pushing it to the limits on clocks and power.
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
I don't think it really works like this unless Apple were to simply take the M1 and overclock it. Instead I expect the higher end CPUs to have more cores but run at similar (perhaps slightly lower) clock speeds. It will consume more power, certainly, but it's not pushing the silicon harder in quite the same way - there will be more silicon so less need for pushing it to the limits on clocks and power.

The main problem is the GPU. Apple will have to throw more cores and maybe even higher clocks at this problem because we can't expect performance scaling to be linear, but we can expect an exponential increase in power consumption.

AMD's work on their Navi GPU is nothing to sneeze at. Apple was able to beat Intel and AMD at making CPU cores, but I doubt they can maintain the same momentum with the GPU cores here.
 
  • Like
Reactions: g75d3

Homy

macrumors 68030
Jan 14, 2006
2,506
2,458
Sweden
League of Legends on MBA through Rosetta
2560x1600 Very High 50-60 fps

Max Tech said that the game was glitchy with dropped frames on a MBP 2018 Radeon Pro 555X.

Rise of the Tomb Raider on Mac Mini through Rosetta
1920x1080 Very High FXAA 39.6 fps

Fortnite on MBP 13" through Rosetta
Upscaled 2560x1600 3D resolution 75% 1920x1200 high settings 40 fps
 
  • Like
Reactions: Sanpete

name99

macrumors 68020
Jun 21, 2004
2,410
2,315
Yeah, so it looks like 15 - 18W is the TDP of the chip in the Mac Mini. The MacBook Pro likely runs the chip at that as well.

The MacBook Air running the chip at 10W and losing 15% of multi-core performance seems more plausible now.

It also means scaling from the MacBook Air to Pro is just a 15% gain in performance, but a whopping 50% increase in power consumption. That... honestly doesn't bode well for the chip in upcoming 16" MacBook Pro or iMac from what I can see.

Let's say power scaling is linear from this point on, and we gain 15% for every 5W. That's very dubious math, but... if I were to take that on face value, it means Apple needs to make the M1X roughly 40W in order for it to gain a 2x performance improvement. And Apple needs to make the GPU at least 3x faster to match the 5600M.

That's quite the uphill climb.
What is your goal?
If you want faster multi-core throughput that scales linearly in energy, no-one's going to beat Apple there. Right now Apple's 4+4 matches (~handwaving~) 6 x86 cores + SMT, at about half to a third of the power. They can easily double that (8+8 cores, 16 GPU cores) at 40W, absolute worst case 65W (essentially all cores maxed out at 40W, GPU maxed out at 25W). That's linear scaling, and it will kill anything reasonable at throughput (i.e. the equivalent of 12 to 16 SMT x86 cores, depending on how badly they throttle), while continuing to almost match the insane K-level highest Intel x86 single-threaded performance. This is a feasible TDP (remember it's not just CPU power, it includes the GPU as well), below the maxima hit by an MBP or iMac with a dGPU. And remember it only kicks in when you insist on cranking everything up to 11... In theory the maximum on an M1 mini is 32W (20W CPU, 12W GPU), but no-one has yet managed to observe that in practice; no realistic workload is that demanding of every part of the SoC.
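
To spell out that worst-case budget, here's a tiny sketch; the 20W CPU / 12W GPU maxima are the M1 figures mentioned above, and doubling them is the assumption, not an Apple spec.

```python
# Worst-case power budget for a hypothetical doubled-up part (8+8 CPU cores,
# 16 GPU cores). The 20 W CPU / 12 W GPU maxima are the M1 figures cited above;
# doubling them is an assumption for illustration, not an Apple specification.
m1_cpu_max_w = 20.0
m1_gpu_max_w = 12.0

cpu_scale = 2   # 8+8 cores instead of 4+4
gpu_scale = 2   # 16 GPU cores instead of 8

worst_case = m1_cpu_max_w * cpu_scale + m1_gpu_max_w * gpu_scale
print(f"Everything maxed out at once: ~{worst_case:.0f} W")   # ~64 W

# CPU- and GPU-heavy phases rarely peak together, so a realistic sustained
# figure sits closer to the ~40 W mentioned above.
```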


If you want faster single thread, Apple is not going to achieve that by pushing frequency to crazy levels. The whole point of their design is to achieve performance through brains (lots of small transistors) not just crank up the frequency.

Meaning you'll get that single threaded performance boost (probably the usual 20 to 30% IPC boost that's been the average since about the A7) with the A15, not earlier.
Will AMD and Intel have picked up 20% in their single threaded performance by then? Unlikely.
Can Apple keep cranking out these IPC boosts? It's hard to be sure because, while I am aware of many techniques that are available to keep going, no-one knows for sure quite which have and have not already been implemented by Apple; all one can do is guess -- sometimes with a reasonable evidence base, sometimes not.

But the one very obvious IPC boost still on the table that everyone knows about is SVE/2. The point here is not the wider vectors (Apple will probably use 2x256b wide, which matches the existing 4x128b NEON units); it's that SVE/2 allows for the vectorization, especially automatically by compilers, of a much wider class of loops, while incurring much less overhead than NEON (or SSE/AVX*). This appears to be worth about 20..30% averaged over a wide range of code, though this varies from basically nothing (code that just doesn't operate as lots of identical instructions on similar data items) to 80% or so.
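
To make the predication point concrete, here's a small NumPy sketch that emulates what a predicated (masked) vector loop does: the whole loop, including the ragged tail, runs as masked vector operations with no scalar cleanup pass, which is the class of loop SVE/2 lets compilers vectorize cheaply. It's Python pretending to be a vector unit, purely for illustration; it is not SVE code.

```python
import numpy as np

# Emulation of a predicated (masked) vector loop: every iteration, including
# the tail that isn't a multiple of the vector width, is a masked vector op,
# so there is no scalar cleanup loop. Illustrative only.
VECTOR_LANES = 8          # stand-in for the hardware vector width

def saxpy_predicated(a, x, y):
    out = np.empty_like(y)
    n = len(x)
    for base in range(0, n, VECTOR_LANES):
        lanes = np.arange(base, base + VECTOR_LANES)
        mask = lanes < n                      # the predicate: which lanes are live
        idx = np.where(mask, lanes, 0)        # clamp dead lanes to a safe index
        result = a * x[idx] + y[idx]
        out[idx[mask]] = result[mask]         # only predicated-on lanes are stored
    return out

x = np.arange(13, dtype=np.float32)           # 13 is deliberately not a multiple of 8
y = np.ones(13, dtype=np.float32)
print(saxpy_predicated(2.0, x, y))
```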
 

consumeritis

macrumors member
Mar 9, 2015
86
43
The obvious way Apple will scale M1 is by adding more performance cores. No need to ramp up clocks.

Comparisons to 4700U and 4800U are a bit unfair IMO, because the M1 is effectively a four-core CPU, whereas the AMD parts have eight cores, and the 4800U has SMT. In heavy workloads, the efficiency cores are not going to be making a significant contribution.

It's like saying the 4800U is faster than the 4300U. Yes. They are different classes of processor.

AMD have worked miracles with Zen. They also single-handedly saved x86 from the scrap heap when they invented x86_64, which cleaned up the architecture and made it bearable to work with again. But M1 is also very very impressive.

It'll be interesting to compare an 8+4 core M1X with a Zen 3 5800U.

And I'll also be interested to see if AMD resurrect K12 or Intel make a new StrongARM part.
 
  • Like
Reactions: torana355

jeanlain

macrumors 68020
Mar 14, 2009
2,459
953
In the battery saver it is locked to 15 watts and I get 4800 MC in CB23 compared to 7000 MC in performance mode.
Isn't 7000 a bit low for the performance mode? Anandtech reports that the 15W 4800U yields 9286 in cinebench.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,459
953
Apple doesn't claim to outperform the peak performance of the mystery chip, only match it. If the AMD throttles during testing, that may be why the cooler M1 matches it, as explained above. The figures do fit that scenario.
Apple does claim that the M1 consumes 1/4 of the power of the "latest Laptop PC chip" at its peak performance. It's right here on the M1 webpage. Whatever that "PC chip" is, it can't be a 16-thread Ryzen with all cores loaded. No way.
The M1 is great and all. Apple doesn't have to be deceptive about its performance.
 
  • Haha
Reactions: Serban55

Serban55

Suspended
Oct 18, 2020
2,153
4,344

Check min 13 (Lroom), where they compare to the Surface Pro X (ARM, Win10)... the exact same difference in experience I have too. BUT I had the Surface Pro X for almost 1 year... and after so many months, nothing new...
 
  • Like
Reactions: firewire9000

Sanpete

macrumors 68040
Nov 17, 2016
3,695
1,665
Utah
Apple does claim that the M1 consumes 1/4 of the power of the "latest Laptop PC chip" at its peak performance. It's right here on the M1 webpage. Whatever that "PC chip" is, it can't be a 16-thread Ryzen with all cores loaded. No way.
The M1 is great and all. Apple doesn't have to be deceptive about its performance.
You'd have to explain how it's deceptive.

And why you think the mystery chip couldn't be the Ryzen. (Not that it really matters, but I like a mystery.)
 

jeanlain

macrumors 68020
Mar 14, 2009
2,459
953
You'd have to explain how it's deceptive.
They're deceptive because they didn't even say what chip they compared the M1 to. People might think "oh, it's the 8-core Ryzen since it's the best laptop chip". Then they realise that the M1 is slower than what they expected, if they expected it to beat the Ryzen with all cores loaded.
That's why Apple is being deceptive. They could have avoided that by at least specifying the number of cores/threads this mysterious PC chip had.
 
  • Like
  • Haha
Reactions: g75d3 and Serban55

thingstoponder

macrumors 6502a
Oct 23, 2014
916
1,100
Yeah, so it looks like 15 - 18W is the TDP of the chip in the Mac Mini. The MacBook Pro likely runs the chip at that as well.

The MacBook Air running the chip at 10W and losing 15% of multi-core performance seems more plausible now.

It also means scaling from the MacBook Air to Pro is just a 15% gain in performance, but a whopping 50% increase in power consumption. That... honestly doesn't bode well for the chip in upcoming 16" MacBook Pro or iMac from what I can see.

Let's say power scaling is linear from this point on, and we gain 15% for every 5W. That's very dubious math, but... if I were to take that on face value, it means Apple needs to make the M1X roughly 40W in order for it to gain a 2x performance improvement. And Apple needs to make the GPU at least 3x faster to match the 5600M.

That's quite the uphill climb.
They’ll get speed from higher core counts, not cranking up the clock speeds past the point of efficiency. Their CPU and GPU cores are already very fast, they just need more of them for higher end machines.

But cranking up the clock speeds is still an option for desktops where energy doesn’t matter much. Not laptops though.

The main problem is the GPU. Apple will have to throw more cores and maybe even higher clocks at this problem because we can't expect performance scaling to be linear, but we can expect an exponential increase in power consumption.

AMD's work on their Navi GPU is nothing to sneeze at. Apple was able to beat Intel and AMD at making CPU cores, but I doubt they can maintain the same momentum with the GPU cores here.
With GPUs you can expect linear scaling up to a point, but you have to have massive GPUs to get there. You have to go to the top of the Nvidia and AMD stack before the scaling stops being linear, and those are 500-700 mm² GPUs. Apple has a lot of room to go.
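
As a rough illustration of that headroom, here's a naive core-count scaling sketch; the ~2.6 TFLOPS figure for the 8-core M1 GPU is the commonly quoted number, and linear scaling is the assumption being debated here, not a given.

```python
# Naive GPU scaling sketch: assume throughput scales ~linearly with core count
# until the die gets very large. The 2.6 TFLOPS figure for the 8-core M1 GPU is
# the commonly quoted one; everything else is an illustrative assumption.
M1_GPU_CORES = 8
M1_GPU_TFLOPS = 2.6

for cores in (8, 16, 32, 64):
    tflops = M1_GPU_TFLOPS * cores / M1_GPU_CORES
    print(f"{cores:>2} GPU cores -> ~{tflops:.1f} TFLOPS (assuming linear scaling)")
```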
 

Sanpete

macrumors 68040
Nov 17, 2016
3,695
1,665
Utah
They're deceptive because they didn't even say what chip they compared the M1 to. People might think "oh, it's the 8-core Ryzen since it's the best laptop chip". Then they realise that the M1 is slower than what they expected, if they expected it to beat the Ryzen with all cores loaded.
That's why Apple is being deceptive. They could have avoided that by at least specifying the number of cores/threads this mysterious PC chip had.
You assume a lot! It could very well be the Ryzen. The M1 does significantly better than the 4800U in some CPU tests, so it's not that unlikely that an average of results from various benchmarks would come out even between them. Also, if some of the tests were long enough for throttling to become an issue, that might work in the M1's favor.
 
  • Like
Reactions: Serban55

Serban55

Suspended
Oct 18, 2020
2,153
4,344
They're deceptive because they didn't even say what chip they compared the M1 to. People might think "oh, it's the 8-core Ryzen since it's the best laptop chip". Then they realise that the M1 is slower than what they expected, if they expected it to beat the Ryzen with all cores loaded.
That's why Apple is being deceptive. They could have avoided that by at least specifying the number of cores/threads this mysterious PC chip had.
They didn't say it's faster than the best PCs/laptops... but it's faster than 90% of the most-sold units (and we all know the majority of people buy cheap).
Where did Apple say it's faster than the top of the line? Apple didn't lie.
Come on, and after 1 day Apple was proven right... it is faster than over 90% of PCs sold.
Apple let us be surprised, and we all are.
 

onfire23

macrumors member
Oct 20, 2020
37
26
Isn't 7000 a bit low for the performance mode? Anandtech reports that the 15W 4800U yields 9286 in cinebench.
My laptop has a 4700U. At 15 watts in battery saver I get 4800 multi-core in CB R23. Performance mode nets me 7000 in CB R23, but with a package power of 30+ watts.
 
  • Like
Reactions: jeanlain

nikidimi

macrumors newbie
Nov 13, 2020
17
12
Comparisons to 4700U and 4800U are a bit unfair IMO

Yes, the M1 is the best CPU in its class of 15W power usage.
But to be honest, it's basically in a class of its own; AMD and Intel don't actually have a (good) 15W offering. Yes, some chips can downclock to that level, but their optimal configuration is way higher and they are not really designed for that level. Their strong point is at 25W, boosting to 35W.
It'll be interesting to compare an 8+4 core M1X with a Zen 3 5800U.
Yes, this will be interesting, because AMD cannot efficiently scale down to 15W (for now), but adding more cores to the M1 should result in higher power usage, so we will be able to compare CPUs in the same TDP class, with both operating in their optimal conditions.

If you want faster multi-core throughput that scales linearly in energy. No-one's going to beat Apple there. Right now Apple 4+4 matches (~handwaving~) 6 x86 cores+SMT, at about half to a third the power. They can easily double that power (8+8 cores, 16 GPU cores) at 40W, absolute worst case scenario 65W (essentially all cores maxed out at 40W, GPU maxed out at 25W). That's linear scaling, will kill anything
Scaling up and down is not linear. For example, the 4-core 4300U is very close in power usage to the 4800U. Just adding more cores is not that easy; for example, they all have to access the same memory over the same connection. Yes, it might work up to 8 cores, but it might not. AMD has been struggling with this a lot.
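
One way to picture why "just add cores" can stall is a crude roofline-style model: attainable throughput is whichever is smaller, aggregate compute or what the shared memory link can feed. All the numbers below are invented purely for illustration.

```python
# Crude roofline-style sketch of why core scaling can hit a wall: attainable
# throughput is capped by whichever is smaller, aggregate compute or the shared
# memory bandwidth. All numbers are invented purely for illustration.
def attainable(cores, perf_per_core=1.0, mem_bandwidth=6.0, bw_needed_per_core=1.0):
    compute_limit = cores * perf_per_core
    bandwidth_limit = mem_bandwidth / bw_needed_per_core
    return min(compute_limit, bandwidth_limit)

for cores in (4, 6, 8, 12, 16):
    print(f"{cores:>2} cores -> effective throughput {attainable(cores):.1f}")
# Past 6 cores the shared memory link becomes the limit and extra cores add nothing.
```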

Will AMD and Intel have picked up 20% in their single threaded performance by then? Unlikely
Zen 3 on desktop (which is set to be released on mobile in early 2021) is 20% faster than Zen 2 (the current ones). This is actually the normal increase from generation to generation, so in January Apple will be at most just one generation ahead. This is no small feat, but they are not impossible to reach. Don't forget that Apple was using 14nm Intel CPUs up to this point, which are basically 5 years old (with some optimizations, but still). In the good days of Intel, they were moving at Apple's pace, and they were set to transition to 10nm (which actually has similar single-core performance, but only four cores) years ago. If they weren't struggling, it would have been near impossible to match them.

And Apple needs to make the GPU at least 3x faster to match the 5600M.

That's quite the uphill climb.

Discrete GPU performance will be way more difficult to match. NVIDIA didn't have competition for some time, so they weren't exactly improving very fast, but they haven't faced the delays that Intel has.

And I'll also be interested to see if AMD resurrect K12 or Intel make a new StrongARM part.

Intel is making a 4+4 x86 part - https://www.anandtech.com/show/15877/intel-hybrid-cpu-lakefield-all-you-need-to-know. The architecture in Apple's ARM chips is very complex, and their efficiency comes from using 5nm, having low-power cores, good execution and good design decisions. It's very unlikely that ARM vs x86 actually plays a big role. For example - https://www.extremetech.com/extreme...-or-mips-intrinsically-more-power-efficient/3
 
  • Like
Reactions: consumeritis

Buck_Turgidson

macrumors newbie
Nov 18, 2020
1
2
[Attached screenshots: CPU and GPU benchmark scores]
 
  • Like
Reactions: Sanpete and Susurs

theSeb

macrumors 604
Aug 10, 2010
7,466
1,893
none
That CPU score is VERY impressive.

GPU score puts it in this company according to the benchmark

[Attached image: GPU benchmark comparison]
 

jeanlain

macrumors 68020
Mar 14, 2009
2,459
953
Under Rosetta... so add around 20% improvement... and you have the final result.
Not if the test is GPU-bound. In that case Rosetta has no impact. Of course, you can always extract more performance by optimising the Metal code for Apple GPUs.
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
Not if the test is GPU bound. In that case Rosetta has no impact. Our course, you can always extract more performance by optimising the Metal code for Apple GPUs.
That app is using Metal? If yes, OK, almost no impact... but it's still kind of strange for it to be up there with a 580X or Vega 56... I think it can't be the same with pro apps; the 580X and Vega 56 could be a lot faster in pro apps that use the GPU heavily.
From my personal tests this M1 is more on par with a 570/5300M, very impressive nevertheless.
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
Discrete GPU performance will be way more difficult to match, NVIDIA didn't have competition for some time, so they weren't exactly improving very fast, but they haven't faced the delays that Intel has.

Well, nVidia just came out with the 3000 series, and AMD just launched Big Navi.

Since the GPU world still has healthy competition, they haven't lagged behind by that much, and it'll be harder for Apple to match that level of performance with just the M1. I don't think many here understand the implication, but... for instance, the 5600M is anywhere between 2-3x faster than M1 at pretty much everything. The difference rises up to 3-4x at higher resolution, hinting at the fact that memory bandwidth is holding back the M1.
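
Here's a rough framebuffer-bandwidth sketch of why the gap widens with resolution. The 256 bytes of traffic per pixel is a made-up stand-in for a renderer's G-buffer/texture/blending traffic, and ~68 GB/s is the commonly cited M1 unified-memory bandwidth; take it as illustration only.

```python
# Rough framebuffer-traffic sketch: per-frame memory traffic grows with pixel
# count, so a fixed-bandwidth bus caps the frame rate sooner at higher
# resolutions. 256 bytes/pixel is a made-up stand-in for G-buffer, texture and
# blending traffic; ~68 GB/s is the commonly cited M1 unified-memory bandwidth.
M1_BANDWIDTH_GBPS = 68.0
TRAFFIC_PER_PIXEL_BYTES = 256   # illustrative assumption, not a measured value

def bandwidth_fps_ceiling(width, height):
    per_frame_gb = width * height * TRAFFIC_PER_PIXEL_BYTES / 1e9
    return M1_BANDWIDTH_GBPS / per_frame_gb

for name, (w, h) in {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}.items():
    print(f"{name}: bandwidth-bound ceiling ~{bandwidth_fps_ceiling(w, h):.0f} fps")
# The ceiling falls in proportion to pixel count, which is why the gap to dGPUs
# with wider memory buses widens at higher resolutions.
```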

So Apple may really need to pair the M1 with a discrete GPU for the next round... if they're taking the "easy way out". The harder way would be to try and cram all of that into a single SoC, which IMHO isn't a good idea. But then again, I can tell other folks are still drunk on the benchmark numbers coming from the M1.
 