
cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
We do know the work done per watt in these tests… web browsing and video watching.

No, we don’t. We don’t know how many fps the video managed, whether it played smoothly, the brightness, the resolution, etc. We don’t know whether the “web browsing” accomplished the same amount of work. Did it render the same number of pages as a MacBook Pro would in the same amount of time? Etc.

These aren’t “references,” by the way. It’s one table with no information in it.
 

Jorbanead

macrumors 65816
Aug 31, 2018
1,209
1,438
I looked up the first two of those phones and they're offbrand chunky bois (literally thicker than the thickest part of a MacBook Air) with batteries in the same size class as a MacBook Air (about 50 Wh). I doubt many people are buying these, but whatever.

Apple rates the MBA for about 15 hours web browsing, 18 hours video playback. So if you're a deliberately obtuse @Leifi, you might try to treat this as if it's a result which proves your point.

But complaining that a laptop has worse video playback time than a phone is silly. Both have SoCs which handle all the computation required for video playback in a dedicated video decompression block which needs only a few milliwatts. For both, the main drain on the battery during video playback is the display.

And guess what? Display power is proportional to screen area. You simply cannot expect a laptop with ~4x the display area to light up to have the same battery life as a phone with roughly the same battery capacity. Not unless you expect the physically impossible, that is.
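To put rough numbers on that last point (illustrative only, assuming similar panel efficiency and brightness on both devices):

$$
t \approx \frac{E}{P_{\text{display}}}, \qquad P_{\text{display}} \propto A
\;\Rightarrow\;
\frac{t_{\text{laptop}}}{t_{\text{phone}}} \approx \frac{E_{\text{laptop}}}{E_{\text{phone}}} \cdot \frac{A_{\text{phone}}}{A_{\text{laptop}}} \approx 1 \cdot \frac{1}{4}
$$

So with comparable batteries and ~4x the screen area, a display-bound runtime around a quarter of the phone's is exactly what physics predicts.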

Aside from screen size/resolution and battery size, we also don’t know the screens’ brightness or refresh rates; plus, one is a full-on computer running a complex OS and the other is a phone. It’s really just a useless comparison, and it seems like they’re grasping at straws here to prove a point.

Aside from the mountain of evidence provided by many forum members on this, it’s still funny to me that we have a literal chip engineer in this forum saying they’re wrong, and @Leifi still continues to push this narrative.
 

Leifi

macrumors regular
Nov 6, 2021
128
121
I looked up the first two of those phones and they're offbrand chunky bois (literally thicker than the thickest part of a MacBook Air) with batteries in the same size class as a MacBook Air (about 50 Wh). I doubt many people are buying these, but whatever.

Apple rates the MBA for about 15 hours web browsing, 18 hours video playback. So if you're a deliberately obtuse @Leifi, you might try to treat this as if it's a result which proves your point.

But complaining that a laptop has worse video playback time than a phone is silly. Both have SoCs which handle all the computation required for video playback in a dedicated video decompression block which needs only a few milliwatts. For both, the main drain on the battery during video playback is the display.

And guess what? Display power is proportional to screen area. You simply cannot expect a laptop with ~4x the display area to light up to have the same battery life as a phone with roughly the same battery capacity. Not unless you expect the physically impossible, that is.

As you say, an M1 Max 14-inch does about 17 hours of video playback, which is about 0.243 hours per Wh of battery (70 Wh). The Note 7 does 8+ hours of video playback, which is about 0.650 hours per Wh (12.3 Wh).
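Spelled out:

$$
\frac{17\ \text{h}}{70\ \text{Wh}} \approx 0.243\ \tfrac{\text{h}}{\text{Wh}},
\qquad
\frac{8\ \text{h}}{12.3\ \text{Wh}} \approx 0.650\ \tfrac{\text{h}}{\text{Wh}},
\qquad
\frac{0.650}{0.243} \approx 2.7
$$

i.e. roughly a 3x gap in playback hours per watt-hour of battery.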

Display power depends on many things, not just screen area. A MacBook’s LCD will undoubtedly draw more power than an OLED display, but that’s a design decision, and it could hardly be the only factor explaining the ~3x work-done-per-watt ratio there.

The same apples-vs-oranges problem will also be an issue when comparing MacBooks against other laptops. Some may have more RAM, extra GPUs, more ports, better Wi-Fi, higher-res screens, OLED panels, or higher refresh rates, etc. But Apple is still generally considered to do well in, for example, video-playback comparisons against other laptops.

But you can’t have your cake and eat it too. You cannot choose to make only the power-efficiency comparisons that are favorable to Apple; you should look at things from a wider perspective.

A Samsung Note owner could make pretty much the same power-efficiency arguments against a MacBook as MacBook reviewers make against a fully-fledged gaming laptop.
 

Leifi

macrumors regular
Nov 6, 2021
128
121
But complaining that a laptop has worse video playback time than a phone is silly. Both have SoCs which handle all the computation required for video playback in a dedicated video decompression block which needs only a few milliwatts. For both, the main drain on the battery during video playback is the display.
You have a fair point there… but similar results are also seen for web browsing, etc.
 

TiggrToo

macrumors 601
Aug 24, 2017
4,205
8,838
As you say, an M1 Max 14-inch does about 17 hours of video playback, which is about 0.243 hours per Wh of battery (70 Wh). The Note 7 does 8+ hours of video playback, which is about 0.650 hours per Wh (12.3 Wh).

Display power depends on many things, not just screen area. A MacBook’s LCD will undoubtedly draw more power than an OLED display, but that’s a design decision, and it could hardly be the only factor explaining the ~3x work-done-per-watt ratio there.

The same apples-vs-oranges problem will also be an issue when comparing MacBooks against other laptops. Some may have more RAM, extra GPUs, more ports, better Wi-Fi, higher-res screens, OLED panels, or higher refresh rates, etc. But Apple is still generally considered to do well in, for example, video-playback comparisons against other laptops.

But you can’t have your cake and eat it too. You cannot choose to make only the power-efficiency comparisons that are favorable to Apple; you should look at things from a wider perspective.

A Samsung Note owner could make pretty much the same power-efficiency arguments against a MacBook as MacBook reviewers make against a fully-fledged gaming laptop.
A Note 7? Are you for real?

That’s what, five years old? You’re seriously trying to compare a discontinued five-year-old phablet with a new MacBook?!
 

Pressure

macrumors 603
May 30, 2006
5,182
1,545
Denmark
I just want to make the point that Cinebench uses Intel-maintained and -optimised x86 Embree kernels for ray tracing that aren't optimised for AArch64 (and, obviously, the M1). I know, shocking.

Right now it is not a good cross-architecture benchmark because the load being applied is not equivalent.

The current implementation uses an SSE-to-NEON translation layer instead of native Arm SIMD, and it uses the older SSE codepath instead of AVX2, while also ignoring the fact that the M1 has four 128-bit NEON SIMD units.
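To make the difference concrete, here is a minimal sketch (not Embree's actual code; the shim function names are mine) of what such a translation layer does. Some SSE intrinsics map one-to-one onto NEON; others, like _mm_movemask_ps, have no direct NEON equivalent and must be emulated with a multi-instruction sequence, which is where the overhead comes from:

```c
// Minimal illustration of SSE-to-NEON translation (compile for AArch64).
#include <arm_neon.h>

// _mm_add_ps(a, b): a clean 1:1 mapping, essentially free.
static inline float32x4_t sse_add_ps_shim(float32x4_t a, float32x4_t b) {
    return vaddq_f32(a, b);
}

// _mm_movemask_ps(a): gather each lane's sign bit into a 4-bit integer.
// NEON has no single instruction for this, so the shim needs a sequence.
static inline int sse_movemask_ps_shim(float32x4_t a) {
    uint32x4_t sign = vshrq_n_u32(vreinterpretq_u32_f32(a), 31); // sign bit -> bit 0 of each lane
    const int32_t shifts[4] = {0, 1, 2, 3};
    uint32x4_t placed = vshlq_u32(sign, vld1q_s32(shifts));      // lane i's bit -> position i
    return (int)vaddvq_u32(placed);                              // horizontal add builds the mask
}
```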

This will put any Arm architecture at a disadvantage. However, Apple's open source developers have submitted a pull request on GitHub with changes that will be merged into the mainline at some point.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
But you can’t have your cake and eat it too. You cannot choose to make only the power-efficiency comparisons that are favorable to Apple; you should look at things from a wider perspective.
I do. That's why I told you it's nonsense to take movie playback numbers from a 0.75"-thick phone with a 50 Wh battery and a sub-7" screen, and to claim that because a notebook can't play back movies or browse the web as long, the notebook's SoC is bad. I showed you why that's the wider perspective, and even as you acknowledge it, you're trying to claim it's narrow.
 

Rigby

macrumors 603
Aug 5, 2008
6,257
10,215
San Jose, CA
I just want to make the point that Cinebench uses Intel-maintained and -optimised x86 Embree kernels for ray tracing that aren't optimised for AArch64 (and, obviously, the M1). I know, shocking.

Right now it is not a good cross-architecture benchmark because the load being applied is not equivalent.

The current implementation uses an SSE-to-NEON translation layer instead of native Arm SIMD, and it uses the older SSE codepath instead of AVX2, while also ignoring the fact that the M1 has four 128-bit NEON SIMD units.

This will put any Arm architecture at a disadvantage. However, Apple's open source developers have submitted a pull request on GitHub with changes that will be merged into the mainline at some point.
It's great that Apple is actively working on open source projects like this to improve how they run on M1. However, the purported 8% improvement achieved by Apple's optimizations won't be nearly enough to catch up in Cinebench, even if it translates 1:1 to the benchmark. Perhaps the M1's SIMD instructions are simply not that good for this particular use case.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
It's great that Apple is actively working on open source projects like this to improve how they run on M1. However, the purported 8% improvement achieved by Apple's optimizations won't be nearly enough to catch up in Cinebench, even if it translates 1:1 to the benchmark. Perhaps the M1's SIMD instructions are simply not that good for this particular use case.

The 8% was due to a little work on one type of problem. There is much more to be gained in Cinebench, which has numerous types of code where optimizations would be beneficial.
 
  • Like
Reactions: tomO2013

Rigby

macrumors 603
Aug 5, 2008
6,257
10,215
San Jose, CA
The 8% was due to a little work on one type of problem. There is much more to be gained in Cinebench, which has numerous types of code where optimizations would be beneficial.
That may or may not be true in this particular case (I would expect Apple to step in if there were other major performance gains to be had). But the fact that pretty much all software is optimized for x86 is an undeniable disadvantage for ARM that won't just go away.
 
  • Like
Reactions: mi7chy and jdb8167

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
That may or may not be true in this particular case (I would expect Apple to step in if there were other major performance gains to be had). But the fact that pretty much all software is optimized for x86 is an undeniable disadvantage for ARM that won't just go away.

That makes no sense. It is already going away. Over time it will continue to vanish. Because even with that supposed “undeniable disadvantage,” for almost all workloads, a performance-optimized Arm chip is faster than an x86 chip at any given power level.
 
  • Like
Reactions: ahurst

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
That makes no sense. It is already going away. Over time it will continue to vanish. Because even with that supposed “undeniable disadvantage,” for almost all workloads, a performance-optimized Arm chip is faster than an x86 chip at any given power level.
I’m afraid that might be too optimistic. I hope it is true but it depends on companies other than Apple to get their act together. Relying on Qualcomm and Arm to produce CPUs that are competitive with x86 might be wishful thinking. The existence proof of the M1/M1 Pro/M1 Max will probably help though.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I’m afraid that might be too optimistic. I hope it is true but it depends on companies other than Apple to get their act together. Relying on Qualcomm and Arm to produce CPUs that are competitive with x86 might be wishful thinking. The existence proof of the M1/M1 Pro/M1 Max will probably help though.
Given the inherent disadvantages of x86-64, and the fact that there is no champion for x86-64 other than x86-64 makers (Microsoft doesn’t champion it. The OEMs don’t champion it. They’d be perfectly happy if you buy machines with Arm in them), it is going to happen.

And, in any case, I was referring to the Mac ecosystem. That’s the software we are talking about.
 
  • Like
Reactions: jdb8167

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
I'd also add to @cmaier's point that the most prevalent computational device type available to consumers today is the cellular phone... NOT a laptop or desktop workstation. x86 lost the battle for low-power device applications years ago with Intel's Atom attempts.

ARM dominates this space and, in general, a much broader and more diverse range of technology applications (cars, embedded devices, laptops, desktops, servers, supercomputers, etc.).
On the consumer side, there is a wealth of code out there that is already well optimized for mobile devices, and those solutions are starting to make their way up to the desktop, or at least blur the lines of "what is a mobile app, what is a desktop app... are mobile apps really inferior to desktop?".

Microsoft sees this. Case in point: MS is laying the groundwork for ARM on Windows, with Windows 11 able to run your Android apps natively.
Apple was also very smart to roll out Catalyst and enable iOS apps on Arm macOS desktops, tapping into the huge wealth of native iOS and iPadOS apps.

There is most certainly a wealth of latent performance in the Apple Silicon approach being left on the table if all we do is simply recompile x86 to Arm64. And that says nothing about compiling and optimizing natively for Apple Silicon as a further level of optimization, where the GPU, AMX, Neural Engine, and ISP can all come into play and offload traditional CPU/GPU tasks, thereby freeing up the CPU/GPU for other operations.
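As a rough illustration of what that kind of offload looks like in practice (a minimal sketch; vDSP_vadd is a real Accelerate routine, but whether a given call runs on the NEON units or the AMX coprocessor is an implementation detail Apple doesn't expose):

```c
// Sketch: letting Apple's Accelerate framework handle a bulk vector add
// instead of hand-rolling a scalar loop. On Apple Silicon, Accelerate
// dispatches many vDSP/BLAS calls to whatever backend is fastest
// (NEON, and reportedly AMX for larger matrix/vector work).
// Build with: clang vadd.c -framework Accelerate
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1, 2, 3, 4};
    float b[4] = {10, 20, 30, 40};
    float c[4];

    // c[i] = a[i] + b[i]; strides of 1, 4 elements.
    vDSP_vadd(a, 1, b, 1, c, 1, 4);

    printf("%.0f %.0f %.0f %.0f\n", c[0], c[1], c[2], c[3]);
    return 0;
}
```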

TL;DR: as evidenced across a wealth of hardware review sites, Apple Silicon is an entire paradigm shift in how we think about performance on the desktop (unified memory, multiple specialized accelerators and co-processors). If we choose to ignore this, we leave unrealized potential and performance on the table.
 
  • Like
Reactions: ahurst and jdb8167

bombardier10

macrumors member
Nov 20, 2020
63
45
M1 Max killer :cool:
 

Attachments

  • i7-12700kadler.png

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
M1 Max killer :cool:
You are right… but let’s put this in context, because the devil is in the details! ;)

Today, that top-of-the-line x86-64, 16-core, 272-watt desktop/workstation monster part from Intel is an M1 Max laptop killer against the M1 Max’s smaller 10-core count, on software not optimized for the M1.
For comparison, a hypothetical 200+ watt M1 would likely scale to 28-30 M1 cores, maybe more!

It’s probably worth revisiting the accuracy of this “killer” statement in 6 months to a year (once Cinebench has been optimized for Apple Silicon) to see if it remains true for users of Cinebench and Cinema 4D.

In the short term, on most other apps, I’d still expect desktop Alder Lake to show its 5-10% performance advantage, with its 200 W+ power footprint under heavy workstation load, relative to an M1 Pro/Max on unoptimized synthetic benchmarks, games, and legacy code. Of course, apps optimized natively for Apple show a significant performance advantage the other way: DaVinci Resolve, Final Cut Pro, Logic, etc.

That being said, the fixation on Cinebench has me thinking...
Clearly Maxon Cinema 4D is very important to a number of our friends on this thread. Would the next person who is dissatisfied with the M1 Max’s performance in Cinebench be willing to show and share the types of professional 3D workloads they are running in Cinema 4D (not pre-canned benchmark scenes)? I’d like to understand the nature of your workload, and where you are limited by the performance an M1 Max offers in a portable laptop versus where you would realize greater value jumping to a desktop workstation configuration.

I really want to calibrate my understanding of the performance expectations here…
For example, @Leifi / @bombardier10: is the Cinebench test scene you are using representative of the types of scenes you are building in your spare time / your jobs / your professional lives?
If so, would you care to share samples of your 3D work (appreciating copyright considerations with your employer, I’d be grateful to see even a personal project) so that the community here can help evaluate the level of performance you are unhappy with when weighing an M1 Pro/Max against Alder Lake?
You may be better served looking at other software that is better optimized for your workflow; have you looked at alternatives?

I look forward to seeing what all you talented, creative people are able to show and share, and to seeing if I can be of any assistance in finding an alternative that better meets your needs.

Happy Holidays,

Tom.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Today, that top-of-the-line x86-64, 16-core, 272-watt desktop/workstation monster part from Intel is an M1 Max laptop killer against the M1 Max’s smaller 10-core count, on software not optimized for the M1.
For comparison, a hypothetical 200+ watt M1 would likely scale to 28-30 M1 cores, maybe more!

Probably not. The M1 Max (10 cores) is in the ballpark of 80 W TDP because it drags along all those very large GPU cores. Twice that would be in the 180 W ballpark (and 20 cores), and 4x would be in the 300-320 W range. At the upper levels, Apple is really designing a GPU with some CPU cores sprinkled on top, which makes it highly unlikely to reach core counts that match SoCs that entirely remove (or at least greatly reduce) the GPU transistor budget allocation.
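Taking the ~80 W, 10-core M1 Max as the building block and scaling it linearly (crude, but it frames the ballpark):

$$
P(k) \approx 80k\ \text{W}, \quad \text{CPU cores} \approx 10k
\;\Rightarrow\;
k=2:\ \sim\!160\text{-}180\ \text{W},\ 20\ \text{cores}; \qquad
k=4:\ \sim\!300\text{-}320\ \text{W},\ 40\ \text{cores}
$$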

Apple will be under 200 W, but the CPU core count would be in the 20 range. For CPU-only, highly parallel workloads, it will likely have issues against the 32+ core counts of CPU-only "SoC" packages.

Apple's desktop processor is pretty likely going to carry a large number of laptop design constraints along with it. One of those is the double-edged sword of having to feed an order of magnitude more math function units with enough bandwidth to get their work done (and therefore a governor cap on the bandwidth allocated to the CPU).


P.S. 16 P-cores and 4 E-cores is still "trouble" for Intel's mainstream Gen 12 (and 13) and AMD's Zen 3 (and 4) offerings.

At still higher core counts, though, Apple probably has substantive power dissipation issues to deal with.
 

Bandaman

Cancelled
Aug 28, 2019
2,005
4,091
Fascinating, thanks for the analysis. I still disagree with your conclusion about Apple making the right choice because of power savings.

Since I cancelled my M1 Max, I'll probably be ordering a Dell or Lenovo Alder Lake laptop (probably i7 level) before too long. I have to live in a Windows world at work...
Believe it or not, some people prefer more than 1 hour of battery life in their laptop.
 

Bandaman

Cancelled
Aug 28, 2019
2,005
4,091
I've never owned a laptop that only lasted an hour, but just how is that relevant to anything I said?
This thing is gonna run HOT and consume a lot of power. I'm not sure how good battery life is going to be with this new generation of Intel chips. My comment was in regard to power savings. They chose power savings because it's a laptop. I'm curious to see how their desktop chips are going to be when power savings is not on the table.
 