
nquinn

macrumors 6502a
Original poster
Jun 25, 2020
829
621
Define "far".

Intel is at least 2 generations behind the M1 by my estimation.
Or even 3.

Just look here comparing AMD Zen 3 with the 2021 M1 Pro (based on 2020 core):

Screen Shot 2022-10-18 at 2.19.26 PM.png


The 6800U is only just appearing in laptops and is hard to find, and the M1 is nearly 3x more efficient.
 
  • Like
Reactions: ArkSingularity

mi7chy

macrumors G4
Oct 24, 2014
10,620
11,293
Underwater is an overstatement. I bought a couple of Mac Studios for my workplace last March, and for the apps we use heavily (photo-oriented applications like Lightroom, Photoshop and related), the Studio beats out even my high-end AMD Threadrippers with top-of-the-line Nvidia cards on all but one or two tasks.

Apps like Photoshop favor strong single-core performance, so a Threadripper, which is built for heavily multithreaded apps, is the wrong tool for the job.

PS.png
 
  • Like
Reactions: ct2k7

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
Apps like Photoshop favor strong single-core performance, so a Threadripper, which is built for heavily multithreaded apps, is the wrong tool for the job.

PS.png


This was news to me, as many other YouTubers appear to show Apple Silicon, in recent optimized versions of Photoshop, simply leaving the Intel/AMD variants in the dust.

I decided to take a look at the benchmarking methodology from Puget (who openly sell some very fine systems that compete directly against Apple, Dell etc…).

This is what I found … (to help I’ve highlighted the important part).

PNG image.png


In a nutshell, any benchmark that Puget is running today must (by process of elimination) be running under Rosetta, if we are to take their release notes at face value. This certainly helps explain why the M1 and M2 fall behind the Xeon Mac Pro in the Puget test database, a system that has been demonstrated time and time again to be overtaken by laptop Apple Silicon parts.

Moral of the story: I'm not sure of the applicability of the bench tests posted (at least from the Apple side of things). Folks who buy an Apple Silicon Mac to use Photoshop can use the actual optimized version of Photoshop available to Adobe's Apple user base - something that Puget Systems, by their own documentation, do not and cannot test currently!

As an aside, to the OP's comment, the Apple Silicon optimized version of Photoshop performs significantly better (in some cases, such as object/creative fill and AI auto-masking operations, 4-5x faster than the x86 equivalent).
 
  • Like
Reactions: Colstan

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
I don't really mean "ever" - more like anytime in the next 10-15 yrs or so
It took Apple about a decade to develop arm64 into the M1 desktop chip and by then they must've worked well over a decade on arm32. Don't expect another major transition in less than 25 years. Nonetheless there will be sizable performance gains within the platform. Maybe not every year, but over time. It doesn't even make sense to compare them year by year.

Former Apple engineer details how the magic of M1 Mac performance began 10 years ago
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Or even 3.

Just look here comparing AMD Zen 3 with the 2021 M1 Pro (based on 2020 core):


The 6800U is only just appearing in laptops and is hard to find, and the M1 is nearly 3x more efficient.
I've written extensively on why Cinebench is a terrible general purpose CPU benchmark. It heavily favors x86 and CPUs that have SMT/Hyperthreading as well as CPUs that have many weak cores.

If you use Geekbench or SPEC, the perf/watt of the M1 is definitely generations ahead of AMD, which is already ahead of Intel.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
I've written extensively on why Cinebench is a terrible general purpose CPU benchmark. It heavily favors x86 and CPUs that have SMT/Hyperthreading as well as CPUs that have many weak cores.

If you use Geekbench or SPEC, the perf/watt of the M1 is definitely generations ahead of AMD, which is already ahead of Intel.
Interesting. I noticed that Cinebench seemed to report much stronger-than-expected SMT gains as well. Never really thought much of it, I figure Cinebench just utilizes SMT very well for these kinds of workloads.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
Interesting. I noticed that Cinebench seemed to report much stronger-than-expected SMT gains as well. Never really thought much of it, I figure Cinebench just utilizes SMT very well for these kinds of workloads.

Long dependent computation chains and therefore low inherent ILP. This is the best-case scenario for SMT and the worst-case situation for Apple. Add to that that it's optimised for x86 SIMD, with the ARM SIMD code generated by a per-instruction translation tool, and it becomes very clear why Cinebench results look so different from any other benchmark.
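
To make the ILP point concrete, here's a toy sketch (illustrative only, not Cinebench's actual kernel): in the first loop every iteration depends on the previous result, so a very wide core like Apple's can't find independent work within a single thread, which is exactly the gap SMT fills on x86. Split the same arithmetic into independent chains and the wide core is already kept busy, so SMT gains shrink.

```swift
import Foundation

// Toy illustration of instruction-level parallelism (ILP); not Cinebench's real code.

// Serial chain: each step needs the previous result, so ILP is ~1 no matter how
// wide the core's execution back-end is. A second SMT thread can use the idle
// execution slots, which is why SMT gains look large on workloads like this.
func dependentChain(_ n: Int) -> Double {
    var x = 1.0
    for _ in 0..<n {
        x = x * 1.000000001 + 1e-12   // next iteration must wait for this result
    }
    return x
}

// Same arithmetic split into four independent chains: a wide core can execute
// these in parallel within one thread, leaving far less idle hardware for SMT.
func independentChains(_ n: Int) -> Double {
    var a = 1.0, b = 1.0, c = 1.0, d = 1.0
    for _ in 0..<(n / 4) {
        a = a * 1.000000001 + 1e-12
        b = b * 1.000000001 + 1e-12
        c = c * 1.000000001 + 1e-12
        d = d * 1.000000001 + 1e-12
    }
    return a + b + c + d
}

print(dependentChain(100_000_000), independentChains(100_000_000))
```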
 

sam_dean

Suspended
Sep 9, 2022
1,262
1,091
The M1 bump was one of the most impressive CPU bumps I've ever seen, and I think it's from a combination of variables:

- Apple accidentally "overshooting" with how good their mobile chip was
- It's a strange point in time where desktops and laptops have similar performance at the same core counts
- It's soooo quiet and cool which is more impressive to me than just the performance

The M2 is already giving hints that at higher performance the laptop chassis can't keep up with more heat, and I suspect moving forward, even with a dip to 3nm, 2nm, or below, the sheer amount of performance simply won't be able to be cooled as passively as the M1.

I have the 16" 24-core iGPU variant and it's just incredible.

To give you an idea what the transition from Intel to Apple chips meant here are some figures:

- 2014-2019: Intel Macs were stuck on 14nm. That's 5 years at the same die shrink.

- Late 2020-2022: Apple is now stuck on 5nm. That's 2 years at the same die shrink.

Apple skipped the 10nm & 7nm nodes that AMD has been using from 2016-2022.

The "4nm" process that Apple claims to use for the A16 Bionic SoC found in the iPhone 14 Pro & Pro Max is the next die shrink. Odds are it will be used in the M2 Pro, M2 Max & M2 Ultra as early as this December or next year, while others speculate that Apple will use 3nm first.

I kept using my last Macs until macOS security updates ended, at roughly 120 months. So going from my 2012 iMac 27" Core i7 (22nm), which I replaced with a 2022 Mac Studio M1 Max (5nm), was mind-blowing. When I replace that 5nm Mac in 2032, odds are a sub-1nm Apple chip will be mind-blowing again.

Other than die shrinks, another key difference between Intel & Apple chips is how much power their desktop & laptop parts draw.

For Intel, a desktop chip tends to draw more power than a laptop chip.

With Apple, whether it's an M1, M2, M1 Pro or M1 Max, the chip draws the same power whether it's placed in a desktop, laptop or tablet.

In this regard I saw a missed opportunity for Apple to push how superior Apple chips are by not feeding them a similar power budget to an Intel desktop chip to show off what 5nm means. This would have made the raw performance of the M1, M2, M1 Pro, M1 Max & M1 Ultra installed in desktops head and shoulders better than anything Intel, AMD or Nvidia could ever do at a larger die shrink without higher power consumption. But then again it would be counter to Apple's push to be a green company.

A key use case for Macs is noise-sensitive environments like music or film production studios, so no fans or very low fan noise is a key purchase consideration.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
In this regard I saw a missed opportunity for Apple to push how superior Apple chips are by not feeding them a similar power budget to an Intel desktop chip to show off what 5nm means. This would have made the raw performance of the M1, M2, M1 Pro, M1 Max & M1 Ultra installed in desktops head and shoulders better than anything Intel, AMD or Nvidia could ever do at a larger die shrink without higher power consumption. But then again it would be counter to Apple's push to be a green company.
Apple probably doesn't really care much about being king of the hill when it comes to SoC performance. They'll brag about it if they can, but will point out their strong points if they're not.

Desktops are a very small part of their Mac revenue, as noted by many, so it's unlikely Apple wants to spend too much on making the fastest SoC. They will most likely just continue to design their SoCs to scale from the Apple Watch all the way to the Mac Pro.

It's quite an achievement for them to maintain single-thread performance parity from the iPhone all the way to the Mac Studio, and I think this will continue.
 
  • Like
Reactions: sam_dean and nquinn

tomO2013

macrumors member
Feb 11, 2020
67
102
Canada
I think they don't need to match Intel/AMD at the über-high-end traditional CPU game… they can exceed them (with a fraction of the power consumption) by continuing their approach of dedicated coprocessors and accelerators targeting Apple's tightly controlled and integrated vertical ecosystem.
It's not really a formula that is easily replicated in its entirety by others in the desktop/laptop/workstation market.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
I think they don't need to match Intel/AMD at the über-high-end traditional CPU game… they can exceed them (with a fraction of the power consumption) by continuing their approach of dedicated coprocessors and accelerators targeting Apple's tightly controlled and integrated vertical ecosystem.
It's not really a formula that is easily replicated in its entirety by others in the desktop/laptop/workstation market.

You mention an important thing, which is brand uniqueness and non-replicability. This is indeed a huge problem that Apple has been trying to solve, and Apple Silicon is the key to it. Ten years ago MacBooks were considered the pinnacle of laptop design, but the rest of the industry has since caught up and other premium laptops are not much worse. So Apple needs something to truly differentiate themselves, which is the technology stack you mention.

The Mac has already established itself as a premium personal computer for home/business/education/content-creation users, which are all aligned with the perception of Apple as a "lifestyle company". The question is whether Apple is content with that or whether they want to continue the push into the expert sector as well. Personally, I think this would be a wise direction because it's a certain class of IT experts that drives software adoption and, in a way, hardware fashion. For the Mac to survive as a tool (and not just a lifestyle product) it has to attract (all kinds of) software developers, data scientists, machine learning experts. These folks need performance. And judging by what Apple is doing, that's the direction they're going in. I am unsure whether Apple will release a desktop that can challenge the performance of enthusiast PC towers, but in the mobile sector Apple has a steady lead in many areas.
 
  • Like
Reactions: tomO2013

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
For the Mac to survive as a tool (and not just a lifestyle product) it has to attract (all kinds of) software developers, data scientists, machine learning experts. These folks need performance.
Nah, Apple won graphic designers with pixel-perfect color accuracy and reproducibility. They won programmers and scientists by being based on Unix. Machine Learning didn't really exist outside of scientific papers before the Neural Engine enabled it. Professionals always found Apple at the forefront of innovation. Only when Apple offers a product in more than two colors does it also become a lifestyle product. The fastest computer in the world was never a Mac.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
Machine Learning didn't really exist outside of scientific papers before the Neural Engine enabled it.


I am very puzzled by your statement about the Neural Engine. There is literally nobody in the machine learning business using the Neural Engine. ML is Nvidia's domain, where they managed to promote CUDA while the iron was still hot. Now it's very difficult to get them out. Apple will have to massively improve performance and software compatibility before anyone considers using Macs seriously for ML. There is some development in this area, for sure, but we are still far off.


The fastest computer in the world was never a Mac.

The fastest laptop in the world (for all practical purposes) is a Mac. Sure, there are PC gaming laptops which are faster, but they don't deliver their peak performance in a mobile environment. A MacBook Pro offers constant high performance on or off battery, with an amazing battery life on top of that. Nobody else even comes close.
 

maflynn

macrumors Haswell
May 3, 2009
73,682
43,740
I think they don't need to match Intel/AMD at the über-high-end traditional CPU game
But those are the silly games they play, and I don't mean just Apple. We consumers use those metrics to decide whether AMD, Intel or Apple is going to be our CPU of choice.

For most people buying PCs or Macs, there is little need for anything beyond a 10th-gen Intel, Zen 3, or an M1 processor. My Razer is on a 9th-gen Intel and it handles everything I throw at it. Professionals of course require performance when time is money, but what percentage of the market do those folks occupy?

I believe we hobbyists tend to exaggerate our needs to justify the latest and fastest processor. There's nothing wrong with that, it's our money and we can use it however we choose, but all I'm getting at is that most computer-buying people have little need for all of the performance that the M2, Intel 13th Gen, or a Ryzen 9 7950X produces.

The same can be said for the RTX 40 series cards btw
 

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
ML is Nvidia's domain, where they managed to promote CUDA while the iron was still hot.
But that's for professional business-to-business applications, not attracting all kinds of individual software developers, data scientists, machine learning experts and other folks. The PC is a computer one person buys for his own private and professional needs. Apple (the largest tech company that ever was) isn't even in the business of supplying data centers with graphics cards. That's too obscure a use case to even be considered worthwhile. But they are putting a Neural Engine in the hands of every teenager. And all the machine learning experts can make use of that for the first time.
Apple will have to massively improve performance and software compatibility before anyone considers using Macs seriously for ML.
Amazon and Google will never use Macs to analyze their big data. But for everyone who isn't a business customer there's basically a free Neural Engine in every Apple product. How could you not use it for a small business or science project?
The fastest laptop in the world (for all practical purposes) is a Mac.
But laptops in general were never famous for their performance. Only with the M1, for the first time, are laptop and desktop performance roughly the same. People who run their business on a MacBook bought it because of its mobility and reliability, not just performance. Professionals who need performance bought desktop Macs.
A MacBook Pro offers constant high performance on or off battery, with an amazing battery life on top of that. Nobody else even comes close.
And yet that's not the main selling point. If a Windows laptop were the fastest, Apple professionals wouldn't buy it. In fact for years Apple sacrificed performance in order to maximize battery life and minimize weight. Only now, with ARM RISC technology, can M1 MacBooks deliver both incredible performance and incredible battery life. The true reason to buy a Mac was always macOS. And the reason to not just build a Hackintosh was the level of hardware and software integration.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
But that's for professional business-to-business applications, not attracting all kinds of individual software developers, data scientists, machine learning experts and other folks. The PC is a computer one person buys for his own private and professional needs. Apple (the largest tech company that ever was) isn't even in the business of supplying data centers with graphics cards. That's too obscure a use case to even be considered worthwhile. But they are putting a Neural Engine in the hands of every teenager. And all the machine learning experts can make use of that for the first time. [...] But for everyone who isn't a business customer there's basically a free Neural Engine in every Apple product. How could you not use it for a small business or science project?

Apologies if I might sound somewhat condescending... but you are not a data scientist, are you? Things don't work the way you describe. Individual machine learning experts are mostly either using a local Nvidia GPU or a cloud-based solution like Google Colab for their work. The Neural Engine is practically useless to an ML researcher since you cannot use it to train models. In its current form it's a device for accelerating a class of ML models for end-client software (e.g. image recognition, image processing or similar).

Apple does have hardware suitable for ML work (AMX and the GPU), but the software support for the state of the art ML frameworks is still wobbly at best. And the performance could be better too. While Apple maintains a plugin for TensorFlow (a popular ML framework), there is zero communication and documentation, and people I know who work with this stuff say that the situation is fairly frustrating because sometimes things don't work and there is no communication channel.



But laptops in general were never famous for their performance. Only with the M1, for the first time, are laptop and desktop performance roughly the same. People who run their business on a MacBook bought it because of its mobility and reliability, not just performance. Professionals who need performance bought desktop Macs.

Professionals who don't need mobility and absolutely need performance have very little reason to choose a Mac currently. An enthusiast or professional desktop does the job better.

In fact for years Apple sacrificed performance in order to maximize battery life and minimize weight.

I am not sure I agree. In my opinion Apple's design target was to maximise performance under a fixed portability constraint. I don't see how they sacrificed performance. Their design targets for power consumption were essentially unchanged for the last decade or so. It's another problem that Intel CPUs kept getting hotter which made this model unsustainable. Apple Silicon allows Apple to stay within their design constraints. But it currently lacks vertical scalability.
 

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
Apple does have hardware suitable for ML work (AMX and the GPU), but the software support for the state of the art ML frameworks is still wobbly at best.
And it will stay that way, because programming your own ML framework is a tiny niche of the whole PC market. Whereas automatic image processing of your iPhone photos is useful for almost everyone.
Professionals who don't need mobility and absolutely need performance have very little reason to choose a Mac currently.
And yet they do, because of macOS.
In my opinion Apple's design target was to maximise performance under a fixed portability constraint. I don't see how they sacrificed performance.
So you never heard that the 12" MacBook and the first years of MacBook Air were underpowered? That Apple used mobile versions of Intel CPUs even in the iMac and Mac mini desktops? That Apple rather lets their Macs run hot and throttle than spin up the fans audibly? Performance was always a second class citizen versus other design goals.
Their design targets for power consumption were essentially unchanged for the last decade or so. It's another problem that Intel CPUs kept getting hotter which made this model unsustainable. Apple Silicon allows Apple to stay within their design constraints.
No, the M2 also runs hotter than the M1. It just happens on a different level. And with the occasional process node shrink Intel's power consumption came down too. It's just that Apple's constraints were never set for maximum performance to begin with.

MacBook Pro M2 runs significantly hotter than M1 model
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
And it will stay that way, because programming your own ML framework is a tiny niche of the whole PC market. Whereas automatic image processing of your iPhone photos is useful for almost everyone.

And yet they do, because of macOS.

So you never heard that the 12" MacBook and the first years of MacBook Air were underpowered? That Apple used mobile versions of Intel CPUs even in the iMac and Mac mini desktops? That Apple rather lets their Macs run hot and throttle than spin up the fans audibly? Performance was always a second class citizen versus other design goals.

No, the M2 also runs hotter than the M1. It just happens on a different level. And with the occasional process node shrink Intel's power consumption came down too. It's just that Apple's constraints were never set for maximum performance to begin with.

MacBook Pro M2 runs significantly hotter than M1 model
I'm not sure you understand Leman's post. You can't just take the ANE and use it to train anything on the M1. Artificial neural networks have to be trained, and that is a process that is wholly separate from the process of then using the neural model that is generated in-the-field (say, for recognizing user images on the end-device or for transcribing text from them).

Apple's Neural Engine accelerates the latter, not the former. Models still need to be trained, and that isn't done with the Neural Engine on the M1.
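
To make that distinction concrete, here's a minimal sketch of how the Neural Engine actually gets used in practice: you hand Core ML a model that was already trained somewhere else and ask for a prediction. The model file name below is hypothetical, and `.cpuAndNeuralEngine` is only a hint on recent OS versions; Core ML decides per layer where the model actually runs.

```swift
import CoreML
import Foundation

// Minimal sketch: inference with an already-trained model on Apple Silicon.
// "SomeClassifier.mlmodelc" is a hypothetical compiled Core ML model; the
// training that produced it happened elsewhere (typically on GPUs).
do {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine   // prefer the ANE where the ops allow it
                                                // (.all is the older, more compatible default)

    let model = try MLModel(contentsOf: URL(fileURLWithPath: "SomeClassifier.mlmodelc"),
                            configuration: config)

    // Real inputs depend on the model; an empty provider just shows the call shape.
    let input = try MLDictionaryFeatureProvider(dictionary: [:])
    let output = try model.prediction(from: input)
    print(output.featureNames)
} catch {
    print("Inference failed: \(error)")
}
```

There's no training entry point here: to train, people reach for PyTorch/TensorFlow on Nvidia GPUs or the cloud, exactly as described above.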
 

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
Artificial neural networks have to be trained, and that is a process that is wholly separate from the process of then using the neural model that is generated in-the-field.
And as I said, that's something hardly anyone ever does. Just as most people who drive a car professionally do not build their own motors. Even scientists, developers and experts do not necessarily need to train neural networks. BMW doesn't provide ample documentation on how to build your own motor. If you're a freaking scientist, you should figure it out yourself.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
And it will stay that way, because programming your own ML framework is a tiny niche of the whole PC market. Whereas automatic image processing of your iPhone photos is useful for almost everyone.

Let's decide what we are talking about. In your last post you mentioned data scientists and model researchers (who btw. have to develop models, not frameworks — big distinction!). Now you are only talking about some basic consumers of the ML models like automatic image classification. Sure, that's very important and the ability to do low-energy classification and processing tasks is definitely something that sets Apple Silicon Macs apart, but that's not what ML researchers do.


And yet they do, because of macOS.

You say that based on what? I don't know anyone who would go "I really need a big Threadripper to do my work but I will choose Mac Studio instead because of macOS". I know plenty of people however who would prefer to use macOS but had to get a beefy PC because their computational needs are not currently satisfied by Apple Silicon.

So you never heard that the 12" MacBook and the first years of MacBook Air were underpowered?
That Apple used mobile versions of Intel CPUs even in the iMac and Mac mini desktops?
Performance was always a second class citizen versus other design goals.

No, performance was always part of their design goal, it just wasn't the only goal. Apple's designs are balanced, meticulously so. If you want an example of a design that really sacrifices performance, look at something like an LG Gram, which uses a 15W subnotebook CPU in a 17" chassis. Apple always worked with strict constraints, but let's not confuse "maximise performance under given constraints" with "sacrifice performance". The first MacBook Airs were much faster than contemporary subnotebooks because Apple had Intel develop a principally new class of CPUs for them (which now became the core of Intel's business btw). Apple routinely used the most expensive and premium mobile processors in their designs because they offered the most performance per watt — they could have easily done like other manufacturers and just used cheaper/slower CPUs. For example, Apple insisted on using 28W-class CPUs in the 13" MBP for a while, while everyone else just dropped down to the 15W CPU tier.


That Apple rather lets their Macs run hot and throttle than spin up the fans audibly?

How is that an example of "sacrificing performance"? Quite the contrary, this is a power management technique that enables the system to achieve the maximum possible performance under the given thermal conditions. It's a testament to their precision thermal engineering more than anything else. If you look at independent hardware reviews, the MacBook Pro was noted for extracting more performance out of Intel CPUs precisely because Apple temperature-throttled instead of power-throttling, unlike other vendors. They traditionally ran the CPUs unrestricted by total power consumption. Of course, in recent years this became moot since Intel's power consumption went up into the stratosphere.

It's just that Apple's constraints were never set for maximum performance to begin with.

Apple does "maximal performance for the given chassis constraints". Not "maximal performance at any cost". If that's what you mean, we are in agreement. But much of your post tried to argue that Apple has been deliberately sacrificing performance, and I just don't see any evidence for that. Again, there is a considerable difference between "ok, this is my design goal and I am going to do everything I can to extract every last bit of it" (what Apple does) vs. "ok, I'll just drop the performance until it fits my design goal" (what you seem to suggest that Apple does).

Which incidentally puts the current family of Apple Silicon at an odd spot. M1 series no longer maximise the performance for the given chassis because they can't scale vertically. Ideally, something like M1 Max CPU should be able to sustain at least 50-60W under full load for some extra performance. The chassis is currently underutilised, it can easily dissipate that amount of power even with fans on low. So it's actually a good sign that the M2 can draw more power. It means that it might be more scalable.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,671
And as I said, that's something hardly anyone ever does. Just as most people who drive a car professionally do not build their own motors. Even scientists, developers and experts do not necessarily need to train neural networks.

You do realise that you are talking to scientists, developers and experts, who are trying to explain to you that yes, they absolutely have to train their own neural networks? And even if we don't, the models we use are too complex to run on the Apple Neural Engine anyway, which is currently only suitable for a certain subclass of consumer software problems.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
And as I said, that's something hardly anyone ever does. Just as most people who drive a car professionally do not build their own motors. Even scientists, developers and experts do not necessarily need to train neural networks. BMW doesn't provide ample documentation on how to build your own motor. If you're a freaking scientist, you should figure it out yourself.
Previously, you stated that this was a market solely for business-to-business sales, one you said was "too obscure" for Apple to take part in. You then suggested that people should just use the ANE that is packaged with every Apple product for scientific projects of their own. Evidently, data scientists aren't professional enough to need to train their own networks?

My guy, people create neural networks all the time in scientific computing, and those neural networks need to be trained. Not all networks are of the scale of the ones that Google trains to distribute to its customers; software developers train networks all the time, for various reasons and at various scales. Not everyone needs (or uses) a supercomputer to do this.
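
For what it's worth, small-scale local training is exactly what Apple's own Create ML framework is aimed at, and it also illustrates the division of labor being discussed: the training itself runs on the Mac's CPU/GPU, and only the exported model is something the Neural Engine could later execute. A rough sketch (the folder and file names are made up):

```swift
import CreateML
import Foundation

// Sketch of training a small image classifier locally with Create ML.
// "TrainingPhotos/" is a hypothetical directory whose subfolders are class labels.
// The training itself runs on the CPU/GPU of the Mac, not on the Neural Engine.
do {
    let data = MLImageClassifier.DataSource.labeledDirectories(
        at: URL(fileURLWithPath: "TrainingPhotos"))
    let classifier = try MLImageClassifier(trainingData: data)

    // Export a Core ML model; *this* artifact is what the ANE can later run.
    try classifier.write(to: URL(fileURLWithPath: "SmallClassifier.mlmodel"))
    print("Wrote SmallClassifier.mlmodel")
} catch {
    print("Training failed: \(error)")
}
```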
 

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
You say that based on what? I don't know anyone who would go "I really need a big Threadripper to do my work but I will choose Mac Studio instead because of macOS".
Because lots of publishing, reporting and advertising is done on Macs, and some even on iPhones. Being useful for professional work hardly depends on raw performance alone.
No, performance was always part of their design goal, it just wasn't the only goal.
And as a goal, it was behind thin, light and silent. Performance was a low priority goal.
Apple's designs are balanced, meticulously so.
Indeed they are. If you wanted thin, light and silent with great battery life, you couldn't find anything better than a Mac. But if you wanted raw performance at any cost...
Apple always worked with strict constraints, but let's not confuse "maximise performance under given constraints" with "sacrifice performance".
Let's not confuse who chose these constraints and gave us the best Macs money can buy. But they are Macs, they are not freaking RTX monsters. No wonder a brick like that beats an M1 at some performance metric.

RTX4090.jpg

The first MacBook Airs were much faster than contemporary subnotebooks because Apple had Intel develop a principally new class of CPUs for them (which now became the core of Intel's business btw).
You just need to make up a new class and suddenly you're the best in it. So it all depends on who decides what is or isn't a fair comparison. Does Apple's Neural Engine need to beat a GeForce RTX 4090 to be considered viable for scientific work? Just make up your class!
Apple routinely used the most expensive and premium mobile processors in their designs because they offered the most performance per watt — they could have easily done like other manufacturers and just used cheaper/slower CPUs.
I'm sure Apple's Neural Engine beats every Nvidia GeForce in performance per watt. But when did we change the goal from "These folks need performance" to "The most performance per watt is enough"?
How is that an example of "sacrificing performance"?
Because it prioritizes weight, mobility, silence and other convenience factors at the expense of delivering the maximum performance possible.
Quite the contrary, this is a power management technique that enables the system to achieve the maximum possible performance under the given thermal conditions.
Nobody gave Apple thermal conditions, they designed them according to their priorities.
Apple does "maximal performance for the given chassis constraints". Not "maximal performance at any cost".
And who designed the chassis?
Which incidentally puts the current family of Apple Silicon at an odd spot. M1 series no longer maximise the performance for the given chassis because they can't scale vertically.
Geez, finally we have a CPU that runs cool and you complain that it doesn't run at the very edge of what's thermally possible for a given chassis! You should make up your mind what you really want, x86 or arm64.
Ideally, something like M1 Max CPU should be able to sustain at least 50-60W under full load for some extra performance. The chassis is currently underutilised, it can easily dissipate that amount of power even with fans on low.
But we want to crunch numbers, not heat the room. Customers like cool and silent, maximum heat dissipation was never Apple's goal. Just like raw performance.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
Because lots of publishing, reporting and advertising is done on Macs, and some even on iPhones. Being useful for professional work hardly depends on raw performance alone.

And as a goal, it was behind thin, light and silent. Performance was a low priority goal.

Indeed they are. If you wanted thin, light and silent with great battery life, you couldn't find anything better than a Mac. But if you wanted raw performance at any cost...

Let's not confuse who chose these constraints and gave us the best Macs money can buy. But they are Macs, they are not freaking RTX monsters. No wonder a brick like that beats an M1 at some performance metric.


You just need to make up a new class and suddenly you're the best in it. So it all depends on who decides what is or isn't a fair comparison. Does Apple's Neural Engine need to beat a GeForce RTX 4090 to be considered viable for scientific work? Just make up your class!

I'm sure Apple's Neural Engine beats every Nvidia GeForce in performance per watt. But when did we change the goal from "These folks need performance" to "The most performance per watt is enough"?

Because it prioritizes weight, mobility, silence and other convenience factors at the expense of delivering the maximum performance possible.

Nobody gave Apple thermal conditions, they designed them according to their priorities.

And who designed the chassis?

Geez, finally we have a CPU that runs cool and you complain that it doesn't run at the very edge of what's thermally possible for a given chassis! You should make up your mind what you really want, x86 or arm64.

But we want to crunch numbers, not heat the room. Customers like cool and silent, maximum heat dissipation was never Apple's goal. Just like raw performance.

I'm not entirely sure that high-efficiency and high-performance are mutually exclusive things. Do you remember the Pentium 4 Prescott fiasco? Intel got these processors over 3 GHz on terribly primitive fabs compared to what we have today, but they made a frying pan in the process and backed themselves into a corner, EVEN on performance. They had to lengthen the pipeline to a whopping 31 stages (longer than anything we use today) to make this work on such an old fab. It technically outperformed the old processors, but it shot IPC and efficiency both down into the toilet, and it became very clear that this was massively unsustainable once dual-core processors came around. You could justify a single core using 120W if that's all the CPU had, but not if you needed two or four cores.

The Core 2 Duo replaced the Pentium 4. It was based on the Pentium M, which itself was literally based on the Pentium 3. Pentium 4, as an architecture, got thrown in the trash completely, and Intel walked away with an expensive lesson. The Core 2 Duo and Pentium M were MUCH more efficient, but they were also substantially faster than the Pentium 4 (especially later iterations of the Core 2 Duo) despite operating at much lower clock speeds. Nobody was complaining about Intel's strategy shift towards efficiency because having more efficient cores was a massive advantage, even for performance. This was at the dawn of the multicore era, and having efficient cores meant you could easily throw more of them in a chip. At the time, quad core chips were groundbreaking and impressive.

The vast majority of people who need to extract the absolute maximum possible performance on Threadripper-style rigs are much more concerned with multithreaded performance than with single-threaded performance. Make no mistake, you still need decent single-threaded performance (this is why we haven't just thrown a bunch of Cortex-A53s into every server rig and called it a day), but when you're running 32 threads, the absolute bottom line is "how much work can this system do?" Energy consumption doesn't scale linearly with clock speed: you have to increase the CPU voltage in order to increase clock speeds, which results in a roughly quadratic increase in power consumption as you push clocks higher, and this doesn't serve HEDT rigs very well. You could push a 3 GHz chip up to 4 GHz and double the power consumption, but then you would have only enough thermal headroom for half the cores, and you'd have a slower system overall.
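
As a back-of-the-envelope sketch of that scaling (dynamic power is roughly proportional to V squared times f; the voltage assumption below is the illustrative part):

```swift
import Foundation

// Back-of-the-envelope only: dynamic CPU power ~ C * V^2 * f.
// Crude assumption for illustration: voltage has to rise roughly in
// proportion to frequency to keep the core stable at the higher clock.
func relativePower(clockScale: Double) -> Double {
    let voltageScale = clockScale                     // illustrative V ~ f
    return voltageScale * voltageScale * clockScale   // so power grows ~clock^3
}

let bump = relativePower(clockScale: 4.0 / 3.0)       // pushing 3 GHz to 4 GHz
print(String(format: "~%.1fx the power for ~1.33x the clock", bump))
// Roughly 2-2.4x in practice, depending on how much voltage really has to rise,
// which is why, within a fixed thermal budget, fewer-but-faster cores lose to
// more-but-slower cores on multithreaded work.
```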

Performance and efficiency haven't been mutually exclusive since the multicore era began in the mid 2000s. If Apple were to just throw efficiency out the window, they wouldn't necessarily be able to just "pour more performance right out of the bottle"; they'd shoot themselves in the foot when it comes to high-core-count systems (and would probably have to redesign some of the CPU's internal pipelines in the process). The Mac Studio might feel like it's holding back in terms of TDP, but we're still awaiting the Mac Pro (which is widely anticipated to double core counts again). We haven't even seen all that Apple can do yet.
 
Last edited:

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
I'm not entirely sure that high-efficiency and high-performance are mutually exclusive things.
Good catch! They are absolutely not. Efficiency is an enabler of performance. The M1 couldn't be this fast if it weren't so efficient. But you can also use the same efficiency to go smaller and lighter and build, for example, the Apple Watch, which is the fastest computer in its class, but not in general. As a wrist computer it allows for new kinds of health sensors, which are very interesting for medical professionals. Again, the quality of data here is more important than pure performance. Apple loves to be the first to offer things like menstrual cycle tracking, and often the competition never catches up, because what Apple adds to its OS seems like mere gimmicks. People pretend that Mac vs. Windows comes down to personal preference and not tangible benefits. But it's these little things that keep entire professions bound to Apple.
 
  • Like
Reactions: ArkSingularity