Yes, but it is a hard area to enter; it is about so much more than CPU efficiency. Interconnect, coprocessors, storage and, not least, how well it handles the type of calculation being run all matter. It also must run very well on Linux. Usually it comes down to how much data the system can crunch per hour for X million dollars, including overhead.
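As a rough illustration of that framing, here is a minimal Python sketch of the procurement math; the cost model and every figure in it are made-up placeholders, not numbers from any real system.

```python
# Toy back-of-the-envelope: rank candidate HPC systems by useful work per
# dollar over their service life, i.e. the "data crunched per hour for X M$,
# including overhead" framing above. All numbers are placeholders.

def cost_per_unit_work(capex_usd, power_kw, util, years,
                       work_per_hour, kwh_price=0.12, overhead=1.3):
    """Total cost of ownership divided by total useful work done.

    capex_usd      -- purchase price of the system
    power_kw       -- average draw under load, in kilowatts
    util           -- fraction of wall-clock time spent on useful work
    years          -- planned service life
    work_per_hour  -- throughput in whatever unit matters (TB processed, jobs, ...)
    overhead       -- multiplier for cooling, admin, support, facility space
    """
    hours = years * 365 * 24 * util
    energy_cost = power_kw * hours * kwh_price
    total_cost = (capex_usd + energy_cost) * overhead
    total_work = work_per_hour * hours
    return total_cost / total_work

# Two hypothetical systems (placeholder figures only):
a = cost_per_unit_work(capex_usd=2_000_000, power_kw=150, util=0.85,
                       years=5, work_per_hour=100)
b = cost_per_unit_work(capex_usd=3_000_000, power_kw=120, util=0.85,
                       years=5, work_per_hour=160)
print(f"System A: ${a:,.2f} per unit of work")
print(f"System B: ${b:,.2f} per unit of work")  # pricier up front, cheaper per unit of work
```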

I am not sure Apple is ready to invest enough resources to enter this segment; it is a new and different area for them. They would also need to develop another level of support.
That's why ex-Apple Silicon engineers formed NUVIA, so they could focus on servers. Qualcomm later bought them to build its own laptop chips for Windows 11 on ARM.

Funny that they went through all the trouble of leaving Apple over server chips, then ended up doing laptop chips again.
 
  • Like
Reactions: Varmann
PassMark shows the M2 Ultra pretty far down the list of high-end CPUs. It sits just under the i9-13980HX laptop chip, which even has a lower TDP than the M2 Ultra. It’s an improvement over M1 Ultra, but about half the performance of the top x86 chips.

 
It's worse than those benchmarks suggest. Real-world scores on a 7,1 with a 6900 XT, and others benchmarking the 6800 XT, show they outperform the M2 Ultra, and that's just on GPU. An i9 (the class of processor we used to have in our Intel MacBooks) spanked the M2 Ultra as well.

I had a maxed out 15" MBP i9... My 14" M1 MBP runs circles around that computer... I could get the fans going and the spinning wheel of boredom with the i9 after 15-20 minutes of photo/video editing... I've never heard the fans on the 14" nor ever seen the spinning wheel of boredom, so benchmarks are not synonymous with real world application...
 
  • Like
Reactions: Longplays
I had a maxed out 15" MBP i9... My 14" M1 MBP runs circles around that computer... I could get the fans going and the spinning wheel of boredom with the i9 after 15-20 minutes of photo/video editing... I've never heard the fans on the 14" nor ever seen the spinning wheel of boredom, so benchmarks are not synonymous with real world application...

Um, that is like a four-generations-old i9 you're talking about. We're talking about a current i9. Yes, a laptop-level, current-generation i9 runs faster than the M2 Ultra. Look at the very bottom of the list below; that's where the M2 Ultra sits (one above it is the current laptop i9 that is faster). At the top of the list is an AMD EPYC that is ~3x the speed of the M2 Ultra.

[Attached image: PassMark CPU benchmark ranking]
 
I had a maxed out 15" MBP i9... My 14" M1 MBP runs circles around that computer... I could get the fans going and the spinning wheel of boredom with the i9 after 15-20 minutes of photo/video editing... I've never heard the fans on the 14" nor ever seen the spinning wheel of boredom, so benchmarks are not synonymous with real world application...
Your Apple Silicon Mac's microarchitecture was designed specifically for your use case's sweet spot.

It does not have to include tech that you are least likely to use. It also does not hurt that you're using a 5nm chip rather than an Intel 14nm one.

Your 14" using an SoC also helps address those concerns, rather than having a separate CPU, dGPU, RAM, etc. that adds cost, weight, size, power consumption and latency.

Many complain that the M2 Ultra is "too compromised": too focused on efficiency rather than raw performance, with little regard for power consumption.

But that use case affects less than 3% of all Macs shipped, after less than 3 years of Mac chips. Generously, that is more or less 75,000 Macs shipped annually.

The raw performance concerns will be mostly, though not entirely, addressed with the M3 Ultra/Extreme by Q1 2025. That is 21 months away.

So the complaints will then be narrowed down to:

- The M3 Ultra's 256GB RAM and M3 Extreme's 512GB RAM ceilings, due to LPDDR5X memory capacity
- No swappable CPU, dGPU, eGPU, RAM, SSD & logic board

I'd harshly judge the Ultra & Extreme by their 10th year.

By 2030 we'll be talking about RISC-V chips as it appears Apple is prepping for them.

I'd give this video a watch on how RISC-V will likely relegate legacy x86 chips to a niche: optimistically like the audiophile market, pessimistically like mainframes.


Although the video is 3+ years old it touches upon business/tech trends, tech geopolitics and how chip makers are looking for ways to improve performance per watt further.

Many of Coreteks' videos correctly called market trends that eventually played out.

I am aware this isn't what very vocal Intel Mac Pro and PC gamer users want to happen, but this is the direction the whole tech industry is heading, as PC sales aren't sustainable. The writing's on the wall.

[Attached image: Statista chart of global PC shipments by vendor]


Source: https://www.statista.com/statistics/263393/global-pc-shipments-since-1st-quarter-2009-by-vendor/

Want 4K@144Hz gaming? Next-gen PlayStation & Xbox with APUs and the Switch with an SoC will be cheaper alternatives than a top-end desktop dGPU alone.
 
Last edited:
Your Apple Silicon Mac's microarchitecture was designed specifically for your use case's sweet spot.

It does not have to include tech that you are least likely to use. It also does not hurt that you're using a 5nm chip rather than an Intel 14nm one.

Your 14" using an SoC also helps address those concerns, rather than having a separate CPU, dGPU, RAM, etc. that adds cost, weight, size, power consumption and latency.

Many complain that the M2 Ultra is "too compromised": too focused on efficiency rather than raw performance, with little regard for power consumption.

But that use case affects less than 3% of all Macs shipped, after less than 3 years of Mac chips. Generously, that is more or less 75,000 Macs shipped annually.

The raw performance concerns will be mostly, though not entirely, addressed with the M3 Ultra/Extreme by Q1 2025. That is 21 months away.

So the complaints will then be narrowed down to:

- The M3 Ultra's 256GB RAM and M3 Extreme's 512GB RAM ceilings, due to LPDDR5X memory capacity
- No swappable CPU, dGPU, eGPU, RAM, SSD & logic board

I'd harshly judge the Ultra & Extreme by their 10th year.

By 2030 we'll be talking about RISC-V chips as it appears Apple is prepping for them.

I'd give this video a watch on how RISC-V will likely relegate legacy x86 chips to a niche: optimistically like the audiophile market, pessimistically like mainframes.


Although the video is 3+ years old it touches upon business/tech trends, tech geopolitics and how chip makers are looking for ways to improve performance per watt further.

Many of Coreteks' videos correctly called market trends that eventually played out.

I am aware this isn't what very vocal Intel Mac Pro and PC gamer users want to happen, but this is the direction the whole tech industry is heading, as PC sales aren't sustainable. The writing's on the wall.

[Attached image: Statista chart of global PC shipments by vendor]

Source: https://www.statista.com/statistics/263393/global-pc-shipments-since-1st-quarter-2009-by-vendor/

Want 4K@144Hz gaming? Next-gen PlayStation & Xbox with APUs and the Switch with an SoC will be cheaper alternatives than a top-end desktop dGPU alone.
Great post.

I am a believer in just buying the right tools for the job and moving on.
I'm currently using a Studio Ultra M1 together with Parsec to utilise a PC for rendering. It works an absolute treat, and really costs not that much more than getting a Mac Pro, plus you get all the benefits of Nvidia and the apps it supports.
 
  • Like
Reactions: Longplays
Um, that is like a four-generations-old i9 you're talking about. We're talking about a current i9. Yes, a laptop-level, current-generation i9 runs faster than the M2 Ultra. Look at the very bottom of the list below; that's where the M2 Ultra sits (one above it is the current laptop i9 that is faster). At the top of the list is an AMD EPYC that is ~3x the speed of the M2 Ultra.

[Attached image: PassMark CPU benchmark ranking]
For context

- The M2 Ultra only comes in a 24-core CPU configuration
- The M2 Ultra is a 2nd-gen Mac chip designed for typical use-case sweet spots, so fringe workloads, such as large scientific/research data sets, will likely do poorly because Apple did not identify them as typical Mac use cases
- Apple does not sell the M2 Ultra as a part but as a component of a turnkey system, either the $3,999 Mac Studio or the $6,999 Mac Pro
- The M2 Ultra Mac Studio's maximum power consumption is <295W, assuming the system is fully loaded and powering devices through all its I/O
- All the AMD & Intel chips listed are just CPU parts; they will vary in price & power consumption depending on how they are configured in a system
- Some of the CPU parts are destined for servers, not workstations or desktops
- PassMark CPU Mark does not measure operational noise, which matters for quiet rooms and silent use cases; hence the priority on efficiency, and Macs are designed that way
- The PassMark CPU Mark benchmark is synthetic. It may not reflect the performance of the apps you will actually use on your Mac
- If your key concern is raw performance in a vacuum, then this is a great way to rank chips on a single benchmark
- The direction in the chip industry right now is specialization. If your primary concern is topping the PassMark CPU Mark benchmark, you are better off buying a $6,690 AMD EPYC 9654 and a system around it that consumes 1kW-1.2kW (see the rough sketch below).
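As a rough illustration (not a real comparison), here is a small Python sketch of the performance-per-watt and performance-per-dollar framing, using only figures quoted in this thread: PassMark scores are normalized so the M2 Ultra is 1.0 and the EPYC is ~3x per the post above; the Mac Studio numbers are for a whole $3,999 turnkey system drawing <295W, while the EPYC line pairs the $6,690 CPU-part price with the 1kW-1.2kW loaded-system figure, so it is very much apples to oranges.

```python
# Perf/W and perf/$ sketch using only figures quoted in this thread.
# Scores are normalized (M2 Ultra = 1.0, EPYC ~3x per the earlier post),
# so no absolute PassMark numbers are assumed.

systems = {
    # name: (relative score, watts under load, price in USD)
    "Mac Studio M2 Ultra (whole system)": (1.0, 295, 3999),
    "AMD EPYC 9654 (CPU price, system W)": (3.0, 1100, 6690),
}

for name, (score, watts, price) in systems.items():
    print(f"{name:36s} perf/W = {score / watts:.5f}   perf/$ = {score / price:.6f}")
```

Under these assumptions the EPYC wins on raw score and perf/$ for the CPU part, while the Mac Studio wins on perf/W, which is the whole point of the efficiency argument above.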
 
Last edited:
Some of the heralded performance comes from the Media Engine components in Apple Silicon, which greatly accelerate many ProRes operations on the platform. This is not pure CPU computing power but specific technology that accelerates those tasks well beyond what a normal, generic CPU architecture can handle; no generic x86 technology can match this level of acceleration.
 
Some of the heralded performance comes from the Media Engine components in Apple Silicon, which greatly accelerate many ProRes operations on the platform. This is not pure CPU computing power but specific technology that accelerates those tasks well beyond what a normal, generic CPU architecture can handle; no generic x86 technology can match this level of acceleration.
This is why Intel is proposing the legacy-reduced x86S.

This will lower the cost of producing & using Intel chips & free up die area for more relevant components.

Anyone wanting legacy x86 chips will be forced to use past parts, or future parts at higher-than-average Intel price points due to reduced economies of scale. It may become as bad as the $1k price hike of the 2023 Mac Pro.

Apple has very good competitive-analysis and performance/use-case dev teams, so they keep making designs that seem to hit most sweet spots.

Mac chips may not be the best in all their intended use cases, but they are "good enough" to offer a good value proposition to their target markets.
 
Last edited:
This is why Intel is proposing the legacy-reduced x86S.

This will lower the cost of producing & using Intel chips & free up die area for more relevant components.

Anyone wanting legacy x86 chips will be forced to use past parts, or future parts at higher-than-average Intel price points due to reduced economies of scale. It may become as bad as the $1k price hike of the 2023 Mac Pro.

Presumably legacy aspects of x86 can just be emulated in software / a VM? Especially given that any software that needs them will have targeted (very) old hardware, plus future x86 generations will only be getting faster.
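As a toy illustration of the trap-and-emulate idea being floated here (and nothing more; this does not reflect Intel's actual x86S proposal or any real VM), the sketch below shows a "CPU" that has dropped a legacy operation faulting into a software handler that emulates it. The opcodes and machine model are invented for the example.

```python
# Toy trap-and-emulate: hardware that no longer implements a legacy operation
# raises a fault, and a software layer (VM / hypervisor-style handler)
# emulates it instead. Not real x86 emulation; purely illustrative.

class UnsupportedInstruction(Exception):
    pass

class TinyCPU:
    """Toy hardware that only implements ADD; anything else faults."""
    def __init__(self):
        self.acc = 0

    def execute(self, op, arg):
        if op == "ADD":
            self.acc += arg
        else:
            raise UnsupportedInstruction(op)

def legacy_emulator(cpu, op, arg):
    """Software fallback for operations the hardware no longer implements."""
    if op == "LEGACY_MUL":
        # Emulate the dropped multiply using the ADD the hardware still has.
        total = 0
        for _ in range(arg):
            total += cpu.acc
        cpu.acc = total
    else:
        raise RuntimeError(f"no software emulation for {op}")

def run(program):
    cpu = TinyCPU()
    for op, arg in program:
        try:
            cpu.execute(op, arg)           # fast path: native hardware
        except UnsupportedInstruction:
            legacy_emulator(cpu, op, arg)  # slow path: trap to software
    return cpu.acc

# 5, then 5*3 = 15, then +2 = 17
print(run([("ADD", 5), ("LEGACY_MUL", 3), ("ADD", 2)]))
```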

Apple has very good competitive-analysis and performance/use-case dev teams, so they keep making designs that seem to hit most sweet spots.

Mac chips may not be the best in all their intended use cases, but they are "good enough" to offer a good value proposition to their target markets.

I get what you're saying, but this argument is a bit circular, since the 'target market' tends to get circumscribed around what ASi happens to be good at. What about people like @richinaus (and myself) who then need to use an x86 PC in addition to a Mac, in order to get the features we need?

By this logic, ASi is always brilliant, because anything it isn't good at gets defined as outside of its target market.
 
Yes, the latest 13th-gen Intel CPUs outperform any M2 Ultra chip. So a 13th-gen Intel hackintosh is a solution for anyone who needs a computer like the Intel Mac Pro. You can also use macOS or native Windows 11.
Apple Silicon has forced Intel to focus solely on closing the gap, and when it does that we see Intel's engineering side shine. Intel's biggest nemesis is that it's really a marketing-driven company when there is no competition, which neuters any real progress. The marketing department focuses on wringing every extra dollar from customers by constraining cores in lower-cost chips and charging a premium for the higher-end SKUs. Intel marketing is only kept at bay when Intel is being run over.
 
  • Like
Reactions: wegster
When it comes to workstations/raw power: NOBODY CARES ABOUT LOW POWER DRAW, and this is why Mac Silicon fails (currently).
Really, they do. Power and heat are the biggest issues in the datacenter for servers, and nobody wants a jet engine deafening them in a cubicle. It might be good as a DIY space heater in winter, I guess.
 
Apple Silicon has forced Intel to focus solely on closing the gap, and when it does that we see Intel's engineering side shine. Intel's biggest nemesis is that it's really a marketing-driven company when there is no competition, which neuters any real progress. The marketing department focuses on wringing every extra dollar from customers by constraining cores in lower-cost chips and charging a premium for the higher-end SKUs. Intel marketing is only kept at bay when Intel is being run over.
From 2006-2020 Intel had a monopoly on all PC OEMs.

This is likely why Intel got stuck at 14nm from 2014-2020.

They were magically able to ship 10nm chips in 2020... weeks before WWDC 2020.
 
These are industry-standard applications used by millions of people and not developed by Apple. On Windows, none of these applications use Metal.

Looks like some are on some sort of crusade to bad-mouth Apple Silicon chips. There is documented evidence that in many applications the latest M2 Ultra easily beats any combination of Intel/AMD CPU and AMD/Nvidia GPU. These are not trivial applications but industry-standard ones used by millions of people.

All I see in this thread is discussion of some obscure synthetic PassMark benchmark and declarations that Apple Silicon sucks because it doesn't do well on it.
 
Last edited:
Maybe, but you'd need to have an electrician install a three-phase outlet to run it.
This. I'm not really sure why people still aren't getting this.

It's never been about outright performance. Otherwise they'd have built bricks that could fit the latest Intel/AMD/Nvidia powerhouses in them.

It's about performance per watt, i.e. efficiency via dedicated computing units, and how that translates to real workflows.

They've been saying this for literally decades.

If you want outright power without care for efficiency, then buy a Puget system or build a hackintosh.
 
Last edited:
I get what you're saying, but this argument is a bit circular, since the 'target market' tends to get circumscribed around what ASi happens to be good at. What about people like @richinaus (and myself) who then need to use an x86 PC in addition to a Mac, in order to get the features we need?

By this logic, ASi is always brilliant, because anything it isn't good at gets defined as outside of its target market.
I crunched some numbers to ballpark how many Mac Pro users actually share your concern about x86 and swappable parts.

It is about 3,000 units per year in lost sales, out of about 15,000 Mac Pros per year. By comparison, about 60,000 Mac Studios per year are likely sold.
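For what it's worth, here is the back-of-the-envelope arithmetic in Python, using only the ballpark figures in this post (my own estimates, not published Apple data; the 28.6 million total is the estimate cited just below).

```python
# Rough sizing of the "needs x86 / swappable parts" Mac Pro niche relative
# to the wider Mac line-up. All inputs are the ballpark estimates from this
# post, not official Apple figures.

total_macs_per_year  = 28_600_000  # estimated total Macs shipped last year
mac_pros_per_year    = 15_000      # ballpark annual Mac Pro volume
mac_studios_per_year = 60_000      # ballpark annual Mac Studio volume
needs_x86_per_year   = 3_000       # Mac Pro buyers who really need x86 / swappable parts

print(f"Share of Mac Pro buyers affected: {needs_x86_per_year / mac_pros_per_year:.0%}")    # ~20%
print(f"Share of all Macs affected:       {needs_x86_per_year / total_macs_per_year:.3%}")  # ~0.010%
print(f"Mac Pro share of all Macs:        {mac_pros_per_year / total_macs_per_year:.3%}")   # ~0.052%
print(f"Mac Studio share of all Macs:     {mac_studios_per_year / total_macs_per_year:.3%}")# ~0.210%
```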

By comparison, Apple moved an estimated 28.6 million Macs last year. They're extremely happy with the efficiency improvements of more specialized Mac chips that did not introduce cores for fringe use cases.

So happy that it emboldened Qualcomm & Microsoft to try again with Windows 11 on Snapdragon.

There are a number of write-ups of a very rosy Windows 11 on ARM laptop future.





I'd watch that 4+ year-old YouTube video that was made 23 months prior to the M1 release.

Almost all of its points have come true since then.
 
Last edited:
  • Like
Reactions: wegster
The original post shows some synthetic benchmark where a PC beats the latest Apple chip. Others declare that Apple Silicon sucks because of some synthetic benchmarks. In some real-world benchmarks of actual applications, Apple Silicon demolishes the latest PCs.

What is more important, some synthetic benchmarks or real-world applications that millions of people use?
 
Last edited by a moderator:
The original post shows some synthetic benchmark where a PC beats the latest Apple chip. Others declare that Apple Silicon sucks because of some synthetic benchmarks. In some real-world benchmarks of actual applications, Apple Silicon demolishes the latest PCs.

What is more important, some synthetic benchmarks or real-world applications that millions of people use?
Synthetic benchmarks are important when the real-world app isn't available on all platforms.

So they are somewhat neutral.

The problem with that approach is that chip makers will start designing their chips around that rather unusual & impractical use case for the purpose of improving their ranking in an app that typical people do not buy their computers for. To me that is wasted die space for the sake of marketing.

I wonder if Intel's x86S will reduce/remove that part of the microarchitectural design and copy Apple's approach of making design choices that hit most sweet spots.

Any niche use case can be relegated to an accelerator card, like, say, a legacy x86 accelerator card.
 
At some point Apple will have to design a chip that is allowed to use more power, as nobody seems to care about performance/watt on desktop computers.

Except when it's between graphics cards with nearly the same performance; then for some reason it matters.

This is my takeaway also

They need some additional designs that are ready to rip and shred at the expense of some efficiency (perf/watt)

Max efficiency just isn't a thing a power user on the desktop is prioritizing at all (understandably)
 
This is my takeaway also

They need a design that's ready to rip and shred ... and be less concerned about perf/watt

It just isn't a thing a power user on the desktop is prioritizing at all (nor should they be)

Threadripper + Nvidia dGPU would be a better fit today.


 
Last edited:
  • Like
Reactions: HDFan
I crunched some numbers to ballpark how many Mac Pro users actually share your concern about x86 and swappable parts.

It is about 3,000 units per year in lost sales, out of about 15,000 Mac Pros per year. By comparison, about 60,000 Mac Studios per year are likely sold.

To be fair, that ignores the majority of the market, which never bought Mac workstations in the first place. At this point you’d need to be pretty dedicated to macOS to consider buying one over a PC. Swathes of Apple would clearly love to kill off the Mac Pro, and they likely will once they have solid sales figures vs the Studio.

But for laptops, sure, ASi no contest.
 
To be fair, that ignores the majority of the market, which never bought Mac workstations in the first place. At this point you’d need to be pretty dedicated to macOS to consider buying one over a PC. Swathes of Apple would clearly love to kill off the Mac Pro, and they likely will once they have solid sales figures vs the Studio.

But for laptops, sure, ASi no contest.
The PC workstation market as a whole is <7.7 million units annually.

If you need more general-purpose raw compute and an Nvidia dGPU, then macOS is a dead end.

The Mac Pro still has a market worth servicing, but if Apple has to move mountains to address the ~3,000/year potential sales from niche users... then it is not worth the worry. Let x86 have that segment; it is a net gain to let them go.

Mac chips satisfactorily address ~99.99% of all users' concerns. I think that's a win. It would be cheaper and better for everyone in that remaining ~0.01% to go Threadripper + Nvidia dGPU.

Although by the 2030s the very same problem Intel Mac Pro users have today will repeat itself, when Windows 11 on ARM succeeds.
 
Really, they do. Power and heat are the biggest issues in the datacenter for servers, and nobody wants a jet engine deafening them in a cubicle. It might be good as a DIY space heater in winter, I guess.
I feel like every time not caring about power efficiency is brought up, it's made out to be some crazy extreme, like “no one wants to wire up three-phase outlets!” or “no one wants a jet engine in their cube!”

There’s a lot of room between the M2 Ultra and “jet engine in a cube.” The old Mac Pro could draw up to 1,000 watts without three-phase power outlets or jet-engine fans.

If we “just” talked about a 500W system, that would still be a significant performance jump over the M2 Ultra.
 
  • Like
Reactions: prefuse07
If we “just” talked about a 500W system, that would still be a significant performance jump over the M2 Ultra.

The Mac Studio M2 Ultra's maximum power consumption is <295W.

Please point to an online review of a $3,999 turnkey system, without display, keyboard or mouse, that has an acoustic performance of 6dB at idle.
 