Thanks for the benchmark comparing the M4 Max and an RTX 4080.

Unfortunately, I have to conclude that this is quite disappointing for the M4, because an RTX 4080 easily outperforms any M4. And we're not even talking about a 4090, and soon a 5090!
 
Here you have 3D benchmarks comparing a MacBook Pro with the M4 Max vs. a desktop PC with an RTX 4080 Super.
It only proves that the M4 Max is still slower than the mobile RTX 4080. Besides, the RTX 40 series is built on the same TSMC 5nm-class process the M1 series used, so Apple has a three-generation node advantage over Nvidia.

Again, benchmarks don't really prove anything in real life. It would be better to run Cyberpunk 2077 and test it in real time.
 
It only proves that the M4 Max is still slower than the mobile RTX 4080. Besides, the RTX 40 series is built on the same TSMC 5nm-class process the M1 series used, so Apple has a three-generation node advantage over Nvidia.

Again, benchmarks don't really prove anything in real life. It would be better to run Cyberpunk 2077 and test it in real time.
Are you kidding me? Have you seen a 4080 Super? The size of it, and the amount of power it consumes?
 
  • Haha
Reactions: sunny5
Note the PCs were only using 9th / 10th gen Intel CPUs, and the Nvidia GPUs were using CUDA rather than OptiX.
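For anyone who wants to re-run that kind of test with the RT cores actually in play, here's a minimal sketch of switching Cycles from CUDA to OptiX via Blender's Python API. This assumes the benchmark was Blender Cycles, which the video doesn't actually confirm.

```python
# Minimal sketch: switch Blender Cycles onto OptiX so the RT cores are used.
# Assumes a recent Blender build; on a Mac the equivalent backend is "METAL".
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"      # instead of "CUDA"
prefs.get_devices()                      # refresh the detected device list

for device in prefs.devices:
    device.use = True                    # enable every detected GPU

bpy.context.scene.cycles.device = "GPU"  # render on the GPU rather than the CPU
```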
 
Are you kidding me? Have you seen a 4080 Super? The size of it, and the amount of power it consumes?
Since you brought up power, tell me the performance difference between the M1 Max and the mobile RTX 4080. You see, you are totally ignoring the advantage Apple has with 3nm, which is three generations ahead of anyone else. Don't forget that the M4 Max's performance per watt is achieved thanks to 3nm.
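To make the performance-per-watt point concrete, here's a trivial sketch - every score and wattage below is a made-up placeholder, not a measurement from the video:

```python
# Toy performance-per-watt comparison. All numbers are ILLUSTRATIVE
# placeholders, not measured results.
results = {
    "M4 Max (laptop)":          {"score": 100.0, "watts": 60.0},
    "RTX 4080 Mobile":          {"score": 120.0, "watts": 150.0},
    "RTX 4080 Super (desktop)": {"score": 180.0, "watts": 320.0},
}

for name, r in results.items():
    print(f"{name:>26}: {r['score'] / r['watts']:.2f} points per watt")
```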
 
Last edited:
Since you brought up power, tell me the performance difference between the M1 Max and the mobile RTX 4080.
If a single benchmark can really say much:

(sorry, updated with better context)

[attached benchmark screenshot]


 
Last edited:
These were desktop GPUs - see the video’s description.

There are two aspects to this. If we’re purely considering architecture, then one should factor in the M4’s process node advantage, as this has a big influence on power efficiency. OTOH, if we’re just looking at this in terms of products you can actually buy, regardless of when they came out etc., then the M4 Max is obviously impressive.

By the time the Studio version comes out, though, the Nvidia 50 series will also be out, and the 5090 will almost certainly slaughter the M4 Ultra. God knows how much power it’ll draw though.
 
  • Like
Reactions: throAU
There are two aspects to this. If we’re purely considering architecture, then one should factor in the M4’s process node advantage, as this has a big influence on power efficiency. OTOH, if we’re just looking at this in terms of products you can actually buy, regardless of when they came out etc., then the M4 Max is obviously impressive.
As a customer, I really don't care about process node vs. performance, etc. - this is the vendor's problem to solve. Or maybe an academic discussion, but not really relevant to the real world.

All I care about as an end customer is $/performance, power consumption and form factor. If Apple have a process node advantage, that's because they've done what is required as a business (i.e., spend the money to basically partner closely with TSMC) to get that and in turn get a better end product.

Trying to say Nvidia or whoever else is somehow "better" than their benchmark results and power metrics suggest, because they're achieving what they achieve on an older process node, is really irrelevant. The product they're shipping is what counts. That's what we can buy as customers. The implementation isn't my concern.

Cutting-edge process nodes aren't cheap and aren't risk free (see: the issues with the M3 generation on TSMC's previous node). Using an older node factors into the (reduced) production cost of the product.
 
Last edited:
As a customer, I really don't care about process node vs. performance, etc. - this is the vendor's problem to solve. Or maybe an academic discussion, but not really relevant to the real world.

All I care about as an end customer is $/performance, power consumption and form factor. If Apple have a process node advantage, that's because they've done what is required as a business (i.e., spend the money to basically partner closely with TSMC) to get that and in turn get a better end product.

Trying to say Nvidia or whoever else is somehow "better" than their benchmark results and power metrics suggest, because they're achieving what they achieve on an older process node, is really irrelevant. The product they're shipping is what counts. That's what we can buy as customers. The implementation isn't my concern.

Cutting-edge process nodes aren't cheap and aren't risk free (see: the issues with the M3 generation on TSMC's previous node). Using an older node factors into the (reduced) production cost of the product.

All good points. I guess there is a wider discussion of Mac vs PC for 3D artists though. The Mac is looking impressive, although various question marks remain - cost (high to very high, depending on upgrades), Apple’s commitment to the desktop, no internal upgrades, etc. The M4 Max has barely been released; one has to take a step back to look at the overall trends.

The RTX 40 series has been out a couple of years and is nearing end of life; the M2 was current at the time (still is for Mac Studio users!). The 50 series may demonstrate once again the benefit of PCIe cards, if e.g. a 5070 beats an M4 Ultra.

The video discussed above omitted to mention that the PC being compared with the latest Mac was running a five-generation-old CPU, and that the graphics card’s RT cores weren’t being used.
 
It seems Apple’s strategy is to always use the bleeding-edge process node. This brings great advantages in terms of speed and energy efficiency, but is also the most expensive (even more so when the chips are physically large). So Macs are fast but also inherently expensive, which matches Apple’s market position.

One issue with the SoC approach is that chips can only be made so big, and the GPU can only be allocated a certain percentage of it. Apple also seems to prioritise CPU cores, likely because they benefit a wider range of (typical Mac) tasks. So Macs will never be able to contain monster PC-style GPUs.

Macs seem incredibly strong for video work. The media engines are likely much more efficient than using general-purpose CUDA cores for video encode/decode. Yes, PC cards have hardware video codecs, but they’re likely geared towards Twitch streaming and media consumption: great for H.264/H.265, but they won’t support ProRes etc. You need a DeckLink or similar for that.
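You can see the split in practice with ffmpeg: VideoToolbox on a Mac exposes the media engines (including a ProRes encoder in recent builds), while NVENC on an RTX card covers H.264/HEVC but not ProRes. Here's a rough sketch of picking a hardware HEVC encoder per platform - the encoder names are real ffmpeg encoders, but whether they're available depends on how your ffmpeg was built, and the file names are just examples:

```python
# Rough sketch: pick a hardware HEVC encoder per platform and invoke ffmpeg.
import platform
import subprocess

def pick_hevc_encoder() -> str:
    if platform.system() == "Darwin":
        return "hevc_videotoolbox"   # Apple media engine via VideoToolbox
    return "hevc_nvenc"              # Nvidia NVENC on a typical RTX PC

def encode(src: str, dst: str) -> None:
    subprocess.run(
        ["ffmpeg", "-y", "-i", src, "-c:v", pick_hevc_encoder(), "-b:v", "20M", dst],
        check=True,
    )

if __name__ == "__main__":
    encode("input.mov", "output_hevc.mp4")   # example file names
```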
 
It seems Apple’s strategy is to always use the bleeding-edge process node. This brings great advantages in terms of speed and energy efficiency, but is also the most expensive (even more so when the chips are physically large). So Macs are fast but also inherently expensive, which matches Apple’s market position.

One issue with the SoC approach is that chips can only be made so big, and the GPU can only be allocated a certain percentage of it. Apple also seems to prioritise CPU cores, likely because they benefit a wider range of (typical Mac) tasks. So Macs will never be able to contain monster PC-style GPUs.

Macs seem incredibly strong for video work. The media engines are likely much more efficient than using general-purpose CUDA cores for video encode/decode. Yes, PC cards have hardware video codecs, but they’re likely geared towards Twitch streaming and media consumption: great for H.264/H.265, but they won’t support ProRes etc. You need a DeckLink or similar for that.
There are rumors that Apple will ditch the monolithic SoC design for TSMC's 3DFabric, which would allow them to design and manufacture each component separately and then combine them all together, so that chips can get a bigger GPU, be easier to mass produce, and be cheaper to make. This would also help the Ultra and Extreme level chips, since you don't need to take on such high risks and fees.
 
There are rumors that Apple will ditch the monolithic SoC design for TSMC's 3DFabric, which would allow them to design and manufacture each component separately and then combine them all together, so that chips can get a bigger GPU, be easier to mass produce, and be cheaper to make. This would also help the Ultra and Extreme level chips, since you don't need to take on such high risks and fees.
Apple moving to chiplets is very much a matter of "when" not "if".

The M5 might still be monolithic, but I'll be shocked if the M6 and M7 are still monolithic.

Chiplets are coming for all big SoCs.

That being said, how exactly Apple chops up its CPUs, GPUs, I/O, memory, etc. is unknown, as there are lots of options.
 
  • Like
Reactions: Chuckeee
Apple moving to chiplets is very much a matter of "when" not "if".

The M5 might still be monolithic, but I'll be shocked if the M6 and M7 are still monolithic.

Chiplets are coming for all big SoCs.

That being said, how exactly Apple chops up its CPUs, GPUs, I/O, memory, etc. is unknown, as there are lots of options.
Nevertheless, monolithic SoCs have proven a failure for desktop-grade chips, so they really need to bring in a chiplet-based design; they're wasting their time and money on Ultra chips, which are really difficult to mass produce and extremely expensive.

If they can manage to use chiplet-based SoCs for the Mac, they can increase performance far more dramatically, especially for desktops and the GPU itself.
 
Nevertheless, monolithic SoCs have proven a failure for desktop-grade chips, so they really need to bring in a chiplet-based design; they're wasting their time and money on Ultra chips, which are really difficult to mass produce and extremely expensive.

If they can manage to use chiplet-based SoCs for the Mac, they can increase performance far more dramatically, especially for desktops and the GPU itself.

Chiplets are generally used for two reasons - providing flexibility for lots of SKUs, and mixing process nodes (with the cheaper / older one used for secondary stuff like I/O controllers). Apple use a very limited number of SKUs, with no variation in clock speed (just a little binning). They also want the minimum possible power consumption and don't mind being expensive, so would probably prefer to just use the latest node for everything.

The vast majority (95% IIRC) of their sales are laptops, and even their desktops are essentially laptops, just with a big / no screen. Chiplets would only really benefit higher-end Studio SKUs, and Apple probably can't be bothered to do something different there, given the low number of sales.
 
This would be infinitely dumber than just discontinuing the Mac Pro.

If they don't give the Mac Pro multiple M4 chips this time... it should be discontinued. If the same hardware can fit inside a Mac Studio case, then that is where it belongs.
A dual-CPU Mac Pro would be incredible. I’m sure someone at Apple is working on it.

That being said, Apple seems to have a small, captive audience in the creative industry still willing to pay thousands of dollars for expansion slots under macOS. I miss when Mac Pros had more mass-market appeal, but that was because you could actually upgrade things yourself, and dual CPUs don’t change that.
 
Chiplets are generally used for two reasons - providing flexibility for lots of SKUs, and mixing process nodes (with the cheaper / older one used for secondary stuff like I/O controllers). Apple use a very limited number of SKUs, with no variation in clock speed (just a little binning). They also want the minimum possible power consumption and don't mind being expensive, so would probably prefer to just use the latest node for everything.

The vast majority (95% IIRC) of their sales are laptops, and even their desktops are essentially laptops, just with a big / no screen. Chiplets would only really benefit higher-end Studio SKUs, and Apple probably can't be bothered to do something different there, given the low number of sales.
I think there is more at play here than the above suggests.

Chiplets also have these potential advantages:

- improved yields (i.e. lower cost) for the same huge number of transistors (see the sketch at the end of this post);

- ability to hit package transistor numbers beyond reticle size limits;

- more options for how to scale up SoC package performance.

The M3 Max was already at 92 billion transistors, and the M4 Max is probably around 110 billion. Eventually Apple will hit the limit, and it was rumoured that sub-2nm process nodes would have noticeably smaller chip size limits.

I agree that Apple doesn't really want to dedicate engineering just for high end desktop products, but I am confident that they're willing to build an architecture that they can scale all the way from small Macs to huge Macs.
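To put a rough number on the yield point, here's a toy Poisson yield model. The defect density and die areas are assumptions picked purely for illustration, not TSMC or Apple figures, and packaging/interconnect overhead is ignored:

```python
# Toy Poisson yield model: why four smaller chiplets can be cheaper than one
# huge monolithic die, assuming chiplets are tested before packaging
# ("known good die"). All numbers are ILLUSTRATIVE assumptions.
import math

def die_yield(area_mm2: float, d0_per_mm2: float) -> float:
    """Probability that a die of the given area has no defects."""
    return math.exp(-area_mm2 * d0_per_mm2)

D0 = 0.001            # assumed defect density, defects per mm^2
MONO_AREA = 800.0     # one big monolithic die, near the reticle limit (mm^2)
CHIPLET_AREA = 200.0  # one of four chiplets giving the same total silicon (mm^2)
N_CHIPLETS = 4

# Relative silicon cost ~ area / yield (wafer area burned per good die/package).
cost_mono = MONO_AREA / die_yield(MONO_AREA, D0)
cost_chiplets = N_CHIPLETS * CHIPLET_AREA / die_yield(CHIPLET_AREA, D0)

print(f"Monolithic die yield: {die_yield(MONO_AREA, D0):.1%}")
print(f"Single chiplet yield: {die_yield(CHIPLET_AREA, D0):.1%}")
print(f"Silicon cost ratio, monolithic vs chiplets: {cost_mono / cost_chiplets:.2f}x")
```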
 
  • Like
Reactions: Chuckeee
That being said, Apple seems to have a small, captive audience in the creative industry still willing to pay thousands of dollars for expansion slots under MacOS. I miss when Mac Pros had more mass-market appeal but it was because you could actually upgrade things yourself, and dual-CPU doesn’t change that.

Those expansion slots are pretty dumb when they can't use GPUs. Particularly when Thunderbolt keeps getting faster. Ultimately it comes down to whether a Mac Pro can have more RAM and CPU/GPU power than the best Mac Studio. If not, it's just a dumb product. Not saying it's a completely useless product, but it is a dumb compromised product.
 
A dual-CPU Mac Pro would be incredible. I’m sure someone at Apple is working on it.

That being said, Apple seems to have a small, captive audience in the creative industry still willing to pay thousands of dollars for expansion slots under macOS. I miss when Mac Pros had more mass-market appeal, but that was because you could actually upgrade things yourself, and dual CPUs don’t change that.
I'd be absolutely shocked if Apple went to a multi-socket motherboard.

Chiplets are on the horizon, and Apple can build chips right-sized for their products.

Apple doesn't even want to go to the motherboard to access RAM, let alone to talk to another SoC package.
 
Those expansion slots are pretty dumb when they can't use GPUs. Particularly when Thunderbolt keeps getting faster. Ultimately it comes down to whether a Mac Pro can have more RAM and CPU/GPU power than the best Mac Studio. If not, it's just a dumb product. Not saying it's a completely useless product, but it is a dumb compromised product.
Mainly for the capture cards / interfaces that creative professionals rely on. The audience is extremely niche, but obviously it still exists in sufficient numbers for Apple to keep the Mac Pro around (and at a price point high enough to make up for the lack of volume).
 
  • Like
Reactions: DavidSchaub
I'd be absolutely shocked if Apple went to a multi-socket motherboard.

Chiplets are on the horizon, and Apple can build chips right-sized for their products.

Apple doesn't even want to go to the motherboard to access RAM, let alone to talk to another SoC package.

They obviously have a problem building their chips out too far in a single package; otherwise they wouldn't make the Mac Pro just a Mac Studio with slots and a lot of air.

Apple doesn't want motherboard RAM, but they absolutely need it if they aren't doing multiple SoCs. It's absurd that the M2 Mac Pro is so limited on RAM when the last Intel Mac Pros could go up to 1.5TB.

There are other ARM computers that have motherboard RAM, such as Ampere's. There's absolutely no reason for Apple not to do fast unified SoC RAM plus slower motherboard RAM, other than them being foolish and stubborn.
 
I don't know what they are doing so far, but eventually they'll need to ditch the monolithic SoC design, at least for desktops and workstations, due to high manufacturing fees, extremely low yields, high risk, the niche market, and more.

Some say you don't need a desktop or workstation, but that only limits what the Mac can do, especially compared to high-end desktops and workstations for users who really need the best performance, while Apple could even build their own servers with the Mac Pro, which is hard to ignore. Besides, with AI development being so important now, Apple Silicon has its own advantages while Nvidia is the only real solution, so this is the best opportunity to take advantage of AI; it could sell tons of machines, and Apple could even use them for their own purposes.

Anyway, they'll need to prove that they can make desktop- and workstation-grade Apple Silicon with an expandable design.
 