Here you have 3D benchmarks comparing a MacBook Pro M4 Max vs. a desktop PC with an RTX 4080 Super.
This is the type of slide deck Apple would use at a presentation, with the 'trust us bro' disclaimer at the bottom in tiny font.
It only proves that the M4 Max is still slower than a mobile RTX 4080. Besides, the RTX 40 series is based on the same TSMC 5nm that the M1 series used; that's a three-generation node advantage over Nvidia.
Are you kidding me? Have you seen a 4080 Super? The size of it and the amount of power it consumes?
Again, benchmarks don't really prove anything in real life. Better to bring in Cyberpunk 2077 and test it in real time.
Since you talked about power, tell me the performance difference between the M1 Max and the RTX 4080M. You see, you are totally ignoring the advantage Apple has with 3nm, which is three generations ahead of anyone else. Don't forget that the M4 Max's performance per watt is achieved thanks to 3nm.
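(If it helps frame that question: performance per watt is just benchmark score divided by sustained power draw. A minimal sketch of the comparison being argued about here, with made-up placeholder numbers rather than real measurements, might look like this:)

```python
# Sketch: comparing GPUs on raw speed vs. performance per watt.
# All scores and wattages below are made-up placeholders, NOT real
# measurements; substitute your own benchmark results and measured
# sustained power draw.

systems = {
    # name: (benchmark score, sustained power in watts), hypothetical
    "M4 Max (laptop)":          (1000, 60),
    "RTX 4080M (laptop)":       (1200, 150),
    "RTX 4080 Super (desktop)": (1800, 320),
}

baseline = "M4 Max (laptop)"
base_score, base_watts = systems[baseline]
base_ppw = base_score / base_watts

for name, (score, watts) in systems.items():
    ppw = score / watts
    print(f"{name:26s} score={score:5d}  power={watts:3d} W  "
          f"perf/W={ppw:6.2f}  "
          f"speed vs baseline={score / base_score:.2f}x  "
          f"efficiency vs baseline={ppw / base_ppw:.2f}x")
```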
If a single benchmark can really say much:
[Attachment: benchmark chart. See the Mobile Graphics Cards - Benchmark List at www.notebookcheck.net]
Apple always takes advantage with nanometers, and the RTX 40 series was released two to three years ago. Matching the M4 series against the RTX 40 series is already a joke, but good try.
There are two aspects to this. If we're purely considering architecture, then one should factor in the M4's process node advantage, as this has a big influence on power efficiency. OTOH, if we're just looking at this in terms of products you can actually buy, regardless of when they came out etc., then the M4 Max is obviously impressive.
As a customer, I really don't care about process node vs. performance, etc. - this is the vendor's problem to solve. Or maybe an academic discussion, but not really relevant to the real world.
All I care about as an end customer is $/performance, power consumption and form factor. If Apple has a process node advantage, that's because they've done what is required as a business (i.e., spent the money to partner closely with TSMC) to get it and, in turn, ship a better end product.
Trying to say Nvidia or whoever else is somehow "better" than the performance benchmarks and power metrics suggest, because they're getting what they get on an older process node, is really irrelevant. The product they're shipping is what counts; that's what we can buy as customers. The implementation isn't my concern.
Cutting-edge process nodes aren't cheap and aren't risk-free (see: the issues the M3 generation had on TSMC's previous node). Using an older node factors into the (reduced) production cost of the product.
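(The "$/performance" framing above is just as easy to make concrete. A sketch, again with hypothetical prices and scores rather than real figures:)

```python
# Sketch: the end-customer view, ranking systems by performance per dollar.
# Prices and scores are hypothetical placeholders, not real quotes or
# benchmark results; plug in numbers for the machines you are comparing.

candidates = [
    # (name, benchmark score, street price in USD), all made up
    ("MacBook Pro M4 Max",          1000, 4000),
    ("Gaming laptop (RTX 4080M)",   1200, 2500),
    ("Desktop PC (RTX 4080 Super)", 1800, 2200),
]

for name, score, price in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
    print(f"{name:28s} {score / price:.3f} points per dollar")
```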
It seems Apple's strategy is to always use the bleeding-edge process node. This brings great advantages in terms of speed and energy efficiency, but it is also the most expensive option (even more so when the chips are physically large). So Macs are fast but also inherently expensive, which matches Apple's market position.
One issue with the SoC approach is that chips can only be made so big, and the GPU can only be allocated a certain percentage of it. Apple also seems to prioritise CPU cores, likely because they benefit a wider range of (typical Mac) tasks. So Macs will never be able to contain monster PC-style GPUs.
Macs seem incredibly strong for video work. The media engines are likely much more efficient than using general-purpose CUDA cores for video encode/decode. Yes, PC cards have hardware video codecs, but they're likely geared to Twitch streaming and media consumption: great for H.264/5, but they won't support ProRes etc. You need a DeckLink or similar for that.
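(On the media-engine point: one quick way to see which hardware encode paths a given machine actually exposes is sketched below. It assumes ffmpeg is installed and on the PATH; "videotoolbox" entries correspond to Apple's media engines and "nvenc" entries to NVIDIA's encoder.)

```python
# Sketch: list the hardware video encoders a local ffmpeg build exposes.
# Assumes ffmpeg is installed and on the PATH. "videotoolbox" entries are
# Apple's media-engine paths, "nvenc" entries are NVIDIA's, and "prores"
# shows which ProRes encoders (hardware or software) are available.

import subprocess

out = subprocess.run(
    ["ffmpeg", "-hide_banner", "-encoders"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    if any(tag in line for tag in ("videotoolbox", "nvenc", "prores")):
        print(line.strip())
```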
There are rumors that Apple will ditch the monolithic SoC design for TSMC's 3DFabric, which would let them design and manufacture each component separately and then combine them, so that chips can get a bigger GPU, be easier to mass-produce, and be cheaper to make. This would also help Ultra- and Extreme-level chips, since you wouldn't need to take on such high risks and fees.
Apple moving to chiplets is very much a matter of "when", not "if".
The M5 might still be monolithic, but I'll be shocked if the M6 and M7 are.
Chiplets are coming for all big SoCs.
That being said, exactly how Apple chops up its CPUs, GPUs, I/O, memory, etc. is unknown, as there are lots of options.
Nevertheless, the monolithic SoC has proven a failure for desktop-grade chips, so they really need to bring in a chiplet-based design; they are wasting their time and money on Ultra chips, which are really difficult to mass-produce and extremely expensive.
If they can manage to use a chiplet-based SoC for the Mac, they can increase performance far more dramatically, especially for desktops and for the GPU itself.
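(To make the "build each component, then combine" idea concrete, here is a purely illustrative sketch. The chiplet types, unit counts, and SKU compositions are hypothetical, not anything Apple or TSMC has announced:)

```python
# Purely illustrative sketch of the chiplet idea discussed above: fabricate
# a few small dies and compose them into SKUs, instead of one huge
# monolithic die per product. Chiplet types, unit counts, and SKU names
# are hypothetical, not Apple's actual designs or roadmap.

from dataclasses import dataclass

@dataclass
class Chiplet:
    kind: str   # e.g. "cpu_cores", "gpu_cores", "io_controllers"
    units: int  # how many of that resource this die provides

def package(chiplets: list[Chiplet]) -> dict[str, int]:
    """Combine chiplets into one package and tally what the SKU gets."""
    totals: dict[str, int] = {}
    for c in chiplets:
        totals[c.kind] = totals.get(c.kind, 0) + c.units
    return totals

cpu = Chiplet("cpu_cores", 8)
gpu = Chiplet("gpu_cores", 16)
io  = Chiplet("io_controllers", 1)

# A bigger SKU is just more of the same small, high-yield dies, which is
# the mass-production argument made above.
print("Max-like:  ", package([cpu, gpu, gpu, io]))
print("Ultra-like:", package([cpu, cpu, gpu, gpu, gpu, gpu, io, io]))
```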
A dual-CPU Mac Pro would be incredible. I'm sure someone at Apple is working on it.
This would be infinitely dumber than just discontinuing the Mac Pro.
If they don't give the Mac Pro multiple M4 chips this time... it should be discontinued. If the same hardware can fit inside a Mac Studio case, then that is where it belongs.
I think there is more at play here than the above suggests. Chiplets are generally used for two reasons: providing flexibility for lots of SKUs, and mixing process nodes (with the cheaper/older one used for secondary stuff like I/O controllers). Apple uses a very limited number of SKUs, with no variation in clock speed (just a little binning). They also want the minimum possible power consumption and don't mind being expensive, so they would probably prefer to just use the latest node for everything.
The vast majority of their sales (95%, IIRC) are laptops, and even their desktops are essentially laptops, just with a big screen or none at all. Chiplets would only really benefit higher-end Studio SKUs, and Apple probably can't be bothered to do something different there, given the low number of sales.
That being said, Apple seems to have a small, captive audience in the creative industry still willing to pay thousands of dollars for expansion slots under macOS. I miss when Mac Pros had more mass-market appeal, but that was because you could actually upgrade things yourself, and dual-CPU doesn't change that.
Those expansion slots are pretty dumb when they can't use GPUs, particularly when Thunderbolt keeps getting faster. Ultimately it comes down to whether a Mac Pro can have more RAM and CPU/GPU power than the best Mac Studio. If not, it's just a dumb product. Not saying it's completely useless, but it is a dumb, compromised product.
Mainly for capture cards and interfaces, which creative professionals rely on. The audience is extremely niche, but it obviously still exists enough for Apple to keep the Mac Pro around (and at a price point high enough to make up for the lack of volume).
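(For context on "Thunderbolt keeps getting faster", a back-of-envelope comparison of nominal link rates; real-world throughput is lower, and the PCIe per-lane figure is rounded:)

```python
# Back-of-envelope: nominal Thunderbolt link rates vs. an internal PCIe
# slot, to put "Thunderbolt keeps getting faster" in context. These are
# nominal figures; real-world throughput is lower, and the PCIe per-lane
# number is rounded.

links_gbit = {
    "Thunderbolt 3/4": 40,        # 40 Gbit/s
    "Thunderbolt 5":   80,        # up to 120 Gbit/s in boosted mode
    "PCIe 4.0 x16":    16 * 16,   # ~16 Gbit/s per lane times 16 lanes
}

for name, gbit in links_gbit.items():
    print(f"{name:16s} ~{gbit:3d} Gbit/s  (~{gbit / 8:4.1f} GB/s)")
```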
I'll be absolutely shocked if Apple went to a multi-socket motherboard.
Chiplets are on the horizon, and Apple can build chips right-sized for their products.
Apple doesn't even want to go to the motherboard to access RAM, let alone to talk to another SoC package.
And the Pages icon will still bounce in the Dock for ten seconds before the app loads.