
Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
M3 Extreme, with hardware ray-tracing & 960GB ECC LPDDR5X RAM, will totally trounce a 7985WX (with hyper-threading off) and dual 5090s...!!! ;^p
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
I thought we didn't like Cinebench because it wasn't optimized for Apple Silicon?

R23 had a bunch of problems running on Apple Silicon (suboptimal code generation, and small problem sizes that x86 could likely run entirely from cache, making it unrealistically fast). R24 was updated to use the latest Embree code (with performance fixes submitted by Apple last year), and it uses more complex scenes. All this allows Apple Silicon to play its massive instruction- and memory-parallelism card and de-emphasises x86's higher SIMD throughput.
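As a rough illustration of the cache-size effect (a toy pointer-chasing microbenchmark, not Cinebench's actual workload): the same dependent-load loop runs far faster when its working set fits in cache than when it spills to DRAM, which is how a small problem size can flatter a big-cache part.

```cpp
// Toy sketch: time a pointer-chasing loop at two working-set sizes.
// The small set stays cache-resident; the large one is DRAM-bound.
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

// Follow a single-cycle index chain; each hop is a dependent load,
// so memory latency (cache vs DRAM) dominates the runtime.
static double chase_ns_per_hop(size_t n, size_t steps) {
    // Build a single-cycle permutation (Sattolo's algorithm) so the
    // chase visits every element before repeating.
    std::vector<size_t> next(n);
    std::iota(next.begin(), next.end(), size_t{0});
    std::mt19937_64 rng{42};
    for (size_t k = n - 1; k > 0; --k) {
        std::uniform_int_distribution<size_t> d(0, k - 1);
        std::swap(next[k], next[d(rng)]);
    }
    size_t i = 0;
    auto t0 = std::chrono::steady_clock::now();
    for (size_t s = 0; s < steps; ++s) i = next[i];
    auto t1 = std::chrono::steady_clock::now();
    volatile size_t sink = i;  // keep the loop from being optimized away
    (void)sink;
    return std::chrono::duration<double, std::nano>(t1 - t0).count() / steps;
}

int main() {
    // ~256 KiB working set: cache-resident on most modern CPUs.
    std::printf("small: %.1f ns/hop\n", chase_ns_per_hop(32 * 1024, 1 << 24));
    // ~512 MiB working set: mostly DRAM latency.
    std::printf("large: %.1f ns/hop\n", chase_ns_per_hop(64 * 1024 * 1024, 1 << 24));
}
```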

The GPU test, however, seems to favor Nvidia. We have Blender as a good litmus test: the relative performance of Apple GPUs is considerably higher in Blender than in Redshift. Since the hardware and the problem statement are the same, I'd interpret that as a lack of software optimization.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
For traditional 3D rasterization, Apple's TBDR GPU architecture has an inherent advantage over Nvidia's tiled IMR: much less overdraw. A lot of the RTX 4080's theoretical higher performance is being consumed drawing pixels you don't end up seeing.

This varies by scene. For example, scenes with a lot of transparent or translucent surfaces tend to favor brute force, since a TBDR GPU's efficiency gains are greatest when fully opaque polygons block out everything behind them.

And another advantage, probably more relevant in practice: better utilisation of cache locality and shader resources! An IMR shades parts of triangles immediately (usually in batches of 8x4 pixels or similar). Shading along the triangle edges means that some of the ALUs are not doing any work (since many of those 8x4 pixels will fall outside the triangle). If you have a lot of small triangles (as modern games do), this can add up. Apple, however (in the absence of transparency etc.), always shades the entire tile at once, after all triangles in it have been rasterized (usually 32x32 pixels). This ends up with more ALUs doing useful work and increases the likelihood of cache hits.

There are also drawbacks, of course. Apple's TBDR has to bin transformed geometry first, which means writing out transformed vertex data into a memory buffer. If you have a lot of geometry, or want to generate geometry on the fly, this can become an additional bottleneck. For example, with mesh shaders Nvidia can immediately send the generated triangles to the rasteriser, whereas Apple has to collect the results first and potentially send them to a different GPU core for rasterisation.
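To make the overdraw point concrete, here is a toy software model (screen-space rectangles standing in for triangles, submitted back-to-front, which is the worst case for an IMR's early-Z). It just counts hypothetical shader invocations; real GPUs are far more complicated:

```cpp
// Toy model contrasting immediate-mode shading with per-tile deferred
// shading. Opaque primitives arrive back-to-front, so the IMR's depth
// test passes everywhere and every layer gets shaded.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Rect { int x0, y0, x1, y1; float z; };  // half-open pixel bounds

constexpr int W = 256, H = 256;

int main() {
    // Three overlapping opaque layers, back-to-front.
    std::vector<Rect> prims = {
        {0, 0, 256, 256, 0.9f},    // background
        {32, 32, 224, 224, 0.5f},  // mid layer
        {64, 64, 192, 192, 0.1f},  // front layer
    };

    // IMR model: shade each covered pixel as the primitive arrives.
    long imr = 0;
    std::vector<float> depth(W * H, 1.0f);
    for (const Rect& r : prims)
        for (int y = r.y0; y < r.y1; ++y)
            for (int x = r.x0; x < r.x1; ++x)
                if (r.z < depth[y * W + x]) { depth[y * W + x] = r.z; ++imr; }

    // TBDR model: rasterize everything first, keep only the front-most
    // opaque primitive per pixel, then shade each visible pixel once.
    long tbdr = 0;
    std::vector<float> tile_depth(W * H, 1.0f);
    for (const Rect& r : prims)
        for (int y = r.y0; y < r.y1; ++y)
            for (int x = r.x0; x < r.x1; ++x)
                tile_depth[y * W + x] = std::min(tile_depth[y * W + x], r.z);
    for (float z : tile_depth)
        if (z < 1.0f) ++tbdr;

    std::printf("IMR shader invocations:  %ld\n", imr);   // 118784
    std::printf("TBDR shader invocations: %ld\n", tbdr);  // 65536
}
```

With just three nested opaque layers the immediate-mode count is nearly 2x the deferred one, and the gap grows with depth complexity.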
 
  • Like
Reactions: caribbeanblue

Basic75

macrumors 68020
May 17, 2011
2,101
2,448
Europe
Yeah - I was thinking they could build multi-chip just for desktops and keep their monolithic approach for mobile, but I don't think the desktop market is perceived to be big enough to warrant such an approach.
Or use a more Intel Meteor Lake-like approach. I believe that's designed for mobile first, unlike what AMD is doing with chiplets.
 

Pressure

macrumors 603
May 30, 2006
5,182
1,544
Denmark
It's clear that Apple has so far targeted its SoCs at laptops, and that is where they shine the most (so far).

Here is an example comparing the MacBook Pro M2 Max vs the Razer Blade 18 (Intel Core i9-13950HX and NVIDIA GeForce RTX 4070) in photography and video tasks.


[Attached charts: Lightroom Classic Preview 1K, Lightroom Classic Panorama 314MP, Lightroom CC Export 1K]
 

TigeRick

macrumors regular
Oct 20, 2012
144
153
Malaysia
Twitter Link

This leaker mentioned Qualcomm sharing performance numbers for the M3 Pro and M4; the numbers are from Geekbench 5. Let's take the average numbers shown below:

Geekbench 5

4P+4E:
M2: ST 1898, MT 8911
A17 Pro (2P+4E): ST 2200, MT 6200
M3: ST 2400, MT ?
M4: ST 2600, MT 13500

8P+4E:
M2 Pro: ST 1940, MT 14965

8P+6E:
M3 Pro: ST 2400, MT 19000
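A quick sanity check on leaked numbers like these is the MT/ST ratio, which should roughly track the core configuration. A minimal sketch using the figures quoted above (leaked, not verified):

```cpp
// Compute multi-core / single-core ratios from the quoted (leaked,
// unverified) Geekbench 5 scores as a rough scaling comparison.
#include <cstdio>

int main() {
    struct Row { const char* chip; double st, mt; };
    const Row rows[] = {
        {"M2 (4P+4E)",     1898,  8911},
        {"M4 (4P+4E)",     2600, 13500},
        {"M2 Pro (8P+4E)", 1940, 14965},
        {"M3 Pro (8P+6E)", 2400, 19000},
    };
    for (const Row& r : rows)
        std::printf("%-16s MT/ST = %.1fx\n", r.chip, r.mt / r.st);
}
```

The 4P+4E parts land around 4.7-5.2x and the 8P+4E/8P+6E parts around 7.7-7.9x, so the leak is at least internally consistent with the core counts.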
 
Last edited:

APCX

Suspended
Sep 19, 2023
262
337
Twitter Link

This leaker mentioned Qualcomm sharing performance numbers for the M3 Pro and M4; the numbers are from Geekbench 5. Does anyone have old numbers from the M2???
That leaker has a mixed record so I’d take it with a pinch of salt.

I’d be very surprised if Qualcomm would have this information.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
That leaker has a mixed record so I’d take it with a pinch of salt.

Their numbers for the S24 Ultra also seem a tad high to me. To reach 23xx in GB5, the Cortex-X4 would have to be running at 4GHz+ frequencies, which I don't find believable. Unless that's a GB6 score.
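For the curious, the arithmetic behind that estimate, assuming GB5 single-thread scales roughly linearly with clock for a given core. The points-per-GHz baseline below is a placeholder assumption for illustration, not a measured Cortex-X4 figure:

```cpp
// Back-of-the-envelope: implied clock for a claimed GB5 ST score,
// given an ASSUMED (hypothetical, not measured) points-per-GHz rate.
#include <cstdio>

int main() {
    const double assumed_points_per_ghz = 560.0;  // hypothetical Cortex-X4 rate
    const double claimed_score = 2300.0;          // the "23xx" leak
    std::printf("implied clock: ~%.1f GHz\n",
                claimed_score / assumed_points_per_ghz);  // ~4.1 GHz
}
```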
 

TigeRick

macrumors regular
Oct 20, 2012
144
153
Malaysia
Their numbers for the S24 Ultra also seem a tad high to me. To reach 23xx in GB5, the Cortex-X4 would have to be running at 4GHz+ frequencies, which I don't find believable. Unless that's a GB6 score.
He did mention the S24U's number is from GB6 :p
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
It's clear that Apple has so far targeted its SoCs at laptops, and that is where they shine the most (so far).

Here is an example comparing the MacBook Pro M2 Max vs the Razer Blade 18 (Intel Core i9-13950HX and NVIDIA GeForce RTX 4070) in photography and video tasks.


[attached benchmark charts from the quoted post]


Poor testing: it seems he did not connect the power cable to the Razer laptop, which is obvious, just as others have mentioned.
 

APCX

Suspended
Sep 19, 2023
262
337

Poor testing: it seems he did not connect the power cable to the Razer laptop, which is obvious, just as others have mentioned.
You're saying the Razer performs worse without being tethered? Wow, sounds like a terrible PORTABLE computer then. No wonder the Mac is so far ahead.

Thanks for bringing this fatal flaw in Intel/Nvidia’s components to our attention.

Edit: Where is the proof that the Razer wasn't plugged in? All I saw was a few incoherent commenters claiming that because their brains are too small to believe a Mac can be faster.
 
Last edited:

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158

Poor testing: it seems he did not connect the power cable to the Razer laptop, which is obvious, just as others have mentioned.
This is the thing I don't understand. Every Mac laptop runs at full spec on battery. All the “Mac destroyer” laptops' performance drops off a cliff when unplugged from power. What good are the specs if they only apply when plugged in on a *portable* computer?

I thought the whole idea was that these machines are performance monsters that can be taken anywhere… as long as an outlet is nearby?
 
  • Like
Reactions: bcortens

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
This is the thing I don't understand. Every Mac laptop runs at full spec on battery. All the “Mac destroyer” laptops' performance drops off a cliff when unplugged from power. What good are the specs if they only apply when plugged in on a *portable* computer?

I thought the whole idea was that these machines are performance monsters that can be taken anywhere… as long as an outlet is nearby?
Then why do we even talk about the performance? Even the MBP requires outlet power when working with professional software. Laptops are still portable while desktops are not. Besides, the performance difference is already huge, which makes it pointless to compare Mac and PC.
 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
Then why do we even talk about the performance? Even the MBP requires outlet power when working with professional software. Laptops are still portable while desktops are not. Besides, the performance difference is already huge, which makes it pointless to compare Mac and PC.
No, the MBP line does not reduce performance based on plugged in or not. Don’t know where you got that idea.
 

APCX

Suspended
Sep 19, 2023
262
337
Then why do we even talk about the performance? Even the MBP requires outlet power when working with professional software.

No, no they absolutely do not.
Laptops are still portable while desktops are not. Besides, the performance difference is already huge, which makes it pointless to compare Mac and PC.
What are you smoking? The tests show the Mac being faster and yet you cry about being plugged in, even though there is no proof that the Razer wasn’t plugged in.
 

sunny5

macrumors 68000
Jun 11, 2021
1,838
1,706
No, the MBP line does not reduce performance based on plugged in or not. Don’t know where you got that idea.
The max performance is slower than a PC's, and that's a fact, since the MBP limits power to around 100W. Besides, once you start working, even an MBP will lose battery quickly, which makes it pointless to use unplugged.

The M2 Ultra is already slower than an RTX 4060 or RTX 3060 Ti, so I don't expect much from the M2 Max.
 

APCX

Suspended
Sep 19, 2023
262
337
The max performance is slower than a PC's, and that's a fact, since the MBP limits power to around 100W. Besides, once you start working, even an MBP will lose battery quickly, which makes it pointless to use unplugged.
No, again wrong. The video shows you are incorrect.
The M2 Ultra is already slower than an RTX 4060 or RTX 3060 Ti, so I don't expect much from the M2 Max.
Nonsense.
 
Last edited:
  • Like
Reactions: MRMSFC and MayaUser

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
The max performance is slower than a PC's, and that's a fact, since the MBP limits power to around 100W. Besides, once you start working, even an MBP will lose battery quickly, which makes it pointless to use unplugged.

The M2 Ultra is already slower than an RTX 4060 or RTX 3060 Ti, so I don't expect much from the M2 Max.
I’m sorry but what PC laptop are you making comparisons to?
 

bcortens

macrumors 65816
Aug 16, 2007
1,324
1,796
Canada
The max performance is slower than a PC's, and that's a fact, since the MBP limits power to around 100W. Besides, once you start working, even an MBP will lose battery quickly, which makes it pointless to use unplugged.

The M2 Ultra is already slower than an RTX 4060 or RTX 3060 Ti, so I don't expect much from the M2 Max.
Again, the M2 Ultra is only slower than those chips in specific benchmarks that appear to be solely optimized for Nvidia chips. I pointed out before that the Radeon 7900 XTX (which has compute capabilities on par with the 4080) was also slower than the 4060 Ti in the benchmark you provided. This demonstrates pretty clearly, to my mind, that the benchmark is only useful if you care about Redshift specifically. You ignored my last post pointing this out, and you seem to ignore any post that shows benchmarks that contradict you. I am happy to say that if you need Redshift, you should buy Nvidia.
 