
Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Assuming Gurman's "there won't be an M2 Extreme" narrative is true (it isn't), Apple's engineers must have run into huge issues they can't remedy. Switching to a NEW node won't solve that unless the whole SoC complex gets a radical redesign from day 0, moving away from the Lego-like M2 Max approach toward something like AMD's 3D-stacked Epyc or Ryzen. If the issue really is tied to the Lego-like M2 Max approach, Apple likely can't apply a remedy until the N3B M4 is sent to the foundries.

So N3B wafers have been in production since December 2022, the highest-level M2-series SoC is the M2 Max, there has been zero debut of any sort of M3-series product, but the M4 is going to be on N3B...?

Ah, the final M2 Extreme name seems to be Apple M2 Ultra²

or just M2 Squared

Both ridiculous; the former too Fast & too Furious, the latter indicative of multiple base Mn SoCs...


It is possible that the version of the A17 Bionic that Apple wanted fabricated on the same 3nm process was not successful in terms of power consumption and heat generation, which is why the performance target may have dropped. However, if the technology giant has reached an unscalable obstacle with this silicon, the M3 SoC meant for future Macs may also experience the same setbacks.

Maybe not ideal for mobile SoCs stuffed in an iPhone chassis, but shouldn't really be an issue for larger SoCs in a desktop chassis running off of wall power...

Indeed, pray for an M2 Quad/Squared/whatever ASi Mac Pro, or wait until an M4 Mac Pro finally happens.

Again you are skipping right over any M3-series powered ASi Mac Pro...?

At least you are using the proper "ASi", rather than the "AS" most others use...! ;^p
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
Sure about that? The boss of Bugatti (Mate Rimac) disagrees with you:

Yes, I'm positive about that. The NEW boss took over only about 1.5 years ago (and he took over because VW was itching to dump Bugatti PRECISELY because it was bleeding money); he was not in charge of DESIGNING a custom 16-cylinder engine for the Bugatti VEYRON, he is redesigning Bugatti as a hybrid maker, and he BELIEVES he can turn it profitable. Good for him if he can. Now go back to the Veyron, you know, the car where they ACTUALLY developed the W16. So yeah, I'm right.





Moving on to the Chiron, they started to lose less, and on some one-off special-edition cars they actually made some money, but they had already sunk the cost of developing the W16.

The argument is that they may now actually start to make money, maybe. I wish good things for them. But my original point stands and is correct.
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
Sure, because the personal computer workstation market is exactly the same as the hyper-luxury supercar market. All the people here wanting more than 256GB of RAM or PCIe GPUs are clearly just rock stars, sports personalities and children of oil magnates with bottomless pockets, who want something to turn heads when they roll up at the exclusive country club, pick up important clients from the airport or occasionally get their car freighted out to the Nurburgring (sp?) for a track day. ...and designing a custom processor die and fabricating it in tiny numbers is exactly the same as hand-building a few custom engines. Not.

Rather, I'm pretty confident that the "real workstation users" have real, practical problems to solve as part of their work, and while they can justify expensive hardware in terms of their business turnover, that doesn't mean they have heaps of Daddy's money or sports sponsorship income to blow on status symbols... and maybe people see the Bugatti doing 250mph (or whatever) on Top Gear and go out and buy VW Golfs in response, maybe it's worth it for a trickle-down of new tech, patents and attracting top designers and engineers - or maybe it's all some complex tax write-off scheme that only an accountant can explain.

Apple's most successful "halo products" have been their cheaper lines, like the iPad and iPhone, without which the Mac would probably have vanished long ago, and the technology seems to trickle up from those items (the whole of Apple Silicon is basically about scaling up mobile tech to laptops and desktops). Their closest thing to a "supercar" so far is probably the 2019 Mac Pro - and ask anybody who isn't already a Mac fan about the Mac Pro and all you'll get is snide remarks about $800 wheels, so as a halo it's pretty lousy. The whole "real vs. fake workstation user" argument right here is pretty much evidence that people who don't need a Mac Pro don't get why it costs what it does.

Even the 2019 Mac Pro - folks, it's just a Xeon-W/PCIe system using existing chips from Intel (CPU) and AMD (GPU) - both of which have other, far larger markets for those chips - with an extra row of connectors to route power and Thunderbolt between PCIe slots. The one bit of custom Apple silicon is the T2 chip, which had already been developed and manufactured in huge quantities for everything from the MacBook Air up. Creating a substantially new processor (rather than some sort of multi-Mx Max) would be a major departure for Apple, with no particular guarantee that it would outperform a Xeon or Threadripper driving the same AMD GPUs from the same DDR5 RAM.

Note that I wasn't saying that Apple won't/can't make some sort of new high-end Mac, just that it would be very hard & expensive to produce a direct replacement for the 2019 Mac Pro market. If Apple want a "halo" computer they could sponsor a scientific supercomputer based on grids of Mx Ultras, or make 1U Mx Ultra "compute modules" for scalable cloud computing, and maybe set some speed records - but what such systems have in common is that they'd probably be pretty unimpressive when running ProTools or Davinci....


People who buy 16-cylinder Bugattis aren't generally worried about mileage :)

Thank you for your 'super big brain' banal observations that analogies are not the same thing as the thing itself. That whoosh passing overhead was not a plane. The rest of your commentary has been weighted appropriately. Thanks for playing.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
been zero debut of any sort of M3-series
There will be M3-series Macs, but no such Mac Pro among them.
Both ridiculous
Rumourland, as it always has been; don't consider it a leak or a tip, just a rumour, like the talk of ASi adopting Neoverse V3 HCC. IMHO it's just speculation, maybe based on the wrong cues.
skipping right over any M3-series powered ASi Mac Pro...?
Issues with N3B and the M3 are not good news for those hoping the ASi Mac Pro will skip the "flawed" M2 (actually an excellent SoC), which should have no issues teaming up as a 4P system. Gurman's is just an angry, reactive rant; it seems Apple kicked him out of Cupertino.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Well, if we are going to get higher-clocked M2 Ultra / Extreme SoCs for the first-gen ASi Mac Pro, skipping any M3-series SoCs, and the second-gen ASi Mac Pro is going to be using M4-series SoCs, then I hope Apple will be building those M4-series SoCs on N3X...
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
higher clocked M2 Ultra / Extreme SoCs
For a useful Mac Pro, not getting an ASi M3 doesn't represent a huge deprivation, except maybe if the M3 includes dedicated ray tracing (the ASi matrix coprocessor does decent RT, but nowhere near a dedicated ASIC).

For developers it isn't an issue either, as according to multiple sources the ASi Mac Pro will support compute/rendering on AMD dGPUs (with native RT support), at least for the PCIe version. For those hoping for a cheap GPU, a dedicated ASi GPU will likely join the party too, but at most it will be 30%-60% (duo/quad) as powerful as a single AMD RX 7900 XTX.

The Mac Pro should be released April 4th or 11th; read this (as I predicted, its first leak, though not from the usual suspects): https://www.macworld.com/article/1670022/macos-13-3-ventura-mac-pro-release.html 🤠
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
For a useful Mac Pro, not getting an ASi M3 doesn't represent a huge deprivation, except maybe if the M3 includes dedicated ray tracing (the ASi matrix coprocessor does decent RT, but nowhere near a dedicated ASIC).

Seems a shame if the ASi Mac Pro has to wait for the M4-series of SoCs to get dedicated hardware ray-tracing, while the M3-series of SoCs motors along with it for the next few years...?

For developers it isn't an issue either, as according to multiple sources the ASi Mac Pro will support compute/rendering on AMD dGPUs (with native RT support), at least for the PCIe version. For those hoping for a cheap GPU, a dedicated ASi GPU will likely join the party too, but at most it will be 30%-60% (duo/quad) as powerful as a single AMD RX 7900 XTX.

So a return to "third-party" AMD GPUs (in quotes because it would most likely be in the MPX format), but also a cheaper option in a dedicated ASi GPU...?

"Duo/Quad" as in two or four ASi dGPUs, or as in two or four GPU chips per discrete card...?

Personally, I would love to see a discrete Apple silicon GPU that can trade blows with the "almighty" RTX 4090...

Although, a solution along the lines of the Nvidia Grace/Hopper Superchip (but an Apple silicon variant, of course) might be sweet, if the cost can be kept within the lower atmosphere...

The Mac Pro should be released April 4th or 11th; read this (as I predicted, its first leak, though not from the usual suspects): https://www.macworld.com/article/1670022/macos-13-3-ventura-mac-pro-release.html 🤠

Well, my birthday is April 15th ("Tax Day" here in the good ol' United States of America), so an ASi Mac Pro reveal would be a perfect gift...! ;^p
 
  • Haha
Reactions: Mago

theluggage

macrumors G3
Jul 29, 2011
8,015
8,450
The Mac Pro should be released April 4th or 11th; read this (as I predicted, its first leak, though not from the usual suspects): https://www.macworld.com/article/1670022/macos-13-3-ventura-mac-pro-release.html
Sure, we all know that the ASi Mac Pro is overdue and could be launched Real Soon Now, and since the last 2 Mac Pros and the iMac Pro were announced in WWDC keynotes that's not a bad bet. Especially if it has any novel features for developers that will need explaining in the actual developer sessions that follow the keynote...

However, the clickbait arguments being rolled out to support these theories are getting increasingly tenuous. The reasoning in the linked article seems to be:

1. Rumours say prototype Mac Pro was running 13.3 in January
2. 13.3 will be rolled out in April, 13.4 could arrive in May
3. ???
4. Therefore Mac Pro must/might/could be launched between April and May....

...I'd like to critique the logic of that, except... there isn't any.

As the article points out, previous new Macs have been released with custom, interim builds of macOS. A new MP, if it's ready, could launch with a custom build of 13.2 (with bits of 13.3 backported), 13.4, or even 14.1 (if, just as with the 6,1, 7,1 and iMac Pro, it gets launched at WWDC but doesn't ship until Q4). There's no reason whatsoever why the MP release should be constrained by the OS release schedule (except that it's probably already too late to ship with 13.2).
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
Personally, I would love to see a discrete Apple silicon GPU that can trade blows with the "almighty" RTX 4090...
Nvidia's GPU supremacy will stay out of Apple's reach for at least 3 more years, and that's assuming Nvidia shifts its priority to the DPU/TPU market as the gaming market weakens.
support these theories are getting increasingly tenuous.
Of course, assuming it's not part of Apple's marketing tactics, which is what I guessed would happen close to launch. I wrongly wrote 'release'; I should have written 'introduction', as typically it will ship (release) two weeks later, unless Apple has already stockpiled a crazy amount of ASi MP inventory. It should be very similar to the Mac Studio rollout.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Nvidia's GPU supremacy will stay out of Apple's reach for at least 3 more years, and that's assuming Nvidia shifts its priority to the DPU/TPU market as the gaming market weakens.

[petulant frenzy]
I want my "top dog" ASi GPU, and I want it now...!
[/petulant frenzy]

Of course, assuming it's not part of Apple's marketing tactics, which is what I guessed would happen close to launch. I wrongly wrote 'release'; I should have written 'introduction', as typically it will ship (release) two weeks later, unless Apple has already stockpiled a crazy amount of ASi MP inventory. It should be very similar to the Mac Studio rollout.

FOMO = "Y'all best pre-order these shiny new ASi Mac Pros now...!"
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Seems a shame if the ASi Mac Pro has to wait for the M4-series of SoCs to get dedicated hardware ray-tracing, while the M3-series of SoCs motors along with it for the next few years...?
For developers it isn't an issue either, as according to multiple sources the ASi Mac Pro will support compute/rendering on AMD dGPUs (with native RT support), at least for the PCIe version. For those hoping for a cheap GPU, a dedicated ASi GPU will likely join the party too, but at most it will be 30%-60% (duo/quad) as powerful as a single AMD RX 7900 XTX.

Maybe this is how the M2 Ultra/Extreme ASi Mac Pro gets dedicated hardware ray-tracing before the M4-series of SoCs: ASi add-in cards...?

And I wonder if the ASi Mac Pro will adopt an "every-other-one" cadence; M2, M4, M6, etc. ...?
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
For a useful Mac Pro, not getting an ASi M3 doesn't represent a huge deprivation, except maybe if the M3 includes dedicated ray tracing (the ASi matrix coprocessor does decent RT, but nowhere near a dedicated ASIC).

The matrix coprocessor has exactly nothing to do with raytracing. Why do you even write this nonsense?

For developers it isn't an issue either, as according to multiple sources the ASi Mac Pro will support compute/rendering on AMD dGPUs (with native RT support), at least for the PCIe version. For those hoping for a cheap GPU, a dedicated ASi GPU will likely join the party too, but at most it will be 30%-60% (duo/quad) as powerful as a single AMD RX 7900 XTX.

Except of course that Apple drivers do not use raytracing instructions on AMD hardware. Nor are there any ARM-native drivers for AMD hardware. And sure, these drivers can potentially be written, but why even bother? An M2 Ultra is likely going to be as fast in Blender as the 7900 XTX even without hardware RT.
 
  • Sad
Reactions: prefuse07

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
The matrix coprocessor has exactly nothing to do with raytracing. Why do you even write this nonsense?



Except of course that Apple drivers do not use raytracing instructions on AMD hardware. Nor are there any ARM-native drivers for AMD hardware. And sure, these drivers can potentially be written, but why even bother? An M2 Ultra is likely going to be as fast in Blender as the 7900 XTX even without hardware RT.
You can do raytracing even using standard integer and float instructions; what makes the difference is pipeline optimization. Bare CPU RT is not as efficient as the matrix co-processor (sometimes called the hexagon processor), and the matrix coprocessor, while more efficient, is still way below dedicated RT hardware.

Can you elaborate on why the matrix coprocessor has nothing to do with RT? (I never stated Apple would offer RT based on the MTX, but it is usable for non-realtime RT.)

About your M2 Ultra performance claim, there are a bunch of benchmarks debunking it. FWIW, FP32 is 13 TFLOPS on the M2 Max (likely below 26 on the Ultra and 54 on the Extreme or ², since ASi GPUs in parallel famously don't scale 1:1); meanwhile, a single RX 7900 XTX (likely the one being readied for the ASi MP, though an MI200 derivative may also be on the cards) is 61 TFLOPS, and the Nvidia 4090 is close to or over 100 TFLOPS.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
You can do raytracing even using standard integer and float instructions; what makes the difference is pipeline optimization. Bare CPU RT is not as efficient as the matrix co-processor (sometimes called the hexagon processor), and the matrix coprocessor, while more efficient, is still way below dedicated RT hardware.

Can you elaborate on why the matrix coprocessor has nothing to do with RT? (I never stated Apple would offer RT based on the MTX, but it is usable for non-realtime RT.)

I have no idea why you think a wide outer product engine is a suitable technology for doing ray primitive intersections and tree traversal but ok. Anyway, Apple uses the GPU for RT because it’s the best device for the job on their platform in terms of supported programming model and memory bandwidth.
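For reference, the inner loop of a ray tracer is dominated by tests like the classic Möller-Trumbore ray-triangle intersection: branchy scalar float math with early exits, nothing like an outer product. Here is a minimal sketch in Python (illustrative only; the names are made up, and a real renderer runs this inside a BVH traversal over millions of rays):

```python
# Minimal Möller-Trumbore ray-triangle intersection (illustrative sketch only).
# Real renderers vectorize this across many rays and wrap it in a BVH traversal
# full of data-dependent branches -- work that doesn't map onto a matrix engine.

def ray_triangle_intersect(ray_orig, ray_dir, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None if there is no hit."""
    def sub(a, b):   return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def cross(a, b): return (a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0])

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(ray_dir, edge2)
    det = dot(edge1, pvec)
    if abs(det) < eps:              # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(ray_orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:          # misses the triangle (barycentric u)
        return None
    qvec = cross(tvec, edge1)
    v = dot(ray_dir, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:      # misses the triangle (barycentric v)
        return None
    t = dot(edge2, qvec) * inv_det
    return t if t > eps else None

# A ray down the -z axis hitting a triangle in the z = -1 plane -> t ~= 1.0
print(ray_triangle_intersect((0, 0, 0), (0, 0, -1),
                             (-1, -1, -1), (1, -1, -1), (0, 1, -1)))
```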


About your M2 Ultra performance claim, there are a bunch of benchmarks debunking it. FWIW, FP32 is 13 TFLOPS on the M2 Max (likely below 26 on the Ultra and 54 on the Extreme or ², since ASi GPUs in parallel famously don't scale 1:1); meanwhile, a single RX 7900 XTX (likely the one being readied for the ASi MP, though an MI200 derivative may also be on the cards) is 61 TFLOPS, and the Nvidia 4090 is close to or over 100 TFLOPS.

The 7900xtx (using the HIP backend) is about 2.2x faster than M2 Max in Blender. AMD performs much much worse using the Metal backend. I have little doubt that M2 Ultra will be faster than AMD in Blender on macOS.

P.S. And regarding those 60 TFLOPS… yes, nominally there is a large difference, but the 2x FP per cycle on RDNA3 is only possible in certain scenarios and only supports a limited set of inputs and outputs. That's why none of the compute-heavy benchmarks show those kinds of improvements.
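To put rough numbers on that: peak FP32 is just ALU count × 2 (an FMA counts as two ops) × clock. A back-of-the-envelope sketch, where the shader counts and clocks are approximations assumed for illustration, not official figures:

```python
# Rough peak-FP32 estimates: TFLOPS = ALUs * ops_per_alu_per_clock * clock_GHz / 1000.
# Core counts and clocks below are approximate assumptions, for illustration only.

def peak_tflops(alus, clock_ghz, ops_per_alu_per_clock=2):
    return alus * ops_per_alu_per_clock * clock_ghz / 1000.0

print(peak_tflops(38 * 128, 1.4))   # M2 Max, 38-core GPU                -> ~13.6
print(peak_tflops(76 * 128, 1.4))   # M2 Ultra (2x Max, ideal scaling)   -> ~27.2
print(peak_tflops(6144, 2.5))       # RX 7900 XTX, single-issue          -> ~30.7
print(peak_tflops(6144, 2.5, 4))    # RX 7900 XTX, with RDNA3 dual-issue -> ~61.4
```

The headline 61 TFLOPS for the 7900 XTX only shows up on the last line, i.e. only when that dual-issue path is actually usable.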
 
  • Sad
Reactions: prefuse07

prefuse07

Suspended
Jan 27, 2020
895
1,073
San Francisco, CA
An M2 Ultra is likely going to be as fast in Blender as the 7900 XTX even without hardware RT.

The 7900xtx (using the HIP backend) is about 2.2x faster than M2 Max in Blender. AMD performs much much worse using the Metal backend. I have little doubt that M2 Ultra will be faster than AMD in Blender on macOS.

Ugh, I am so tired of reading this same misguided ideology, and it usually comes from the Mac Studio brigade :rolleyes: (hope you're not one of them).


Here we go: M2 Max vs RX-6800XT (which is now 2 generations old, mind you)

[Attached screenshot: benchmark scores, M2 Max vs RX 6800 XT]



Look at these results, ladies and gents..... Your beloved M2 apple silicon is still getting smoked, this time by a 2 generation old GPU.

Also keep in mind that I have seen even higher RX-6800XT scores from other forum members; just head to the 6800/6900XT thread.


Most hilarious of all, your beloved M2 is also getting smoked by my Vega II Pro (which is even older than the 6800XT):

[Attached screenshot: benchmark scores, M2 Max vs Radeon Pro Vega II]


I'll even take this one step further and show you your beloved M2 being smoked by my RX-6800XT -- IN MY lowly 5,1 cMP.

[Attached screenshot: benchmark scores, RX 6800 XT in a Mac Pro 5,1]



Now imagine an RX-7900XTX, or even better an RTX-4090.... 🤣

This is why @Mago's solution makes 100% sense, and based on all he's given us, it sounds like Apple is aware of this fact, and they seem to be working in the right direction, which is to keep supporting AMD until they can get their own GPU prowess up to par.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
The only way the ASi Mac Pro doesn't go with an AMD GPU is if Apple develops a massive, GPU-optimized ASi SoC (which is quite expensive for such a small niche as the ASi Mac Pro market; I see it as a bit of a steep ask).

The guy complaining about doing RT on the CPU or the MTX co-processor should read about Photorealistic RenderMan's breakthroughs... Almost exactly 20 years ago, back when there was nothing better than a Hercules GPU, we released ray tracing; of course, a single VGA-sized image took weeks to render. And everything was done on the CPU.
 
  • Love
Reactions: prefuse07

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Ugh, I am so tired of reading this same misguided ideology, and it usually comes from the Mac Studio brigade :rolleyes: (hope you're not one of them).


Here we go: M2 Max vs RX-6800XT (which is now 2 generations old, mind you)

[attachment: benchmark screenshot]


Look at these results, ladies and gents..... Your beloved M2 apple silicon is still getting smoked, this time by a 2 generation old GPU.

Also keep in mind that I have seen even higher RX-6800XT scores from other forum members; just head to the 6800/6900XT thread.


Most hilarious of all, your beloved M2 is also getting smoked by my Vega II Pro (which is even older than the 6800XT):

[attachment: benchmark screenshot]

I'll even take this one step further and show you your beloved M2 being smoked by my RX-6800XT -- IN MY lowly 5,1 cMP.

[attachment: benchmark screenshot]


Now imagine an RX-7900XTX, or even better an RTX-4090.... 🤣

RX 6800 XT is a 20 TFLOPS, 300 W GPU
Radeon Pro Vega II is a 14 TFLOPS, 480 W GPU
M2 Max is a 14 TFLOPS, 50 W GPU

Your results show exactly that. As the aggregate RAM bandwidth for all these GPUs is comparable (when you take into account the cache size, etc.) and their compute architectures are fairly similar as well, the FLOPS metric is a good proxy for simpler GPGPU kernels.
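Dividing those figures out (taking the numbers above exactly as quoted) is the quickest way to see where the efficiency argument comes from:

```python
# FP32 TFLOPS per watt, using the figures quoted above as given.
gpus = {"RX 6800 XT": (20, 300), "Radeon Pro Vega II": (14, 480), "M2 Max": (14, 50)}
for name, (tflops, watts) in gpus.items():
    print(f"{name}: {tflops / watts:.3f} TFLOPS/W")
# RX 6800 XT: 0.067, Radeon Pro Vega II: 0.029, M2 Max: 0.280
```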

P.S. The 7900 XTX scores around 220k in GB6 compute. An M2 Ultra should be around 220-250k. The 4090 is obviously in an entirely different category, of course.
 
  • Sad
Reactions: prefuse07

leman

macrumors Core
Oct 14, 2008
19,522
19,679
The only way the ASi Mac Pro doesn't go with an AMD GPU is if Apple develops a massive, GPU-optimized ASi SoC (which is quite expensive for such a small niche as the ASi Mac Pro market; I see it as a bit of a steep ask).

Yeah, it's called M2 Ultra.


The guy complaining about doing RT on the CPU or the MTX co-processor should read about Photorealistic RenderMan's breakthroughs... Almost exactly 20 years ago, back when there was nothing better than a Hercules GPU, we released ray tracing; of course, a single VGA-sized image took weeks to render. And everything was done on the CPU.


What does this bit of graphics trivia have to do with anything we are talking about here? And what's that "MTX co-processor" you keep rambling about? Apple hardware comes equipped with an outer-product engine (which also supports a small subset of 512-bit SIMD operations) called AMX. And it's completely irrelevant to any raytracing applications on Apple Silicon, although I suppose you could try to write a raytracer using AMX for academic purposes (it won't be very good, but hey). Metal raytracing is done using GPU compute kernels.
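To make the contrast concrete: what an outer-product engine accelerates is a rank-1 update of an accumulator tile, the building block of matrix multiplication. A plain-Python stand-in (illustrative only; the real hardware does an entire tile of this per instruction) — a very different shape of work from the intersection tests and tree traversal a ray tracer spends its time on:

```python
# What an outer-product engine does, in spirit: C += x (outer) y, a rank-1 update
# of an accumulator tile. Plain-Python stand-in for illustration only.

def rank1_update(C, x, y):
    for i in range(len(x)):
        for j in range(len(y)):
            C[i][j] += x[i] * y[j]
    return C

C = [[0.0] * 3 for _ in range(3)]
print(rank1_update(C, [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))
# [[4.0, 5.0, 6.0], [8.0, 10.0, 12.0], [12.0, 15.0, 18.0]]
```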
 
  • Like
Reactions: uczcret

prefuse07

Suspended
Jan 27, 2020
895
1,073
San Francisco, CA
RX 6800 XT is a 20 TFLOPS, 300 W GPU
Radeon Pro Vega II is a 14 TFLOPS, 480 W GPU
M2 Max is a 14 TFLOPS, 50 W GPU

Your results show exactly that. As the aggregate RAM bandwidth for all these GPUs is comparable (when you take into account the cache size, etc.) and their compute architectures are fairly similar as well, the FLOPS metric is a good proxy for simpler GPGPU kernels.

Now go to the Blender benchmark database and look up the results I was actually talking about. And then estimate whether it makes sense for Apple to invest in AMD GPUs at this point.

When it comes to workstations such as the Mac Pro, NOBODY CARES ABOUT WATTAGE. Watch, next you'll start attacking this from a CPU standpoint, and then after that portability, as these things always turn out :rolleyes:


Blender benchmarks show your beloved silicon STILL being smoked by an RX-6800XT (that's right folks, 2 generations behind, same as with GB6)

[Attached screenshot: Blender benchmark results]



I would pull the 7900XTX into the above data, but there don't appear to be any results for it within the Blender benchmarks.

Look at that 6900XT score though: Can we say..... SMOKED tri-tip? 😆


Where are you getting your data from?

P.S. The 7900 XTX scores around 220k in GB6 compute. An M2 Ultra should be around 220-250k. The 4090 is obviously in an entirely different category, of course.

So, according to you, the RX-7900XTX should score lower than an RX-6900XT in GB6, right?

[Attached screenshot: Geekbench 6 compute scores]
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
The only way the ASi Mac Pro doesn't go with an AMD GPU is if Apple develops a massive, GPU-optimized ASi SoC (which is quite expensive for such a small niche as the ASi Mac Pro market; I see it as a bit of a steep ask).

Huh, sounds a lot like the asymmetrical SoC configurations I have talked about in the past...

One "regular" Mn Max SoC paired with a "GPU-specific" SoC, resulting in a Mn Ultra that is heavy on the GPU cores...

Two "regular" & two "GPU-specific" for a Mn Extreme, also heavy on the GPU cores...

Those same "GPU-specific" SoCs could also be used on ASi (GP)GPUs...
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
The only way the ASi Mac Pro doesn't go with an AMD GPU is if Apple develops a massive, GPU-optimized ASi SoC (which is quite expensive for such a small niche as the ASi Mac Pro market; I see it as a bit of a steep ask).

The guy complaining about doing RT on the CPU or the MTX co-processor should read about Photorealistic RenderMan's breakthroughs... Almost exactly 20 years ago, back when there was nothing better than a Hercules GPU, we released ray tracing; of course, a single VGA-sized image took weeks to render. And everything was done on the CPU.

Some people should ask Google or ChatGPT before exposing their ignorance.
We had real-time RenderMan on NeXT as part of the operating system and the Display PostScript imaging system, with OS-level support for RIB files in NeXTSTEP 3.3, back in the early '90s on really lowly 68040 hardware. It worked not terribly, all things considered.

One of many things we lost in the dumbing down of NeXTSTEP for the Apple crowd. I can only imagine, had that continued to be developed, how incredible it would be today on modern hardware.
 