
Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
If software is developed to take advantage of the way ASi graphics work (Metal/GPU cores/Neural Engine cores/eventual ray-tracing cores/UMA/etc.), much like software is currently tailored to Nvidia hardware, then who knows what kind of performance comparisons we might see...?
 

Kimmo

macrumors 6502
Jul 30, 2011
266
318
If software is developed to take advantage of the way ASi graphics work (Metal/GPU cores/Neural Engine cores/eventual ray-tracing cores/UMA/etc.), much like software is currently tailored to Nvidia hardware, then who knows what kind of performance comparisons we might see...?

A good point and a very interesting question.

When Lloyd Chambers (a well-respected tech commentator in the photography space who publishes the Mac Performance Guide) first tested the Studio Ultra he was very disappointed.

"Flabbergasted and no idea yet what's going on. But if even one or two more tests go this way, it looks bad for the M1 Ultra."

Then, Adobe released Photoshop CC 23.4.1 and "the performance problems have vaporized."

I'm very open to seeing what Johny Srouji and his team have in store for the new Mac Pro. My hope (and advice to Apple, if they care) is that, when the machine is announced, they dial back the kind of marketing hyperbole we saw with the Studio release (like the questionable GPU comparison graphs).

Just give us the straight talk about the design goals for the 8,1 and what will be needed for it to reach its full potential in various workflows.

We can handle the truth. :)

 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
....

"Flabbergasted and no idea yet what's going on. But if even one or two more tests go this way, it looks bad for the M1 Ultra."

Then, Adobe released Photoshop CC 23.4.1 and "the performance problems have vaporized."

I'm very open to seeing what Johny Srouji and his team have in store for the new Mac Pro. My hope (and advice to Apple, if they care) is that, when the machine is announced, they dial back the kind of marketing hyperbole we saw with the Studio release (like the questionable GPU comparison graphs).

Like asking Steve Jobs to get up on stage and leave the reality distortion field behind. It is a dog-and-pony show; just look at it for what it is. Pretty doubtful they are going to play up any potentially short-term performance glitches. They are going to tend to pick applications that do well (e.g., whose developers have been listening to Apple's suggestions at the last 2-3 years of WWDC sessions about the preferred design paths to follow) rather than apps stuck in the past using deprecated APIs for the benchmark graphics.

And Nvidia has painted a juicy bullseye on its own back with the mega-power-consuming 4000-series options. The "do anything to temporarily take the top-end crown" numbers are not even being used on the Pro model:

"... has a TDP rating of up to 300W power (vs. 450W in case of the GeForce RTX 4090). ..."
https://www.tomshardware.com/news/n...8gb-gddr6-ecc-memory#xenforo-comments-3778613

Apple's whole "Extreme" SoC will likely come in under 350W total for CPUs, NPUs, ProRes de/encoders, GPUs, etc., which is 100W lower than just a GPU with no CPU (I guess you can match up the NPUs against tensor cores, and ProRes against the AV1/etc. decoders). Minimally, Apple will likely poke at the 3080 again.

Decent chance the same thing happens for the Mac Pro: some initial set of Photoshop benchmarks in the very earliest wave, and then, several months later... substantially different results. That might be offset, though, if Apple does a "sneak peek" in October and doesn't ship until Feb-March 2023. That way Apple could push more Mac Pros, under less draconian NDAs, out to several software vendors for longer periods of time, which would allow more software-optimization adjustments before shipping (slightly less "delight and surprise" in the rollout, and broader-spectrum software fixing in advance). If Apple tries to rush the Mac Pro out the door to hit some technically arbitrary 2022 deadline, then it's "same stuff, different day": a somewhat flaky software rollout that smooths out over time, detached from the initial hype train in the dog-and-pony show.


Just give us the straight talk about the design goals for the 8,1 and what will be needed for it to reach its full potential in various workflows.


This isn't going to be a WWDC technical session. iPads and some other new mainstream Macs will likely roll out in the same show. Apple's tech-specs pages are written by the sales/marketing department (to promote BTO config sales); they aren't really high-quality technical support documents.

If I recall correctly, the "Mac Pro Technical Overview" whitepaper didn't surface out of Apple until over a month (or a quarter?) after the Mac Pro 2019 initially shipped. If there is a sneak preview, they will very likely hold back just about all technical support documentation, knowledge-base articles, etc. Doubtful they will change their 'spots' this time around.

For better or worse, Apple can probably get several thousand folks to hit the 'buy' button in the first hour the thing goes on active preorder, with zero substantive technical information at all about the next Mac Pro. That's why they don't really bother. As long as there is a somewhat pervasive first-day mania of "better order quick or you won't get one for several months", Apple will sell lots. [The Ultra Studios still being backordered six months after intro is only going to feed the fire.]

We can handle the truth. :)


If you want the whole truth, then you'll likely need patience and restraint. Longer term, Apple will have to shift to selling the Mac Pro to less mania-driven customers, and the information will roll out.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I think you have to look at the Pro user that Apple wants to use the Mac Pro... so the people they are targeting rather than the entire pro industry. Apple does not care about 3D workstations that have been the domain of PCs for years. They will target the high end... Apple didn't pander to the PC crowd with the 2019 MP; they just built a powerful, expandable machine to serve the market at the time...

That is actually part of the disconnect on these forums: there is a vocal few who think that the most influential faction that brought about the MP 2019 was the one demanding PC-ecosystem commodity parts; that the MP 2019 was a capitulation to commodity parts and that Apple was expressing undying love for hyper-modular parts.

Apple really wasn't trying to make the MP 2019 the most part-configurable ('best') Windows PC on the market (or the 'best' Linux workstation in that market).

Apple's MPX modules don't fit in other workstations. That the MPX bay has a standard PCIe connector is primarily a side effect of minimizing drift off the standard AMD GPU reference board designs. Apple added a second connector and ran the additional stuff they wanted through that (probably through different layers on the board, where it would cause less entanglement with the reference-design trace layouts). Were aux power sockets a primary objective? No. The primary design objective was to get rid of random cables via the MPX connector, with primary cooling through Apple's fans (not random board fans), etc.

Other folks have twisted the MP 2019 into the notion that random off-the-shelf Windows PC GPU cards were a primary design objective. That really isn't true. A subset fell out as a side effect of more GPU cards having better UEFI boot support and of dGPU usage in other parts of the Mac lineup. It wasn't Apple jumping out of bed in the morning saying, "Hurray, folks can buy all their Mac GPU cards off Newegg/Amazon and Microcenter/Fry's instead of from us."


Similar on the CPU side. By 2019, Apple wasn't a mega-fan of UEFI. The standard Xeon W-3200 Intel reference boards boot UEFI, so the Mac Pro 2019 boots it. Apple went along with that, but it wasn't their primary objective: UEFI is validated by the T2, and the whole boot process was being "transitioned" (hint: T-series) to Apple Silicon handling it.

The Mac Pro 2019 was a "Mac" first, and a commodity-parts container second, where convenient and where it didn't conflict with the first, primary objective.

The eight PCIe slots of the MP 2019 bought Apple regulatory relief ("get out of jail free" cards) from California's system-power-utilization rules at least as much as any "undying love" from Apple for super-high slot counts in and of themselves. [If Apple could have shipped something with a 900-1,000W power supply, they likely would have preferred that option. It's a mistake to presume that Apple is deeply enamored with pulling as much power out of the wall as possible. MP 2019 wall-power consumption went way up, but it's doubtful Apple was "extremely happy" with that; design decisions in parts made by other vendors mainly drove it.]



this time around I don’t see it being different, it will have to be expandable, other wise it’ll just be a mega Mac Studio.

Not going to be any different this time either: modern-era Mac first, and whatever fits around the edges of that as convenient.

An SoC completely detached from the rest of the Mac line up? Probably not.
Giving up the modern way of implementing Thunderbolt? Highly likely not.
Giving up core principles of CPU , NPU , GPU interconnection design principles? Highly likely not.
Only macOS 'raw iron' boot technically supported (no UEFI at all in sight at boot time)? Very probably true.


6-pin, single-width (slots 6-8) aux power support? Maybe.

16-pin ATX PCIe v5 aux power support? Highly likely not. (Apple isn't going to chase upper-high-end power-consuming Nvidia cards when several don't fit in a Mac Pro and don't have any macOS drivers anyway, even on the Intel side, let alone on macOS for Apple Silicon. What isn't supported on macOS likely gets trimmed.)

Not going to be exactly a Mac Studio duplicate. But also not going to be a 1,700W Threadripper mega-chassis 'killer' container solution either. Apple is likely going to be selling into the 'higher than Studio' space more than the 'keeping up with the top-end Dell/HP/Lenovo chassis' space. The upper-end Studio and lower-end Mac Pro probably share a base SoC to drive higher economies of scale on that component (i.e., support the overall Mac ecosystem's health), but they'll be different enough in system functionality to sell to different sets of target customers. The objective is mostly to limit Mac Studio sales fratricide rather than to be a Dell/HP 'killer'.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Sure. But you don't need to go to 3-4 GPUs to see a big gap. Let's consider just dual-GPU machines. When the AS Mac Pro comes out, it's going to be competing with dual 4090s (est. 160 TFLOPS), dual 4090 Tis (est. 180 TFLOPS), or dual Quadro-class cards with whatever the successor to the A6000 is (comparable to the 4090/4090 Ti).

Indeed, an M2 Extreme (est. 60 TFLOPS, unless they do something fancy) isn't even going to be competitive with a single 4090 (est. 80 TFLOPS) in general GPU compute.

Now maybe it doesn't need to be. But let's be accurate in characterizing where the gap will actually be. It's won't be vs. 4 X GPU machines. That's a straw man.

So the argument you'd actually need to make isn't that only a tiny fraction of the workstation market is running 4x GPUs. Rather, you'd need to argue that only a tiny fraction of the workstation market will run anything more powerful than a single 4080 Ti (est. 60 TFLOPS) (or the Quadro equivalent).

So I'm setting up a straw man (when I didn't come up with the 4x-GPU metric in the first place), and you are not even following your own single-GPU comparison advice ('competing with dual 4090...') at the top of your response. Yes, if you look at where the bulk of the workstation market is now, the upcoming upper-high-end cards are mostly going to replace singles and mid-range duals. The highest-end multi-GPU setups are going to drift into another 'zip code' of the more fringe workstation market. Apple doesn't need to go there to sell enough units to have a viable product.


Nvidia is chopping the power to the "Pro"/Quadro-class variant of the 4090's die down by 150W.

" ... has a TDP rating of up to 300W power (vs. 450W in case of the GeForce RTX 4090). ..."

Somewhat doubtful the RTX 6000 will be pulling 80 SP TFLOPs when the power budget is cut. It won't fall all the way back to the Extreme's theoretical upper limit, but it'll be closer.

A large chunk of the "Pro" value-add is boosting the GPU VRAM capacity up to 48GB. The minimum memory for an "Extreme" is likely double that of an Ultra (min 64GB), so starting at 128GB. Even if you split that 60:40 CPU:GPU, that's ~3GB more VRAM for the "Extreme"; split it 50:50 and the gap is even bigger.
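That VRAM comparison is simple arithmetic; here's a minimal sketch, assuming the post's 128GB floor and its two example CPU:GPU splits (the splits are illustrative only — macOS doesn't actually hard-partition unified memory like this):

```python
# Back-of-envelope VRAM comparison from the post: a 48GB "Pro" card vs. the
# GPU share of an "Extreme" SoC's unified memory. The 128GB minimum and the
# example splits are assumptions taken from the post.
QUADRO_VRAM_GB = 48
EXTREME_MIN_RAM_GB = 128  # double the Ultra's assumed 64GB minimum

for cpu_share, gpu_share in [(0.6, 0.4), (0.5, 0.5)]:
    gpu_gb = EXTREME_MIN_RAM_GB * gpu_share
    delta = gpu_gb - QUADRO_VRAM_GB
    print(f"{cpu_share:.0%}:{gpu_share:.0%} CPU:GPU -> {gpu_gb:.1f}GB for the GPU "
          f"({delta:+.1f}GB vs. the 48GB card)")
```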

Dual 3060-3070 deployments likely greatly outnumber the old dual 3080/3090 deployments. The 4090 is a self-restricting card: its power and volume requirements bump it out of many systems (even out of the MP 2019 chassis).

Some need front- and back-side water blocks. Some are more than 3 slots wide:

"... Strix GeForce RTX 4090, we built it with cooling as our top priority. The card’s 3.5-slot design consists of a ...
...
... the new ROG Strix GeForce RTX 4080 16GB ...In order to do that, this 3.5-slot cooler design uses the same die-cast metal frame, shroud, and backplate as the ROG Strix GeForce RTX 4090...
...
... TUF Gaming GeForce RTX 4080 16GB brings next-gen performance in a sturdy design. ... with a 3.65-slot thick heatsink for next-level cooling. That's slightly thicker than the Strix variants of this card, but also measures more than 9mm shorter for wider case compatibility. ..."

Nvidia recommendation for the 4090 is a 700W power supply. Dual 4090s would consume 100% of the whole MP 2019 power supply just by themselves. (***) It will blow out the power budget for a large fraction of Dell/HP/Lenovo workstations also. ( that is one reason why the Pro model doesn't try to go anywhere near that high. )


The 4090 is a funhouse-mirror card whose primary objective is to claim the tech-porn benchmark crown. Nvidia fanboys are going to throw money at it; so will folks already eyeball-deep in highly proprietary Nvidia libraries. But it isn't a practical card that is going to sell in relatively high numbers.

The 4080 configurations make some very substantive backoffs to get to pragmatically saner system-constraint parameters:

4090 24GB 450W 128SM 384-bit bus 1008GB/s memory bandwidth
4080 16GB 320W 76SM 256-bit bus 736GB/s memory bandwidth
4080 12GB 285W 60SM 192-bit bus 504GB/s memory bandwidth
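For what it's worth, the bandwidth figures in that table follow directly from bus width and per-pin data rate; a quick sketch (assuming 384-bit for the 4090, its shipped bus width) that backs out the implied data rates and the halving the next paragraph points to:

```python
# Sanity-check the table: memory bandwidth (GB/s) = bus width (bits) / 8
# * per-pin data rate (Gbps). Bandwidth figures are the ones quoted above.
cards = {
    "4090":      (384, 1008),
    "4080 16GB": (256, 736),
    "4080 12GB": (192, 504),
}
for name, (bus_bits, bw_gbs) in cards.items():
    rate_gbps = bw_gbs * 8 / bus_bits  # implied per-pin data rate
    print(f"{name}: {bus_bits}-bit @ {rate_gbps:.0f} Gbps -> {bw_gbs} GB/s")

# Bottom-of-stack vs. top-of-stack bandwidth ratio
print("4080 12GB / 4090 bandwidth ratio:", 504 / 1008)
```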


By the time you've dropped back to power levels more comparable with what Apple is going to consume, the memory bandwidth has dropped in half. Those theoretical peak SP TFLOPs are less likely to materialize in more than a few apps. Nvidia has some bigger caches, which will help with micro-benchmarks; for moving data back and forth, not quite as much.


The bigger broad-spectrum SP TFLOPs 'threat' is likely going to come from AMD, not Nvidia. AMD will likely hold back at first, but at some point seems likely to just top the 4090 in those cases with a 7900 with extra Infinity Cache (same central compute die, but a 'bigger' memory controller + cache chiplets). There's no good reason to ship those at first (until they can measure exactly what the 4090 does), but it is apparent that Nvidia has clocked the 4090 way into the diminishing-returns zone. Nvidia's weaker underbelly is going to be the higher-end laptop dGPU space, which is where there's a much better opportunity over the next 18 months to make really good progress on unit sales.

The only thing that might 'scare' Apple about the 4090 is the 3rd-generation hardware RT stuff. The out-of-control thermals and the halved memory bandwidth at reasonable card parameters? That stuff isn't going to send Apple quaking in a corner in fear for the upcoming Mac Pro. Folks eyeball-deep in Nvidia's proprietary software... there are no Nvidia drivers even on macOS for Intel. Those folks aren't causing Apple to lose tons of sleep at night. It could be nice to have them, but they're not critical to the Mac ecosystem. Apple did that plus/minus trade-off evaluation a while back, during their feud with Nvidia over GPU driver priorities and requirements.




What would be useful for Apple to have is something like

" ... Peak Single Precision (FP32) Performance
22.6 TFLOPs ..."
https://www.amd.com/en/products/server-accelerators/amd-instinct-mi210

That could give some configurations another boost in TFLOPs without a crazy-high TDP increase. It's just a 'compute' card that could fit in an expansion box across the Mac lineup. Short term, I doubt it's coming, because they really haven't laid the groundwork for it. But slavishly tracking where the 4090 is mainly going... it's extremely doubtful Apple will follow that route.



(***) Yes, part of Nvidia's 700W budget covers a modest allocation for the rest of the PC needed to run the 4090. But it's indicative that the card is trying to 'hog' as much of the budget as it can. Peak spike loads are likely to run well over the nominal board TDP. The power-supply demands, once you throw in a big-power-budget CPU, become more problematic if you stick to historical power-supply levels.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
I'm very open to seeing what Johny Srouji and his team have in store for the new Mac Pro. My hope (and advice to Apple, if they care) is that, when the machine is announced, they dial back the kind of marketing hyperbole we saw with the Studio release (like the questionable GPU comparison graphs).

Just give us the straight talk about the design goals for the 8,1 and what will be needed for it to reach its full potential in various workflows.

We can handle the truth. :)
Yeah, hopefully there's no repeat of the Pro Display XDR fiasco, where they foolishly and laughably claimed it was as good or better than a $43k Sony Trimaster, only to have it later revealed that it's not capable of meeting Dolby's standards for an HDR Mastering Monitor. That served no purpose other than to damage the credibility they built up with the professional video community by hiring a Pro Workflow Team and releasing the 2019 iMac.
 

l0stl0rd

macrumors 6502
Jul 25, 2009
483
417
Yeah, hopefully there's no repeat of the Pro Display XDR fiasco, where they foolishly and laughably claimed it was as good or better than a $43k Sony Trimaster, only to have it later revealed that it's not capable of meeting Dolby's standards for an HDR Mastering Monitor. That served no purpose other than to damage the credibility they built up with the professional video community by hiring a Pro Workflow Team and releasing the 2019 iMac.
They already did that with the M1 Ultra and GPU claims ;)
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
They already did that with the M1 Ultra and GPU claims ;)
Yeah, I was thinking of that as I wrote my post. And that wasn't good either. But that seemed more in the realm of the usual game-playing companies do with benchmarking claims. At least to me, that was less obviously boneheaded than their assertion that the XDR equaled or bettered a Trimaster, since it can't do the most basic thing the Trimaster is designed for (serve as an HDR Mastering Monitor). I.e., it wasn't merely wrong in degree, it was wrong in kind.

It's like marketing a camera to scuba divers and saying it's superior to the Nikonos, when it can't even be used underwater.
 

sunny5

macrumors 68000
Original poster
Jun 11, 2021
1,837
1,706
They already did that with the M1 Ultra and GPU claims ;)
The M1 Ultra is good after further optimization from several software vendors, so I strongly disagree. Furthermore, Apple Silicon Macs are only two years old.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
M1 Ultra is good after more optimization from several software so I highly disagree. Furthermore, Apple Silicon Mac is only 2 years old.
It is good, but it's simply not in the same class as the 3090 in general GPU performance, and Apple was being disingenuous in indicating otherwise. If we ignore optimization and just look at TFLOPs, the M1 Ultra is 21 TFLOPs and the 3090 is 36 TFLOPs. They should instead have just been honest and said the 3090 is x times more powerful but consumes y times more power.
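The "x times / y times" framing is easy to put numbers on; a hypothetical sketch using the TFLOPs above, where the wattages are my illustrative assumptions (roughly 350W for a 3090 board, ~110W for the Ultra's GPU portion), not figures from the post or from Apple:

```python
# Hypothetical "honest marketing" comparison from the post's TFLOPs numbers.
# Wattages are illustrative assumptions, not official figures.
ultra_tflops, rtx3090_tflops = 21, 36
ultra_watts, rtx3090_watts = 110, 350

print(f"3090 is ~{rtx3090_tflops / ultra_tflops:.1f}x the TFLOPs")
print(f"...while drawing ~{rtx3090_watts / ultra_watts:.1f}x the power")
print(f"Perf/W: Ultra {ultra_tflops / ultra_watts:.2f} vs "
      f"3090 {rtx3090_tflops / rtx3090_watts:.2f} TFLOPs/W")
```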
 

sunny5

macrumors 68000
Original poster
Jun 11, 2021
1,837
1,706
It is good, but it's simply not in the same class as the 3090 in general GPU performance, and Apple was being disingenuous in indicating otherwise. If we ignore optimization and just look at TFLOPs, the M1 Ultra is 21 TFLOPs and the 3090 is 36 TFLOPs. They should instead have just been honest and said the 3090 is x times more powerful but consumes y times more power.

In video, it already outperforms the 3090. Even in 3D, the M1 Ultra is at the 3090's level.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
In video, it's already outperform 3090. Even in 3D, M1 Ultra is 3090's level.
As I already explained in my response to the other poster, who was saying the opposite of you, you can always find one-offs where a particular system is superior. But I'm talking about overall, average performance. He was cherry-picking in one direction to unfairly underestimate AS vs. NVIDIA, while you are cherry-picking in the opposite direction to unfairly overestimate AS vs. NVIDIA. I wish people would not do either, i.e., I wish folks would stop presenting cherry-picked results as indicative of general performance.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
you are not even following your single GPU comparison ( 'competing with dual 4090 ...' ) advice at the top of your response.
I have no idea what you mean by this.

The very top of my response was a dual-GPU comparison. Indeed, this was the start of my response: "Sure. But you don't need to go to 3-4 GPU's to see a big gap. Let's consider just dual-GPU machines."

It was only later that I added a single-GPU comparison.

So I don't see the logic in what you are saying.
 

alex00100

macrumors 6502
Mar 17, 2011
469
1,227
Moscow, Russia
My two cents are these.

Maya, the leading 3D graphics program, is still not optimized for ARM, and no plans to do so have been announced. Unreal, the industry disruptor, is not available on Macs at all. Same with Houdini and countless renderers. For other software makers, like Toon Boom, Mac ports are an afterthought. Why would anyone even need such an expensive, powerful Mac if most industries do not support Macs? Macs are great for creativity, but in other areas: design, editing, etc. I see no incentive for Apple to chase 4090 performance or even ray tracing; this is simply not the company's priority. They'll make way more by shifting priorities to optimizing battery life.
 

sunny5

macrumors 68000
Original poster
Jun 11, 2021
1,837
1,706
My two cents are these.

Maya, the leading 3D graphics program, is still not optimized for ARM, and no plans to do so have been announced. Unreal, the industry disruptor, is not available on Macs at all. Same with Houdini and countless renderers. For other software makers, like Toon Boom, Mac ports are an afterthought. Why would anyone even need such an expensive, powerful Mac if most industries do not support Macs? Macs are great for creativity, but in other areas: design, editing, etc. I see no incentive for Apple to chase 4090 performance or even ray tracing; this is simply not the company's priority. They'll make way more by shifting priorities to optimizing battery life.
Because there is no powerful Mac to work with 3D so far. The M1 Ultra is powerful, but still not enough to compete with PCs, which can use multiple GPUs.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
My two cents are these.

Maya, the leading 3D graphics program, is still not optimized for ARM, and no plans to do so have been announced. Unreal, the industry disruptor, is not available on Macs at all. Same with Houdini and countless renderers. For other software makers, like Toon Boom, Mac ports are an afterthought. Why would anyone even need such an expensive, powerful Mac if most industries do not support Macs? Macs are great for creativity, but in other areas: design, editing, etc.
This seems to be overstating things.

According to this thread on Maya's website, these AS users aren't abandoning Macs because Maya doesn't yet run on it; rather, they are moving to alternative industry software that does, like C4D, contradicting your picture that industry alternatives aren't available ("most industries do not support macs").


And as for Houdini on AS, they've released an alpha version, which would seem to indicate they are serious about releasing a stable one:


I see no incentive for Apple to chase 4090 performance or even ray tracing; this is simply not the company's priority. They'll make way more by shifting priorities to optimizing battery life.
The only Mac that Apple might design to chase the 4090 would be the Mac Pro, and it doesn't run on batteries. As for the Macs that do, they've already shifted priorities there: the AS laptops offer an extraordinary combination of performance and battery life.
 

alex00100

macrumors 6502
Mar 17, 2011
469
1,227
Moscow, Russia
I don't see how anyone would abandon Maya over its incompatibility with Macs. If you work in a studio and it uses Maya, you can't just install Blender at home; you'll buy a PC. Switching programs works for a YouTuber, not for the enterprise customers who probably buy most of the Mac Pros.

I just don't see Apple putting so much into R&D to make a 4090 competitor. The most they can do is combine two M-series Ultra chips.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
I don't see how anyone would abandon Maya over its incompatibility with Macs. If you work in a studio and it uses Maya you can't install blender at home. You'll buy a pc. Switching programs works for a YouTuber, not for enterprise, who probably buys most of their Mac pros.
I can't speak to that, since I'm not in that industry. I was just relating what those Maya users were saying. If I were to guess, I think Maya will eventually make a native AS build (since they say they want to provide support to AS owners), and that the reaction of a typical Intel Mac Pro owner to this situation wouldn't be to switch to Windows, but rather to stay with their Intel Mac Pros until Maya resolves this.

After all, just as some enterprise users might be reluctant to switch to a different application, I expect they'd also be reluctant to switch to a different OS, since that would also be disruptive to their workflow.
I just don't see Apple putting so much into R&D to make a 4090 competitor. The most they can do is combine two M-series Ultra chips.
I've wondered the same thing, but not because it wouldn't be useful to their customers, but rather because of the resources it might require:

 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Maya, the leading 3D graphics program, is still not optimized for ARM, and no plans to do so have been announced. Unreal, the industry disruptor, is not available on Macs at all.

This Unreal Engine?

"...
  • macOS Big Sur, quad-core Intel, 2.5 GHz or faster, 8 GB RAM
..."

About a month or so ago there were arm64 commits to the Git repository. That isn't the current product source, but it's not entirely detached from it either. The lawsuit probably isn't helping to put the port on a fast track.



Same with Houdini

" ...
macOS:



  • Requires 64-bit Intel-based or Apple Silicon Mac with macOS 10.15 and higher
  • Note: on an M1 mac it is NOT supported yet but a technical preview is now available
..."



and countless renderers. For other software makers like toon boom macs ports are an afterthough. Why would anyone even need such an expensive powerful Mac if most industries do not support macs?

Apple only added abstractions for standard ray tracing running on the GPU with Metal 3. Developers can now create an indirect command buffer (a construct holding commands that run directly on the GPU) with the property supportRayTracing = true. Apple has been growing its ray-tracing support in Metal for about three years.

Older apps that had a large, mature OpenGL (plus shader-engine) and relatively mature OpenCL foundation, and are being told by Apple to dump it all for Metal (the others are deprecated on macOS, but not on other major OSes), face a substantive investment.

Apple introduced the M1 in 2020 but didn't do the M1 Max until late 2021, and the Ultra didn't arrive until March 2022. There really hasn't been a huge upside to porting if most of an application's customer base uses very high-end Macs, as those have been the last to move. And Rosetta 2 works 'OK' (though it doesn't seem to work so well on apps that do lots of dynamic shader-engine builds in OpenGL, i.e., multiple dynamic recompiles).

For something with tons of legacy inertia like Maya, it really wouldn't have made sense to commit to a port before seeing a working Ultra Studio and having time to seriously try out some probative prototype code to "kick the tires". It isn't a high-growth-market software package, and there's likely a fixed development budget with other ports also pulling on it.

It isn't just hardware that is an issue. Apple has chucked most of the more established portable APIs out the window. Some of the software vendors they are going to lose will be the ones facing higher porting costs.

Macs are great for creativity but in other areas. Design, editing, etc. I see no incentive for apple to chase 4090 performance or even ray tracing. this is simply not the company’s priority. They’ll make way more by shifting priorities to optimizing battery life.

When TSMC has the N3P or N2 fab process, better packaging, and LPDDR improvements, they will get closer to 4090 performance (e.g., if they could shrink the two-die Ultra into one die and then bind four of those together). Whether it's hyper-critical that this arrives short-term is the more questionable part. Apple isn't shifting priorities here; they have had a perf/watt focus since 2016-2017 at least. The M1 didn't appear out of nowhere. Apple was spending a ton of time/effort/money in that area well before 2020.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
Older apps which had a large, mature OpenGL (and shader engines ) and relatively mature OpenCL foundation being told by Apple to dump it all for Metal (as the others are deprecated here on macOS , but not on major other OS) is a substantive investment.
Most of these 3D programs are still using OpenGL and will be forced onto something different soon. Using Blender as a reference: Blender's developers have realized that porting from OpenGL to Vulkan is harder than they thought. In fact, they have stopped that work for now, and the Metal backend for the viewport will be the first to ship.
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
Maya, the leading 3d graphics program is still not optimized for arm and no plans have been announced to.

Man, I remember going to Siggraph in Orlando when Maya first came out, all the modules of the software and a decently specced O2 was in the low six figures, plus the ongoing yearly maintenance fees...! ;^p


When TSMC has the N3P or N2 fab process, better packaging, and LPDDR improvements, they will get closer to 4090 performance (e.g., if they could shrink the two-die Ultra into one die and then bind four of those together). Whether it's hyper-critical that this arrives short-term is the more questionable part. Apple isn't shifting priorities here; they have had a perf/watt focus since 2016-2017 at least. The M1 didn't appear out of nowhere. Apple was spending a ton of time/effort/money in that area well before 2020.

Use the increased transistor budget for more cores, just change the CPU/GPU ratio and have one ASi Mac Pro-specific die using N3X, then strap four of those together with a four-way UltraFusion...

64-core CPU (48P/16E)
480-core GPU (w/hardware ray-tracing)
64-core Neural Engine
Media Engine(s)
1TB LPDDR5X SDRAM
2TB/s UMA bandwidth

Now throw that into a 7.7" x 7.7" x 7.7" chassis with a 420W PSU and a 180mm exhaust fan up top, the ASi Mac Pro Cube...

Also available in a rackmount chassis with storage bays & PCIe slots via TB5 port(s) from the integrated Cube... ;^p
 
Last edited:

jmho

macrumors 6502a
Jun 11, 2021
502
996
In one of Apple's WWDC22 videos (Accelerate machine learning with Metal) they daisy chain together 4x Studio Ultras with Thunderbolt cables and use them to run a GPU accelerated machine learning workload.

WWDC video says:
For a single Mac Studio, the performance is about 200 images per second. When I add another Mac Studio connected via Thunderbolt, the performance almost doubles to 400 images per second since both GPUs are utilized to the fullest. Finally, when I connect two more Mac Studios, the performance is elevated to 800 images per second. This is almost linear scaling on your compute bound training workloads.

It's interesting, because for certain workloads like ML or rendering, which are "embarrassingly parallel", you can probably already build your own "Mac Pro" by daisy-chaining Mac Studios together. 10 M1 Maxes for $20k would give you 320 GPU cores.

This does obviously rely on software support though, which is currently non-existent, but part of me wonders if this is what Apple meant when they called the Mac Studio "modular", and if the Mac Pro is going to be more like a "swarm" of Mac Studios rather than a giant monolithic SoC.
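The near-linear scaling quoted from that WWDC demo can be sanity-checked with a trivial throughput model (the function name and the assumption of ~100% parallel efficiency for compute-bound training are mine, not Apple's):

```python
# Back-of-envelope model of data-parallel scaling across daisy-chained
# Mac Studios, using the figures quoted from the WWDC22 session.

def aggregate_throughput(single_node_rate: float, nodes: int,
                         efficiency: float = 1.0) -> float:
    """Images/sec for `nodes` machines at a given parallel efficiency."""
    return single_node_rate * nodes * efficiency

# WWDC demo: 1 Studio ~200 img/s, 2 ~400, 4 ~800 -> efficiency ~1.0
for n in (1, 2, 4):
    print(n, aggregate_throughput(200, n))
```

With a slightly more realistic efficiency of, say, 0.95, four machines would land around 760 img/s, so "almost linear" is consistent with the demo's numbers.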
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
10 M1 Max's for $20k would give you 320 GPU cores.

It is the 24-core GPU variant that is at the $2K entry price, so you would have a total of 240 GPU cores available (as well as 160 Neural Engine cores)...
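A quick arithmetic check of those totals (assuming the roughly $2K base Mac Studio: M1 Max with a 24-core GPU and 16-core Neural Engine):

```python
# Aggregate core counts for a stack of base-spec Mac Studios.
base_gpu_cores = 24   # base M1 Max GPU variant
base_ne_cores = 16    # Neural Engine cores per M1 Max
studios = 10

print(studios * base_gpu_cores)  # 240 GPU cores
print(studios * base_ne_cores)   # 160 Neural Engine cores
```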

This does obviously rely on software support though, which is currently non-existent, but part of me wonders if this is what Apple meant when they called the Mac Studio "modular", and if the Mac Pro is going to be more like a "swarm" of Mac Studios rather than a giant monolithic SoC.

It will be a collection of SoCs bound together with UltraFusion...
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
It will be a collection of SoCs bound together with UltraFusion...
My point is that UltraFusion is potentially unnecessary, and potentially unwanted for a lot of large tasks like rendering or ML.

For example, 2x 32-core M1 Maxes doing compute workloads in parallel will be faster than 1x 64-core M1 Ultra.
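The reasoning is that embarrassingly parallel jobs need no cross-node communication, so independent machines lose nothing versus a fused die. A minimal sketch of splitting render frames across separate hosts (the names `split_frames` and the host labels are illustrative, not a real API):

```python
# Each frame renders independently, so frames can simply be dealt out
# round-robin across machines with no shared memory or interconnect.

def split_frames(frames, hosts):
    """Round-robin a list of frame numbers across independent machines."""
    batches = {h: [] for h in hosts}
    for i, frame in enumerate(frames):
        batches[hosts[i % len(hosts)]].append(frame)
    return batches

batches = split_frames(list(range(8)), ["studio-a", "studio-b"])
print(batches)  # {'studio-a': [0, 2, 4, 6], 'studio-b': [1, 3, 5, 7]}
```

Nothing in the split depends on the two GPUs sharing an UltraFusion link, which is the point being made above.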
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
My two cents are these.

Maya, the leading 3D graphics program, is still not optimized for ARM, and no plans to do so have been announced. Unreal, the industry disruptor, is not available on Macs at all. Same with Houdini and countless renderers. For other software makers like Toon Boom, Mac ports are an afterthought. Why would anyone even need such an expensive, powerful Mac if most industries do not support Macs? Macs are great for creativity, but in other areas: design, editing, etc. I see no incentive for Apple to chase 4090 performance or even ray tracing. This is simply not the company's priority. They'll make way more by shifting priorities to optimizing battery life.
And yet in a video, Apple was showing off Maya's viewport performance on AS.
If SideFX comes to AS (preview builds are already available), rest assured Autodesk won't be far behind.

Back in the day, Alias had a separate build of Maya for PowerPC. Not sure if that incentive is missing today, since out of the three 3D packages Autodesk had, Maya was the only one with cross-platform compatibility.
 
Last edited: