
Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
I'm pretty sure it will be June 20th

If the ASi Mac Pro were going to be on 3nm, sure, WWDC makes sense; but if it is on 4nm M2 Ultra (Extreme) then it needs to debut before any of the M3 laptops rumored for imminent release...?

You know, so Apple can actually finish the ASi transition before announcing the third generation of Apple silicon SoCs...?!?
 

prefuse07

Suspended
Jan 27, 2020
895
1,073
San Francisco, CA
If the ASi Mac Pro were going to be on 3nm, sure, WWDC makes sense; but if it is on 4nm M2 Ultra (Extreme) then it needs to debut before any of the M3 laptops rumored for imminent release...?

You know, so Apple can actually finish the ASi transition before announcing the third generation of Apple silicon SoCs...?!?

Weren't you the one saying it was gonna be based on the M3 chip in the "What if" thread?

I feel Apple could debut the ASi Mac Pro with 3nm-based M3 Ultra / M3 Extreme SoCs, which could definitely have a plethora of available PCIe lanes...

Why the sudden change?
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
Ah, gotcha!

Well, you convinced me (in that thread) that the M3 was feasible, and it would make sense to debut it in their flagship.

*shrug*

I convinced myself, it all sounded so reasonable...!

If we get an announcement in the next month, then the M(ago)2 Ultra (Extreme) Mac Pro...!

ASi Mac Pro x AMD Instinct; what, what...! ;^p

The cost would be crazy there though...

Imagine an M2 Extreme Mac Pro; 48-core CPU (32P/16E), 156-core GPU, 384GB RAM, & 16TB SSD (4 @ 4TB NAND blades) costing LESS than the AMD Instinct add-in card...!

If we get nothing until WWDC, then maybe my M3 Extreme dreams become reality, gotta manifest that M3 Extreme Mac Cube...!
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
You all know that the Instinct 200 series is last-gen and AMD will be releasing the massive MI300 APU series this year, right? So both AMD and Nvidia will basically release massive SoCs with unified memory, and Apple would go back in time two years to add an add-on card, when they pioneered unified SoCs for consumers?
I don't buy it. Sounds wrong.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
You all know that the Instinct 200 series is last-gen and AMD will be releasing the massive MI300 APU series this year, right? So both AMD and Nvidia will basically release massive SoCs with unified memory, and Apple would go back in time two years to add an add-on card, when they pioneered unified SoCs for consumers?
I don't buy it. Sounds wrong.

Yeah, the whole "Otoy Octane X only on ASi going forward" thing has been tickling the back of my head...

And I do still prefer my whole asymmetric multi-die M3 Ultra / M3 Extreme SoCs & ASi (GP)GPU(s) combo meal...
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
You all know that the Instinct 200 series is last-gen and AMD will be releasing the massive MI300 APU series this year,
The Radeon MI200 is a 100 TFLOPS FP64 GPU; that's 12x more powerful than the M2 Max (while its FP32 performance is the same, still 8x that of the M2 Ultra). Its TDP is about 400W, while the MI300's is 600W. I don't see either the MI200 or the MI300 being offered; maybe a dual RX 7900 XTX (à la Pro Duo) at Max-Q (reduced TDP) to keep the duo below a 500W limit. An MI200 is a $5,000+ GPU, while a dual 7900 XTX should cost well below $2,000 and offer similar performance for non-scientific compute (FP32) and AI training.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
I still want the first-generation ASi Mac Pro to debut with M3 Ultra & M3 Extreme SoC options; having the SoC on a daughtercard would be best for (limited) future upgradability...?

I would also like to see ASi (GP)GPU solutions rather than any third-party options; otherwise it seems like Apple would be reversing course, after telling devs to optimize for pure ASi GPU cores for the past two-plus years...

I expect we will all find out more come WWDC at the latest...?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
You all know that the Instinct 200 series is last-gen and AMD will be releasing the massive MI300 APU series this year, right?


The top-end MI300 entry will be ~600-700W on an OAM card that requires liquid cooling. There is a low probability that Apple is going to put that into a Mac Pro. It isn't going into mainstream Dell/HP/Lenovo workstations either.

In the MI200 series, AMD released a chopped-down MI210 card that does fit in a PCIe slot.

That card arrived in March 2022.

The MI200 series as a whole was press-released in Nov 2021:
https://videocardz.com/press-releas...up-to-220-cus-128gb-hbm2e-memory-and-560w-tdp

So there was about a four-month gap between the biggest module and the chopped-down workstation version. Waiting until 2024 for an MI310 would be relatively silly for an optional add-in card; the MP 2019 didn't wait for the W5700 to launch. The MI310 is likely going to be as tightly thermally constrained as the MI210 is. And the more successful the full-sized MI300 units are, the longer it will probably take for the chopped-down version to roll out (limited wafers... the higher-profit-margin product goes first).




So both AMD and Nvidia will basically release massive SoCs with unified memory, and Apple would go back in time two years to add an add-on card, when they pioneered unified SoCs for consumers?

Highly doubtful that the original M-series Mac Pro plan was targeting 2023. Apple said "about 2 years" back in 2020, so the plan was extremely likely to release the Mac Pro 8,1 in 2022 or so, not 2023. And back in 2018-19, when that plan was drawn up, the stuff AMD and Nvidia are now shipping in 2022 wasn't part of the picture.

Also extremely doubtful Apple wants to sell those MI210s at AMD's standard pricing. Pretty good chance they'd want to push the pricing down to somewhere between the W6800 and W6800 Duo prices ($2,800-5,000). AMD's problem is that they have made the MI210 so expensive it is hard to sell. If Apple comes in and haggles a deal where they double AMD's MI210 sales in exchange for a discount, that is possible. A similar thing happened with the Vega II Pro (cheaper than list price for AMD's other 'Pro' cards in a similar class).

In the data center card space AMD does still have a unit-sales problem. They are not getting the same traction in data center GPUs as they are with data center CPUs. It is a weak spot where Apple could still have some leverage, much like several years ago when AMD was deeply struggling in the mainstream GPU space.


I don't buy it. Sounds wrong.

Apple planning to put 2023 stuff in a 2022 product sounds even more wrong.
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
The top-end MI300 entry will be ~600-700W on an OAM card that requires liquid cooling. There is a low probability that Apple is going to put that into a Mac Pro. It isn't going into mainstream Dell/HP/Lenovo workstations either.

[...]

Apple planning to put 2023 stuff in a 2022 product sounds even more wrong.
You are twisting my words. Please stop. I was just stating that I find it unlikely and wrong that Apple would go for a 2021-era compute accelerator in their new pro machines when they have invested so much in the idea of unified memory and SoCs. And as I wrote, even AMD and Nvidia seem to be moving their upcoming systems in the SoC/APU direction.
 

theorist9

macrumors 68040
May 28, 2015
3,882
3,061
I was...

Because of the latest rumors from @Mago & their anonymous source, in conjunction with the rumors from @Amethyst and their anonymous source, most of which are in this thread...
On the other thread actively discussing the M3 (https://forums.macrumors.com/threads/could-we-see-m3-before-a17.2382874/), several posters are insisting that Apple develops its M-series chips in parallel (rather than building from simplest to most complex within each generation), and that the designs for all chips in a given generation should thus be finalized at about the same time.

Under the two assumptions that (a) they're right; and (b) an M3 Air will be released by WWDC, it seems Apple should also be able to release an M3 MP at around that time (especially if they are reusing the 2019's case). I have no idea if assumptions (a) and (b) are correct—I'm just constructing a chain of speculative reasoning. Certainly N3 production volume shouldn't be an issue, given that the MP is probably Apple's lowest-volume device.
 

Serqetry

macrumors 6502
Feb 26, 2023
417
624
On the other thread actively discussing the M3 (https://forums.macrumors.com/threads/could-we-see-m3-before-a17.2382874/), several posters are insisting that Apple develops its M-series chips in parallel (rather than building from simplest to most complex within each generation), and that the designs for all chips in a given generation should thus be finalized at about the same time.

Under the two assumptions that (a) they're right; and (b) an M3 Air will be released by WWDC, it seems Apple should also be able to release an M3 MP at around that time (especially if they are reusing the 2019's case). I have no idea if assumptions (a) and (b) are correct—I'm just constructing a chain of speculative reasoning. Certainly N3 production volume shouldn't be an issue, given that the MP is probably Apple's lowest-volume device.
I think Apple SHOULD do that, but I don't think they have been doing that so far. I'm hoping they have been doing that with M3. Releasing an M2 Ultra/Extreme Mac Pro right before M3 comes out seems really dumb. The Mac Pro should definitely be based on M3 or similar, not M2... especially since historically they take a really long time to update the Mac Pro again.
 

theorist9

macrumors 68040
May 28, 2015
3,882
3,061
I think Apple SHOULD do that, but I don't think they have been doing that so far. I'm hoping they have been doing that with M3. Releasing an M2 Ultra/Extreme Mac Pro right before M3 comes out seems really dumb. The Mac Pro should definitely be based on M3 or similar, not M2... especially since historically they take a really long time to update the Mac Pro again.
...especially if M3 offers hardware RT.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
The Mac Pro should definitely be based on M3 or similar, not M2...
I agree, but that doesn't seem to be the case, as there is UltraFusion evidence in the M2 Max. Nor is it imperative for the Mac Pro: the M3 doesn't represent a silver bullet to close the GPU gap in the workstation segment, and the Mac Pro doesn't need better thermals. But if said M3 is presented in a true modular multi-chip approach (as I believe it should be at some point), then it makes sense for Apple to skip the M2 and just put eight M3 chiplets together for a 64+16-core Mac Pro.

I think Apple's approach to the Mac Pro is likely conservative, bringing a clear upgrade path or flexible configuration options, as the Mac Pro targets the most demanding and most diverse niches; a stackable or overclocked Mac Studio would simply backfire on Apple.

Those thinking of the M2 as a stopgap product don't understand TSMC process evolution. Switching an ASIC from N5 to N3 implies a lot of validation work that usually takes almost a year, while going from N5 to N5P, while not trivial, is more straightforward; with a process shrink, ASIC validation is much slower. I'd bet my home that Apple sent its N3 M3 design for validation two years ago (the same day it was available), otherwise it wouldn't be ready for mass production this year. But I won't be surprised if the M3 is based on N4 and N3 debuts with the A17 and M3 Pro/Max in Q3/Q4.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
M3 offers hardware RT.
That's mostly gamer stuff; Apple surely wants it, but the big priority is AI training. As I said, RT could be available earlier in the Mac Pro via AMD GPUs; multiple sources have said that Apple, besides working on its own all-in-house dGPU solution, is also working with AMD at least on enabling support for its latest Radeon Pro cards in the ASi Mac Pro.

Apple should also be developing its own dedicated NPU or TPU, or working on adopting/buying that technology from someone (which I hope could be JK's Tensornet).
 

theorist9

macrumors 68040
May 28, 2015
3,882
3,061
[RT is] mostly gamer stuff...
It's also found some use in scientific visualization and computation...


Also, what about game development? Would companies developing AAA games for the Mac want to use something like a Mac Pro, or is that kind of power not necessary? If it is used in game development, then they'd want it to have hardware RT, so it could be used to develop games that can make use of that feature.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
It's also found some use in scientific visualization and computation...


Also, what about game development? Would companies developing AAA games for the Mac want to use something like a Mac Pro, or is that kind of power not necessary? If it is used in game development, then they'd want it to have hardware RT, so it could be used to develop games that can make use of that feature.
Rendering black holes can actually be done on the CPU alone; and while an RT approach is quite efficient, I don't see a market there that Apple's salivating over.

About scientific computation: it's an environment I've worked in since Texas A&M's first supercomputer, and it is quite evolved now, I could even say revolutionized, by a single project: the Julia language, and particularly JuliaGPU. While JuliaGPU now supports Metal Performance Shaders, scientific applications are ruled by CUDA (Nvidia): not only is its programming model deeply mature, its hardware is highly optimized and second to none. JuliaGPU+CUDA also provides easy, almost trivial access to compute clusters; with a few clicks a scientist can take a Pluto notebook and send it to a supercomputer with thousands of GPUs (did I mention JuliaGPU also does RT stunts?). The science market is plain and simple beyond Apple's reach, no matter if they invest billions in a public-service ASi supercompute cluster and give our scientists free access.

About game development: Metal has included (software) RT support since Metal 3.0, I think. While it won't be enjoyable until optimized hardware is available, developers don't actually need an RT GPU to code RT; and given it's almost certain the ASi Mac Pro will support the AMD 7900 XTX, that will provide them with test rigs until, maybe, the ASi M3 reaches consumers (likely in the iMac and MacBook first).
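
The runtime check itself is tiny, too. A minimal Swift sketch (supportsRaytracing is the real Metal API, and note it reports ray-tracing API availability, not the presence of dedicated RT cores; the fallback branch is purely illustrative):

import Metal

// Grab the default GPU: the on-package Apple GPU on ASi,
// or a discrete AMD card on an Intel Mac Pro.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// supportsRaytracing says whether Metal's ray-tracing API
// (acceleration structures + intersectors) is usable on this GPU;
// it says nothing about whether dedicated RT hardware backs it.
if device.supportsRaytracing {
    print("\(device.name): ray-tracing API available")
    // Build MTLAccelerationStructure objects, dispatch intersection work...
} else {
    print("\(device.name): no RT API; fall back to rasterized or compute paths")
}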

The way Metal works, you don't need kernels targeting a specific GPU architecture; you only have to care about that GPU's capabilities and tune your code around them, with no binary headaches.
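
In practice that's just a handful of capability queries; a rough Swift sketch (the family constants and device properties are real Metal API, while the feature toggles derived from them are only my illustration):

import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Metal groups capabilities into GPU *families*, not chip models;
// the same binary runs on an Apple SoC or an AMD dGPU, and you
// simply branch on what the device reports.
let appleFamily = device.supportsFamily(.apple7)   // M1-class Apple GPU or newer
let macFamily   = device.supportsFamily(.mac2)     // modern Mac GPUs, incl. AMD dGPUs

// Hypothetical feature gates, just to show the pattern:
let rayTracedReflections = device.supportsRaytracing
let largeSharedBuffers   = device.hasUnifiedMemory

print("\(device.name): apple7=\(appleFamily) mac2=\(macFamily) rt=\(rayTracedReflections) unified=\(largeSharedBuffers)")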

So you'd need better arguments to convince John Ternus why he should jeopardize R&D by prioritizing ASi high-performance GPUs (something is known to be in the pipeline, or at least some related stuff, such as an AI tensor-training accelerator).
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I agree, but that doesn't seem to be the case, as there is UltraFusion evidence in the M2 Max.

Evidence where? Apple's initial pictures of the M2 Max do not even include an UltraFusion connector. Apple could be lying/sandbagging again. The same (or extremely similar) interrupt support carried over from the M1 Max could just be vestigial (more hassle to remove than to leave in place as a usable subset). The extended interrupts are not used at all in the Mn MBP 14"/16" and the Max-based Mac Studio, so the vast majority of that interrupt subset is left unused anyway. UltraFusion is completely useless in probably 75% (or more) of the packages the Max die is placed into. So, a useless feature on the die... the sky is blue most days, too.



Nor is it imperative for the Mac Pro: the M3 doesn't represent a silver bullet to close the GPU gap in the workstation segment, and the Mac Pro doesn't need better thermals.

The M3 on TSMC N3B would be smaller. The M1 Max is already an awkwardly sized, too-chunky chiplet, and the M2 Max is even larger: it is bigger than the full picture of the M1 Max. So if Apple photoshopped out the UltraFusion connector it would be bigger still. At some point they will 'bloat out' of being able to use the more affordable InFO-LSI because they blew past reticle limits.

That isn't a 'silver bullet'; it is just reversing the bloat Apple has iterated on over the last two updates. They are just making bigger and bigger dies for the same class of SoC. If there is an Extreme that blows past InFO-LSI limits, then maybe they just don't care... biggest, most expensive packaging technology possible.

Some reports claim the M2-generation GPU didn't get more feature updates because Apple ran out of transistor budget (pretty credible given the bloat of the die without any extra stuff, just more cores). N3 would crank up the budget without bloating the die even more. Caches aren't going to shrink, so don't expect 'amazing' overall die-size shrinkage, but there is room for some small RT accelerators and DisplayPort 2.1 (given Apple's abnormally large display controllers).

A 'benchmark number' killer isn't needed as much as packing more stuff into the M1 Max's aggregate area (or slightly less). As-large-as-possible packages on 2-3-year-old fab tech... that is the path Intel has been on for several years, and it isn't helping them.



if said M3 is presented in a true modular multi-chip approach (as I believe it should be at some point),

Apple could leave the I/O (Thunderbolt, PCIe, etc.) on TSMC N5P like the rest of the M2 and just decouple the GPU/CPU/NPU cores onto N3B (a coin toss whether the display controllers make the cut or stay on cheaper N5). The M2 Max really isn't a good chiplet design. It is mainly a convenient way for Apple to make 14"/16" laptop folks pay for features they can't possibly use (leading to higher Apple margins).

then it makes sense for Apple to skip the M2 and just put eight M3 chiplets together for a 64+16-core Mac Pro.

There is zero good sense in cranking up the number of packages inside the Mac Pro. The major flaw is that one package built from M1 Max-style dies is deeply limited if you are going to scale the package up: the general I/O functional blocks don't really need to be on the same chiplet. Chasing after even smaller plain M3 dies is just a distraction from that root-cause flaw.



Those thinking of the M2 as a stopgap product don't understand TSMC process evolution. Switching an ASIC from N5 to N3 implies a lot of validation work that usually takes almost a year,

So what? If Apple's original plan in 2019 was to be on N3 at the end of 2022, then this really isn't "extra time". And the M2 could have been mostly finished in-house way back in 2021.

You seem to have some premise that Apple could not possibly start on the M3 until all of the M2 versions had shipped. Shipping and finishing validation don't have to coincide. If you are Intel and you need 20 steppings to complete your server/workstation die in a colossal bug-fest fight... then yeah, they coincide. But the M2 could have been mostly done in Q2-Q3 2021, and a year later puts Q2-Q3 2022 for the M3. This isn't a huge showstopper if resources are planned and pipelined properly.

TSMC had very low-yielding N3 back in Q1 2022; you don't need tens of thousands of dies to do logic validation.


while going from N5 to N5P, while not trivial, is more straightforward; with a process shrink, ASIC validation is much slower. I'd bet my home that Apple sent its N3 M3 design for validation two years ago (the same day it was available), otherwise it wouldn't be ready for mass production this year. But I won't be surprised if the M3 is based on N4 and N3 debuts with the A17 and M3 Pro/Max in Q3/Q4.

The larger packages also save lots of time if you're not trying to release one every year. If Apple does M1, M3, M5 (or M2, M4, M6), then they don't have to do the in-between validations.

The M1 Max is composed as a hack, pounding a round peg into a square hole to shave costs off validation. After the Mac Studio shipped, I don't really think that is as necessary as it was when there was no Mac Pro to consume a more desktop-I/O-optimized Ultra (and possible Extreme).
 

PineappleCake

Suspended
Feb 18, 2023
96
252
Rendering black holes can actually be done on the CPU alone; and while an RT approach is quite efficient, I don't see a market there that Apple's salivating over.
Blender, which Apple heavily supports with funding, has hardware-RT support for Nvidia GPUs known as OptiX, and it uses the RT cores to speed up render times.

Apple is really behind in hardware-based RT. Every chip company has really good (AMD, Intel, ARM, Qualcomm) or really excellent (Nvidia) RT by now.

Heck, RT-based GPUs hit the market in 2018; four to five years later, Apple has yet to support hardware-based RT. Even consoles support it. Resident Evil 8 (Village) had to disable RT in the Mac port because there was no hardware RT in Apple's SoCs. How can the Mac ever become a suitable alternative for AAA games when basic features such as RT are missing?

Ray tracing is the future of GPUs, and Apple, as always when it comes to GPUs, is behind.
given it's almost certain the ASi Mac Pro will support the AMD 7900 XTX, that will provide them with test rigs until, maybe, the ASi M3 reaches consumers
Apple did not enable the RT cores on the Radeon 6000 series for the 7,1 Mac Pro. I bet Apple only enables RT for its own chips.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
Blender, which Apple heavily supports with funding, has hardware-RT support for Nvidia GPUs known as OptiX, and it uses the RT cores to speed up render times.

Yes, Apple gives Blender some cash, just like the other hardware partners (AMD & Nvidia, to name a few), but they also provide software engineers to get into the code and make Full Metal Blender run the best it can on Apple silicon; I am sure the same is being done right now for a Full Metal hardware ray-tracing Blender...

Gotta have something to show off at the ASi Mac Pro debut...!
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
So if Apple photoshopped out the UltraFusion connector it would be bigger still.
No; actually, the UF-whatever interface sits close to the M2 Max's middle zone, and indeed its size is exactly what is in the photos, no hidden surprise.
The major flaw is that one package built from M1 Max-style dies is deeply limited if you are going to scale the package up.
We agree, but it happens that when M1 validation ended, InFO-LSI had only just been validated. Even if Apple wants an AMD-like chiplet SoC complex, it won't start with complex setups, so I won't be surprised if the M3 family just remixes two or three basic SoCs along with custom bridges and likely N5 I/O dies. As for the M2 Max, it seems the only step forward was moving the InFO-LSI interface closer to the SoC's center, where it eases a 4x4 or 1+1+1+1 bridge. Once Apple's engineers are more confident with InFO-LSI, more creative approaches to multi-chip complexes are likely to emerge.
Apple could not possibly start on the M3 until all of the M2 versions had shipped.
At least validation should have begun a few weeks before the first M2 deliveries.
 