
mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
For the last 12+ years, CPU silicon has been developed from the lower end to the higher end and rolled out in a specific order because of yields. As feature sizes have shrunk dramatically, complex dies have a much lower success rate, so it takes a while, especially when using a newly developed process on wafers. You will get failures in certain parts of the SoC, which is how we get cut-down GPU variants, etc. Building the same architecture out of multiple identical cores or chiplets means they don't totally waste a whole wafer on some unique, complex thing: they can build a bunch of Max CPUs, some will have fewer working GPU cores, etc., and the best ones will be fully functional and linked via the communication fabric, which is itself also very complex.

Apple cannot simply "choose to release what they want" in any sizable quantity. Since the Mac Pro sells in low numbers it's theoretically possible, but Tim Cook isn't going to waste the absolutely insane amount of money that would take. You'd end up with something like $20,000 CPUs if you launched a quad Ultra first.

So many people have no damn idea what they're talking about in this thread. I don't understand how everyone has gotten so confidently incorrect over the last decade without doing any research at all into the technical aspects of how these things are made. I know this sounds like I'm being a jerk, but doing the work to learn about things will help you in life in general. Don't just wing it and pretend you know what's going on; research and learn about something you don't quite know instead of just spouting off nonsense.
Yet when AMD and Nvidia launched their latest graphics cards, they started with the 7900XTX and RTX 4090 respectively, then worked their way down (with the lowest end models still to come).

When Intel released their 13th gen in October last year, they released their entire desktop line up at once, with the high-end i9-13900K naturally getting the lion's share of press attention.

The Ultra is just a pair of mainstream laptop CPUs. The only reason for low yields with Ultras would be manufacturing issues with the UltraFusion link.
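To put rough numbers on the yield and binning argument above, here is a minimal sketch using a simple Poisson defect model. The die areas are the approximate figures discussed later in this thread, and the defect densities are illustrative assumptions, not TSMC data.

```python
# Minimal sketch: with a Poisson defect model, the probability that a die is
# completely defect-free falls off exponentially with die area. Defect
# densities (D0) below are assumed values for illustration only.
import math

def perfect_die_yield(area_mm2: float, d0_per_cm2: float) -> float:
    """Probability that a die of the given area has zero random defects."""
    area_cm2 = area_mm2 / 100.0
    return math.exp(-d0_per_cm2 * area_cm2)

dies = [("M1 (~120 mm^2)", 120), ("M1 Pro (~245 mm^2)", 245), ("M1 Max (~432 mm^2)", 432)]
for name, area in dies:
    for d0 in (0.05, 0.10, 0.20):  # assumed defects per cm^2
        y = perfect_die_yield(area, d0)
        print(f"{name:>20}  D0={d0:.2f}/cm^2  fully working yield ~ {y:.0%}")
```

The only point is the shape of the curve: the fraction of fully working dies drops quickly with area, which is exactly why dies with a dead GPU core or two get sold as cut-down bins instead of being scrapped.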
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
launching some $6,000+ desktop 6 months before you debut a vastly improved core seems crazy to me
I expect 2019 Mac Pro buyers would agree with you there.

I think the plan WAS for Apple to leapfrog the Mac Studio / Mac Pro, but it wouldn't shock me if we get an M2 Ultra Mac Studio and they save the M3 for the Mac Pro at this point given how long it's already taken. Another 6 months won't kill them, and it will give those very expensive machines a much longer lifespan.
An M2 Mac Studio release at WWDC would be welcome.

AND it will also allow them to build in the proper driver/architecture support for PCIe cards / maybe video cards / whatever add-in apple cards (if they exist) to the next OS which will release late this year.
If this is the plan, why hasn't Apple supported eGPUs with ASi to date? Surely this would have provided a way to gradually build support for PCIe GPUs with ASi, with any bugs only affecting a relatively small group of users.

And given the MP is understood to be delayed, surely they have already had time to build proper driver / architecture support for PCIe cards? Presumably, it was supposed to be out by now.
 
  • Like
Reactions: prefuse07 and NC12

novagamer

macrumors regular
May 13, 2006
234
315
Why? You think Apple is going to go out of business if they wait a little while? Lol. They're making M3 chips right now that aren't being sold; they can hold off releasing any M3 machines until the high-end versions are ready if that's what it takes.

Also, your generalizations about the industry are wrong. Intel released the high end Raptor Lake variants in Oct 22 and the low end Jan 23.
Raptor Lake as it is currently released is not the high end. You are wrong; they are using what used to be mid-range CPUs and telling consumers they are high-end. My entire angle was that the mid-range / low end became the mainstream first release, which is now clocked fast and called "high end", proving my point. Where are the Raptor Lake Xeons or HEDT variants? Those are the high end. In the old days, these launched first or near-simultaneously, but like I said, that hasn't happened for almost 15 years.

HEDT has been killed entirely, and now Xeon-W has taken its place.

Golden Cove is the performance core used in the Sapphire Rapids Xeons, which are only now starting to trickle out into availability for end-user purchase.

When were those cores released? In November 2021. https://en.wikipedia.org/wiki/Golden_Cove

Like I said above, people have become confidently incorrect, and it's extremely frustrating; it's why I barely post here anymore. Just do five minutes of Google searching and you'd have found this; don't yell at me for being right. Whatever floats your boat, though: you're free to say what you want, but back it up with hard data. Maybe you aren't familiar with older process nodes and releases, which is fine, but don't tell me that I'm wrong about the industry I've worked in professionally since, well, HEDT chips were mainstream new releases.

Since memory controllers moved onto the CPU die, they are also a significant burden on yield rates. Our "high-end" mainstream desktops in PC-land now have dual-channel memory, which is a downgrade from how the cores were designed to perform best; Xeons have used quad-channel or better for probably the last 10 years. But wide memory interfaces are more difficult to yield at high frequencies, so they don't start with the Xeon / HEDT parts anymore, because, as I stated, the process node shrinks have had a dramatic effect on how many of the "best" cores they get that can support all those features.
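For the memory-channel point, here is a quick sketch of the standard peak-bandwidth arithmetic (channels times transfer rate times bytes per transfer). The module speeds are assumptions chosen for illustration, not claims about any specific product.

```python
# Rough peak-bandwidth comparison: channels * MT/s * bytes per transfer.
def peak_bw_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    """Theoretical peak bandwidth in GB/s (64-bit channels, 1 GB = 1e9 bytes)."""
    return channels * mt_per_s * bytes_per_transfer / 1000.0

configs = {
    "Mainstream desktop, 2ch DDR5-5600": (2, 5600),
    "Workstation-class, 8ch DDR5-4800": (8, 4800),
    # Treating a 512-bit unified-memory bus (M2 Max-style) as 8 x 64-bit channels.
    "512-bit LPDDR5-6400 unified memory": (8, 6400),
}
for label, (ch, mts) in configs.items():
    print(f"{label}: ~{peak_bw_gbs(ch, mts):.0f} GB/s peak")
```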
 
Last edited:
  • Like
Reactions: ZombiePhysicist

novagamer

macrumors regular
May 13, 2006
234
315
Yet when AMD and Nvidia launched their latest graphics cards, they started with the 7900XTX and RTX 4090 respectively, then worked their way down (with the lowest end models still to come).

When Intel released their 13th gen in October last year, they released their entire desktop line up at once, with the high-end i9-13900K naturally getting the lion's share of press attention.

The Ultra is just a pair of mainstream laptop CPUs. The only reason for low yields with Ultras would be manufacturing issues with the UltraFusion link.
For AMD I kind of agree; it was a big part of why AMD switched their GPUs to a chiplet design a few years ago, because they just weren't making progress with the monolithic approach.

In Nvidia's case, though, the 4090 is not the highest end, just like the 4080 is not priced according to what it would have been in previous generations (it's really more like what we would have called a 4070). Nvidia are using DLSS 3.0 to make it seem like a giant leap, when in reality, if you look at pure rasterization performance and compare the 4070/4080 especially, you'll see that the 4080 and below are what would previously have been the 3060/3070 tier, and in the 4090's case it's sort of a mix of what a 4080 would have been, just clocked much higher and with more memory.

For the 4090, Nvidia's specs even give this away: the Lovelace architecture supports 96MB of L2 cache on the GPU, but the AD102 core they've shipped in the 4090 has only 72MB of L2, 25% less than what the architecture supports. The "fully enabled" GPU may come in a 4090 Ti or Titan; historically there have been only minor differences with Ti variants, but this time that won't be the case, because the L2 cache increase dramatically benefits raw ray-tracing performance. So if we get a fully enabled AD102 it will be a monster performer; that much more cache is a big deal. I would guess we won't even see a consumer GPU with the fully enabled chip, and that the Ti will also be cut down slightly, just to improve yields and save Nvidia cost.

Regarding CPUs, see my post above: the mainstream desktop CPUs are not the high end of the architecture and haven't been for a long, long time. We haven't had a high-end full CPU launch first since 2008-9. Maybe one of AMD's Bulldozer cores was the best they could do, but those were so slow and hot it's kind of irrelevant for this discussion.

I agree with you about UltraFusion being tricky to manufacture, though, which calls into even more question whether we'll actually see a quad variant of the M2 or M3. I think they have some stacked design in a patent someplace, so it's possible they could do something there down the road, but we'll have to wait and see. Apple could introduce the M2 Ultra Studio and an M2 15" MacBook Pro, and start making M3s and sit on them for months in order to build up enough Mac Pro CPUs, but that isn't how Tim Cook rolls.

You're right that the chiplet-style design does help the architecture advance faster, which we've seen AMD and now Apple have success with over the past few years. My point is that it would require Apple to sit on a pile of "less good" CPUs that could be going into MacBook Airs or Mac Minis, and that's why I don't think we'll ever see a Mac Pro CPU released to the public with a newer core than other machines.

If there's one thing modern Apple is known for, it's perfecting their supply chain, and I don't see any scenario where they have months of partially working M3s sitting around costing them money to store without earning anything by going into a machine they can sell. It wouldn't completely shock me to see an M3 Air released this summer/fall, but I think delays and having to re-architect the M2 have slightly upset their year-over-year cadence. WWDC machine announcements will tell us more: if they introduce a 15" Air with an M2, I would bet almost anything we won't see an M3 Mac Pro until 2024, or they'll launch with an M2 something.
 
Last edited:

jscipione

macrumors 6502
Mar 27, 2017
429
243
Raptor Lake as it is currently released is not the high end. You are wrong; they are using what used to be mid-range CPUs and telling consumers they are high-end. My entire angle was that the mid-range / low end became the mainstream first release, which is now clocked fast and called "high end", proving my point. Where are the Raptor Lake Xeons or HEDT variants? Those are the high end. In the old days, these launched first or near-simultaneously, but like I said, that hasn't happened for almost 15 years.

HEDT has been killed entirely, and now Xeon-W has taken its place.

Golden Cove is the performance core used in the Sapphire Rapids Xeons, which are only now starting to trickle out into availability for end-user purchase.

PC OEMs including Apple, Dell, HP and Lenovo aren’t even shipping the Ice Lake Xeon-W’s from 2021 in their Intel Workstations, they are still shipping Skylake or Cascade Lake Xeon-W’s. I’m afraid that nobody cares about Intel’s workstation chip offerings, not only Apple.
 

innerproduct

macrumors regular
Jun 21, 2021
222
353
Wow, the level of speculation, hehe! If you want to join the current AI race, get an M1 Air and access to some cloud… When this Mac Pro comes out there will be no need for the services it provides. We are in a paradigm shift. Right now.
Or better yet, learn to farm, get some land, build a Faraday cage to live in, and hibernate until John Connor calls 😂
 

steve123

macrumors 65816
Aug 26, 2007
1,155
719
If there's one thing modern Apple is known for, it's perfecting their supply chain, and I don't see any scenario where they have months of partially working M3s sitting around costing them money to store without earning anything by going into a machine they can sell.
In 2023, Apple is only paying TSMC for Known Good Die from N3. So Apple is not sitting on inventory of partially working M3s it cannot use; quite the opposite, they have been taking delivery of KGD HPC components for months now. They are well positioned to make an HPC announcement and back it up with deliveries. The base M3 will not be available initially precisely because the yield ramp limits production volumes. Same thing for the A17: it will only be available in Pro Max iPhones this fall because TSMC cannot produce enough of them to also meet the demand of the base model.
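A back-of-the-envelope sketch of the "yield ramp limits volume" point; every number here (wafer starts, gross dies per wafer, the ramp itself) is invented purely to show the shape of the argument, not to describe N3.

```python
# Illustrative only: early in a node's life the yield is lower, so the same
# wafer allocation produces fewer sellable (known good) dies per month.
def monthly_good_dies(wafer_starts: int, gross_dies_per_wafer: int, yield_fraction: float) -> int:
    return int(wafer_starts * gross_dies_per_wafer * yield_fraction)

assumed_ramp = {"month 1": 0.55, "month 4": 0.70, "month 8": 0.80}  # made-up yield curve
for month, y in assumed_ramp.items():
    good = monthly_good_dies(wafer_starts=10_000, gross_dies_per_wafer=600, yield_fraction=y)
    print(f"{month}: ~{good:,} known good dies")
```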
 

steve123

macrumors 65816
Aug 26, 2007
1,155
719
When this Mac Pro comes out there will be no need for the services it provides
Privacy issues with cloud-based AI are a growing concern. Apple and other corporations have banned the use of ChatGPT. I speculate there will continue to be strong demand for standalone high-performance computers.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
PC OEMs including Apple, Dell, HP and Lenovo aren’t even shipping the Ice Lake Xeon-W’s from 2021 in their Intel Workstations, they are still shipping Skylake or Cascade Lake Xeon-W’s. I’m afraid that nobody cares about Intel’s workstation chip offerings, not only Apple.

For much of 2021 and 2022 that was true. Now not so much.

HP Z8 Fury

Dell 5860

are both shipping. Primarily, those two are not shipping the W-3300 (Ice Lake) processors because the follow-on solutions (W-2400 / W-3400) are available. Skipping the W-3300 is entirely different from completely walking away from Intel. The W-x400 options may not be a Threadripper 'killer', but they are far more competitive. And the fact that Intel has both the 2400 and 3400 series means they cover a broader price range than AMD.

Lenovo is dragging, but the mid-sized players at the next tier ( Puget Systems , Boxx, etc ) are not.

It helped make the Mac Pro 2019 look more competitive that the big three, Dell/HP/Lenovo, were also mostly comatose during 2021. However, it is just about mid-2023 at this point. Most of the other folks have moved on while Apple is in Rip Van Winkle mode.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Raptor Lake as it is currently released is not the high end. You are wrong; they are using what used to be mid-range CPUs and telling consumers they are high-end. My entire angle was that the mid-range / low end became the mainstream first release, which is now clocked fast and called "high end", proving my point. Where are the Raptor Lake Xeons or HEDT variants? Those are the high end. In the old days, these launched first or near-simultaneously, but like I said, that hasn't happened for almost 15 years.

HEDT has been killed entirely, and now Xeon-W has taken its place.

HEDT really hasn't been 'killed' as much as subsumed. HP's best-selling workstation is the Z4 (HP labels it as its best seller).

The Z4 G4 generation offered Core X-series or Xeon W-2200 processors; the current G5 version has a W-2400.

The whole "consumer" line-up is going to get a naming makeover: Intel is going to start throwing "Ultra" at the top end of the 'consumer' line-up, where parts have more than 16 cores. Sure, 'Ultra' is just a marketing name, but when it comes right down to it, so is HEDT. HEDT is a "more expensive than that other stuff" naming category that shifts over time to maintain its 'exclusivity' properties.


AMD's top-end desktop is at 16 cores (Ryzen 9 7950X3D @ $600). If you get a decent motherboard, a very good cooler/heatsink for the CPU, and DDR5 memory to match, that is not a "budget" overall system cost. That is 'high end'. There is an even bigger spend for an 'even higher end', but even without pinching pennies and cutting corners on the system build, it is already expensive.


The other major factor is that the server market is different now. The CPU package sizes are getting much bigger (which runs into conflict with fixed-size ATX and smaller motherboards), and AMD/Intel can generally sell all the server dies they can make. At one point Intel somewhat dumped excess server dies into quirky HEDT sub-segments just to get rid of extra inventory; that isn't really a problem anymore. How the CPU packages evolve going forward is just as likely to 'fork' from desktop constraints as it is to stay aligned with them.

So what was the lower 'half' of HEDT is being subsumed back into the mainstream desktop, and the upper half of HEDT is going to have more overlap with "limited logic-board footprint" server boards than with desktops.



Golden Cove is the performance core used in the Sapphire Rapids Xeons, which are only now starting to trickle out into availability for end-user purchase.

When were those cores released? In November 2021. https://en.wikipedia.org/wiki/Golden_Cove

There is a whole lot more in the Xeon SP Gen 4 (Sapphire Rapids) CPU package than just Golden Cove cores. There is a bucketload of specialized server accelerators that are not in the desktop version. It can actually use AVX-512. There is both UPI and inter-tile connectivity. Xeon SP Gen 4 went through a double-digit number of steppings before it even got out the door (it had bugs up to its eyeballs), which is a major reason why it is only shipping now. There are gobs of die space on those dies that 'Joe, big-time HEDT gamer' is never, ever going to use.

As stated above there is no huge excess of working dies in inventory to ship out to HEDT.

The tweaked Emerald Rapids (roughly equivalent to Raptor Cove) is coming in the fall, but it too will probably be skipped in many cases, like Gen 3 (Ice Lake) was. The reason is somewhat different this time: Xeon SP Gen 6 (Granite Rapids) shouldn't be that far behind it. So again, the likelihood of gobs of excess Emerald Rapids dies being dumped into HEDT is very low; the far more competitive die is the follow-on.





Since memory controllers moved onto the CPU die, they are also a significant burden on yield rates. Our "high-end" mainstream desktops in PC-land now have dual-channel memory, which is a downgrade from how the cores were designed to perform best; Xeons have used quad-channel or better for probably the last 10 years. But wide memory interfaces are more difficult to yield at high frequencies, so they don't start with the Xeon / HEDT parts anymore, because, as I stated, the process node shrinks have had a dramatic effect on how many of the "best" cores they get that can support all those features.

But that is exacerbated even further by the folks in the HEDT/enthusiast category wanting manual overclocking settings among the product features. Not only does the die have to meet the baseline spec, it also has to tolerate being 'abused' and still work.
 
  • Like
Reactions: novagamer

novagamer

macrumors regular
May 13, 2006
234
315
In 2023, Apple is only paying TSMC for Known Good Die from N3. So Apple is not sitting on inventory of partially working M3s it cannot use; quite the opposite, they have been taking delivery of KGD HPC components for months now. They are well positioned to make an HPC announcement and back it up with deliveries. The base M3 will not be available initially precisely because the yield ramp limits production volumes. Same thing for the A17: it will only be available in Pro Max iPhones this fall because TSMC cannot produce enough of them to also meet the demand of the base model.

Based on your previous replies in this thread I think your head is in the clouds.

Not being mean; I would be thrilled if you wind up being correct, but I just don't see it happening. We'll know soon. I would expect an announcement, possibly, but not availability.

I just don't see how they're going to have some massive HPC software story at this year's WWDC, when there's a virtual consensus that it will be dominated by the headset.

Again though, I’d love to be wrong and I take no joy in pointing these things out.

I want to be blown away by some out of nowhere M3 Mac Pro available immediately that has more compute power than an Nvidia H100 or even A100, but I don’t see it happening. They aren’t going to leap from trailing far behind to leading the pack, and especially not without a mature software stack behind them.

All that nonsense about how the dual Vegas provided unprecedented performance for compute tasks, and the software for it never materialized. They released an FPGA card and didn't even make developer tools for it, for God's sake; there is no indication so far that Apple is moving in this direction at all.

I saw you or someone else mentioning TensorFlow earlier, and that isn't even in play for the latest work being done in ML/AI research; people have moved on.
 
Last edited:
  • Like
Reactions: prefuse07

novagamer

macrumors regular
May 13, 2006
234
315
HEDT really hasn't been 'killed' as much as subsumed. HP's best-selling workstation is the Z4 (HP labels it as its best seller).

The Z4 G4 generation offered Core X-series or Xeon W-2200 processors; the current G5 version has a W-2400.

The whole "consumer" line-up is going to get a naming makeover: Intel is going to start throwing "Ultra" at the top end of the 'consumer' line-up, where parts have more than 16 cores. Sure, 'Ultra' is just a marketing name, but when it comes right down to it, so is HEDT. HEDT is a "more expensive than that other stuff" naming category that shifts over time to maintain its 'exclusivity' properties.

AMD's top-end desktop is at 16 cores (Ryzen 9 7950X3D @ $600). If you get a decent motherboard, a very good cooler/heatsink for the CPU, and DDR5 memory to match, that is not a "budget" overall system cost. That is 'high end'. There is an even bigger spend for an 'even higher end', but even without pinching pennies and cutting corners on the system build, it is already expensive.

The other major factor is that the server market is different now. The CPU package sizes are getting much bigger (which runs into conflict with fixed-size ATX and smaller motherboards), and AMD/Intel can generally sell all the server dies they can make. At one point Intel somewhat dumped excess server dies into quirky HEDT sub-segments just to get rid of extra inventory; that isn't really a problem anymore. How the CPU packages evolve going forward is just as likely to 'fork' from desktop constraints as it is to stay aligned with them.

So what was the lower 'half' of HEDT is being subsumed back into the mainstream desktop, and the upper half of HEDT is going to have more overlap with "limited logic-board footprint" server boards than with desktops.

There is a whole lot more in the Xeon SP Gen 4 (Sapphire Rapids) CPU package than just Golden Cove cores. There is a bucketload of specialized server accelerators that are not in the desktop version. It can actually use AVX-512. There is both UPI and inter-tile connectivity. Xeon SP Gen 4 went through a double-digit number of steppings before it even got out the door (it had bugs up to its eyeballs), which is a major reason why it is only shipping now. There are gobs of die space on those dies that 'Joe, big-time HEDT gamer' is never, ever going to use.

As stated above there is no huge excess of working dies in inventory to ship out to HEDT.

The tweaked Emerald Rapids (roughly equivalent to Raptor Cove) is coming in the fall, but it too will probably be skipped in many cases, like Gen 3 (Ice Lake) was. The reason is somewhat different this time: Xeon SP Gen 6 (Granite Rapids) shouldn't be that far behind it. So again, the likelihood of gobs of excess Emerald Rapids dies being dumped into HEDT is very low; the far more competitive die is the follow-on.

But that is exacerbated even further by the folks in the HEDT/enthusiast category wanting manual overclocking settings among the product features. Not only does the die have to meet the baseline spec, it also has to tolerate being 'abused' and still work.
Good post, well-thought-out reasoning. I don't have much to disagree with in anything you said, and you make a good argument that splitting the top and bottom tiers makes sense given how much complexity has risen in the server space. I think removing AVX was a bridge too far, though.

I'm just lamenting how things used to be, I guess, and I'm sick of being told that 16/24 PCI Express lanes is enough. With PCIe 5.0 this is much less of an issue, but for the past decade it's felt like Intel has been trying to sell us second-tier platforms outside of Broadwell-E, which did have pretty good timing and was competitive performance-wise when it launched thanks to the ring bus and massive cache. I also paid $1,700 for a CPU then, though, so yeah… not ideal.
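As a rough illustration of why PCIe 5.0 softens the lane-count complaint: per-lane throughput roughly doubles each generation, so 16 Gen5 lanes carry about as much data as 64 Gen3 lanes. The per-lane figures below are the usual approximate post-encoding rates, not vendor-verified numbers.

```python
# Approximate usable bandwidth per lane (GB/s) after link encoding overhead.
PER_LANE_GBS = {"PCIe 3.0": 0.985, "PCIe 4.0": 1.969, "PCIe 5.0": 3.938}

def link_bandwidth_gbs(gen: str, lanes: int) -> float:
    return PER_LANE_GBS[gen] * lanes

for gen in PER_LANE_GBS:
    for lanes in (16, 24):
        print(f"{gen} x{lanes}: ~{link_bandwidth_gbs(gen, lanes):.0f} GB/s")
```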
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
For AMD I kind of agree; it was a big part of why AMD switched their GPUs to a chiplet design a few years ago, because they just weren't making progress with the monolithic approach.

In Nvidia's case, though, the 4090 is not the highest end, just like the 4080 is not priced according to what it would have been in previous generations (it's really more like what we would have called a 4070). Nvidia are using DLSS 3.0 to make it seem like a giant leap, when in reality, if you look at pure rasterization performance and compare the 4070/4080 especially, you'll see that the 4080 and below are what would previously have been the 3060/3070 tier, and in the 4090's case it's sort of a mix of what a 4080 would have been, just clocked much higher and with more memory.

For the 4090, Nvidia's specs even give this away: the Lovelace architecture supports 96MB of L2 cache on the GPU, but the AD102 core they've shipped in the 4090 has only 72MB of L2, 25% less than what the architecture supports. The "fully enabled" GPU may come in a 4090 Ti or Titan; historically there have been only minor differences with Ti variants, but this time that won't be the case, because the L2 cache increase dramatically benefits raw ray-tracing performance. So if we get a fully enabled AD102 it will be a monster performer; that much more cache is a big deal. I would guess we won't even see a consumer GPU with the fully enabled chip, and that the Ti will also be cut down slightly, just to improve yields and save Nvidia cost.

Regarding CPUs, see my post above: the mainstream desktop CPUs are not the high end of the architecture and haven't been for a long, long time. We haven't had a high-end full CPU launch first since 2008-9. Maybe one of AMD's Bulldozer cores was the best they could do, but those were so slow and hot it's kind of irrelevant for this discussion.

I agree with you about UltraFusion being tricky to manufacture, though, which calls into even more question whether we'll actually see a quad variant of the M2 or M3. I think they have some stacked design in a patent someplace, so it's possible they could do something there down the road, but we'll have to wait and see. Apple could introduce the M2 Ultra Studio and an M2 15" MacBook Pro, and start making M3s and sit on them for months in order to build up enough Mac Pro CPUs, but that isn't how Tim Cook rolls.

You're right that the chiplet-style design does help the architecture advance faster, which we've seen AMD and now Apple have success with over the past few years. My point is that it would require Apple to sit on a pile of "less good" CPUs that could be going into MacBook Airs or Mac Minis, and that's why I don't think we'll ever see a Mac Pro CPU released to the public with a newer core than other machines.

If there's one thing modern Apple is known for, it's perfecting their supply chain, and I don't see any scenario where they have months of partially working M3s sitting around costing them money to store without earning anything by going into a machine they can sell. It wouldn't completely shock me to see an M3 Air released this summer/fall, but I think delays and having to re-architect the M2 have slightly upset their year-over-year cadence. WWDC machine announcements will tell us more: if they introduce a 15" Air with an M2, I would bet almost anything we won't see an M3 Mac Pro until 2024, or they'll launch with an M2 something.
The point was that the chips I mentioned are big pieces of silicon. It doesn’t matter whether they are the very biggest that Intel etc. makes.

The Ultra is 2x Max chips. The Max has significantly fewer transistors than even an i9 13900K, so no need to compare to a 56 core Xeon etc.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
I saw you or someone else mentioning TensorFlow earlier, and that isn't even in play for the latest work being done in ML/AI research; people have moved on.
Lmfao 🤣😆. FYI, TensorFlow enjoys first-class Apple support and (along with my favorite, PyTorch) priority Metal 3 (ASi/AMD) compute support. You can train or run inference on models with a MacBook Pro; there are even people running LLaMA 65B on an M2 MacBook Pro with 96GB of RAM, because unified memory allows those huge LLMs to run.
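A rough sketch of why the unified-memory capacity matters for running a ~65B-parameter model locally: weight storage alone is parameters times bytes per parameter. This ignores KV-cache and activation overhead, so real requirements are somewhat higher.

```python
# Weight memory for a 65B-parameter model at different precisions.
def weights_gb(n_params_billions: float, bits_per_param: int) -> float:
    return n_params_billions * 1e9 * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):  # fp16, int8, 4-bit quantization
    print(f"65B params at {bits}-bit: ~{weights_gb(65, bits):.0f} GB of weights")
# fp16 (~130 GB) does not fit in 96 GB of unified memory; 4-bit (~33 GB) easily does.
```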

WWDC will be busy, but not just because of the glasses (their API is borrowed from the iPhone; AR/XR is important, but most of the building blocks are already in iOS), nor the new watchOS, nor the regular macOS/iOS updates. Apple is working hard on improving its own (not user-facing) AI/ML inference and training capabilities, developers' cloud and local training/inference, and of course VR development, which requires huge GPU power.

I'm confident Apple will introduce some kind of dedicated GPGPU or compute device, either as a dGPU or an eGPU (not like the ones using TB3). As for the Mac Pro, not even MacRumors' sacred cow Gurman has actual clues about its configuration; he just writes clickbait. His sources are the same guys who sold the story that UltraFusion was actually two M1 Max SoCs left uncut from the wafer and baked as a single chip, which Apple later decided to split in two. That's untrue: UltraFusion uses a silicon interposer bridge chip on top of a pair of M2 Max dies, via TSMC's InFO-LSI.

There are more clues, such as the UltraFusion 2 buses being widened.

Next week Apple should launch that 15" M2 MacBook, and at the WWDC opening, the new Mac Pro, maybe in a trashcan/cheesegrater form factor or some barely upgradeable mini-tower cube with no PCIe, SATA, etc. A dGPU/eGPU might be possible either as an internal AIC or as an eGPU over custom or standard OCuLink cables (as GPD and others are doing, given TB4 is not fit for GPUs).
 

novagamer

macrumors regular
May 13, 2006
234
315
Lmfao 🤣😆. FYI, TensorFlow enjoys first-class Apple support and (along with my favorite, PyTorch) priority Metal 3 (ASi/AMD) compute support. You can train or run inference on models with a MacBook Pro; there are even people running LLaMA 65B on an M2 MacBook Pro with 96GB of RAM, because unified memory allows those huge LLMs to run.

WWDC will be busy, but not just because of the glasses (their API is borrowed from the iPhone; AR/XR is important, but most of the building blocks are already in iOS), nor the new watchOS, nor the regular macOS/iOS updates. Apple is working hard on improving its own (not user-facing) AI/ML inference and training capabilities, developers' cloud and local training/inference, and of course VR development, which requires huge GPU power.

I'm confident Apple will introduce some kind of dedicated GPGPU or compute device, either as a dGPU or an eGPU (not like the ones using TB3). As for the Mac Pro, not even MacRumors' sacred cow Gurman has actual clues about its configuration; he just writes clickbait. His sources are the same guys who sold the story that UltraFusion was actually two M1 Max SoCs left uncut from the wafer and baked as a single chip, which Apple later decided to split in two. That's untrue: UltraFusion uses a silicon interposer bridge chip on top of a pair of M2 Max dies, via TSMC's InFO-LSI.

There are more clues, such as the UltraFusion 2 buses being widened.

Next week Apple should launch that 15" M2 MacBook, and at the WWDC opening, the new Mac Pro, maybe in a trashcan/cheesegrater form factor or some barely upgradeable mini-tower cube with no PCIe, SATA, etc. A dGPU/eGPU might be possible either as an internal AIC or as an eGPU over custom or standard OCuLink cables (as GPD and others are doing, given TB4 is not fit for GPUs).
ok

edit.: I concede the unified memory allows large model training.

The rest of this post sounds like word salad to me but maybe that insanity will come out as a product and shock us all haha.
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
The Ultra is 2x Max chips. The Max has significantly fewer transistors than even an i9 13900K, so no need to compare to a 56 core Xeon etc.

The Max is smaller than the Intel desktop chips? What are you smoking?

“ …Intel's latest top-of-the-range 13th Generation Raptor Lake for desktops has a die size of 252.65 mm^2, up from 215.25 mm^2 in the case of Alder Lake CPU for desktops…..”


Whereas

“…
The M1 Max is truly immense – Apple disclosed the M1 Pro transistor count to be at 33.7 billion, while the M1 Max bloats that up to 57 billion transistors. AMD advertises 26.8bn transistors for the Navi 21 GPU design at 520mm² on TSMC's 7nm process; Apple here has over double the transistors at a lower die size thanks to their use of TSMC's leading-edge 5nm process. Even compared to NVIDIA's biggest 7nm chip, the 54 billion transistor server-focused GA100, the M1 Max still has the greater transistor count.

In terms of die sizes, Apple presented a slide of the M1, M1 Pro and M1 Max alongside each other, and they do seem to be 1:1 in scale. In which case, the M1 we already know to be 120mm², which would make the M1 Pro 245mm², and the M1 Max about 432mm². …”

[Image: Die-Sizes.jpg]





The 12900K/13900K are mainly a CPU die with a smaller-than-entry-level GPU tacked on. The Pro/Max dies are closer to entry/mid-size GPU dies with a few CPU cores tacked on!



Pretty sure the M2 Pro is bigger than the Intel desktop chips (the M2 had die bloat relative to the M1). The Max 'smokes' the Intel solution in terms of die size. Note that the Max picture above doesn't even include the UltraFusion connector (and its associated transistors).




Even Intel's Xeon SP Gen 4 (Sapphire Rapids) tiles ("…Intel has stated that each of its silicon tiles are ~400 mm2…") are not significantly bigger than the Max.

And if you want to handwave that away as just die size, not transistors: Apple is on N5 and Intel is on their own variant of 7.

The Max is so large that the Ultra is pretty much at the reticle limit of the InFO-LSI packaging technology. TSMC can't make anything significantly bigger than the Ultra with that tech. It is slightly dubious whether two of the bloated M2 Max dies will even fit (or they just go to the more expensive CoWoS-LSI, which any attempted quad would have to use anyway).


The only chip dies that make the Max look "small" are the very top-end Nvidia dies, which come in around the 600 mm² range.
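As a rough companion to the die sizes above, here is a commonly used approximation for gross dies per 300 mm wafer, fed with the thread's approximate die areas. It ignores scribe lines and edge exclusion, so treat the outputs as ballpark figures only.

```python
# Gross (candidate) dies per wafer, before yield: pi*r^2/A - pi*d/sqrt(2*A).
import math

def gross_dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

for name, area in [("Raptor Lake (~253 mm^2)", 253),
                   ("M1 Max (~432 mm^2)", 432),
                   ("Two Max dies, Ultra-sized (~864 mm^2)", 864)]:
    print(f"{name}: ~{gross_dies_per_wafer(area)} candidate dies per 300 mm wafer")
```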
 
  • Like
Reactions: novagamer

IconDRT

macrumors member
Aug 18, 2022
84
170
Seattle, WA
Wow, the level of speculation, hehe! If you want to join the current AI race, get an M1 Air and access to some cloud… When this Mac Pro comes out there will be no need for the services it provides. We are in a paradigm shift. Right now.
Or better yet, learn to farm, get some land, build a Faraday cage to live in, and hibernate until John Connor calls 😂
The most sensible post in months… although, as steve123 points out, the Faraday cage will limit John Connor to smoke signals and/or carrier pigeon for the call that Skynet is ready to clean house.
 
  • Haha
Reactions: steve123

novagamer

macrumors regular
May 13, 2006
234
315
The Max is smaller than the Intel desktop chips? What are you smoking?

Exactly. People are just making stuff up without doing any research at all. I have no idea what someone has to benefit out of posting nonsense on a technical forum in their free time but, well, here we are.

Pretty sure the M2 Pro is bigger than the Intel desktop chips (the M2 had die bloat relative to the M1). The Max 'smokes' the Intel solution in terms of die size. Note that the Max picture above doesn't even include the UltraFusion connector (and its associated transistors).

The only chip dies that make the Max look "small" are the very top-end Nvidia dies, which come in around the 600 mm² range.
Bingo.
Yep. The 13900K is estimated to have ~26 billion transistors. The M2 Max has 67 billion. Yes, it's on a smaller node, but being on the bleeding edge doesn't exactly make fabrication easier.
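A quick density comparison built only from numbers already quoted in this thread (the ~26B-transistor / ~253 mm² estimate for the 13900K and the 57B / ~432 mm² figures cited for the M1 Max). Both die sizes are the thread's approximations, not official figures.

```python
# Transistor density from the thread's own numbers (millions of transistors per mm^2).
chips = {
    "i9-13900K (Intel 7, ~253 mm^2)": (26e9, 253.0),
    "M1 Max (TSMC N5, ~432 mm^2)":    (57e9, 432.0),
}
for name, (transistors, area_mm2) in chips.items():
    density_mtr = transistors / area_mm2 / 1e6
    print(f"{name}: {transistors/1e9:.0f}B transistors, ~{density_mtr:.0f} MTr/mm^2")
```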

WWDC will be interesting, and I want an M3 mega Mac Pro, but:

- The Mac Pro is a niche market

- WWDC will introduce an entirely new platform in the VR/AR headset. This will be the main focus, and is the first entirely new platform that's going to run full, specialized Apps in a unique way since the iPhone / arguably the iPad. People are delusional if they think Apple will focus on some gold magic interconnects to support an ML accelerator when...

- Apple's recent history does not point to them enabling development for leading-edge technology. See: no programmability for regular developers with Afterburner's FPGA, which would have been *awesome* and not a big ask, since they almost certainly had those tools internally. Such a wasted opportunity, and it points to Apple's dysfunction in the non-Mac/iOS developer space.

- Most HPC shops haven't been using Macs for years. I've reviewed many procurement orders and Mac Pros have only come up a few times. Like, less than a percent. Far less.

- Regarding what I said about TensorFlow: yes, Apple contributed and helped make it more performant on Apple Silicon. That is great. It is also a sign that they aren't on the leading edge of ML/AI development, because TensorFlow is old hat. I'd say it's gone from double-digit percentage importance a couple of years ago to low single digits now. Source: I literally review new research papers nearly every day and am very tapped in to what is going on in that space. The people in here have no idea what they're talking about. Running and training an open-source LLM locally has nothing to do with actually developing the technology. Training an existing model, maybe. It's not the same thing as HPC development, which is almost universally happening on Linux. Even setting up VS Code on a Mac is a pain compared to Windows or Linux. Nobody is using Xcode for HPC work, unless it's for some weird experiment.

I'm going to bow out of this thread until after WWDC. If we don't get that magic Mac Pro I hope some people just admit they were making things up, and get the hell out of the forum. If I'm wrong and we get some insane HPC box I'll gladly admit I was wrong, and I'll be happy about it. But I'd put those odds very, very low.
 
Last edited:

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Exactly. People are just making stuff up without doing any research at all. I have no idea what someone has to benefit out of posting nonsense on a technical forum in their free time but, well, here we are.
MR is not exactly a scientific community, I grant you that. There is a good mix of feelings, opinions, knowledge, wishful thinking and silo thinking.
 
  • Like
Reactions: AAPLGeek

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
The Max is smaller than the Intel desktop chips? What are you smoking?

“…Intel's latest top-of-the-range 13th Generation Raptor Lake for desktops has a die size of 252.65 mm^2, up from 215.25 mm^2 in the case of Alder Lake CPU for desktops…”

Whereas

“…The M1 Max is truly immense – Apple disclosed the M1 Pro transistor count to be at 33.7 billion, while the M1 Max bloats that up to 57 billion transistors. AMD advertises 26.8bn transistors for the Navi 21 GPU design at 520mm² on TSMC's 7nm process; Apple here has over double the transistors at a lower die size thanks to their use of TSMC's leading-edge 5nm process. Even compared to NVIDIA's biggest 7nm chip, the 54 billion transistor server-focused GA100, the M1 Max still has the greater transistor count.

In terms of die sizes, Apple presented a slide of the M1, M1 Pro and M1 Max alongside each other, and they do seem to be 1:1 in scale. In which case, the M1 we already know to be 120mm², which would make the M1 Pro 245mm², and the M1 Max about 432mm². …”

[Image: Die-Sizes.jpg]

The 12900K/13900K are mainly a CPU die with a smaller-than-entry-level GPU tacked on. The Pro/Max dies are closer to entry/mid-size GPU dies with a few CPU cores tacked on!

Pretty sure the M2 Pro is bigger than the Intel desktop chips (the M2 had die bloat relative to the M1). The Max 'smokes' the Intel solution in terms of die size. Note that the Max picture above doesn't even include the UltraFusion connector (and its associated transistors).

Even Intel's Xeon SP Gen 4 (Sapphire Rapids) tiles ("…Intel has stated that each of its silicon tiles are ~400 mm2…") are not significantly bigger than the Max.

And if you want to handwave that away as just die size, not transistors: Apple is on N5 and Intel is on their own variant of 7.

The Max is so large that the Ultra is pretty much at the reticle limit of the InFO-LSI packaging technology. TSMC can't make anything significantly bigger than the Ultra with that tech. It is slightly dubious whether two of the bloated M2 Max dies will even fit (or they just go to the more expensive CoWoS-LSI, which any attempted quad would have to use anyway).

The only chip dies that make the Max look "small" are the very top-end Nvidia dies, which come in around the 600 mm² range.
You’re right, posted too quickly (from my phone). The site I’d looked at was actually describing a Ryzen 7 chip (which was 6.5bn, not 65bn). Whoops!

The M1 Max is indeed a huge chip at 57bn transistors.
 

dgdosen

macrumors 68030
Dec 13, 2003
2,817
1,463
Seattle
Was UltraFusion a stop gap solution? Do you all think it will be used again? Could it ever be used on a 'Jade4C' die?

If UltraFusion isn't in future plans, what is Apple's most logical choice of TSMC packaging for its beefier chips?
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
Saw these recently and thought they were interesting. It sounds like nm numbers are really just marketing numbers and not real physical measurements anymore (maybe loosely associated with a fin cross-section measurement at most):


 
Last edited:
  • Like
Reactions: AAPLGeek and Mago

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
Replying to @dgdosen.

UF allows Apple to make their top end chip from two MBP SoCs; it probably wouldn’t be a cost-effective product otherwise.

To make a 4-way chip, the Max would need (at least) two UF connectors - or some type of central hub to link the SoCs together.
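A tiny sketch of the interconnect-count point: a fully connected mesh of n dies needs n(n-1)/2 die-to-die links, so beyond two dies each Max would need more than one UltraFusion-style edge (a ring, or the central hub mentioned above, reduces the per-die count).

```python
# Count the links needed for a fully connected mesh of n dies.
from itertools import combinations

def links_needed(n_dies: int) -> int:
    return len(list(combinations(range(n_dies), 2)))

for n in (2, 4):
    per_die = links_needed(n) * 2 // n  # each link touches two dies
    print(f"{n} dies, fully connected: {links_needed(n)} links, {per_die} link port(s) per die")
# A ring of 4 dies needs only 2 ports per die, and a central hub needs 1, at the
# cost of extra hops or an additional interposer/hub die.
```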
 