
mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
False, it does double the risk because of Ultra Fusion and the die size. Bigger die = higher risk. Do you really think it's as simple as just connecting two dies? Hell no. The YouTuber I posted already mentioned that.
Random youtube videos are not definitive resources. Also, there's a high chance you have misunderstood whatever they said.

Die size doesn't change from M2 Max to M2 Ultra. Same die, same manufacturing process. Ultra literally is just connecting two known good dies. The packaging process they use to assemble two M2 Max die together has some dropout (as all manufacturing processes do, including the packaging of single die into Max chips), but I bet it's nowhere near as bad as your baseless claim.

If they ARE interested in AI, then they should make their own AI chip and workflow just like Amazon, Tesla, Microsoft, and more. They already have their own. The only problem is that GPUs are way more common and widely used, making up 90% of all AI servers. That's how Nvidia dominates AI with the CUDA workflow. But other than that, Nvidia sucks at performance per watt.

But Apple can't even make a workstation-grade CPU and GPU with upgradability and expandability, so it's almost impossible for them to compete in AI markets.
You're revealing here that you don't understand the first thing about so-called "AI" and what Apple's strategy is.

By choice, Apple more or less exclusively designs and builds silicon and systems intended for end users to buy. As such, they are quite appropriately focused on the kind of compute resources required to run pre-trained models on-device. They're doing great at it, they've been pretty much at the cutting edge of this since iPhone X, and the M-series chips have continued it. The Apple Neural Engine is not only quite fast, it's insanely power efficient.

Apple doesn't compete against Nvidia's giant supercomputerish stuff. That's fine. They do not need to, because that isn't a market they want or need to be in. All that matters is whether they can run trained models fine on-device, and they're doing great at that.
 

sunny5

macrumors 68000
Jun 11, 2021
1,835
1,706
Random youtube videos are not definitive resources. Also, there's a high chance you have misunderstood whatever they said.

Die size doesn't change from M2 Max to M2 Ultra. Same die, same manufacturing process. Ultra literally is just connecting two known good dies. The packaging process they use to assemble two M2 Max die together has some dropout (as all manufacturing processes do, including the packaging of single die into Max chips), but I bet it's nowhere near as bad as your baseless claim.


You're revealing here that you don't understand the first thing about so-called "AI" and what Apple's strategy is.

By choice, Apple more or less exclusively designs and builds silicon and systems intended for end users to buy. As such, they are quite appropriately focused on the kind of compute resources required to run pre-trained models on-device. They're doing great at it, they've been pretty much at the cutting edge of this since iPhone X, and the M-series chips have continued it. The Apple Neural Engine is not only quite fast, it's insanely power efficient.

Apple doesn't compete against Nvidia's giant supercomputerish stuff. That's fine. They do not need to, because that isn't a market they want or need to be in. All that matters is whether they can run trained models fine on-device, and they're doing great at that.
He is not a random YouTuber. He even had an interview with an Apple executive. And the source is from Apple.

Like I said, it's NOT just connecting two chips; you still need silicon wafers, just like when making new chips. If it were easy, how come they don't make an Extreme chip out of 4x Max chips? Your logic fails, since Apple isn't even thinking about making an Extreme chip, which should be easy to make according to your logic.

There is no strategy when Apple is falling behind.
 
  • Haha
Reactions: Romain_H

sunny5

macrumors 68000
Jun 11, 2021
1,835
1,706
I see your response to my serious post was to laugh at it. That's hardly polite or collegial. But if you want to be that way, fine. Let me reply in kind:

I can't respond substantively to you for three reasons:

1) You would need to understand math, which you don't. And I can't teach you enough math for us to have a substantive discussion.

2) You would need to have basic English reading comprehension skills, which you don't (in my first paragraph I was clearly referring to the change in risk in going to a monolithic Ultra die, not an Ultra formed by fusing two Max chips, yet you thought it was the latter). And I can't teach you enough English for us to have a substantive discussion.

3) You would need to have sufficient social skills not to act like a child in response to something you disagree with. And I can't teach you enough social skills for us to have a substantive discussion.
Because you failed to prove it, and it's misinformation. Bigger die = higher risk. It doesn't change the fact that the Max itself is already big and the Ultra doubles the risk because of its size. Is it really that hard to understand?
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Further suppose the critical defect rate is uniform, and that 70% of the Max chips are critical-defect-free (CDF). Then to get a CDF monolithic Ultra, you'd need both "Max halves" to be CDF, and the chance of that is 70% × 70% = 49%.
FYI, there's one very important thing missing in your analysis - by the time they try to assemble two Max die into one Ultra, they already know those die have no critical defects. Defect testing takes place before singulating die from the wafer or packaging them.

So, there's no need to analyze the risk of assembling an Ultra from die with a critical defect. By that time, they've already screened out the bad die. There's a small chance that singulation (typically sawing the wafer apart with a thin-kerf diamond blade) and other handling done during packaging breaks a die which tested OK while it was still part of a wafer, but overall those processes are relatively low risk to the silicon. The main yield impact with something like Ultra is the advanced packaging - do they manage to successfully make the ~10,000 connections that compose an Ultra Fusion interconnect?

On that note, I would not be terribly surprised if Apple used redundancy to improve Ultra Fusion interconnect yield. Repair by providing redundancy is a very common practice in modern silicon design. The classic example is SRAM - any medium to large SRAM array will be structured as an assembly of slices, with at least one more slice than is required. If a defect takes out one of the slices, that slice can be disabled and the spare slice can substitute for it. (If there's no defect in the array at all, the spare slice is just dark silicon. Or, in some cases, it becomes a superior and more expensive version of the chip with a larger cache - Intel's quite fond of this.)
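
If you want to see why a single spare matters so much, here's a rough back-of-the-envelope sketch. The lane count and per-lane yield are made-up illustrative numbers (nothing Apple or TSMC has published); the point is just the shape of the math:

[CODE]
# Toy model: independent defects per interconnect "lane", with made-up numbers.
from math import comb

def yield_no_spare(n_lanes: int, p_lane_good: float) -> float:
    """All n_lanes must be good for the part to work."""
    return p_lane_good ** n_lanes

def yield_one_spare(n_lanes: int, p_lane_good: float) -> float:
    """n_lanes needed, n_lanes + 1 built; at most one may be bad."""
    total = n_lanes + 1
    p_all_good = p_lane_good ** total
    p_exactly_one_bad = comb(total, 1) * (1 - p_lane_good) * p_lane_good ** (total - 1)
    return p_all_good + p_exactly_one_bad

# Hypothetical: 100 lanes, each 99.9% likely to come out defect-free.
print(f"no spare:  {yield_no_spare(100, 0.999):.3f}")   # ~0.905
print(f"one spare: {yield_one_spare(100, 0.999):.3f}")  # ~0.995
[/CODE]

With those toy numbers, a single spare takes you from roughly 90% to roughly 99.5% assembly yield, which is why repair-by-redundancy shows up everywhere from SRAM arrays to interconnects.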
 
  • Like
Reactions: Chuckeee

Pet3rK

macrumors member
May 7, 2023
57
34
Apple Silicon itself has proven that it's not good for professionals who need high specs, like a workstation. The Mac Pro will die, and I don't think Apple is interested in professional markets at all. Sadly, it will affect the Mac's major markets such as video and music due to the hardware limitations.

Truth be told, Apple CAN create their own markets, but their GPU performance is dramatically poor and many software vendors aren't even interested in the Mac at all. There are so many issues with macOS and the Mac itself that, as long as Apple stays stubborn, I don't think they're going to be solved. They are limiting themselves too much, and it won't work like the iPhone's iOS.

At this point, the pro market will die slowly.
GPU performance isn't the measurement to use for professional work, even at the high end. The high CPU core count and high RAM capacity make for very attractive hardware. I'm just waiting for them to go even more insane on RAM size for the Pro chip variants.

macOS is mature at this point, and its current setup as a native *nix machine with popular mainstream apps is good. A warning from my professor: Mac and Linux are supported and there are tools to help; Windows users are on their own now.
 
Last edited by a moderator:

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
Further suppose the critical defect rate is uniform, and that 70% of the Max chips are critical-defect-free (CDF). Then to get a CDF monolithic Ultra, you'd need both "Max halves" to be CDF, and the chance of that is 70% × 70% = 49%.
FYI, there's one very important thing missing in your analysis - by the time they try to assemble two Max die into one Ultra, they already know those die have no critical defects. Defect testing takes place before singulating die from the wafer or packaging them.

So, there's no need to analyze the risk of assembling an Ultra from die with a critical defect. By that time, they've already screened out the bad die...
Nope, you didn't understand my analysis. The section you were quoting was, as clearly specified, for a hypothetical monolithic Ultra (where you put the features of two Max's onto a single die), not for the actual fused Ultra.

I address the fused Ultra separately. There, I'm clearly assuming exactly what you state: That defect testing takes place prior to fusion.
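
To spell out the two cases side by side, here's the toy calculation, sticking with my illustrative 70% figure; the packaging yield is just a placeholder, not a published number:

[CODE]
# Illustrative yield comparison from the discussion above (not real fab data).

MAX_CDF_YIELD = 0.70   # assumed chance a Max-sized area is critical-defect-free

# Hypothetical monolithic Ultra: both "Max halves" sit on one die,
# so both must be defect-free on the same piece of silicon.
monolithic_yield = MAX_CDF_YIELD ** 2              # 0.49

# Actual fused Ultra: the two Max dies are tested and known good before
# bonding, so only the packaging step can scrap the assembled part.
PACKAGING_YIELD = 0.95   # placeholder assumption, not a published figure
fused_yield = PACKAGING_YIELD

print(f"monolithic Ultra yield: {monolithic_yield:.2f}")   # 0.49
print(f"fused Ultra yield:      {fused_yield:.2f}")        # 0.95
[/CODE]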
 
Last edited:

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Nope, you didn't understand my analysis. The section you were quoting was, as clearly specified, for a hypothetical monolithic Ultra, not for the actual fused Ultra.

I address the fused Ultra separately. There, I'm clearly assuming exactly what you state: That defect testing takes place prior to fusion.
Ah, didn't catch that you were analyzing a counterfactual. Never mind then.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
Ah, didn't catch that you were analyzing a counterfactual. Never mind then.
I did that because the poster to whom I was responding was talking about how the risk changes when you double the die size, so I wanted to address that first. I then compared and contrasted the defect risk between that approach and Apple's approach of doubling the area while keeping the die size the same (by, as you know, bridging two dies).

Though now, in re-reading what the poster wrote, I think they actually meant doubling the area, not doubling the die size, even though the latter was the language they used. Oh well...
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
He is not a random YouTuber. He even had an interview with an Apple executive. And the source is from Apple.
Which claim is sourced from Apple? I can't read your mind. And being a big enough media personality to land an interview with an Apple executive is not the same thing as actually knowing much about engineering.

Speaking of which, I have now spent some time watching through part of that video. I don't think he's even supporting the claims you're making in this thread. He's also spending a lot of time talking about a thing which doesn't exist, and likely never could - a supposed "M2 Extreme" made from four M2 Max die. Thanks to a variety of information, there's no reason to believe Apple even designed M2 Max to support this, but he seems to be talking about it as if Apple had planned for it but then decided not to build it because it would be disappointing. Just another youtuber who takes rumors too seriously.

Like I said, it's NOT just connecting two chips; you still need silicon wafers, just like when making new chips. If it were easy, how come they don't make an Extreme chip out of 4x Max chips? Your logic fails, since Apple isn't even thinking about making an Extreme chip, which should be easy to make according to your logic.

There is no strategy when Apple is falling behind.
Falling behind what? Products designed for markets they aren't even trying to be in? You're mad at Apple because they don't make the exact same things Nvidia does. Not very smart.

As for using silicon, yes, Apple is using an interconnect based on silicon. Here, let me show you what a real supporting link looks like. People have disassembled Apple's "Ultra" packages and imaged them, so we have some idea of what's going on with Ultra Fusion.


It's a narrow bridge somewhat like Intel EMIB. Does manufacturing this bridge pose the same yield risks as the base Max die itself? No, because these bridges don't contain active circuitry, just wires and bump sites. The bump pitch of the bridge (the spacing between the solder microbumps which connect it to the Max chips) is 25µm, or 25000nm. That means the wires in the bridge don't need to be ultra-fine pitch; an older lithography node like 65nm will do just fine.

They're also much smaller in area than the active silicon - just 18.8 x 2.88 mm for the M1 Ultra's bridge, about 54 mm². No transistors, small area, and coarse metal feature size all mean it's way cheaper to make than the M2 Max die.
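
If anyone wants to sanity-check those figures, the arithmetic is simple enough to do in a few lines (the bridge dimensions and bump pitch are the teardown numbers quoted above; everything else is just multiplication):

[CODE]
# Back-of-the-envelope check of the M1 Ultra bridge figures quoted above.

bridge_length_mm = 18.8
bridge_width_mm = 2.88
area_mm2 = bridge_length_mm * bridge_width_mm
print(f"bridge area: {area_mm2:.1f} mm^2")          # ~54.1 mm^2

bump_pitch_um = 25.0                                # microbump pitch
bumps_per_mm = 1000.0 / bump_pitch_um               # 40 bump sites per mm
bumps_per_row = bridge_length_mm * bumps_per_mm
print(f"bump sites per row: {bumps_per_row:.0f}")   # ~752 along the long edge
[/CODE]

At that pitch you get about 750 bump sites per row along the long edge, so a dozen or so rows already puts you in the ~10,000-connection ballpark.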
 

hovscorpion12

macrumors 68040
Sep 12, 2011
3,043
3,122
USA
It is difficult to speculate about these things since costs can be hard to predict. For example, one way to increase performance would be to boost the clocks, but we don't know what the realistic clock limits would be or how the rest of the system (caches, memory) would play with it.

Hypothetically, though? A theoretical M3 Extreme using 4x M3 Max without any changes in clocks or tech should be roughly on par with or faster than the 4090 (in all workloads except ML training) while consuming under 200W. A hypothetical higher-clocked M3 Extreme at 300W would be 20-30% faster than the 4090. But that chip would also be very large and extremely expensive.

A hypothetical Ultra-series Apple GPU using higher clocks, faster RAM, and the architectural improvements I mention in my previous post could be 50% faster than the 4090. The big question is whether such a design would be possible without sacrificing the efficiency and performance of the mobile GPUs, which are Apple's primary market. So far they have been focusing on lower-clocked products and reusing mobile tech for the desktop.



Not really. Fast memory interconnect is the cornerstone of Apple GPU tech, and dedicated GPUs traditionally have a problem with that. It would be possible to build a very wide GPU interconnect, but that would be an extremely expensive and power-hungry enterprise. Nvidia uses this kind of tech in their data center GPUs, but we are talking about systems that cost over $100k.

What I could see for Apple is using an additional interconnect board to link together two boards (each containing an SoC) and presenting them as a single device to the system, but that stuff is tricky to do right, and also very expensive. I doubt that the low volume of the Mac Pro can justify the R&D investment.

The M2 Ultra draws less power than traditional PCs while still delivering impressive performance. In stress tests, the M2 Ultra peaked at around 331 watts of power consumption, which is still lower than what a Core i9 or an RTX 4090 can reach individually. The Apple Mac Studio with the most powerful M2 Ultra has a Thermal Design Power (TDP) of 90W, which is 35W lower than the base power consumption of the desktop Intel Core i9-13900K.
 

sunny5

macrumors 68000
Jun 11, 2021
1,835
1,706
The M2 Ultra draws less power than traditional PCs while still delivering impressive performance. In stress tests, the M2 Ultra peaked at around 331 watts of power consumption, which is still lower than what a Core i9 or an RTX 4090 can reach individually. The Apple Mac Studio with the most powerful M2 Ultra has a Thermal Design Power (TDP) of 90W, which is 35W lower than the base power consumption of the desktop Intel Core i9-13900K.
Pointless, since the M2 Ultra is NOT even close to the RTX 4090's performance. Where is your proof?
 

AlastorKatriona

Suspended
Nov 3, 2023
559
1,029
And that's why the Mac is too restricted and yet has failed to expand. Why do you justify being restrictive when it's bad for the Mac?
Who said it was bad for Mac? Being focused on designing machines for specific workflows is fantastic. No one cares if that's not your workflow.
 
  • Haha
Reactions: sunny5

sunny5

macrumors 68000
Jun 11, 2021
1,835
1,706
Who said it was bad for Mac? Being focused on designing machines for specific workflows is fantastic. No one cares if that's not your workflow.
If no one cares, then that's why the Mac is failing. Why do you keep justifying the restrictions?
 

sunny5

macrumors 68000
Jun 11, 2021
1,835
1,706
Because it's not. This is a delusion in your own mind that you're hoping someone else is dumb enough to validate on the internet.
No proof, no thanks. You are only justifying restrictions by any means necessary, and that's why the Mac has such limited uses, especially after the transition from Intel Macs to Apple Silicon Macs.
 

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
Those links have NO INFORMATION about the comparison between the M2 Ultra and the RTX 4090 in terms of performance. You brought meaningless links after all. Where is the proof that the M2 Ultra's GPU performance is close to the RTX 4090's?
“Where is the proof?!?”

“Right here”

“No, that’s not proof!”

Ridiculous.
 

hovscorpion12

macrumors 68040
Sep 12, 2011
3,043
3,122
USA
Those links have NO INFORMATION about the comparison between the M2 Ultra and the RTX 4090 in terms of performance. You brought meaningless links after all. Where is the proof that the M2 Ultra's GPU performance is close to the RTX 4090's?

Nowhere in my post did I even MENTION performance relative to an RTX 4090. [I recommend re-reading my post.]

My post references the power draw versus performance of the M2 Ultra Mac Pro.

Post #77 asked about Apple GPU performance with a 300W power budget, and Post #78 replied.

My post indicates that since the M2 Ultra can already hit a peak of 300W+ at its current performance [which is half of the RTX 4090's], Apple could certainly push the Ultra chips even higher, given that they already have the headroom.
 

sunny5

macrumors 68000
Jun 11, 2021
1,835
1,706
Nowhere in my post did I even MENTION performance relative to an RTX 4090. [I recommend re-reading my post.]

My post references the power draw versus performance of the M2 Ultra Mac Pro.

Post #77 asked about Apple GPU performance with a 300W power budget, and Post #78 replied.

My post indicates that since the M2 Ultra can already hit a peak of 300W+ at its current performance [which is half of the RTX 4090's], Apple could certainly push the Ultra chips even higher, given that they already have the headroom.
You responded to my post asking for proof and posted three links. Now you're saying you never mentioned it? Why even bring up posts #77 and #78 when we're around post #300? It was you who directly responded about the proof with three links, and now you're saying something else.
 

hovscorpion12

macrumors 68040
Sep 12, 2011
3,043
3,122
USA
You responded to my post asking for proof and posted three links. Now you're saying you never mentioned it? Why even bring up posts #77 and #78 when we're around post #300? It was you who directly responded about the proof with three links, and now you're saying something else.

Absolutely nothing has changed in my response. There is NO reference in my post comparing RTX 4090 performance to the M2 Ultra.

My initial response [post #311] never quoted a post you made.

My response was to continue the discussion from posts #77 and #78, which is why my quoted reply was to the users who raised the question of power draw versus performance.
 