Mac16,10 is M4 mini
Mac16,11 is M4 Pro mini (full / binned)

Don't think 16,15 has been used yet.
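For anyone who wants to check which identifier their own machine reports, here's a minimal sketch in Python (assuming macOS; `sysctl -n hw.model` returns the identifier on the Macs I've tried, with `system_profiler`'s "Model Identifier" field as the authoritative fallback):

```python
import subprocess

def model_identifier() -> str:
    """Return this Mac's model identifier, e.g. 'Mac16,10' for the M4 mini."""
    # sysctl is fast and usually reports the identifier directly.
    ident = subprocess.check_output(["sysctl", "-n", "hw.model"], text=True).strip()
    if not ident.startswith("Mac"):
        # Fall back to system_profiler, whose "Model Identifier" field is canonical.
        report = subprocess.check_output(
            ["system_profiler", "SPHardwareDataType"], text=True)
        for line in report.splitlines():
            if "Model Identifier" in line:
                ident = line.split(":", 1)[1].strip()
    return ident

print(model_identifier())  # Mac16,10 (M4) or Mac16,11 (M4 Pro) on the new minis
```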
The Apple support page still hasn’t changed, it still lists Mac16,15. But yes, I guess it’s probably an error.


Note: It would be funny if this apparent error turns out to be a leak, and Mac16,15 is something else!
 
He doesn't have evidence, but M3 could well be more expensive to make for no good reason; the fact that Apple moved to M4 more quickly than it did through M1, M2, and M3, while still getting good gains, points that way. And he is @Confused-User, after all, so don't take it too personally.
@Confused-User doesn’t mention costs, but that difference is also relative. It isn’t “huge” or “much more,” just like yields aren’t “terrible.” Both N3 and N3E are leading-edge technology, they are both expensive. N3E is lower in cost relative to N3 because it has fewer layers and it is second-generation, so it’s a better deal, and that adds up over millions of chips, but it’s not like it is cheap. It uses the same equipment and fabs that N3 does.

The argument that the difference in production costs between N3 and N3E is so large that it would affect Apple’s architectural decisions (made more than a year before volume production) is not quite as unlikely as the idea that a relative improvement in production yields (which would only be certain long after those decisions were made) would affect them, but it’s still unlikely.
 
Loosening up the aggressive dimensional spec of N3B allowed for increased yields in N3E.
Reducing EUV layers increased the wafers-per-hour rate and reduced client design and simulation cost.

TSMC knows what is most cost-effective on a global scale. If you look at backside power delivery, TSMC were due to release it in N2P, only to pull it when a more efficient (and more cost-effective) solution was found.

Intel pushed forward with a version of this (PowerVia). TSMC are due to bring their improved solution (Super Power Rail) to the masses in A16.

 
The Apple support page still hasn’t changed, it still lists Mac16,15. But yes, I guess it’s probably an error.


Note: It would be funny if this apparent error turns out to be a leak, and Mac16,15 is something else!
Well, Apple has been rather chaotic with the Mac info on its website lately. The iMac page listed it as being 8K 120Hz capable with an external display. Then the 70W USB-C charger lists compatibility with the iMac and Mac mini. And now this.

Though I share your line of thinking: 16,15 must have been in use for at least a brief while; it is too far off from 10/11 to be just a typo.
 
Even if it has no "terrible yield", it can still be much more expensive, and if you have evidence against this claim, please elaborate.
I don't know about "much", but clearly it's more expensive. It has more layers and more EUV and it uses double-patterning EUV layers, which N3E does not. My point was that the claims of terrible yields (which BTW went hand-in-hand with claims of terrible performance, which were also false) were nonsense.

I highly doubt this, because silicon development takes time and it is very unlikely that such decisions are made based on actual sales data; the time frame would be way too tight. Or you could say that Apple designed two plans for the Pro and can pick one for mass production at the last minute. That is more realistic, but why waste effort on a chip that will never be released at all? I'm confused, and it looks like a waste of R&D money to me.
You're right, when I said that they would make choices for M5 based on their market intel, I should have said M6. ...which makes it an interesting question, what will they choose to do with M5?

It does not make much difference whether you just fuse off a core or physically remove one; that is why I'm calling both a 5-core cluster, because it is very unlikely they would redesign the cache and the interconnect just for this 5-core cluster.
I suppose it depends on your perspective. From a user's perspective, it doesn't matter. If you're interested from an engineering standpoint, it's different.

He said that he's speculating, while you are saying with confidence that it wasn't. Please provide your proof; otherwise we'll put this down to your love of arguing with almost everybody.
We've been through this, in this very forum, before. It's not my job to do your homework. However, I did mention the sources you can look at: TSMC's financials and public statements, which can to a reasonable extent be relied upon, since they are legally liable to their stockholders for lying. Had N3B been a failure, you'd have been able to see it in their numbers. (And also Intel wouldn't be using it for Lunar Lake, though that's a less-strong argument, as you could imagine a scenario where yields were bad but eventually improved... though then you'd have to invent an explanation for why the yield curve for N3B was so atypical.)
 
Well, Apple has been rather chaotic with the Mac info on its website lately. The iMac page listed it as being 8K 120Hz capable with an external display. Then the 70W USB-C charger lists compatibility with the iMac and Mac mini. And now this.

Though I share your line of thinking: 16,15 must have been in use for at least a brief while; it is too far off from 10/11 to be just a typo.
So I guess that means we can now expect to pair Mac16,15 with an 8K 120Hz Pro Display XDR!

On the subject of support-documentation errors, if you want to see one of Apple’s all-time train wrecks in that regard, see the discussion here: https://www.earlymacintosh.org/cd.html

(In short, in 1989, “Phil and Dave” gave the job of compiling the first developer CD archive of Mac system software to someone with no institutional knowledge, with predictable results, mistakes that were never corrected.)
 
Has anyone seen a video or something about how the new M4 Max chip is built?
I mean, does it have the same connector as the M1 and M2 Max chips, or will there be a standalone M4 Ultra chip?
I'm curious :)
They didn’t use the “die-shot” graphics that were used at launch to show the relative sizes of M1, M2, and M3, which also showed the “connector” (or its absence). As a result, we’re waiting for actual die shots, and I have no idea when or where those will appear, or even if they will be useful. Maybe others can guess.

Why Apple eliminated the graphic can be argued both ways. I tend to think it’s because there is a connector and they didn’t want its return to steal the show, taking the spotlight away from the Max itself… On the other hand, the lack of a visible connector would set off a firestorm of speculation, and no good can come from that. So better just to kill the graphic.
 

The M4 Max is now better than the mobile RTX 4070, which is 30% slower than the mobile RTX 4090, so I believe the M5 series will be better than the mobile RTX 4080 and the desktop 4080.
Don't forget about M4 Ultra next year! (I'm assuming it's M4 based.)
 
Don't forget about M4 Ultra next year! (I'm assuming it's M4 based.)
Let's hope it is not just 2x Max but something better with more GPU power; I would love to ditch Nvidia on my desktop even if I trade it for a few fewer P-cores.
 
Let's hope it is not just 2x Max but something better with more GPU power; I would love to ditch Nvidia on my desktop even if I trade it for a few fewer P-cores.
I doubt it, since joining two chips didn't really double the Max chip's performance, due to the limitations of UltraFusion and MCM. They would probably need to make a whole new chip just for Ultra and Extreme.
 
I doubt it, since joining two chips didn't really double the Max chip's performance, due to the limitations of UltraFusion and MCM. They would probably need to make a whole new chip just for Ultra and Extreme.

M2 Max - 1688 points in Blender
M2 Ultra - 3268 points in Blender

That's 193%, as close to linear scaling as it gets.

Meanwhile:

RTX 4070 (5888 shading units @ 2 GHz) - 5130 points in Blender
RTX 4090 (16384 shading units @ 2.2 GHz) - 11000 points in Blender

That's roughly 2x higher performance, despite almost 3x higher compute capability.

From the looks of it, Apple achieves better scaling from MCM and UltraFusion than Nvidia does from a monolithic chip.
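For anyone who wants to check the arithmetic, a quick sketch in Python using the rough scores and clocks quoted above (not official spec-sheet numbers):

```python
# Apple: delivered speedup of M2 Ultra over M2 Max (Blender scores above).
m2_max, m2_ultra = 1688, 3268
print(f"Ultra vs. Max: {m2_ultra / m2_max:.2f}x")  # ~1.94x, i.e. ~97% scaling efficiency

# Nvidia: delivered speedup vs. raw compute (shading units x approximate clock).
compute_4070 = 5888 * 2.0
compute_4090 = 16384 * 2.2
print(f"compute ratio:     {compute_4090 / compute_4070:.2f}x")  # ~3.06x
print(f"delivered speedup: {11000 / 5130:.2f}x")                 # ~2.14x
```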
 
M2 Max - 1688 points in Blender
M2 Ultra - 3268 points in Blender

That's 193%, as close to linear scaling as it gets.

Meanwhile:

RTX 4070 (5888 shading units @ 2 GHz) - 5130 points in Blender
RTX 4090 (16384 shading units @ 2.2 GHz) - 11000 points in Blender

That's roughly 2x higher performance, despite almost 3x higher compute capability.

From the looks of it, Apple achieves better scaling from MCM and UltraFusion than Nvidia does from a monolithic chip.
Are we sure Blender isn’t bandwidth sensitive? The 4090 has 2x the bandwidth of the 4070, which matches the performance increase. (I guess I am asking: if the 4090 had 3 times the bandwidth, would the scores go up, all else being the same?)
 
Are we sure Blender isn’t bandwidth sensitive? The 4090 has 2x the bandwidth of the 4070, which matches the performance increase. (I guess I am asking: if the 4090 had 3 times the bandwidth, would the scores go up, all else being the same?)

That is certainly one reason for Nvidia’s poor scaling. My main point was different, though: it was to refute the idea that UltraFusion has inherently bad scaling. The M1 family had bad scaling, but that was because of how the work was distributed. It got fixed in M2.
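Worth noting: if memory serves, the desktop 4070 and 4090 both run 21 Gbps GDDR6X, on 192-bit and 384-bit buses respectively, so the bandwidth ratio is almost exactly 2x. A quick sketch comparing the three ratios, using the same rough figures as above:

```python
# Bandwidth vs. compute vs. delivered performance, RTX 4070 -> RTX 4090 (desktop).
bw_ratio      = 1008 / 504                    # 2.00x (384-bit vs 192-bit @ 21 Gbps)
compute_ratio = (16384 * 2.2) / (5888 * 2.0)  # ~3.06x (shading units x clock)
perf_ratio    = 11000 / 5130                  # ~2.14x (Blender scores above)
print(f"bandwidth {bw_ratio:.2f}x, compute {compute_ratio:.2f}x, delivered {perf_ratio:.2f}x")
# Delivered performance tracks bandwidth far more closely than compute,
# consistent with Blender's GPU path being at least partly bandwidth-bound.
```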
 
It looks like the M5 will be a minor performance update over the M4, as the M5 will use N3P next year. The bigger update will need to wait for the N2 in 2026.
How do you know? People thought N3E would be worse than N3B, but here we are with the M4, and Apple could also improve the architecture, so we will see next year.
 
Let's hope it is not just 2x Max but something better with more GPU power; I would love to ditch Nvidia on my desktop even if I trade it for a few fewer P-cores.
The rumors are a little confusing. The best I can make out, parsing the oracle of Bloomberg, is that there will indeed be an Ultra that is 2x Max (have any third-party die shots showing UltraFusion appeared yet?), but there is also a desktop-specific Hidra chip in the works, maybe destined for the Mac Pro only, maybe by the end of 2025.

It's also all but officially confirmed that Nvidia themselves will be doing an ARM SoC next year (or more than one; the rumors aren't certain), but not what its (their) specifications will be.

It looks like the M5 will be a minor performance update over the M4, as the M5 will use N3P next year. The bigger update will need to wait for the N2 in 2026.
How do you know? People thought N3E would be worse than N3B, but here we are with the M4, and Apple could also improve the architecture, so we will see next year.
As @Mac_fan75 said, it's true that the improvements from the node alone will be relatively small compared to 2026 and 2027, but that doesn't mean Apple won't make larger architectural changes that may or may not allow immediate performance improvements (and may well improve with future software).

Not really. N3E may be a bit less dense but most here didn’t think it was worse than N3B.
True, there were a couple of voices that did imply that, but not many. Most kept ragging on N3B being a "failure". It may not have been as successful a node as TSMC had initially hoped, but we should all be so lucky to have such failures in our work.
 