Still can’t believe no one cared enough to take a die shot of the M4 Max.
Wouldn’t that get enough exposure for the costs involved? Or is it very hard to do?
I don’t know of any third-party die shots of M2 Max or M3 Max either. I think there was one of M1 Max, but I can’t find it (Twitter?) — I seem to recall it wasn’t very useful. The Apple image was far more informative.

I doubt it has anything to do with cost. It may have more to do with skill and access to specialized equipment, but I really don’t know. Are third-party die shots a thing in the Intel - Nvidia - AMD world?
 
I don’t know of any third-party die shots of M2 Max or M3 Max either. I think there was one of M1 Max, but I can’t find it (Twitter?) — I seem to recall it wasn’t very useful. The Apple image was far more informative.

I doubt it has anything to do with cost. It may have more to do with skill and access to specialized equipment, but I really don’t know. Are third-party die shots a thing in the Intel - Nvidia - AMD world?

High Yield published M3 Max die shots (in fact the whole M3 family), and I think Tech Insights did as well (though the latter is paywalled, or at least you need to sign up for it, I'm not sure). The M3 Max die shots confirmed the lack of an interconnect. Yes, the same groups publish die shots of other chips as well. However, you never know which group will do one and which will release the data publicly (e.g. SemiAnalysis did an in-depth die shot of the M2 but not the M3, while a Chinese Baidu user leaked/performed the M4 and Snapdragon X Elite die shots). The main purpose is that they let you analyze how big functional blocks are relative to the rest of the die, along with transistor estimates and therefore cost estimates (although TSMC and other fabs are fairly secretive about prices, especially with individual partners, and especially with partners like Apple). They also give a sense of how much silicon die area a particular design needs to achieve its performance compared to competitors.
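As a rough illustration of the cost-estimate side of this: the sketch below, under entirely assumed numbers (wafer price, defect density, and die size are all made up for illustration; none are known Apple/TSMC figures), shows how a die's measured area feeds into a per-die cost estimate via dies-per-wafer and a simple yield model.

```python
import math

def dies_per_wafer(die_w_mm: float, die_h_mm: float, wafer_d_mm: float = 300) -> int:
    """Standard approximation for whole dies on a circular wafer."""
    area = die_w_mm * die_h_mm
    return int(math.pi * (wafer_d_mm / 2) ** 2 / area
               - math.pi * wafer_d_mm / math.sqrt(2 * area))

def poisson_yield(die_area_mm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-die_area_mm2 / 100 * defects_per_cm2)  # mm^2 -> cm^2

# All assumptions, for illustration only:
die_w, die_h = 19.0, 26.0       # a hypothetical ~500 mm^2 Max-class die
wafer_cost_usd = 20_000         # assumed price for a leading-edge wafer
d0 = 0.08                       # assumed defect density (defects/cm^2)

dpw = dies_per_wafer(die_w, die_h)
y = poisson_yield(die_w * die_h, d0)
print(f"{dpw} dies/wafer at {y:.0%} yield -> ~${wafer_cost_usd / (dpw * y):,.0f} per good die")
```

Which is exactly why the die shots matter: the die area (and the area of each functional block) is the one input in that calculation you can actually measure.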
 
High Yield published M3 Max die shots and I think Tech Insights did as well (though the latter is paywalled). The M3 Max die shots confirmed the lack of an interconnect. Yes, the same groups publish die shots of other chips as well. However, you never know which group will do one and which will release the data publicly (e.g. SemiAnalysis did an in-depth die shot of the M2 but not the M3).
I think the die shots come from Apple and others annotate them.
 
I think the die shots come from Apple and others annotate them.
Looking back, that was true for the SemiAnalysis one and maybe the High Yield one, but not the others. I believe Apple stopped releasing die shots, maybe by the M4, and I'm not sure if they ever did for the A-series, but people did them anyway; see Chipwise:


Chipwise also did the X Elite die shot analyzed by Piglin on Baidu, but I'm not sure where the M4 die shot came from. It looks like a Chipwise shot, but I don't see it on the Chipwise website.

I'm also not sure where High Yield got their die shots from, as they don't look like the ones I remember. Those might be Apple shots. And I was wrong: they aren't the ones that confirmed the lack of interconnect. It was third-party die shots from https://www.techanalye.com/, since they showed the interconnect on the M1/M2 Max (which Apple did not on their released ones) and did not show it on the M3 Max. Techanalye doesn't post in English (only Japanese, I think), which makes them hard to search for online, and I no longer have the links. The best I could find was Vadim's post about them:
 
According to this article, https://wccftech.com/m4-ultra-launching-in-h1-2025-for-the-update-mac-studio/, there is a chance of a Mac Studio with both Max and Ultra options and a Mac Pro with something even more powerful (it cites Gurman as its source). If that comes through, it could be the best-case scenario many of us are hoping for.

This makes the ‘three main varieties’ in the original leak story from Gurman much less ambiguous. It’s clear from this latest update that Apple is likely using the Brava codename in the same way Jade and Rhodes covered the same three variants in the M1 and M2 lineups . . .

Tonga - M1
Jade Chop - M1 Pro
Jade 1C - M1 Max
Jade 2C - M1 Ultra

Straten - M2
Rhodes Chop - M2 Pro
Rhodes 1C - M2 Max
Rhodes 2C - M2 Ultra

Ibiza - M3
Lobos - M3 Pro
Palma - M3 Max

Donan - M4
Brava Chop - M4 Pro
Brava - M4 Max

https://theapplewiki.com/wiki/Codenames

——————————
Brava 2C - M4 Ultra
Hidra (2x+) - ?????

. . . if the last 2 chips line up with Gurman’s outlook, there are 5 configurations from 4 chip variants based on the 3 codenames. Apple having 4 variants is the bit that wasn’t particularly obvious at the outset.

A die shot would help, but I suspect Gurman’s info is a pretty safe bet at this stage. All he is missing now is some specifics on the Hidra configuration.
 
Looking at Nvidia’s struggles, how many GPU cores can Apple add before they run into diminishing returns?
 
Still can’t believe no one cared enough to take a die shot of the M4 Max.
Wouldn’t that get enough exposure for the costs involved? Or is it very hard to do?
The people who can do so may not be interested in it. By contrast, there are already die images of Nvidia's latest GPU:

[Image: NVIDIA Blackwell GB202 (GeForce RTX 5090, GDDR7) die shot]

 
This makes the ‘three main varieties’ in the original leak story from Gurman much less ambiguous. It’s clear from this latest update that Apple is likely using the Brava codename in the same way Jade and Rhodes covered the same three variants in the M1 and M2 lineups . . .

Tonga - M1
Jade Chop - M1 Pro
Jade 1C - M1 Max
Jade 2C - M1 Ultra

Straten - M2
Rhodes Chop - M2 Pro
Rhodes 1C - M2 Max
Rhodes 2C - M2 Ultra

Ibiza - M3
Lobos - M3 Pro
Palma - M3 Max

Donan - M4
Brava Chop - M4 Pro
Brava - M4 Max

https://theapplewiki.com/wiki/Codenames

——————————
Brava 2C - M4 Ultra
Hidra (2x+) - ?????

. . . if the last 2 chips line up with Gurman’s outlook, there are 5 configurations from 4 chip variants based on the 3 codenames. Apple having 4 variants is the bit that wasn’t particularly obvious at the outset.

A die shot would help, but I suspect Gurman’s info is a pretty safe bet at this stage. All he is missing now is some specifics on the Hidra configuration.
Okay, I just broke down and subscribed to the Bloomberg tech newsletters bundle, so you don’t have to. There is nothing new in the January 12 edition, but he does imply (without actually saying so outright) the Mac Studio refresh will include both M4 Max and M4 Ultra:

“During the first half of the year, Apple is aiming to refresh the Mac Studio, ... The Mac mini recently got an M4 Pro chip that bests the Mac Studio’s M2 Ultra processor in some scenarios and benchmarks. So Apple needs to get the higher-end Studio model back on track with the speedier M4 Max and M4 Ultra chips.”

Later, he does say outright that, “A new Mac Pro is in development as well, and it will feature a high-end Hidra chip.”

That’s it. Make of it what you will.

There is no source cited for “Brava Chop” in that wiki, but I think it comes from the Asahi Linux GitHub documentation:


So it seems likely to be definitive, and perhaps we can interpret it as evidence that M4 Pro/Max is like M1/M2 Pro/Max, and thus there will be an M4 Ultra. Gurman seems to be banking on that. I’m not so sure.
 
… “Brava Chop” … comes from the Asahi Linux GitHub documentation:


So it seems likely to be definitive, and perhaps we can interpret it as evidence that M4 Pro/Max is like M1/M2 Pro/Max, and thus there will be an M4 Ultra. Gurman seems to be banking on that. I’m not so sure.
Thinking about it, I’m even less sure. There is no 1C/2C indicated in the Asahi Linux GitHub documentation — it’s just “Brava Chop” and “Brava.”

If you look at Apple’s press release graphics, you can see that M4 Pro design is indeed a “chop” of the M4 Max design:


The difference between this and M1/M2 is that it’s not just GPU cores being added for the Max but also CPU cores, at least that’s how I interpret those graphics. The M4 Max graphic shows the “chop” line; it runs straight across. I don’t think that is an accident, even though obviously the actual layouts don’t have labels in their upper left corners!

For now, the Sequoia model identifiers list still holds. There’s still no indication of more than one remaining M4 model (Mac16,9), probably the M4 Max Mac Studio.

At this point, given Gurman’s confidence in this regard, we can be almost certain that Hidra is in production and that it isn’t 2x Max. Most likely it is M4-based and on N3E. The natural marketing for it would be UltraFusion 2.0 and M4 Ultra. However, from a technical standpoint, it will be different enough that, internally at Apple, it will be its own generation: the first true workstation/server M-series silicon, with its own cadence, thus Mac17,1 (M4 Ultra Mac Pro) and Mac17,2 (M4 Ultra Mac Studio).

In short, I think @crazy dave is right, this is basically his “M4.5” theory.

While M5 and N3P are possible in terms of TSMC’s production timeline, it’s hard to imagine Apple rushing such a major development into production on a cutting-edge node — far more likely Hidra has been carefully, extensively, exhaustively tested and it will be rock-solid.
 
If you look at Apple’s press release graphics, you can see that M4 Pro design is indeed a “chop” of the M4 Max design:


The difference between this and M1/M2 is that it’s not just GPU cores being added for the Max but also CPU cores, at least that’s how I interpret those graphics. The M4 Max graphic shows the “chop” line; it runs straight across. I don’t think that is an accident, even though obviously the actual layouts don’t have labels in their upper left corners!
It's pointless to analyze these PR images this way as they're never representations of physical chip layout. Every time we've had a real annotated die shot to compare to, it's been completely clear that the graphical artist was just told "fit in X CPU boxes, Y GPU boxes, Z NPU boxes, etc" and then they made something up.

Here's the most recent example, using TechInsights' M4 die shot and annotations. Note that Apple presents us with four versions of the M4 PR image, so we can tell which boxes the artist meant to represent the CPU, GPU, NPU, or display engine.


TechInsights didn't annotate the display engines, but just looking at the CPU, GPU, and NPU cores, there's absolutely no way to reconcile the PR image with the real chip. (Note that TechInsights labels P cores as "CPU1" and E cores as "CPU2" in their image.)
 
Looking at Nvidia’s struggles, how many GPU cores can Apple add before they run into diminishing returns?

We’ll see when (if?) we get there ¯\_(ツ)_/¯
Unlike with the CPU, the nice thing about graphics workloads is that they tend to be embarrassingly parallel (or close enough), such that adding more cores generally results in linear performance gains; in theory, there are no diminishing returns to adding more cores. Even recently updated benchmarks don't always show this well, because benchmarks often have to be on the smaller side to run on the whole gamut of GPUs they support (3DMark split its latest benchmark into Steel Nomad and Steel Nomad Light to alleviate this issue, even though both are pretty intensive; only Steel Nomad Light is AS Mac native).
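To put a toy model behind the "no diminishing returns" claim: a minimal Amdahl's-law sketch, with the parallel fractions purely assumed for illustration, showing why an almost-entirely-parallel workload keeps scaling with core count while even a small serial slice flattens out.

```python
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    """Amdahl's law: speedup over one core given the parallelizable fraction."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Assumed fractions: a graphics-like workload at 99.9% parallel,
# a more CPU-like workload at 95% parallel.
for cores in (10, 20, 40, 80):
    print(f"{cores:3d} cores: "
          f"graphics-like x{amdahl_speedup(cores, 0.999):5.1f}, "
          f"CPU-like x{amdahl_speedup(cores, 0.95):4.1f}")
```

At 80 cores the 99.9%-parallel case is still near-linear (~74x) while the 95% case has long since stalled (~16x).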

What Nvidia has run into is a little different. While in theory you can just keep adding cores, just like with clock speed you eventually run into excessive power requirements (though power from adding cores grows linearly rather than superlinearly as it does with clock speed), and of course die size/cost issues. The 4090 was already big. Without a node shrink or a new architecture, Nvidia needed to differentiate the 5090, and so, while not the biggest GPU Nvidia has ever made, the 5090 (and the 5000 series in general) just added lots more cores, making the dies much bigger, more expensive (though maybe not as expensive as moving to a new node? it's also possible a new node wasn't available in the volumes Nvidia needed), and more power hungry.
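A back-of-envelope way to see the cores-vs-clocks power tradeoff described above, assuming the usual rule of thumb that dynamic power goes roughly as frequency times voltage squared, with voltage rising alongside frequency (so roughly clock cubed), while core count enters linearly; every constant here is made up.

```python
def toy_perf_power(cores: int, clock_ghz: float) -> tuple[float, float]:
    """Toy model: perf ~ cores * clock; power ~ cores * clock^3."""
    return cores * clock_ghz, cores * clock_ghz ** 3

# Two routes to ~2x performance over an illustrative baseline:
for label, cores, clock in [("baseline", 100, 1.0),
                            ("2x via more cores", 200, 1.0),
                            ("2x via higher clocks", 100, 2.0)]:
    perf, power = toy_perf_power(cores, clock)
    print(f"{label:20s} perf {perf:5.0f}  power {power:5.0f}")
```

Doubling cores doubles power in this model; doubling clocks costs 8x, which is the sense in which adding cores is the "cheaper" (if die-area-hungry) way to scale.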
 
It's pointless to analyze these PR images this way as they're never representations of physical chip layout. Every time we've had a real annotated die shot to compare to, it's been completely clear that the graphical artist was just told "fit in X CPU boxes, Y GPU boxes, Z NPU boxes, etc" and then they made something up.
Point taken, but it’s apparent that, in this case, the graphic artist was also told to make the bottom ~ two-thirds of the M4 Max image exactly the same as the M4 Pro image. So that’s something.
 
Point taken, but it’s apparent that, in this case, the graphic artist was also told to make the bottom ~ two-thirds of the M4 Max image exactly the same as the M4 Pro image. So that’s something.
You need to look more carefully at the bottom right corners of these images, because they are far from exactly the same. Aside from the layout differences, assuming that the pattern of thin-lined details inside each thick-lined box identifies a particular SoC block type, there's one block type present in Pro but not in Max, another present in Max but not in Pro, and three types where Max has two copies to Pro's single copy.

Please understand that I think it's possible (likely, even) that Apple went back to the "chop Max to make Pro" design methodology in the M4 generation. However, these PR diagrams can't be taken as evidence for a cut line, they just aren't meaningful in that way.
 
You need to look more carefully at the bottom right corners of these images, because they are far from exactly the same. Aside from the layout differences, assuming that the pattern of thin-lined details inside each thick-lined box identifies a particular SoC block type, there's one block type present in Pro but not in Max, another present in Max but not in Pro, and three types where Max has two copies to Pro's single copy.

Please understand that I think it's possible (likely, even) that Apple went back to the "chop Max to make Pro" design methodology in the M4 generation. However, these PR diagrams can't be taken as evidence for a cut line, they just aren't meaningful in that way.
Ugh, thanks for handling it the way you did. Wishful thinking on my part, I guess. Embarrassing. I remember wondering why it hadn’t been pointed out previously, but I was so damn sure I was right I didn’t bother to stop and look more closely.
 
They cater to professionals plenty. You are correct however that this is a fairly niche area, albeit with tremendous growth potential. At any rate, while Digits is a niche product, it spells bad news for Apple's main advantage in the amount of RAM available for professional applications. AMD is also encroaching on this territory. To stay relevant, Apple needs to step up their performance and software game. Of course, they might also choose not to play and stay in the software dev/creative sector, but that would make products like Studio or Mac Pro irrelevant.

Definitely.

I'm a professional (not a video editor, but a network engineer/architect/etc.) and run a MacBook. Why? Because I get the best damn laptop on the market in terms of overall build, plenty of RAM, blistering performance, great battery life, and Homebrew for the Unix network tools, etc. that I want.

The MacBook Pro truly is a portable workstation, and not just portable from desk to desk to plug into an external display, mouse, and keyboard. It's usable without external displays and peripherals because the built-in stuff is actually decent.

It's not cheap, but it's worth it.
 
It comes with 128GB LPDDR5X (unified memory) for $3,000.00, which once again makes Apple's pricing look ridiculous in comparison.

The same from Apple is going to cost $4,799.00 in the Mac Studio or $4,699.00 from the MacBook Pro.

I would slow down on that criticism just a bit. The 128GB of RAM will probably not cost that much for the Studio once the M4 comes out; I take it you used the M2 Studio 128GB price? Remember, Apple makes you upgrade to the Ultra to get that in the M2 generation, and won't for the M4 generation, since the M4 Max goes up to 128GB where the M2 Max did not. So yes, the M2 Studio is a bad buy right now for many, many reasons. No question. And the MacBook Pro is a full laptop with a screen: the mini costs $800 to get the same specs as the starting $1,600 price of the 14" MBP, and the M2 Studio was $900-$1,100 cheaper than the base M2 Max laptop when they were both new (again, if you match SSDs). Of course, we don't know when the M4 Studio will be out; probably summer, a month or so after DIGITS releases in May.

Of course the M4 Studio will doubtless still be more expensive than $3K to get 128GB of RAM, never mind the 4TB SSD. I'd say $3,700-$4K depending (again, not counting the hard drive)...
From my recollection of comparing an equally-spec'd M2 Max Studio & 16" M2 Max MBP, the MBP was about $1,000 more (consistent with @crazy dave's recollection).

A 16" M4 Max MBP (16‑core CPU, 40‑core GPU) with 128 GB RAM, 4 TB SSD, and a glossy display is $6,000, which implies ≈$5,000 for an equally-spec'd M4 Max Studio.

Though, as has been discussed, the Studio they announce this year may not be exactly an M4 Max....
 
Like back when the M1 Pro/Max were a bit of an M1.5 Pro/Max with their LPDDR5 and Thunderbolt 4?
A better parallel would be the M1 Ultra. UltraFusion was a step beyond M1, beyond monolithic architecture, and if we’re right about Hidra, it is another, even bigger, step in that direction.
 
I missed this from a few days ago, there’s a video recording worth watching from an Apple Machine Learning researcher:


DeepSeek R1, the full model at 4-bit, running on 3 Mac Studios via MLX. I was ignorant of MLX's ability to run distributed; I thought that wasn't working yet and was clearly very wrong.

I am quite excited for Hidra, this is seriously impressive stuff. If Apple keeps their current pricing strategy it will be a bargain for anyone wanting to run these things at home or a small business (or a large business for minimal cost).
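For the curious, this is roughly what the distributed piece looks like in code. A minimal sketch of MLX's distributed API as I understand it from the MLX docs (mx.distributed.init / all_sum, launched across machines with mlx.launch or mpirun); the actual DeepSeek R1 demo shards the model with pipeline parallelism via mlx-lm, which is considerably more involved than this.

```python
# Run one copy per machine, e.g. (hostnames are placeholders):
#   mlx.launch --hosts studio1,studio2,studio3 distributed_sketch.py
import mlx.core as mx

world = mx.distributed.init()   # join the process group over the network
print(f"rank {world.rank()} of {world.size()}")

# Each rank computes a partial result on its own shard of the work...
local = mx.ones((4,)) * (world.rank() + 1)

# ...and all_sum gives every rank the elementwise sum across ranks.
total = mx.distributed.all_sum(local)
mx.eval(total)
print(total)  # with 3 ranks: array([6, 6, 6, 6])
```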
 
From my recollection of comparing an equally-spec'd M2 Max Studio & 16" M2 Max MBP, the MBP was about $1,000 more (consistent with @crazy dave's recollection).

A 16" M4 Max MBP (16‑core CPU, 40‑core GPU) with 128 GB RAM, 4 TB SSD, and a glossy display is $6,000, which implies ≈$5,000 for an equally-spec'd M4 Max Studio.

Though, as has been discussed, the Studio they announce this year may not be exactly an M4 Max....
Also: It should be noted that NVIDIA says Project Digits will start at $3,000, and include up to a 4 TB SSD.

So if we assume its starting config has a 1 TB SSD: A comparably-spec'd (128 GB RAM/1 TB SSD) 16" M4 Max MBP is $5,000, giving an estimated $4,000 for a hypothetical 1 TB/128 GB M4 Max Studio.
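Spelling that arithmetic out (the ~$1,000 laptop premium is an assumption carried over from the M2-generation comparison above, and all of these are list prices or guesses):

```python
# Assumptions from the posts above; none of these are official Studio prices.
mbp_128gb_1tb = 5_000     # 16" M4 Max MBP, 128 GB / 1 TB list price
laptop_premium = 1_000    # observed MBP-over-Studio gap in the M2 generation
digits_start = 3_000      # NVIDIA's announced Project Digits starting price

studio_estimate = mbp_128gb_1tb - laptop_premium
print(f"Estimated 128 GB / 1 TB M4 Max Studio: ${studio_estimate:,}")
print(f"Gap over Project Digits: ${studio_estimate - digits_start:,}")
```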

Of course, those really aren't comparable devices—Project Digits will have far more GPU power than an M4 Max, and the M4 Max may have more CPU power. Not sure how their memory bandwidths compare. The M4 Max has 546 GB/s.

Here's an estimate that Digits will have 825 GB/s:

"From the renders shown to the press prior to the Monday night CES keynote at which Nvidia announced the box, the system appeared to feature six LPDDR5x modules. Assuming memory speeds of 8,800 MT/s we'd be looking at around 825GB/s of bandwidth."
https://www.theregister.com/2025/01/07/nvidia_project_digits_mini_pc/

I've seen competing claims saying it will be much lower. But 825 GB/s seems strong, so if that's correct, why didn't NVIDIA include it in their announcement along with all the other specs?

I wonder if the Project Digits machine will be of interest to gamers, or if the type of GPU performance it offers won't fully translate to gaming. E.g., you will probably be able to buy or build a $3k gaming PC with a 5080 (MSRP $1k, street price TBD), and that will offer 960 GB/s memory bandwidth. [The 5090's memory bandwidth is 1,792 GB/s!]
 
Also: It should be noted that NVIDIA says Project Digits will start at $3,000, and include up to a 4 TB SSD.

So if we assume its starting config has a 1 TB SSD: A comparably-spec'd (128 GB RAM/1 TB SSD) 16" M4 Max MBP is $5,000, giving an estimated $4,000 for a hypothetical 1 TB/128 GB M4 Max Studio.

Of course, those really aren't comparable devices—Project Digits will have far more GPU power than an M4 Max, and the M4 Max may have more CPU power. Not sure how their memory bandwidths compare. The M4 Max has 546 GB/s.

Here's an estimate that Digits will have 825 GB/s:

"From the renders shown to the press prior to the Monday night CES keynote at which Nvidia announced the box, the system appeared to feature six LPDDR5x modules. Assuming memory speeds of 8,800 MT/s we'd be looking at around 825GB/s of bandwidth."
https://www.theregister.com/2025/01/07/nvidia_project_digits_mini_pc/

I've seen competing claims saying it will be much lower. But 825 GB/s seems strong, so if that's correct, why didn't NVIDIA include it in their announcement along with all the other specs?

I wonder if the Project Digits machine will be of interest to gamers, or if the type of GPU performance it offers won't fully translate to gaming. E.g., you will probably be able to buy or build a $3k gaming PC with a 5080 (MSRP $1k, street price TBD), and that will offer 960 GB/s memory bandwidth. [The 5090's memory bandwidth is 1,792 GB/s!]
From the other place, just reposting here:

The Register article is out of date and has wrong information on several fronts. The CPU cores aren't V2; they're newer. The bandwidth calculation is also almost certainly wrong. I suppose it could have six RAM modules in a 2x32GB + 4x16GB configuration (that would be unusual, but I can't think of a reason it wouldn't work), in which case the bandwidth would be similar to a binned M4 Max at just above 400GB/s. The lowest it could be is a 256-bit bus using 4x32GB modules, which would put it around M4 Pro/Strix Halo bandwidth (~270-300GB/s depending). If it used 8 modules, like 8x16GB, it would be similar to the full M4 Max at roughly 550GB/s. They could go higher than that too, but it would require many more, smaller modules. I don't remember how small LPDDR modules go, but eventually you run into issues where the smallest RAM you can offer is pretty large (the full M4 Max starts at 48GB: 8x6GB modules). Then again, they may be planning on offering only a 128GB variant.
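All of these scenarios come out of the same peak-bandwidth formula: bus width in bytes times transfer rate. A quick sketch, assuming 8533 MT/s LPDDR5X and 64-bit-wide packages (both assumptions, though they reproduce Apple's published M4 Pro/Max figures):

```python
def lpddr_bandwidth_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth: (bus width / 8) bytes per transfer * transfers/second."""
    return bus_bits / 8 * mt_per_s * 1e6 / 1e9

for bus_bits, note in [(256, "4x64-bit, M4 Pro-class"),
                       (384, "6x64-bit, binned M4 Max-class"),
                       (512, "8x64-bit, full M4 Max-class"),
                       (768, "The Register's six-module guess")]:
    print(f"{bus_bits:3d}-bit: {lpddr_bandwidth_gbs(bus_bits, 8533):5.0f} GB/s ({note})")
```

That prints ~273, ~410, ~546, and ~819 GB/s respectively, which is where the M4 Pro, binned M4 Max, and full M4 Max comparisons above come from.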
 
Here's an estimate that Digits will have 825 GB/s:

"From the renders shown to the press prior to the Monday night CES keynote at which Nvidia announced the box, the system appeared to feature six LPDDR5x modules. Assuming memory speeds of 8,800 MT/s we'd be looking at around 825GB/s of bandwidth."
https://www.theregister.com/2025/01/07/nvidia_project_digits_mini_pc/

I've seen competing claims saying it will be much lower. But 825 GB/s seems strong, so if that's correct, why didn't NVIDIA include it in their announcement along with all the other specs?

Damn, I didn’t know there was speculation that it could offer 825GB/s; it would be wild if that were the case, and yeah, I can’t imagine NVIDIA having such an advantage and not making it front and centre in their announcement.

However, based on this and this, it’s looking like 273GB/s 😐
 