
crazy dave

macrumors 65816
Sep 9, 2010
1,450
1,219
Regarding the bolded part, I think it's unlikely. I could be wrong but the M3 Max die size is 600 - 700mm². TSMC's reticle limit is 800mm². In other words, you can't make an Ultra that is twice the size of M3 Max on a single die.

It was surprisingly hard to get a definitive size for the M2 Max die. I guess Apple stopped giving official size info after the M1 family. The best I could do was this article, Why Intel and AMD don't make chips like the M2 Max and M2 Ultra:

"At an estimated die size of 550mm2 (assuming Apple's side-by-side comparison above is to scale as no actual measurements seem to exist), the M2 Max is super big, and the M2 Ultra is the largest consumer chip ever made at over 1,000mm2"

I believe the answer to how could Apple make such a huge chip is the "Ultra Fusion Connector" on the M1 and M2 chips. It connects two separate chips together (the manufacturing/installation must be mind-bogglingly precise) to create what is in effect a single larger chip.

Part of the "evidence" for the idea that the M3 Ultra would be a single die was the absence of a Fusion Connector on the M3 Max die. Perhaps Apple, already knowing that they would not be producing an M3 Ultra, just eliminated the connector on the M3 Max, leaving the possibility that it could return on the M4 Max, allowing for the creation of an M4 Ultra out of two M4 Max chips.
I agree that the lack of M3 UltraFusion is most likely because we're not getting an M3 Ultra, and for the below I based my calculations off a similar die size. Given that the reticle limit is about 840mm2, I think, and the M3 Max is somewhere a little smaller than 550mm2, let's say 500mm2 (very hard to know; it looks similar to the M2 Max on 3rd party die shots), you'd have to shrink the monolithic Ultra to about 75% of two Max dies - more shrinkage than the reticle limit requires - just to make it economical to produce, never mind possible. Too close to the reticle limit and the chip is simply too expensive for an Ultra Studio, never mind two for an Extreme. In a different forum, as a thought experiment just for fun, I did posit such a chip:

CPU: Max: 12 P-cores / 4 E-cores -> Ultra: 18 P-cores / 4 E-cores -> Extreme: 36 P-cores / 8 E-cores
GPU: Max: 40 cores -> Ultra: 60 cores -> Extreme: 120 cores
NPU: relatively small, and you could fit 2 (or more if you want to emphasize it in the desktop) in the Ultra, as it does currently
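As a rough sanity check on the shrink-to-75% reasoning above, here's the arithmetic as a small sketch. All figures are the estimates discussed in this thread (the ~840mm2 reticle figure and the ~500mm2 M3 Max size are approximations, not official numbers):

```python
# Rough feasibility check for a monolithic Ultra. All figures are the
# estimates discussed above, not official numbers.

RETICLE_LIMIT_MM2 = 840  # approximate TSMC reticle limit

def monolithic_ultra_area(max_die_mm2, scale):
    """Area of a monolithic Ultra built as `scale` x two Max dies.

    scale < 1.0 models trimming blocks a single die needn't duplicate
    (IO, the interconnect itself, and so on).
    """
    return 2 * max_die_mm2 * scale

# Assuming an M3 Max of ~500mm2 (hard to know, per the above)
for scale in (1.00, 0.85, 0.75):
    area = monolithic_ultra_area(500, scale)
    print(f"{scale:.0%} of two dies -> {area:.0f}mm2, "
          f"under reticle: {area <= RETICLE_LIMIT_MM2}")
```

With a 500mm2 Max, only the ~75% cut (750mm2) lands under the limit with any margin, which is where the core counts above come from.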

Given the apparent effect of the interconnect on GPU performance (unclear if that is so), you might not actually lose much GPU performance with the monolithic design (the interconnect may cost as much as 15%), and boosting clocks to compensate would increase power, but it's a desktop and that would be doable. Compensating completely for CPU performance with boosted clocks, however, might get power (and noise) to unacceptable levels even for a desktop (though you could still boost clocks a little). That said, you'd still have a lot of CPU throughput, with the possibility for more with an Extreme chip.

One advantage is that the Max chip could be a little smaller, since it wouldn't need as much IO, while IO could increase on the monolithic Ultra.

Pricing would be interesting. I could see such a chip resulting in anything from savings to being more expensive, depending on exactly how much is cut and on yields. TSMC is not exactly forthcoming about what they charge, but we do know chips produced near the reticle limit are quite expensive on average, so getting the die size down would be critical. Die shrinks from new nodes would be useful, though that adds expense too, and SRAM cache and IO are sadly more resistant to shrinks. But if the former, i.e. savings, such an Ultra might even be a better deal.
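The "near the reticle limit is expensive" point can be illustrated with a standard zero-defect (Poisson) yield model. The wafer cost and defect density below are hypothetical placeholders for illustration, not TSMC figures:

```python
import math

def poisson_yield(area_mm2, d0):
    """Fraction of dies with zero defects under a simple Poisson model;
    d0 is the defect density in defects per mm2."""
    return math.exp(-area_mm2 * d0)

def cost_per_good_die(area_mm2, wafer_cost, d0, usable_wafer_mm2=70000):
    """Very rough cost per working die; ignores edge loss, scribe lanes,
    and harvesting of partially defective dies (binning)."""
    good = (usable_wafer_mm2 / area_mm2) * poisson_yield(area_mm2, d0)
    return wafer_cost / good

# Hypothetical inputs: $17,000 per wafer, 0.001 defects/mm2
for area in (500, 750, 1000):
    print(f"{area}mm2 die: ${cost_per_good_die(area, 17000, 0.001):,.0f}")
```

Under these made-up inputs, one 750mm2 die (~$386) is roughly a wash against two 500mm2 dies (~$400) before packaging costs, while a full reticle-sized die costs far more per good die - consistent with the idea that a trimmed monolithic Ultra might even come out as a better deal.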

Anyway, again, that was just for fun. Not an actual prediction.
 
Last edited:

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
I agree that the lack of M3 UltraFusion is most likely because we're not getting an M3 Ultra, and for the below I based my calculations off a similar die size. Given that the reticle limit is about 840mm2, I think, and the M3 Max is somewhere a little smaller than 550mm2, let's say 500mm2 (very hard to know; it looks similar to the M2 Max on 3rd party die shots), you'd have to shrink the monolithic Ultra to about 75% of two Max dies - more shrinkage than the reticle limit requires - just to make it economical to produce, never mind possible. Too close to the reticle limit and the chip is simply too expensive for an Ultra Studio, never mind two for an Extreme. In a different forum, as a thought experiment just for fun, I did posit such a chip:

CPU: Max: 12 P-cores / 4 E-cores -> Ultra: 18 P-cores / 4 E-cores -> Extreme: 36 P-cores / 8 E-cores
GPU: Max: 40 cores -> Ultra: 60 cores -> Extreme: 120 cores
NPU: relatively small, and you could fit 2 (or more if you want to emphasize it in the desktop) in the Ultra, as it does currently

Given the apparent effect of the interconnect on GPU performance (unclear if that is so), you might not actually lose much GPU performance with the monolithic design (the interconnect may cost as much as 15%), and boosting clocks to compensate would increase power, but it's a desktop. Compensating completely for CPU performance with boosted clocks, however, might get power (and noise) to unacceptable levels even for a desktop (though you could still boost clocks a little). That said, you'd still have a lot of CPU throughput, with the possibility for more with an Extreme chip.

One advantage is that the Max chip could be a little smaller, since it wouldn't need as much IO, while IO could increase on the monolithic Ultra.

Pricing would be interesting. I could see such a chip resulting in anything from savings to being more expensive, depending on exactly how much is cut and on yields. TSMC is not exactly forthcoming about what they charge, but we do know chips produced near the reticle limit are quite expensive on average, so getting the die size down would be critical. Die shrinks would be useful, though SRAM cache sadly is more resistant to shrinks. But if the former, i.e. savings, such an Ultra might even be a better deal.

Anyway, again, that was just for fun. Not an actual prediction.
It's interesting to think about a monolithic Ultra. But one reason for optimism is that Apple was able to drastically improve the M2 Ultra's performance scalability. I would expect that by the M4 Ultra, scalability will be further improved. It's hard to imagine them walking back their chiplet approach after all the hard work of getting it to work. Every chip maker is going with chiplets as well.
 
  • Like
Reactions: Adult80HD

crazy dave

macrumors 65816
Sep 9, 2010
1,450
1,219
It's interesting to think about a monolithic Ultra. But one reason for optimism is that Apple was able to drastically improve the M2 Ultra's performance scalability. I would expect that by the M4 Ultra, scalability will be further improved. It's hard to imagine them walking back their chiplet approach after all the hard work of getting it to work. Every chip maker is going with chiplets as well.
Oh they'd still be doing chiplets, just two to make an Extreme ;)

Don't get me wrong, 3-4 Maxes for an Extreme would be cool - a mix-and-match, build-your-SoC-from-component-Legos approach like Intel is doing for Meteor Lake/Lunar Lake would be even better. But the current rumor is that Apple won't be going beyond 2 dies for a while. That rumor could be wrong, but multiple sources (including Gurman, but others as well) heard it. Of course, it could also just mean no "Extreme" chip, but Gurman also said that there was a Mac Pro specific chip in the works, although it wouldn't arrive until next year. Unfortunately, this far out, Gurman's track record is pretty shaky - he's pretty good a few weeks out, but longer than that and not only can Apple's plans change, but he often seems to get garbled or out of date info. So I take it as something to think about for fun rather than a serious prediction of what might happen.
 

Confused-User

macrumors 6502a
Oct 14, 2014
850
983
You're right. I did miss that. However, if they're unveiling the Max, they'd likely want to unveil the Pro as well. It'd make logical sense because unveiling the Max months before the Pro would make any Pro announcement look silly later on. "Today, we're excited to announce the M4 Pro that is much slower than the M4 Max we announced 5 months ago". If they unveil the Pro, the only logical device to put it in is the Mac Mini.
I strongly disagree with this. Over the last 20+ years, there have been two primary strategies for rolling out new generations of processors, used by Intel, AMD, and Nvidia (and others, like IBM, but that's somewhat less relevant): top-down and bottom-up. That is, you introduce the highest-end products early, or the lowest-end ones, and then over time move through your product stack.

The advantages to rolling out the lowest-end products first are that you can build smaller chips early on (great for dealing with the ramp on a new process - smaller chips -> less loss of silicon due to defects). There were also other benefits that have become less meaningful (or nonexistent) over time as process costs have skyrocketed.
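The small-die advantage on a ramping process can be made concrete with a simple zero-defect yield estimate. The defect densities below are made-up illustration values, not published numbers for any real node:

```python
import math

def zero_defect_yield(area_mm2, d0):
    # Poisson model: chance a die of this area has no fatal defects;
    # d0 is defect density in defects per mm2
    return math.exp(-area_mm2 * d0)

# d0 is high early in a process ramp, low once the process matures
for label, d0 in (("early ramp", 0.003), ("mature", 0.001)):
    y_small = zero_defect_yield(100, d0)   # small low-end die
    y_big = zero_defect_yield(600, d0)     # big high-end die
    print(f"{label}: 100mm2 -> {y_small:.0%}, 600mm2 -> {y_big:.0%}")
```

On the hypothetical early-ramp numbers, the big die yields well under a quarter as often as the small one, while on the mature process the gap narrows considerably - which is why bottom-up rollouts suit a brand-new node.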

The advantage to rolling out the high-end chips first is that you can skim off the top of the market: charge a huge premium to those willing/needing to get the fastest (biggest memory, newer instructions, whatever) chips. Then once sales cool a little you can roll out the mid-range and eventually low end, sucking up sales that simply weren't feasible earlier. And meanwhile, some months in, you gradually reduce the high-end chip prices a bit. Well, unless there's a crypto craze going on, then all bets are off. :)

There are other strategies that have been employed as well, like "notebooks first", but that maps to some extent to the low-end-first strategy.

Point is, introducing the Pro months after the Max doesn't look silly at all. It's a common type of behavior, and is probably the most advantageous from Marcom's perspective, if you can't simply sell them all at once. It might even be a little better, as selling the Pro a few months after the Max might push a few sales to the Max that might otherwise go to the Pro were it available. "Today, we're extending our exciting line of M4 processors down into new territory, making them affordable even to people who have already sold one of their kidneys!"

If they unveil the M4 Pro for Mac Mini and M4 Max/Ultra for the Studio/Mac Pro, it'll tank M3 Pro/Max MBP sales until those laptops get an update.
No, it won't tank. It might go down fractionally. Apple won't care that much as that's demand deferral, not demand destruction. My guess is they won't announce the Pro Mini right away, but that's only a hunch, I don't have anything to back it up, and I could easily imagine it going the other way.
 
Last edited:

Confused-User

macrumors 6502a
Oct 14, 2014
850
983
Regarding the bolded part, I think it's unlikely. I could be wrong but the M3 Max die size is 600 - 700mm². TSMC's reticle limit is 800mm². In other words, you can't make an Ultra that is twice the size of M3 Max on a single die.
@leman and I discussed this in another thread here a few months back. I can't recall which offhand though. The reticle limit is ~860mm^2. The M3 Max is most likely around 550mm^2.

You probably can make an Ultra twice the size of the Max, at least in CPU core counts, maybe even GPU, if you're careful with it. (We really have no idea how well Apple can use smaller FinFlex transistors in various parts of their chips to reduce size and fit more total transistors in.) But the cost is probably too high.

However, there is no rule that says that the Ultra has to remain exactly double the size of the Max. A monolithic chip with 80% of the resources of a two-chip Ultra might get very very close on performance. And even if it doesn't, Apple is not required to offer that exact performance profile.

It's entirely plausible that we'd get an 18- or 20-P core M4 Max monolithic chip (depending on whether or not they stick with 6-core clusters, or go back to 4-core). Then they could put two of those back-to-back for an Extreme.

Truly, I have no idea if they'd do anything like this. I don't think anyone has any evidence at all that they're doing it, though we don't have any evidence that they're not, either. The big driver of this supposition originally was that the M3 Max had no UltraFusion, suggesting that an even bigger chip would get the UltraFusion. But now we know the most likely reason for no M3 Max UltraFusion is simply that they were skipping right to the M4 gen for the Ultras. So we're back to no evidence.

If I had to bet, I would bet against the monolithic Ultra. It's a lot of money and dev time for a fairly small market. If I were Apple I think I'd put my bets on a larger multi-way solution (4X M4 Max, or maybe possibly mix-and-match CPU and GPU chiplets). But there's way too much we don't know to feel any confidence about such a prediction - for example, what is Apple's strategy for developing their own servers for their own internal use for AI (and iCloud more generally)? The answer to that could have a huge impact on their chip design strategy.
 
  • Like
Reactions: crazy dave

crazy dave

macrumors 65816
Sep 9, 2010
1,450
1,219
@leman and I discussed this in another thread here a few months back. I can't recall which offhand though. The reticle limit is ~860mm^2. The M3 Max is most likely around 550mm^2.

You probably can make an Ultra twice the size of the Max, at least in CPU core counts, maybe even GPU, if you're careful with it. (We really have no idea how well Apple can use smaller FinFlex transistors in various parts of their chips to reduce size and fit more total transistors in.) But the cost is probably too high.

However, there is no rule that says that the Ultra has to remain exactly double the size of the Max. A monolithic chip with 80% of the resources of a two-chip Ultra might get very very close on performance. And even if it doesn't, Apple is not required to offer that exact performance profile.
Yup - I guess you guys came to the same conclusion as I did above although I tried to shave a bit more off - I went for 75%. :)

It's entirely plausible that we'd get an 18- or 20-P core M4 Max monolithic chip (depending on whether or not they stick with 6-core clusters, or go back to 4-core). Then they could put two of those back-to-back for an Extreme.

Truly, I have no idea if they'd do anything like this. I don't think anyone has any evidence at all that they're doing it, though we don't have any evidence that they're not, either. The big driver of this supposition originally was that the M3 Max had no UltraFusion, suggesting that an even bigger chip would get the UltraFusion. But now we know the most likely reason for no M3 Max UltraFusion is simply that they were skipping right to the M4 gen for the Ultras. So we're back to no evidence.

If I had to bet, I would bet against the monolithic Ultra. It's a lot of money and dev time for a fairly small market. If I were Apple I think I'd put my bets on a larger multi-way solution (4X M4 Max, or maybe possibly mix-and-match CPU and GPU chiplets). But there's way too much we don't know to feel any confidence about such a prediction - for example, what is Apple's strategy for developing their own servers for their own internal use for AI (and iCloud more generally)? The answer to that could have a huge impact on their chip design strategy.

Yeah, I think AI could change the equation ... if Apple is interested. Right now every AI company under the sun is buying as many TOPS as they can get their hands on from wherever they can get them, and Apple will need lots themselves if they are interested in providing those services and unique machine learning models. A monolithic Ultra and dual-die Extreme with more TOPS than they currently supply could be a mini Grace Hopper, and given the market, that would still sell like hotcakes. Hell, I'd even argue the market for Apple 3D graphics with ray tracing is bigger than you might expect - even just ray tracing cores in a workstation with massive VRAM could be huge. I know Apple loves the prosumer market and I am not suggesting that they try to go toe-to-toe with Nvidia in Big Iron AI or massive blades of Ada GPUs, but even with what they could offer, I think they're sitting on a goldmine and missing out. To me, the markets, especially AI, dramatically change the calculation on how big an Apple workstation could be if executed well.

Likewise, I'm not saying Apple has to pursue the above strategy, monolithic Ultra + dual Extreme, exactly, but they should offer something, and if they wait too long, they'll miss the boat. Rumors are that AMD, Nvidia, even Intel and others all see the potential of giant APUs, and Apple's advantage here - effectively giant workstation APUs done right, with giant pools of VRAM - might be lost. I say rumors, but Nvidia is already offering such products, effectively coming from the other direction (i.e. the huge processor side) with Grace Hopper and now Blackwell superchips. I watched a couple of Nvidia presentations on how important it is to offer unified CPU/GPU memory and workloads and how beneficial that could be. Hell, AMD tried for years to get heterogeneous computing to work with their APUs; they couldn't really offer a compelling solution because they didn't have all the pieces, but now that Apple has shown how to do it right, AMD still knows how important it is. They'll try again.

That's why I was a little disappointed that the M4 didn't contain any more hardware announcements for AI, as in adding tensor cores to the GPU, or somehow making its TOPS go up another way, or making the NPU giant (at the moment it seems similar, just a bit bigger; we'll know for sure). Don't get me wrong, SME is very nice, the CPU is a fine upgrade, the GPU's clocks going up is nice, but if this WWDC is all about AI, I didn't see anything in the M4 that really justified a new chip so soon. Unless they're holding out on us for a big WWDC reveal. Very possible. Truthfully, I think that's one reason why @leman is predicting the M5 ASAP. But I'll let @leman speak to that. :)
 
Last edited:
  • Like
Reactions: altaic and DrWojtek

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
That's why I was a little disappointed that the M4 didn't contain any more hardware announcements for AI, as in adding tensor cores to the GPU, or somehow making its TOPS go up another way, or making the NPU giant (at the moment it seems similar, just a bit bigger; we'll know for sure). Don't get me wrong, SME is very nice, the CPU is a fine upgrade, the GPU's clocks going up is nice, but if this WWDC is all about AI, I didn't see anything in the M4 that really justified a new chip so soon. Unless they're holding out on us for a big WWDC reveal. Very possible. Truthfully, I think that's one reason why @leman is predicting the M5 ASAP. But I'll let @leman speak to that.
Apple stated that they start planning chips 3-4 years in advance. So the M4 was likely first conceived back in 2020. There is no way for them to adjust the design to focus more on AI.

Since AI has blown up, Apple likely decided to market the M4 chip for AI even though it really wasn’t designed with that focus.

Maybe M5 or M6 is when we will see massive NPUs that 4x or 8x the current TOPS.
 

MrGunny94

macrumors 65816
Dec 3, 2016
1,148
675
Malaga, Spain
WWDC 2024 is all about AI...
M4-series of SoCs are supposed to be all about AI...
Gotta have something more than a M4 iPad Pro for the developers to get all about AI...
M4 Max & (monolithic) M4 Ultra Mac Studios will be all about AI...
M4-series Mac Studios @ WWDC 2024...
Because it's all about AI...
AI...!
I'm still hoping for some slight updates to iPadOS, just to improve the damn thing for background apps at least, lol.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,450
1,219
Apple stated that they start planning chips 3-4 years in advance. So the M4 was likely first conceived back in 2020. There is no way for them to adjust the design to focus more on AI.

Since AI has blown up, Apple likely decided to market the M4 chip for AI even though it really wasn’t designed with that focus.

Maybe M5 or M6 is when we will see massive NPUs that 4x or 8x the current TOPS.
While chip design certainly starts early, you can adjust designs over the course of development (how soon before launch depends on your willingness to wing it - a former AMD engineer who used to be around these parts described making changes right up to the last possible minute, but I would hazard a guess that that willingness is darn low at Apple, though I don't truly know when their cutoff is). And even if you didn't see the AI boom happening this quickly ... there were voices, inconsequential ones like mine as well as more consequential and knowledgeable ones, saying when the M1 Ultra launched that Apple's solution was great but needed ray tracing and tensor cores. Three years later they added one, and that's better than 0! (Mesh shading and especially dynamic caching too - the M3 was niiiice.) But depending on when the M5 and M6 launch and what the market is like by then, they may have missed the golden window where their unique design philosophy really could've mattered, and that's a shame. I'm probably exaggerating here - it's not the end of the world - but it still feels like a missed opportunity where Apple could've leaped into the fray with something unique. Maybe they still will!
 

leman

macrumors Core
Oct 14, 2008
19,516
19,662
Apple stated that they start planning chips 3-4 years in advance. So the M4 was likely first conceived back in 2020. There is no way for them to adjust the design to focus more on AI.

Since AI has blown up, Apple likely decided to market the M4 chip for AI even though it really wasn’t designed with that focus.

Maybe M5 or M6 is when we will see massive NPUs that 4x or 8x the current TOPS.

It’s more that these things take time. The Apple Silicon team seems to operate on long-term, meticulously planned schedules, preferring incremental improvements to complete redesigns. By studying the evolution of their IP, one can reasonably guess where they are going next.

That is also why I don’t fully agree with your statement about M4 being in development for four years. It’s not that I think you are wrong, it’s more that I don’t believe that this is the best way to look at this. Apple focuses their design efforts on IP blocks, the SoC is just a product of that. They likely have multiple teams working on different versions of the IP blocks for different nodes and specs. I believe this is what gives them considerable agility when it comes to releasing hardware.

So to predict the next SoC, we should look at which IP is a likely refinement target and how long it would take for Apple to get it ready. I think the next GPU improvement could be ready by the end of the year, which is the main reason why I consider an M5 to be realistic for 2024. Or maybe we just witnessed the result of a great sprint and the teams need more time to recover and get back on track.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,450
1,219
It’s more that these things take time. The Apple Silicon team seems to operate on long-term, meticulously planned schedules, preferring incremental improvements to complete redesigns. By studying the evolution of their IP, one can reasonably guess where they are going next.
Whereas I am sitting here, not contributing at all to the massive engineering effort that must be required to bring a new SOC generation up, being incredibly impatient :)
 

Basic75

macrumors 68020
May 17, 2011
2,095
2,446
Europe
It's entirely plausible that we'd get an 18- or 20-P core M4 Max monolithic chip (depending on whether or not they stick with 6-core clusters, or go back to 4-core). Then they could put two of those back-to-back for an Extreme.
How do Apple's E-cores compare to their P-cores for perf/area? If the ratio is similar to Intel's they could get more total performance per area with fewer P-cores and more E-cores.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,662
How do Apple's E-cores compare to their P-cores for perf/area? If the ratio is similar to Intel's they could get more total performance per area with fewer P-cores and more E-cores.

If I remember correctly, the perf/area is similar. Apple optimizes E-cores for perf/watt.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
While chip design certainly starts early, you can adjust designs over the course of development (how soon before launch depends on your willingness to wing it - a former AMD engineer who used to be around these parts described making changes right up to the last possible minute, but I would hazard a guess that that willingness is darn low at Apple, though I don't truly know when their cutoff is)
The scope of the changes you can make narrows as the project gets closer to completion. At some point there is always a design freeze, meaning that logic designers are no longer free to make changes to the source code. This doesn't mean all changes are impossible, you can still fix bugs as you discover them, but after a freeze, instead of fixing them at what amounts to the source code level you fix them at the gate level. Handcrafted gate/wire level fixes designed to have the least possible impact on the physical layout and layer artwork of the chip.

I have heard war stories about adding a significant feature really late in the game. It did not sound like fun at all. They could not change the base layer (transistors and gates) or the overall layout of the chip, so they chose a victim feature which marketing determined customers could live without, deleted all metal layer connections to its gates, and created new metal layer art to repurpose a subset of those gates into the function they needed to add.

That's the exception, not the rule, and that kind of process does not create high performance circuits. If I understood the prior conversation right, you were discussing the possibility of deciding to change the AI TFLOPs target late in the game? That's more or less impossible. I would guess that in a typical Apple SoC, performance numbers like that are frozen a full calendar year before the first product with that SoC will launch. They could maybe tweak them a little by boosting clocks, but redoing layout to add more NPU cores is out of the question.
 

TigeRick

macrumors regular
Original poster
Oct 20, 2012
144
153
Malaysia

What? Apple not going to launch any Mac at the upcoming WWDC, the most important event of the year?

How could Apple, which launched M3, M3 Pro and Max notebooks at the same time last year, NOT launch any M4 family? Didn't Apple just launch the iPad Pro with the M4 SoC?

It does not make sense at all; Mark Gurman must be wrong about the Mac Studio and Mac Pro being launched in 2025. /s

I jumped the gun by saying Apple was NOT going to launch any Mac at WWDC before MG's newsletter. I gave my reasons, with leaked information and technical analysis. Of course there are always a few people who won't believe it; they want to wait until WWDC to confirm. That's all right, let's wait for WWDC....
 
Last edited:

Tagbert

macrumors 603
Jun 22, 2011
6,254
7,280
Seattle
Whereas I am sitting here, not contributing at all to the massive engineering effort that must be required to bring a new SOC generation up, being incredibly impatient :)
When Final Cut Pro for iPad is rendering a video, if you switch to another app, Final Cut Pro is suspended. You have to keep it in the foreground to let it finish rendering. That does not happen on a MacBook Air with Mac OS.
 
  • Like
Reactions: AdamBuker

TigeRick

macrumors regular
Original poster
Oct 20, 2012
144
153
Malaysia

Mark Gurman has doubled down on the release of the new Mac Studio and Mac Pro. He claims Apple will only release them by the middle of next year, most likely at WWDC 2025. I am not surprised at all; Apple should be following the same pattern by releasing the M4 Max by the end of the year. Then it takes half a year to "combine" two Max dies into an Ultra SoC.
 

altaic

macrumors 6502a
Jan 26, 2004
710
484

Mark Gurman has doubled down on the release of new Mac Studio and Mac Pro. He claimed Apple would only release them by mid of next year, most likely in WWDC 2025. I am not surprised at all, Apple should be following the same pattern by releasing M4 Max by the end of the year. Then it takes half a year to "combine" two Max dies to become Ultra SoC.
Nonsense. The referenced Gurman article literally says:
The company is speeding up its hardware upgrades, though. Earlier this month, Apple rolled out a new iPad Pro with an M4 chip that promises to vastly enhance AI processing. And the M4 is headed to every Mac in an end-to-end overhaul of the lineup by 2025.
MR just made up the thing about the Mac Studio and Mac Pro 😒
 

MrGunny94

macrumors 65816
Dec 3, 2016
1,148
675
Malaga, Spain
Looks like the conversation about the 12GB is reaching Reddit and some other places after physical 12GB memory was found on the iPad Pro.

I suppose the first set of Macs coming out with M4 will be the Pros around October… Let’s wait and see what happens
 

TigeRick

macrumors regular
Original poster
Oct 20, 2012
144
153
Malaysia
Looks like the conversation about the 12GB is reaching Reddit and some other places after physical 12GB memory was found on the iPad Pro.

I suppose the first set of Macs coming out with M4 will be the Pros around October… Let’s wait and see what happens
Too bad it's for a different reason. If you check the first table on the front page, you'll see why Apple is using the 48Gb memory die as 4GB, by disabling 2GB of it. The symbol M indicates Micron memory.
 

MrGunny94

macrumors 65816
Dec 3, 2016
1,148
675
Malaga, Spain
Too bad for different reason. If you checked the first table in the frontpage, you should know why Apple is using 48Gb memory die as 4GB by disabling 2GB die. The symbol M indicates Micron memory.
Yep, I just read that now after your message - thanks, I've been a bit out of the loop during my travels :D

Let's wait and see what happens though
 