
Confused-User

macrumors 6502a
Oct 14, 2014
852
987
I'd say that if Eliyan offer functionality that Apple wants AND the cheapest way to get that functionality is through them rather than replicating in-house, they will do so. Why not? How's it different from using TSMC, or Micron, or LG?
I think you've hit the nail on the head there - but that is a question of psychology, and not technology. That is, we're trying to figure out strategy decisions rather than technical ones (though they're deeply informed by the ground truths of the technology).

So FWIW, my take on this is that they see "manufacturing" as a special case. It comes with a huge number of risks and costs that are best left to those with deep experience, which Apple does not have. It also insulates them, at least partially, from certain unavoidable problems (bad PR due to labor issues). Of course, this isn't entirely black & white - they apparently do small-scale pathfinding work, as for example with microLEDs (and miniLEDs? I don't recall). Or years back, I think, MEMS.

So they will buy what they need to manufacture things, but they'd prefer to have product IP (as opposed to manufacturing IP, which they're more agnostic about) in house.

If Eliyan is way ahead of where they and TSMC have gotten, then that may indeed cause them to take a practical view of matters. (And if so I'll bet they've at least tried to get an option to acquire Eliyan.) But I think it's only stating the obvious to acknowledge that there will be some pressure in the other direction.

So is Eliyan that far ahead? I have no idea about that, and if there was anyone here who did I'd have bet it was you...
 

treehuggerpro

macrumors regular
Oct 21, 2021
111
124
So is Eliyan that far ahead? I have no idea about that ...

TheNextPlatform 👀

Eliyan Press 👀

UltraFusion = ~1.0 Tbps/mm on Advanced Packaging 👀


[Attachment: Eliyan-FoM.jpg — Eliyan figure-of-merit comparison chart]


Eliyan = 4.0 Tbps/mm on Standard Packaging - Low Power/Low Latency/20-30 mm Reach (Green Column in Red Box)

Eliyan = >10.0 Tbps/mm on Advanced Packaging (Green Column)

TSMC/GUC GLink = 5.0 Tbps/mm on Advanced Packaging
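
To turn those density figures into something concrete, here's a toy calculation. The 20 mm "shoreline" length is a made-up illustrative number, not a known dimension of any Apple or Eliyan design; only the Tbps/mm figures come from the material above.

```c
/* Toy comparison of the aggregate die-to-die bandwidth implied by the
 * quoted bandwidth-density figures (Tbps per mm of die edge).
 * The 20 mm shoreline is hypothetical, purely for illustration. */
#include <stdio.h>

int main(void) {
    const double shoreline_mm = 20.0;  /* hypothetical die-edge length */
    const struct { const char *name; double tbps_per_mm; } links[] = {
        { "UltraFusion (advanced packaging)",    1.0 },
        { "Eliyan NuLink (standard packaging)",  4.0 },
        { "Eliyan NuLink (advanced packaging)", 10.0 },
        { "TSMC/GUC GLink (advanced packaging)", 5.0 },
    };
    const int n = sizeof links / sizeof links[0];

    for (int i = 0; i < n; i++) {
        double tbps = links[i].tbps_per_mm * shoreline_mm; /* aggregate across the edge */
        printf("%-38s %6.1f Tbps  (~%5.1f TB/s)\n", links[i].name, tbps, tbps / 8.0);
    }
    return 0;
}
```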
 
  • Like
Reactions: Antony Newman

name99

macrumors 68020
Jun 21, 2004
2,410
2,318
TheNextPlatform 👀

Eliyan Press 👀

UltraFusion = ~1.0 Tbps/mm on Advanced Packaging 👀


[Attachment: Eliyan-FoM.jpg]

Eliyan = 4.0 Tbps/mm on Standard Packaging - Low Power/Low Latency/20-30 mm Reach (Green Column in Red Box)

Eliyan = >10.0 Tbps/mm on Advanced Packaging (Green Column)

TSMC/GUC GLink = 5.0 Tbps/mm on Advanced Packaging

Remember: often these sorts of articles tell you more about the PR team in a company than the technology...

A similar version of this is when we hear endlessly about how Intel "must be" ahead of TSMC because they have sexy names like EMIB and Foveros, delivered at sexy conferences via sexy slides. Meanwhile TSMC spends their budget on engineering, not marketing, gives the tech names like InFO and CoWoS – and ships earlier than Intel, usually with technically superior results.
Or, for a recent version, Apple "must be" behind on AI – simply because until this week they didn't spend their PR budget on it.

Another version of this is the HW AI company Groq (not to be confused with Twitter's Grok – both read too much Heinlein in their youth...).
If you believe the advertising, Groq have some sort of miracle AI product (as they repeat endlessly on ten thousand damn podcasts and youtube channels); what they actually seem to have is a bog-standard AI chip, just like ANE, just like Dojo – low power because the developer/compiler is doing all the low-level grunt work instead of relying on "auto" hardware like caches.

Unless you're a specialist, it's hard to know if the PR represents something genuinely new, or hype about something standard.
I'm not enough of a packaging specialist to have an opinion (the only opinions I have are pattern-matching like "if it's from Intel, based on ten years' experience it's hype over something everyone else already has" or "if it's from TSMC it's probably much more impressive than it sounds"). So maybe there's something legit here. Or maybe Eliyan are just trying the Intel route to quick riches.
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
987
Remember: often these sorts of articles tell you more about the PR team in a company than the technology...
[...]
Unless you're a specialist, it's hard to know if the PR represents something genuinely new, or hype about something standard.
I'm not enough of a packaging specialist to have an opinion (the only opinions I have are pattern-matching like "if it's from Intel, based on ten years' experience it's hype over something everyone else already has" or "if it's from TSMC it's probably much more impressive than it sounds"). So maybe there's something legit here. Or maybe Eliyan are just trying the Intel route to quick riches.

This restates my earlier point, which is that we don't really know if Eliyan has any advantage over tech that Apple and TSMC are developing (or have already developed). The only hard data point we have is Apple's initial Ultrafusion. But we do know that Apple is reluctant (at least) to use outside tech these days, in most circumstances.

We can't conclude anything with certainty from all this. But I would bet against it, on balance.

I think it's more likely that we're not seeing M4x in Macs because Apple has decided that the M4 for the iPP is all the silicon it can spare for now, while building giant stocks of A18s for release in the iPhone 16 this fall.

One may hope that, if N3E ramps as smoothly as is being rumored, once they get into July or August they'll be comfortable enough with those stocks to announce new Studios, and maybe Minis. The laptops, which are MUCH higher volume, will probably wait until after the phone is released. :-(

I am still very skeptical of the reports that the new Studio will wait until 2025. That seems insane.
 

treehuggerpro

macrumors regular
Oct 21, 2021
111
124
Remember: often these sorts of articles tell you more about the PR team in a company than the technology...

Yes, and in many respects the material is quite repetitive, but it’s available for anyone interested. Collectively there is enough strewn about to get a good grasp of the key technologies, specifications and benefits that will come from MCMs moving to Standard Packaging.

Eliyan effectively has five key specs to market, which doesn't leave much wiggle room! Their specs weren't the basis of my speculation, though. The speculation was built on an expectation that Apple will move to Standard Packaging, combined with Eliyan's timing: their PHY was taped out on 3nm earlier this year and the first silicon is due Q3. They have a customer of scale.

As said in previous posts, Standard Packaging is cheaper, higher yielding, and faster to manufacture. When it comes to manufacturing and new processes, Apple is typically a first mover, gobbling up supply.
 
Last edited:

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
Regarding Eliyan, their secret sauce is that they can send signals in both directions at the same time on each lead.

That's a massive difference from the current interposer-style solutions out there, UltraFusion included.
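
A back-of-the-envelope sketch of why simultaneous bidirectional signalling matters for bandwidth density; the per-wire rate and wire density below are illustrative guesses, not Eliyan or Apple figures.

```c
/* Why driving each wire in both directions at once raises bandwidth
 * density.  Both constants are invented for illustration only. */
#include <stdio.h>

int main(void) {
    const double gbps_per_wire = 32.0;  /* assumed per-wire signalling rate */
    const double wires_per_mm  = 50.0;  /* assumed wire density along the die edge */

    double one_way   = gbps_per_wire * wires_per_mm;  /* each wire carries traffic one way */
    double both_ways = 2.0 * one_way;                 /* same wires, both directions at once */

    printf("unidirectional wires:        %.2f Tbps/mm\n", one_way / 1000.0);
    printf("simultaneous bidirectional:  %.2f Tbps/mm\n", both_ways / 1000.0);
    return 0;
}
```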

I had no idea about this company until this thread, but looking into some theories here that Apple is holding off until they can use this makes a lot of sense to me. It may be the piece that was missing to make the long rumored “Ultra” chip a viable product.
 

tenthousandthings

Contributor
May 14, 2012
276
323
New Haven, CT
This restates my earlier point, which is that we don't really know if Eliyan has any advantage over tech that Apple and TSMC are developing (or have already developed). The only hard data point we have is Apple's initial Ultrafusion. But we do know that Apple is reluctant (at least) to use outside tech these days, in most circumstances.

We can't conclude anything with certainty from all this. But I would bet against it, on balance. [...]
To expand on this, we do know (now) that Apple UltraFusion = TSMC InFO-LSI (Local Silicon Interconnect). InFO-LSI was new when Apple introduced it in the M1 Ultra. We also know that Nvidia's Blackwell GPUs use TSMC CoWoS-L. This is relevant because CoWoS-L also uses LSI, "...combining the merits of CoWoS-S and InFO technologies to provide the most flexible integration using interposer with LSI (Local Silicon Interconnect) chip for die-to-die interconnect..." [TSMC]

Thus, the How To Build A Better “Blackwell” GPU Than Nvidia Did interview/article is relevant with regard to Apple and UltraFusion, in that it's a critique of a TSMC technology used in both products (Blackwell GPU and M1/M2 Ultra SoC).

But this is skating to where the puck is now, not to where it will be. There are a number of indicators (beyond just common sense and tens of billions of $) in the TSMC press-release tea leaves (with all the caveats about PR tea-leaf reading that @name99 noted) that may show movement beyond LSI.

First is the fact TSMC has scrubbed the old InFO-R, InFO-L and InFO-LSI diagrams (dating to 2020) from their PR site. The InFO coverage is more generalized and less specific than it used to be. The last entry in "The Chronicle of InFO" dates to August 2021, about six months before InFO-LSI was used for the first time in a consumer product in March 2022 (M1 Ultra). [TSMC]

A second suggestive element is a specific goal set in a September 2023 press release about the 3DFabric Alliance, from the OIP Ecosystem Forum, in Santa Clara, California: "...[TSMC] initiated a three-way collaboration with substrate and EDA partners with the goal to deliver 10x productivity gains from automatic substrate routing..." Given Eliyan's claims of NuLink being "5x-10x better than competing technologies," perhaps we can read this as a response.

Also, note that we can look forward to the same event (same name, same month, same place) this year: [TSMC Events] So prepare your crystal ball...

The third element isn't a press release, but the lack of something: the M3 Ultra. I think we can now say that the lack of an interposer (or whatever it is called, interposer or interconnect, I'm not sure which is the right term) on the M3 Max means that M3 Ultra was never in the cards. It wasn't "cancelled," it was never part of the plan. It makes perfect sense in retrospect. But that is all we can say. It doesn't tell us anything about Apple's plans for M4 Pro, M4 Max and M4 Ultra.

My guess? There won't be an interposer on M4 Max, but not because there will be no M4 Ultra. Everyone will freak out before Apple announces UltraFusion 2.0 and the desktops, Mini, Studio, and Pro, are all updated together. The freakout could literally be for ten minutes during an October/November 2024 Mac event, or it could be for months (March 2025, exactly three years after the M1 Ultra launch). Either way, Apple will have skated to where the puck was going, not to where it was.
 
Last edited:

Confused-User

macrumors 6502a
Oct 14, 2014
852
987
Interesting post.

The third element isn't a press release, but the lack of something: the M3 Ultra. I think we can now say that the lack of an interposer (or whatever it is called, interposer or interconnect, I'm not sure which is the right term) on the M3 Max means that M3 Ultra was never in the cards. It wasn't "cancelled," it was never part of the plan. It makes perfect sense in retrospect. But that is all we can say. It doesn't tell us anything about Apple's plans for M4 Pro, M4 Max and M4 Ultra.

An interposer is an actual piece of silicon. The missing bits on the M3 would be, generically, an interconnect.

I thought everyone understood at this point that the M3 Ultra was never in the cards, as you say, because the interconnect (UltraFusion) was missing. This was clearly a choice, not a last-minute change in plans, which became much clearer when the M4 was released.

My guess? There won't be an interposer on M4 Max, but not because there will be no M4 Ultra. Everyone will freak out before Apple announces UltraFusion 2.0 and the desktops, Mini, Studio, and Pro, are all updated together. The freakout could literally be for ten minutes during an October/November 2024 Mac event, or it could be for months (March 2025, exactly three years after the M1 Ultra launch). Either way, Apple will have skated to where the puck was going, not to where it was.

You're not going to get an Ultra without an interconnect, unless you think it'll be a monolithic chip (as some were speculating). Or are you saying that the Ultra will be made from a different chip than the Max?

That's looking somewhat more likely in the wake of the announcement of their datacenter with custom AS chips.
 
  • Like
Reactions: tenthousandthings

name99

macrumors 68020
Jun 21, 2004
2,410
2,318
You're not going to get an Ultra without an interconnect, unless you think it'll be a monolithic chip (as some were speculating). Or are you saying that the Ultra will be made from a different chip than the Max?

That's looking somewhat more likely in the wake of the announcement of their datacenter with custom AS chips.
Apple have a whole bunch of patents (which I talked about in an earlier post) for precisely this functionality, an RDL without an interposer. In a way the goal is a "synthetic" monolithic chip, larger than reticle limit.

The basic technology is (I think I'm remembering this correctly)
- singulate chips
- form known good dies into a synthetic wafer (bond them via organics or silicon ink)
- I think there's a flip chip stage at this point, possibly also with die thinning?
- build the RDL on top of the synthetic wafer as additional metal layers

There are variants on this; for example, you can build pairs of Maxes appropriately aligned, build the RDL on top of/between the pairs, and then singulate, which has more risk of bad die but less risk from the synthetic-wafer step.
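
A toy yield comparison of those two flows; the die yield and assembly yield below are invented numbers, purely to illustrate the "more risk of bad die vs. less risk from the synthetic-wafer step" tradeoff.

```c
/* Toy yield comparison for the two flows described above.
 * Both yield numbers are invented for illustration. */
#include <stdio.h>

int main(void) {
    const double die_yield      = 0.80;  /* assumed chance any one Max die is good */
    const double assembly_yield = 0.95;  /* assumed yield of the synthetic-wafer step */

    /* Flow A: singulate, test, keep only known-good dies, then reassemble.
     * Bad dies are screened out up front, so only the assembly step costs yield. */
    double flow_a = assembly_yield;

    /* Flow B: build the RDL across adjacent Max pairs on the original wafer,
     * then singulate.  Both dies in a pair must happen to be good. */
    double flow_b = die_yield * die_yield;

    printf("known-good-die + synthetic wafer: %.0f%% good parts\n", flow_a * 100.0);
    printf("pair-on-wafer, then singulate:    %.0f%% good parts\n", flow_b * 100.0);
    return 0;
}
```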

The patents below discuss some of these ideas, but I'm no expert so I can't tell if they are aspirational pie in the sky, or perfectly reasonable given today's tech.

https://patents.google.com/patent/US20180294230A1 and

https://patents.google.com/patent/US20220013504A1
 
  • Like
Reactions: tenthousandthings

komuh

macrumors regular
May 13, 2023
126
113
[...]
My guess? There won't be an interposer on M4 Max, but not because there will be no M4 Ultra. Everyone will freak out before Apple announces UltraFusion 2.0 and the desktops, Mini, Studio, and Pro, are all updated together. The freakout could literally be for ten minutes during an October/November 2024 Mac event, or it could be for months (March 2025, exactly three years after the M1 Ultra launch). Either way, Apple will have skated to where the puck was going, not to where it was.
What will the biggest gain from UltraFusion 2.0 be? (And what was the biggest pain point with UltraFusion 1?)
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
987
Apple have a whole bunch of patents (which I talked about in an earlier post) for precisely this functionality, an RDL without an interposer. In a way the goal is a "synthetic" monolithic chip, larger than reticle limit.

The basic technology is (I think I'm remembering this correctly)
- singulate chips
- form known good dies into a synthetic wafer (bond them via organics or silicon ink)
- I think there's a flip chip stage at this point, possibly also with die thinning?
- build the RDL on top of the synthetic wafer as additional metal layers

There are variants on this; for example, you can build pairs of Maxes appropriately aligned, build the RDL on top of/between the pairs, and then singulate, which has more risk of bad die but less risk from the synthetic-wafer step.

The patents below discuss some of these ideas, but I'm no expert so I can't tell if they are aspirational pie in the sky, or perfectly reasonable given today's tech.

https://patents.google.com/patent/US20180294230A1 and

https://patents.google.com/patent/US20220013504A1
Ah, we're talking about two different things here - when I said "interconnect" I meant the IP on the shoreline - UltraFusion, or whatever comes next, on the die. I was specifically distinguishing that from any interposer or other additional silicon.

That aside, that's pretty slick. Isn't the second option (building an RDL between Max pairs directly on the initial wafer) similar to what Cerebras is doing, just on a much smaller scale? That does seem to be a good choice for a 2x Ultra. You could tessellate a 4x design too, *if* the center is the same process and dimensions as the four compute tiles surrounding it - but that seems like a less good bet.
 

tenthousandthings

Contributor
May 14, 2012
276
323
New Haven, CT
You're not going to get an Ultra without an interconnect, unless you think it'll be a monolithic chip (as some were speculating). Or are you saying that the Ultra will be made from a different chip than the Max?

That's looking somewhat more likely in the wake of the announcement of their datacenter with custom AS chips.
Sorry, I wish I were saying that, but no, the source of that prediction is the difference in the diagrams between InFO-LSI and InFO-oS ("assembly-on-substrate"), where there's no LSI bridge. I realize InFO-oS went into production before InFO-LSI, and they have different purposes, so I've got it backwards and/or I'm going in the wrong direction (indeed, you might call me a confused user...). But with all the research into substrates, it just made me think about what the worst case would be in terms of people freaking out, and that would be no visible "shoreline" interconnect on the M4 Max.

For those who don't know what I'm referring to, see the 2020 TSMC slides preserved in this article (Anandtech): TSMC's Version of EMIB is 'LSI'
 

treehuggerpro

macrumors regular
Oct 21, 2021
111
124
One last thing. This discussion has resulted in an unfortunate fixation of sorts that I think fundamentally misconstrues what UltraFusion is. As I see it, the significant part, the more Apple-centric IP, is a chip architecture that allows two or more connected chips to appear and act as a single entity to the OS and software.

The send/receive interconnects and the substrate for the signal paths are essentially defined by creating balanced configurations (as per the patent) and meeting key specifications like bandwidth. Both are necessary for UltraFusion to exist and function, but their underpinnings are a combination of substrate manufacturing and what have become well-spec'd off-the-shelf components. Attaching [UltraFusion] to the physical bit visible in the marketing overlooks the significance of the architecture making it happen.
 
Last edited:

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
They'll likely maintain the economics of their existing three-chip strategy (as Gurman's code names suggest) via the following format:

....
M4 Pro (Brava)
M4 Max (2x Brava)
....

But note that a simple doubling of the Pro to get the Max would be a departure from Apple's current segmentation. Using the M3 as an example: while the Max is ≈2 x Pro for GPU cores (40 cores/18 cores) and CPU performance cores (12P/6P), it's only 1 x Pro for the Neural Engine, 2/3 x Pro for the efficiency cores (4E/6E), and 3.5 x Pro for max RAM (128 GB/36 GB).
 
Last edited:

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
Apple seems to consider M4 as an Armv9.2-A core.
"Technically apple-m4 is ARMv9.2a, but a quirk of LLVM defines v9.0 as requiring SVE, which is optional according to the Arm ARM and not supported by the core. ARMv8.7a is the next closest choice."

The Armv9 specifications appear to be so unclear that LLVM has made erroneous assumptions that need to be corrected to accurately define the M4.
 

name99

macrumors 68020
Jun 21, 2004
2,410
2,318
That aside, that's pretty slick. Isn't the second option (building an RDL between Max pairs directly on the initial wafer) similar to what Cerebras is doing, just on a much smaller scale?
Probably! I didn't think of that, but yeah, makes sense.

That does seem to be a good choice for a 2x Ultra. You could tessellate a 4x design too, *if* the center is the same process and dimensions as the four compute tiles surrounding it - but that seems like a less good bet.
 

name99

macrumors 68020
Jun 21, 2004
2,410
2,318
But note that a simple doubling of the Pro to get the Max would be a departure from Apple's current segmentation. Using the M3 as an example: while the Max is ≈2 x Pro for GPU cores (40 cores/18 cores) and CPU performance cores (12P/6P), it's only 1 x Pro for the Neural Engine, 2/3 x Pro for the efficiency cores (4E/6E), and 3.5 x Pro for max RAM (128 GB/36 GB).
The segmentation details already changed with the M3 Pro (6+6)! Then again with the M4 (4+6).

I think it would be very foolish to assume that the Pro/Max relationship will continue as it was for M1..M3 just because.
Far more important issues will be
- how many people buy a Max?
- how much money can be saved by a new production scheme that avoids separate Max masks (or variants on this idea)?
- what's the competition doing at that level? QC is shipping 12 P cores, and while the current implementation is disappointing (maybe playing at below M3 Pro level, though it depends on exactly what you care about), that doesn't mean the next one will be. Likewise, a desperate Intel is running out of moves except to add more cores (same as the AMD playbook when they were in the same position).

Apple won't DIRECTLY respond to these competitors, and certainly won't reference them. But they will have to ensure that people willing to pay for a Max don't feel they're being drastically underserved (in CPU AND GPU) relative to competition.
 
  • Like
Reactions: Confused-User

name99

macrumors 68020
Jun 21, 2004
2,410
2,318
Apple seems to consider M4 as an Armv9.2-A core.


The Armv9 specifications appear to be so unclear that LLVM has made erroneous assumptions that need to be corrected to accurately define the M4.
Unfortunately the spec has become so riddled with "optional" features that it's become hard to know what a claim of v8.7 or v9.2 actually implies...
Do we get the atomic 64B stuff? The new timeouts stuff? The new branch profiling stuff? MTE?

It's basically a tech version of Goodhart's law. As soon as people started measuring their dick-length via "what ARMv level have you reached" it became all about reaching that level (in some legal sense) regardless of whether anything useful was now conveyed by that fact...


More interestingly, in terms of the very murky roadmap going forward, the "official" features list includes all the SME variants but NOT SSVE... Make of that what you will.
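
The ACLE feature macros make the point concrete: whatever the headline ARMv number, you end up probing feature by feature. What this prints for any given apple-m4 toolchain depends entirely on that toolchain's target description, which I'm not asserting here.

```c
/* Each feature gets its own ACLE macro, independent of the headline
 * ARMv8.x / ARMv9.x number, so per-feature checks are the only ones
 * that actually mean anything. */
#include <stdio.h>

int main(void) {
#ifdef __ARM_FEATURE_SVE
    puts("SVE:  advertised by this compiler target");
#else
    puts("SVE:  not advertised");
#endif
#ifdef __ARM_FEATURE_SVE2
    puts("SVE2: advertised by this compiler target");
#else
    puts("SVE2: not advertised");
#endif
#ifdef __ARM_FEATURE_SME
    puts("SME:  advertised by this compiler target");
#else
    puts("SME:  not advertised");
#endif
    return 0;
}
```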
 
  • Like
Reactions: Xiao_Xi

name99

macrumors 68020
Jun 21, 2004
2,410
2,318
Apple seems to consider M4 as an Armv9.2-A core.


The Armv9 specifications appear to be so unclear that LLVM has made erroneous assumptions that need to be corrected to accurately define the M4.
I'd say it's more the result of behind-the-scenes political arguing between ARM Ltd and, at first, Apple – now probably also QC.
Apple obviously was more interested in going the AMX route than SVE, and had enough clout to tell ARM "do what you like but we're doing this our way". For whatever reason QC seems equally unenthused about standard SVE. So in a climbdown, ARM makes it "optional" in v9...
It WAS originally non-optional. LLVM did not make an erroneous assumption; reality, in the form of engineering followed by politics, intervened. (Same thing as ARM's original crazy decision that SVE lengths could be any multiple of 128 bits, again shot down by engineers who looked at the spec and said "WTF???")

Now we seem to have moved to the next stage. ARM seems to have felt (who knows exactly what went on?) that they could maybe recover things somewhat by defining an ISA extension called SSVE that kinda sorta matched some of SVE functionality, and matched Apple's AMX vector instructions. Seems plausible.
BUT like I said, Apple isn't putting SSVE support in their LLVM list of capabilities. As we all know, while there is what looks like SSVE support in M4, the performance is terrible (presumably because SSVE demands results be written to the Z register, while the SME/AMX unit is set up to write all results to the ZA register). So???

Is this Apple's way of signaling to ARM "look, your details for SSVE are stupid; we told you so; and the result is that crappy performance for an ISA that we're not going to acknowledge even exists. Go fix the manual like we told you!"?
Is QC (which presumably, given Nuvia's history, mostly agrees with Apple's tech take on how to design an SSVE/SME hardware unit) silently nodding along in agreement, saying the same thing: "we'll give you SME next year, but not SSVE in its current form, nor SVE, not unless you fix A, B and C"?
 

Antony Newman

macrumors member
May 26, 2014
55
46
UK
SVE & SVE2 were the result of Riken & Fujitsu's research into high-efficiency supercomputing (Fugaku) - which was folded into the ARM ISA.

Fujitsu’s next supercomputer design (Monaka) is due to be finalised in 2027 on 2nm - and currently does not include the SME extensions (which could potentially speed up, or reduce the latency of, the AI workloads currently under consideration).

 
  • Like
Reactions: Chuckeee

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Apple seems to consider M4 as an Armv9.2-A core.

Technically apple-m4 is ARMv9.2a, but a quirk of LLVM defines v9.0 as requiring SVE, ...

The Armv9 specifications appear to be so unclear that LLVM has made erroneous assumptions that need to be corrected to accurately define the M4.

Not going to be much of a 'quirk' if Arm's v9 and almost everyone else's v9 implementation has SVE implemented.
If Arm is openly, freely contributing most of the Arm v9 specification optimizations, it probably isn't about lack of clarity. It's about having one less ifdef to chase for a single vendor going after the 'participation trophy' subset version of v9. Implementing a 'subset' of SVE before doing SVE proper is more the quirky path.

For a compiler, if every single feature of v9 has to be a separate command-line flag, what you end up with is a large bucketload of flag bloat (and the code that goes along with flag bloat, and the combinations/permutations of testing all those flags for interactions).

Having a 'standard' where pragmatically every feature is 'optional' really isn't much of a standard. It absolutely doesn't lead to easier-to-maintain and easier-to-test source code. It is more of a political convenience to make 'participation trophies' easier to hand out.
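
The scale of that testing problem is easy to see: n independently optional features nominally give 2^n combinations (the values of n below are arbitrary).

```c
/* If every one of n features is independently optional, a compiler or
 * validation matrix nominally faces 2^n feature combinations. */
#include <stdio.h>

int main(void) {
    for (int n = 4; n <= 24; n += 4) {
        unsigned long long combos = 1ULL << n;  /* 2^n */
        printf("%2d optional features -> %llu possible feature combinations\n", n, combos);
    }
    return 0;
}
```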
 

altaic

macrumors 6502a
Jan 26, 2004
711
484
Apple do have a bunch of patents on a particular idea. All LPDDR5 has to have internal error correction, but normally they do that internally and that's the end of the story. Apple's patents are about how to propagate the info out of the DRAM (I don't know if the details of how they do this can be done on all DRAM, or they ask for a slight tweak by Micron et al) so that the OS can track the patterns of where error correction is needed and respond appropriately (which may range from giving that page of DRAM more frequent refreshes to masking it out and never using it again).

This strongly suggests that, at least at the low-end, they don't use "special" ECC DRAM beyond the normal LPDDR5.

At this point the discussion generally devolves into
- uninteresting legalistic arguments about whether this is "real ECC" and/or
- totally uninformed-by-any-sort-of-data claims as to whether it is or is not "good enough"
and I tune out.


Remember that patent I pointed out regarding ranks? Even if we never see ranks in a consumer Mac...
One can quantify memory robustness empirically with a reference radiation source, some shielding, a device with LPDDR5 known to not have ECC, and an M2 Mac Mini. Even better if one can obtain a device known to have ECC LPDDR5, though that might be a lot pricier.
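
For the curious, a toy sketch of the kind of per-page policy name99's quoted description implies: count corrected-error reports per DRAM page and escalate. The thresholds, the types, and the reporting mechanism are all invented for illustration; nothing here reflects Apple's actual implementation.

```c
/* Toy per-page escalation policy: track corrected-error reports and
 * respond with more frequent refresh or page retirement.
 * All thresholds and types are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

#define REFRESH_THRESHOLD 4   /* assumed: start refreshing this page more often */
#define RETIRE_THRESHOLD  16  /* assumed: mask the page out and never use it again */

typedef enum { PAGE_OK, PAGE_HOT_REFRESH, PAGE_RETIRED } page_state_t;

typedef struct {
    uint64_t     corrected_errors;  /* corrections reported by the DRAM for this page */
    page_state_t state;
} page_record_t;

/* Called whenever the DRAM reports that it corrected an error in this page. */
static void on_corrected_error(page_record_t *page)
{
    if (page->state == PAGE_RETIRED)
        return;  /* already out of service */

    page->corrected_errors++;

    if (page->corrected_errors >= RETIRE_THRESHOLD)
        page->state = PAGE_RETIRED;       /* stop using the page entirely */
    else if (page->corrected_errors >= REFRESH_THRESHOLD)
        page->state = PAGE_HOT_REFRESH;   /* give the page more frequent refreshes */
}

int main(void)
{
    page_record_t page = { 0, PAGE_OK };

    for (int i = 0; i < 20; i++)
        on_corrected_error(&page);

    printf("corrected errors: %llu, state: %d\n",
           (unsigned long long)page.corrected_errors, (int)page.state);
    return 0;
}
```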
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
The segmentation details already changed with the M3 Pro (6+6)! Then again with the M4 (4+6).
I already indicated the M3 Pro is 6+6 (see the denominators in my post). You seem to have missed that.

I think it would be very foolish to assume that the Pro/Max relationship will continue as it was for M1..M3 just because.
I believe you've misunderstood my post. I wasn't being "very foolish to assume...", since that's never what I said. Instead, the poster to whom I was replying appeared to be saying the M4 will be a continuation of what they're doing now with the M3, in that M4 Max = 2 x M4 Pro. In response, I was explaining that, in the M3, the Max isn't simply double the Pro.
 

treehuggerpro

macrumors regular
Oct 21, 2021
111
124
Hi @theorist9, it means (speculatively) that the future Max will be made using 2x Pro (Brava) chips, given what Gurman’s code names imply. I’ll just point you back to my original posts for more, and say thanks to everyone for this conversation.

Post #585
Post #598
 