
ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
Here is an important question, and one that goes to the OP's point of why not a completely new chip. Pro gear needs ECC. If you have 256GB, you're going to have a bit flip more than once a day. That is *not* acceptable in a pro-market device. None of the Apple chips have ECC as far as I know. As such, it might require a completely new chip that does have it.
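For a rough sense of scale, here's a back-of-the-envelope sketch in Python. FIT means failures per 10^9 device-hours; the per-Mbit soft-error rate below is an illustrative assumption (published DRAM figures vary by orders of magnitude), not a measured number:

    GB = 256
    mbits = GB * 8 * 1024                 # memory size in megabits
    fit_per_mbit = 50                     # assumed soft-error rate (FIT per Mbit)
    errors_per_hour = mbits * fit_per_mbit / 1e9
    print(f"~{errors_per_hour * 24:.1f} expected bit flips per day")   # ~2.5/day

Even at a modest assumed rate, 256GB lands above one flip per day, which is the whole case for ECC at these capacities.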
 

maikerukun

macrumors 6502a
Original poster
Oct 22, 2009
719
1,037
Here is an important question, and one that goes to the OP's point of why not a completely new chip. Pro gear needs ECC. If you have 256GB, you're going to have a bit flip more than once a day. That is *not* acceptable in a pro-market device. None of the Apple chips have ECC as far as I know. As such, it might require a completely new chip that does have it.
Which further feeds my suspicions... I'm really starting to believe there's almost ZERO chance Apple doesn't have a chip, one they have mentioned nothing about whatsoever, reserved only for the pro machine. And I can't wait to meet it!
 

mattspace

macrumors 68040
Jun 5, 2013
3,344
2,975
Australia
Which further feeds my suspicions... I'm really starting to believe there's almost ZERO chance Apple doesn't have a chip, one they have mentioned nothing about whatsoever, reserved only for the pro machine. And I can't wait to meet it!

Where this gets really messy is... how much is Apple going to have to charge to do an entire chip for which only a tiny sliver of their userbase is the sole customer?

And then do that chip in multiple SKUs and configurations for on-die RAM.

Perhaps that's avoidable by only doing a single design with the maximum RAM, and then binning to get lower-spec versions, but even then, the failed full-memory chip costs as much to produce as the successful one.

Maybe it's an advantage that they can have a relatively boutique fab size to make them, but I can't imagine they'll be cheaper to Apple than a Xeon or Epyc would be.
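To put rough numbers on the binning economics, here's a minimal Poisson yield sketch in Python; the defect density and die size are illustrative assumptions, not TSMC figures:

    import math

    defect_density = 0.1                  # assumed defects per cm^2
    die_area_cm2 = 4.0                    # ~400 mm^2 Max-class die (assumed)
    full_spec_yield = math.exp(-defect_density * die_area_cm2)   # Poisson yield model
    print(f"full-spec yield: {full_spec_yield:.0%}")             # ~67%
    # The ~33% of dies that miss full spec can be binned down to lower SKUs,
    # but each die consumed the same wafer area, and cost, either way.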
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Y'all know the RAM is not fabricated on-wafer with the SoCs, yeah...?

The SoC has the memory controllers integrated, but the RAM itself is soldered to the mobo/PCB/substrate/whatever, just like the SoC is...
 

mattspace

macrumors 68040
Jun 5, 2013
3,344
2,975
Australia
Y'all know the RAM is not fabricated on-wafer with the SoCs, yeah...?

The SoC has the memory controllers integrated, but the RAM itself is soldered to the mobo/PCB/substrate/whatever, just like the SoC is...
Ahh, my mistake; the impression I had was that it was on-die. Aside from memory, the binning for speed and the sheer small volume of absolute unit sales would still apply.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Ahh, my mistake; the impression I had was that it was on-die. Aside from memory, the binning for speed and the sheer small volume of absolute unit sales would still apply.

Binning covers CPU & GPU core count, not speed/clock frequency...

I see the first ASi Mac Pro with M2 Ultra & Extreme SoCs built on either the N3 or N3E process...

Two years down the road, when TSMC has N3X in high-volume production, is when we will see the M3 Mac Pro, with higher clocks and power levels than the laptop/tablet-centric SoCs we are working with now...
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
Where this gets really messy is... how much is Apple going to have to charge to do an entire chip for which only a tiny sliver of their userbase is the sole customer?

And then do that chip in multiple SKUs and configurations for on-die RAM.

Perhaps that's avoidable by only doing a single design with the maximum RAM, and then binning to get lower-spec versions, but even then, the failed full-memory chip costs as much to produce as the successful one.

Maybe it's an advantage that they can have a relatively boutique fab size to make them, but I can't imagine they'll be cheaper to Apple than a Xeon or Epyc would be.
This goes back to my earlier post: after they release this chip, lower versions of it can be used for an iMac Pro and a refresh of the Mac Studio.
 
  • Like
Reactions: maikerukun

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
Y'all know the RAM is not fabricated on-wafer with the SoCs, yeah...?

The SoC has the memory controllers integrated, but the RAM itself is soldered to the mobo/PCB/substrate/whatever, just like the SoC is...
I did know that it's not part of the chip, but I thought it was all on one package with Apple silicon? So if RAM isn't part of the Apple silicon package, and is just soldered on the motherboard, why not have unsoldered DIMMs? I thought the reason RAM options were limited with Apple silicon was that the RAM is on that package.

Still, the internals of the chip need to be modified to deal with ECC, so some change will still need to take place.

[Attached image: an Apple M1 package, with the SoC die and two RAM modules mounted side by side on the same package]


“Apple designed the M1 as a system on a chip (SoC), with the RAM included as part of this package. While integrating RAM with the SoC is common in smartphones, such as the iPhone 14 series, this is a relatively new idea for desktop and laptop computers. Adding RAM to the SoC design enables faster access to memory, improving efficiency...In addition to physically adding the RAM to the SoC, Apple has changed the fundamental way the system uses memory. This is where unified memory on Apple silicon comes into play.”

So am I misunderstanding it, or isn't the RAM right on the SoC package there?
 
Last edited:
  • Like
Reactions: maikerukun

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Y'all know the RAM is not fabricated on-wafer with the SoCs, yeah...?

The SoC has the memory controllers integrated, but the RAM itself is soldered to the mobo/PCB/substrate/whatever, just like the SoC is...

I did know that it's not part of the chip, but I thought it was all on one package with Apple silicon? So if RAM isn't part of the Apple silicon package, and is just soldered on the motherboard, why not have unsoldered DIMMs? I thought the reason RAM options were limited with Apple silicon was that the RAM is on that package.

Still, the internals of the chip need to be modified to deal with ECC, so some change will still need to take place.

[Attached image: an Apple M1 package, with the SoC die and two RAM modules mounted side by side on the same package]



So am I misunderstanding it, or isn't the RAM right on the SoC package there?

It is clear from the pic that the SoC and the RAM are two (well, in this case three) separate items, joined together on "the package"...

On the package (which I say above in the larger bold text) is not the same as being part of the die, which was the point I was making...

The reason the RAM is right there on the package is distance to the SoC (memory controllers); DIMMs would increase said distance (trace lengths) and not deliver the performance Apple wants...

The only thing I see (from the perspective of someone who is not an electrical engineer or anything like that) needed for ECC is actual ECC RAM and (I would assume) the proper changes to the on-SoC memory controllers...?
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
It is clear from the pic that the SoC and the RAM are two (well, in this case three) separate items, joined together on "the package"...

On the package (which I say above in the larger bold text) is not the same as being part of the die, which was the point I was making...

The reason the RAM is right there on the package is distance to the SoC (memory controllers); DIMMs would increase said distance (trace lengths) and not deliver the performance Apple wants...

The only thing I see (from the perspective of someone who is not an electrical engineer or anything like that) needed for ECC is actual ECC RAM and (I would assume) the proper changes to the on-SoC memory controllers...?

Ok, whew, thought I missed something big. We were all just agreeing past each other.

While we're wishing for stuff: we should get teeny-tiny DIMM sockets right on the package, for upgradable memory. :D
 
  • Like
Reactions: Boil

maikerukun

macrumors 6502a
Original poster
Oct 22, 2009
719
1,037
Where this gets really messy is... how much is Apple going to have to charge to do an entire chip for which only a tiny sliver of their userbase is the sole customer?

And then do that chip in multiple SKUs and configurations for on-die RAM.

Perhaps that's avoidable by only doing a single design with the maximum RAM, and then binning to get lower-spec versions, but even then, the failed full-memory chip costs as much to produce as the successful one.

Maybe it's an advantage that they can have a relatively boutique fab size to make them, but I can't imagine they'll be cheaper to Apple than a Xeon or Epyc would be.
Oh, I was NEVER expecting a system that beats a maxed-out 7.1 to cost anything less than $55k maxed out, starting around $6k or so. There's really no other way to make that happen.
 

Rob__Mac

macrumors member
Feb 18, 2021
93
463
Hackney, London
I can't believe we're nearly in 2023 and I'm still saying this, but it really remains to be seen what Apple will do with AS and 3rd-party GPU support.

A real dream would be to chuck an MPX module into my 7,1 that's whatever M2 Pro thing they come out with, but then I guess that would negate the need for any of the RAM, certainly the Xeon, and maybe the GPUs as well, at which point… what, my PCIe M.2 SSDs and a headphone port are what I'm adding? 😬
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Here is an important question, and one that goes to the OP's point of why not a completely new chip. Pro gear needs ECC. If you have 256GB, you're going to have a bit flip more than once a day. That is *not* acceptable in a pro-market device. None of the Apple chips have ECC as far as I know. As such, it might require a completely new chip that does have it.

Or Apple just skips ECC and moves on, while staying pretty close to the 128-256GB range. It depends on how they primarily look at the RAM: is it CPU RAM or GPU RAM? In the latter case, there really isn't a good track record at all with Apple. "Pro" AMD GPU cards have supported ECC functionality, but whenever Apple uses those dies, that support gets tossed in favor of larger raw RAM capacity. (For GPUs, ECC is normally an "added on top" feature, where the memory controller stores and fetches 'extra' data on top of foundation RAM modules that don't do ECC directly.)

If Apple did ECC, it would likely be similar to what Nvidia is doing with its Grace server packages: still an "ECC-less" LPDDR5 foundation with enhanced memory controllers on top, and capacity capped well under 1TB. Is Apple going to saddle their GPU with the extra overhead? A coin flip. If they do ECC, they will take an "end-user usable" capacity hit, but they keep using the same memory across the Mac product lineup (economies of scale) and don't have to completely throw away the memory controller they already have working. Again, far, far, far more fiscally efficient.
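For a sense of that capacity hit, here's a sketch assuming a standard SECDED-style layout of 8 check bits per 64 data bits, stored inline in non-ECC RAM (the layout is an assumption; Apple or Nvidia could pick a different code):

    raw_gb = 256
    data_bits, check_bits = 64, 8        # assumed SECDED word layout
    usable_gb = raw_gb * data_bits / (data_bits + check_bits)
    print(f"usable: {usable_gb:.0f} GB of {raw_gb} GB raw")   # ~228 GB

So a 256GB machine would advertise roughly 228GB usable, which is exactly the "end-user usable" capacity hit described above.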

It doesn't have to be a completely new chip. It can just be a 're-factored'/disaggregated chip that is only minimally different from the laptop dies: the same LPDDR5 memory controller with a small addition on top, and UltraFusion connectors on two sides of a more squarish shape. (They'd lose die-edge space for memory controllers, which means max capacity would go down. And if max capacity goes down, the need for ECC goes down too; e.g., Intel ships mainstream desktop CPU SoCs with ECC abilities turned off at a max capacity of 128GB, and 256GB isn't all that far from that border. AMD, yes, effectively does enable ECC on mainstream parts, though support depends on the motherboard.)

So: the same major subcomponents as an M2 Max die, but broken down into differently shaped groupings to make more scaling-effective chiplets. The M1 Max doesn't scale well past two chiplets/tiles; it is simply too chunky a chiplet.

The Mac Pro is likely going to share an SoC with the Studio and/or a large-screen "iMac Pro". If Apple is going to cover 256GB of RAM with ECC, then they'll need to cover the Studio/"iMac Pro" zone with it also. And the Studio isn't likely to grow bigger, so the same space constraints apply.
 
  • Like
Reactions: maikerukun

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Where this gets really messy is... how much is Apple going to have to charge to do an entire chip for which only a tiny sliver of their userbase is the sole customer?

The Mac Pro and Mac Studio (and "iMac Pro", if they decide to do one) will likely share a "duo die" Ultra-class SoC between them. They don't have to charge some crazy amount, because they can lean on Mac Studio economies of scale to push the prices down. If there is a Studio and/or "iMac Pro", the numbers sold probably go up 5-10x. If they did both, they could probably get an order-of-magnitude increase, which means spreading costs over far more users.

Apple just needs incrementally more desktop-oriented building blocks for the upper 'half' of their Mac lineup. Chopping the UltraFusion connector off the laptop Max-class die would be easy to do, and it actually gets them a cheaper die to make for the laptops.

Once the upper half of the Mac desktop lineup is shifted over, there is volume to spread costs over; Apple doesn't need to lean on just one sub-SKU of the MBP 14"/16" to generate volume of over 0.7-1M/year.
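That economies-of-scale point is just amortization arithmetic; a quick sketch in Python (the design cost and volumes are assumptions for illustration, not known Apple numbers):

    nre = 500e6                          # assumed one-time design + mask cost (USD)
    for units in (50_000, 500_000, 5_000_000):
        print(f"{units:>9,} units -> ${nre / units:>6,.0f} design cost per chip")

At Mac Pro-only volume, the per-chip design tax is four figures; spread over Studio/"iMac Pro" volume, it shrinks toward rounding error.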


Apple is unlikely to charge anything less than what they are charging for an Ultra now. Going from a full Max to an Ultra is $1,200 now, so figure a ballpark of $1,900+ for the Ultra SoC (on top of the baseline credit for a Max); if shifted onto a laptop cost foundation, perhaps more like the $1,400-1,600 range. And when they scale up chiplets, they are also pragmatically scaling up memory, so there's that surcharge to take as well.

So buyers will be paying Xeon W 6300-6400-series / Threadripper 5000-level prices (but getting both a CPU and a GPU out of it). The Mac Pro probably will not get much cheaper, though the entry starting point should get better performance-wise (which probably will be enough value-add to sell more of the lower half of the Mac Pro SKUs; whether that is enough to offset losses at the extreme top end is debatable).
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
@deconstruct60

So Mn Max laptop variant & Mn Max desktop variant...?

I hope so, would be nice to see Apple focus on a desktop/workstation variant of Apple silicon...

M3 Ultra/Extreme should really shine on the N3X process...?
 
  • Like
Reactions: maikerukun

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
@deconstruct60

So Mn Max laptop variant & Mn Max desktop variant...?

At some point along the way, Apple is going to hit forces that push the Max-sized die up into chiplets, for the same reason AMD is going that way now with their top-end GPUs.


[Image: AMD RDNA 3 chiplet architecture slide, from the article below]

AMD RDNA 3 GPU Architecture Deep Dive: The Ryzen Moment for GPUs | Tom's Hardware (tomshardware.com)


N3 is more expensive than N5, and N2 is going to be even more expensive. At some point, huge monolithic dies carrying all of the L3/"system level cache" that Apple is lugging around aren't going to make sense on N2 (and smaller). It has already hit the diminishing-returns zone.

There are very similar diminishing returns for outward/external-facing functionality like PCI-e lanes (closer to the analog I/O curve on that graph than to logic). Implementing 8 or 16 Thunderbolt controllers in N3 or N2 is probably already in the zone where there is less and less "bang for the buck".

Amazon's Graviton 3 handles a somewhat similar issue: memory and PCI-e chiplets are attached to a central die that contains just the compute cores and the internal mesh.

Pretty good chance Apple is going to play this like Nvidia and stick with monolithic as long as they can. Mx Pro-sized dies and smaller (at the prices Apple is charging) will be doable for longer; the Mx Max-sized die (~400mm^2) is going to hit the 'wall' substantially sooner. The more die-external I/O, the greater the pressure.

For example, with a 'desktop' Max-class SoC where the Thunderbolt controllers were attached via an N4 chiplet, there could be 6 ports on both the "Max" and "Ultra" setups, and the Studio would be more uniform in port provisioning. The primary reason they are decoupled now is that port allocation is driven by the MBP 14"/16". [And it's not just the TB controllers: you really only need one SSD controller, one Secure Enclave Processor, etc. Replicating those 4 times on expensive N3 (or later) silicon is just plain dubious if you're not going to use them. The compute cores (CPU/GPU/NPU) need to be replicated; the single-use I/O definitely does not.]

It also opens the door to attaching an x16 PCI-e v4 chiplet instead, if more than one external chiplet is needed in multi-die setups. Again, that can stay back on N4 and save money.

The laptop dies will use PCI-e v4-to-v5 progress to hold the amount of die-external I/O constant (or shrink it): they could go from four x1 PCI-e v4 lanes to just one x1 PCI-e v5 lane (with Apple slapping on an external PCI-e switch to dole out four v3 lanes, or something more flexible). For the desktops (and especially the Mac Pro), they aren't going to be able to 'shrink' their way out of the problem as easily.
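A toy cost comparison of that disaggregation, in Python; the die-area split and per-mm^2 wafer costs are assumptions picked only to show the direction, not real foundry pricing:

    compute_mm2, io_mm2 = 250, 150       # assumed split of a ~400 mm^2 die
    cost_n3, cost_n4 = 0.17, 0.10        # assumed $/mm^2; N3 well above N4
    monolithic = (compute_mm2 + io_mm2) * cost_n3
    disaggregated = compute_mm2 * cost_n3 + io_mm2 * cost_n4
    print(f"monolithic N3: ${monolithic:.0f}  vs  compute-N3 + I/O-N4: ${disaggregated:.0f}")

And since I/O and SRAM barely shrink between nodes anyway, the N4 I/O chiplet gives up little besides the extra packaging step.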



Early on, there were rumors of four code names for the post-'plain'-M1 dies: Jade, Jade-chop, Jade2C, and Jade4C. I think what Apple delivered was Jade-chop (the M1 Pro) and Jade2C (used as a Jade for the M1 Max and doubled up for the Ultra). They went cheaper and put just two dies into high volume (plus a relatively low-volume 3D mini-interposer for the Ultra). Jade4C didn't ship in volume because it ran into cost and complexity problems (and was delayed too long, as a cherry on top). Those cost problems only get worse with TSMC N3 and beyond if you try to brute-force laptop monolithic dies into "chiplet" roles.

[And very similar issues apply if Apple has been trying to brute-force their cellular modem onto the same process node as one monolithic iPhone die. That makes less and less sense going forward, given increasing costs and divergence issues.]

I hope so, would be nice to see Apple focus on a desktop/workstation variant of Apple silicon...

That's leading away from what I'm saying. It is closer to a desktop chip than to a "reuse the server CPU" part dropped into an expensive workstation. They're going to need 'iMac'-like volume to pay for the design gap. This is not a foray into making a "Threadripper"- and "Xeon SP"-killer SoC; Apple would scale the Mac desktop up to fill the "workstation" role, not focus primarily on the high-end workstation and then peel backwards to fill a desktop role.


M3 Ultra/Extreme should really shine on the N3X process...?

Zero idea why you keep chattering about N3X. Apple is pretty unlikely to use it at all. N3P? Far better likelihood. N3X? No. I highly doubt Apple is going to separate the process nodes used for laptop and desktop at all for the CPU/GPU/NPU cores that do the compute. It is the higher I/O controller overhead on the desktops that gets shifted to save costs (using previously mastered fab processes for those parts to be more cost-effective).

The notion that Apple is going to radically throw all the laptop stuff out the window and start over from scratch for desktops makes no economic sense at all. Just fanboy hype. The issue is scaling the reuse ... not throwing it all away and starting over.

N3X would never work well for a 'plain' Mn or An implementation, so Apple isn't going to touch it; there is zero reuse viability.

It isn't a question of 'if' some eventual Mx desktop SoC goes over to more disaggregated chiplets, just a matter of 'when'. AMD uses the exact same fab company as Apple, and they have seen the light. Apple isn't immune to the same basic fabrication hurdles.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Zero idea why you keep chattering about N3X.

Because it is two years out, and so could be the basis for desktop/workstation-class M3 Ultra/Extreme SoCs in several products...
  • Mac Pro
  • Mac Studio
  • iMac Pro
More products using these desktop/workstation-class SoCs means "spreading the cost"...?

3nm means more transistors, which means more GPU cores...

N3X offers higher power limits and clock speeds...

Both of these mean faster CPU & GPU, for more performance in the higher-end desktops, where it is needed...?

Wouldn't it make sense for Apple to use the improved/enhanced 3nm processes as TSMC makes them available for high-volume production (HVP)...?
 

maikerukun

macrumors 6502a
Original poster
Oct 22, 2009
719
1,037

perhaps relevant
Interesting, hadn't seen that before... is it truly an evolution, or them trying to hold on to some kind of significance in the space? Either way, interesting.
 
  • Like
Reactions: ZombiePhysicist

maikerukun

macrumors 6502a
Original poster
Oct 22, 2009
719
1,037
Ideally for me, the more I think about it (and I know this is unrealistic), I want them to make another showstopper piece, expensive as hell (not unlike the 7.1) but meant to prove that Apple is capable of being the King of Silicon. The 7.1 currently sits on the Iron Throne, and I think its successor not only needs to be worthy of that, it needs to be BETTER than that. I don't want it to be equal or almost equal; I want it to be powerful enough to blow the 7.1 out of the water. I want it to double a maxed-out 7.1 in performance. It's insane to me that Apple was ever okay with being second class in the pro sector, a sector they had on lockdown for most of the 2000s with Final Cut Pro 7 and the old tower.

They returned to their former glory with the 7.1, but released it at the most awkward point in history, just as they were bringing in their own silicon lol. I cannot understand their timing without bringing rhyme to reason, and the only REASONABLE possibility is that they are working on something significantly more powerful, something that will rival if not outright beat a 64-core CPU and 2 RTX 4090s.

And the only way they do that is with something drastically powerful that hasn't even been hinted at yet, or with extreme expandability even beyond what we currently have in the 7.1. Those PCIe slots are gonna have to accept things that Apple has refused to accept for many years, and they're going to have to do it not in the shadows but front and center. I'm talking about squashing age-old beefs that have quite frankly stunted the evolution of power at Apple.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
...double a maxed-out 7.1 in performance.
...will rival if not outright beat a 64-core CPU and 2 RTX 4090s.

We can only hope...!

Those PCIe slots are gonna have to accept things that Apple has refused to accept for many years, and they're going to have to do it not in the shadows but front and center. I'm talking about squashing age-old beefs that have quite frankly stunted the evolution of power at Apple.

Apple is moving to their own silicon: CPU, GPU, NPU, media engines, the whole shebang. Highly doubtful Apple is going to pick Nvidia back up anytime soon...
 

maikerukun

macrumors 6502a
Original poster
Oct 22, 2009
719
1,037
We can only hope...!



Apple is moving to their own silicon: CPU, GPU, NPU, media engines, the whole shebang. Highly doubtful Apple is going to pick Nvidia back up anytime soon...
I'm completely okay with that! But regardless, they're then going to have to be working on their own GPU to compete with a 4090, something we can purchase multiples of and throw in our systems. I don't care who makes it, as long as it is quite literally the fastest GPU available, PC or otherwise lol.
 

maikerukun

macrumors 6502a
Original poster
Oct 22, 2009
719
1,037
"double a maxed out 7.1 in performance"
the software and drivers have to follow
>>> Blender, 3D software, Adobe, AI, video...
Ironically, the primary issue with the software currently is optimization on the software end, far more than the hardware. Adobe is NOTORIOUS for inefficient usage of available GPUs, and their software crashes on me literally every single day. Cinema 4D paired with Octane X is actually EXTREMELY FAST; it too crashes, but far less often, nothing out of the ordinary on Mac or PC. The biggest issue with Octane X is that OTOY is stopping development of the Intel version because of what Apple is doing with AS. As of right now, if you have a 4-GPU setup on your Mac Pro 7.1, you need to stick with C4D R26, as that is the last stable build for the 4-GPU version. As of C4D 2023, you will have to use Apple silicon-based machines for Octane X.

All of this said, REDSHIFT still works just fine in the latest C4D 2023 build, but is slower than Octane X.

As for Blender: Blender also doesn't take full advantage of all the GPUs just yet; they have stated often that they're working on it, so...

On the video side of things, as long as the 8.1 has the AS media encoders, video will never be an issue again.
 

Mac3Duser

macrumors regular
Aug 26, 2021
183
139
It had been observed with the M1 Ultra that all the cores were not as well utilized as on the M1 Max, and the performance difference for the Ultra was not double.
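That shortfall is what any fixed serial fraction predicts; here's a minimal Amdahl's-law sketch in Python (the core counts and serial fraction are illustrative assumptions, not measurements of the M1):

    def speedup(cores, serial_fraction):
        # Amdahl's law: the serial part doesn't scale; the parallel part divides by cores
        return 1 / (serial_fraction + (1 - serial_fraction) / cores)

    base = speedup(10, 0.05)             # M1 Max-class core count (assumed)
    doubled = speedup(20, 0.05)          # M1 Ultra doubles the cores
    print(f"2x cores -> {doubled / base:.2f}x throughput")   # ~1.49x, not 2x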
 
  • Like
Reactions: singhs.apps