
singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
That is why having a platform that can increase SoC sales will increase Apple's profits greatly. But having an Apple Vice President who wants his desktop chassis operation to make big profits - that goal is a disaster for Apple, now that they are making their own CPU and GPU (on the one SoC).
There are other ways to grow profits: increase market share.
 
  • Like
Reactions: Melbourne Park

kvic

macrumors 6502a
Sep 10, 2015
516
460
We have an interesting new development on Alder Lake performance from a YouTuber in China.

https://www.reddit.com/r/hardware/comments/qo41ss
Perhaps of interest to this thread will be this translation:

Then he does the comparison between Alder and M1 Max, where he's using the i9-12900K to simulate the (rumored) i9-12900HK with a 6+8 config. He disables two of the P-cores and downclocks to 3.0 GHz/2.4 GHz on the rest, which yields the 35W figure. He's pleasantly surprised that Alder Lake might actually be competitive with the M1 Pro/Max, and notes that this result shows Alder Lake may be the most efficient x86 arch to date.

The experiment predicts mobile Alder Lake will score higher in Cinebench R23 than M1 Max.

Alder Lake (35W core power; 43W Package power): 14288 (Cinebench R23 MT)
M1 Max (34W package power per Anandtech): 12326
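
A quick points-per-watt read of those two figures (my own back-of-envelope sketch, assuming the quoted package-power numbers are comparable, which is exactly the contested part):

```python
# Perf-per-watt from the quoted Cinebench R23 MT scores and package power.
results = {
    "Alder Lake 6+8 @ 3.0/2.4GHz": (14288, 43),  # score, package watts
    "M1 Max (per AnandTech)": (12326, 34),
}
for name, (score, watts) in results.items():
    print(f"{name}: {score / watts:.0f} points/W")
# Alder Lake 6+8 @ 3.0/2.4GHz: 332 points/W
# M1 Max (per AnandTech): 363 points/W
```

So even if the score prediction holds, the M1 Max would still lead on package-power efficiency in this particular run.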

Interestingly, fanboiz have turned the rhetoric to "the native ARM implementation of Cinebench R23 is poorly done for the M1/Pro/Max"...

Personally I don't care who's best. I just love to see Apple/AMD/Intel/(potentially) Qualcomm compete the hell out of each other. We're living in an interesting time of performance and performance-per-watt uplift in processors. Now it's no surprise, and no accident, that desktop Alder Lake is more power efficient than Zen 3 Ryzen at gaming.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I am not convinced. Given the Alder Lake reviews I have read, I would be very surprised if Intel could match the M1 Max at the same power draw. That said, comparing the desktop part to the M1 Max is more interesting anyway.

I'm not convinced the description of the experiment is accurate either. Primarily pointing out that you don't need a BGA package to run some numbers.

[Image: alder_lake_mobile_skus.jpg]



This fall Intel has only talked about up to the H45, but the "Muscle" option is the more obvious match-up to an M1 Max at the top of the product line (not necessarily peak thermal load).



The PL2-PL4 leaked numbers for the HK mobile part suggest that someone just "set" the base power at a "matching" power draw and let the higher power levels off the chain during the benchmark. That makes it more a poorly run experiment than a "couldn't measure" one.

Early leaks of power levels put PL4 up around 215W and PL2 at 115W, after nominally starting off at 45W (PL1).
https://www.tomshardware.com/news/intel-alder-lake-p-and-m-processor-power-limits-listed

There is a Chromebook leak that caps at slightly better levels: 159W and 80W.
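
For context on what those PL numbers mean, here is a toy sketch of Intel's power-limit ladder (my own illustration: PL1/PL2 are the leaked 45W/115W figures above, while the tau turbo window is a hypothetical placeholder since the leak doesn't state one):

```python
# Toy model of Intel's PL1/PL2 ladder using the leaked figures above.
# Real firmware uses an exponentially weighted moving average for the
# power budget, plus PL4 spikes, so this is only directional.
PL1, PL2 = 45.0, 115.0   # watts (leaked HK-part values)
TAU = 56.0               # seconds; hypothetical turbo-budget window

def allowed_package_power(t: float) -> float:
    """Package power the limiter permits t seconds into a sustained load."""
    return PL2 if t < TAU else PL1

for t in (1, 30, 60, 300):
    print(f"t={t:>3}s: cap {allowed_package_power(t):.0f}W")
# A benchmark stretch shorter than TAU sits mostly in the PL2 window,
# which is why lowering base clocks alone doesn't pin power to PL1.
```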


The M1 Max with the GPUs fully lit up isn't sticking to under 40W either.

The PL4 doesn't really buy much in performance. I think it is primarily there to squeak past AMD benchmark scores when they can. Decent chance you don't lose much if you just stick to PL2. If you give the Gen12 mobile versions the same leeway as the overall cap of the M1 Max, it will probably turn out close. The Gen12 won't do as well on graphics, obviously, but if you have a CPU-bound problem (or benchmark) the GPU part doesn't matter as much. The mobile part isn't going to be the more balanced SoC; it is far more skewed to CPU-bound problems. The M1 Max has the opposite skew. Neither one should be measured against the TDP of a subsystem of the other. If you take the Gen12 GPU TDP cap and apply it to the M1 Max, it likely would "fail".
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
In the latter scenario, a module simply won’t exist.
In the former, I would prefer Apple offers its own GPU modules (which would take care of your concern viz Apple leaving money on the table ).
It would also be easier for developers to target one class of GPUs across the entire Mac lineup (whether discrete or SoC based)

Would they be the same class of GPU? If you decouple the GPU cores from the unified memory, then you are going to need different drivers and Metal calls (just like the Intel and AMD drivers and Metal calls are different).

Even if Apple leveraged some entirely proprietary discrete-card bus to try to hold onto "Unified Memory", there would likely be substantive NUMA issues introduced (e.g., frame buffers would have to be aligned into single NUMA zones, operand loads from memory would see more highly variable latencies, etc.).

Apple could work with Intel/AMD/etc. to build more uniform-memory features into their drivers (e.g., AMD Smart Access Memory), but they don't. It is "my way or the highway". Folks seem to think Apple wants more GPU driver work that is entirely detached from the iPhone, iPad, and the overwhelmingly vast majority of their Mac product line. They probably don't want the fork; at least not for their own GPUs.

If Apple came up with a GPGPU driver model, then they could get 3rd-party computational engines without disturbing their "gotta natively run iPhone apps with minimal overhead" objective. But Apple largely muddled that by intermingling compute with graphics in Metal (and throwing OpenCL and anything multiplatform-useful under the bus).

For the immediate future, I doubt Jade2C or Jade4C (Max2/Max4) could properly provision a GPGPU computational card anyway... so, yet another reason why the API/driver progress is minimal on that front.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Perhaps of interest to this thread will be this translation:
..... to simulate the (rumored) i9-12900HK with a 6+8 config. He disables two of the P-cores and downclocks to 3.0 GHz/2.4 GHz on the rest, which yields the 35W figure.


The experiment predicts mobile Alder Lake will score higher in Cinebench R23 than M1 Max.

Setting base clocks lower isn't necessarily going to keep PL2/PL4 from being invoked in the middle of the benchmark.
That it got Max-class performance wouldn't be surprising, because there are leaks for the HK that are up there. The power consumption is the more questionable part. But yes, getting rid of some of the extreme power levels probably won't hurt the score a huge amount. PL4 is more of a "tech benchmark porn" mode.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Mac mini Pro
  • M1 Max SoC

  • (4) Thunderbolt 4 (USB-C) ports

  • HDMI 2.1 port
$2999

There is no HDMI 2.1 port or 4th Thunderbolt controller on the M1 Max SoC. Apple's HDMI effort is entirely based on their Apple TV 4K update work (A12 and a discrete HDMI converter; the same converter is used on the MBP 14/16).

Apple's preferred converter vendor (Kinetic Technologies) doesn't have an HDMI 2.1 converter.


Apple is using a 2920. Perhaps there is a higher-power-consuming alternative from another vendor, but I wouldn't hold my breath expecting Apple to give up the economies-of-scale savings here. If HDMI was "Pro enough" for the laptops it is probably "pro enough" for the desktops too.






Mac Pro Cube
  • M1 Max Duo MCM

  • (6) Thunderbolt 4 (USB-C) ports
  • (4) USB 3.2 (USB-A) ports
$4999

Back to the Mac Pro 2013 ports at $2K higher prices.... those commentary threads are going to be long and heated.

Looking at the MP 2019 and M1 iMac... four Type A ports on a single I/O panel is probably not going to happen. If Apple "half sizes" the Mac Pro 2019 case and there is a "top" I/O panel, or two I/O panel slots on the back, then maybe. But the M1 iMac suggests Apple has probably gotten to the point where it doesn't have hang-ups about placing plain Type C ports on the same panel as Thunderbolt-provisioned Type C ports. (At best we're probably looking at two Type A.) Apple is provisioning USB 3.1 out of their SoC. They probably don't want to deal with qualifying that controller against Type A "repeater/phys" at the port. It has probably only been built with the "10 Gbps 4:2 mux switch with USB-C 3.1" and power delivery in mind as the specs.


Three TB ports, an optional 2.5"/3.5" bay from Promise, and 1-2 PCIe v4 x8 slots would buy Apple a ton more favorable commentary for a "half size" tower, in contrast to coming back out on stage and trying the "can't innovate my ass" shtick again.

If there is a non-crippled Mini Pro with a Max in it, the "need" for a crippled storage and card expansion "cube" is highly limited. The number of folks who really required 6 TB sockets is dismally small. [That's stripping out the folks who were covering TB sockets for pure video out.] And for the ones that needed that many DSP / Ethernet / etc. connections, the ejection of internal options is a huge use-case mismatch in terms of space efficiency.

Likewise if there is a not-completely-thermally-crippled large-screen iMac. A Max2/Jade2C there would make more strategic sense for Apple than trying to revive the painted-into-a-corner Mac Cube space. (Even the NeXT cube had slots.)



Similarly

  • 32-core Neural Engine

Somewhat doubtful you get the second video en/decode assembly (with a 2nd Neural bundle) with the Jade2C and Jade4C chip dies. Pretty good chance Apple dumps that whole subsystem array at the "bottom" of the die shot in favor of the interchip communication subsystem (rather than make the Jade2C chip even bigger... the Max is already past mid-range size, creeping toward very-expensive big).

Just like how there are only two E cores in each die, which combine into a total of 4 E cores in a twin-die package. Same thing with video en/decode: still two en/decode setups on the combo (and four with the 4-die setup, which probably gets close to a full Afterburner replacement).



Mac Pro Tower
  • M1 Max Quad MCM
  • 40-core CPU (32P/8E)

  • (6) Thunderbolt 4 (USB-C) ports
$9999

This gets into the substantively wasteful zone where you have 12 Thunderbolt provisioning headers and half of them are going to waste... and still have no internal PCIe slots. At the 4-die stage it would be prudent to do some deduplication of the baseline stuff you don't need lots of copies of (TB controllers, SSD controllers, image processors, etc.).

"Tower" for what reason if there are zero slots for anything (RAM , storage , or add-in cards) ? If on that path, that's the quad which can go into some mysterious desktop cube.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
Would they be the same class of GPU? If you decouple the GPU cores from the unified memory, then you are going to need different drivers and Metal calls (just like the Intel and AMD drivers and Metal calls are different).
Maybe so, but Apple has experience running so-called onboard graphics and discrete ones in the same system going back years.

It won't be an alien concept to them if they decide to allow expansion in AS Mac Pros (discrete GPUs could be a great differentiating factor between the Mac Pro and the rest of the lineup).

Besides, if Apple decides it will again go the AIO route or bust in the Mac Pros (we will see in about a year's time, probably), totally closing the door on in-chassis expansion... then perhaps Apple sees a chance to succeed where two previous attempts failed miserably. Third time may be the charm... or back to harm.

They had better be good with their SoC GPUs then. Desktop isn't the low-hanging fruit like a smartphone, where customer expectations are modest in comparison...

But again we will see.
 
Last edited:

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
Even if Apple leveraged some entirely proprietary discrete-card bus to try to hold onto "Unified Memory", there would likely be substantive NUMA issues introduced (e.g., frame buffers would have to be aligned into single NUMA zones, operand loads from memory would see more highly variable latencies, etc.).
Why does it have to be UMA? The discrete GPU will be an additional GPU, not the main one.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
For the immediate future, I doubt Jade2C or Jade4C (Max2/Max4) could properly provision a GPGPU computational card anyway... so, yet another reason why the API/driver progress is minimal on that front.
What's the minimum time period for code suggesting a change to appear before the announcement of said change (by Apple)?
What was the time period between the appearance of macOS support for AS and a formal announcement?
 
Last edited:

neekon

macrumors member
May 30, 2008
60
38
I think the Apple Silicon based Mac Pro is going to be a lot like what we have with the current Mac Pro:
a tower, fully upgradeable parts
2-4x M1 Max or similar desktop equivalent
up to 40 high-performance cores (or more)
up to 128 GPU cores (or more)
8-10 DDR4/DDR5 DIMM slots for increased RAM needs (up to an additional 2TB)
8 PCIe/MPX slots for additional expansion needs
SSD upgrade slots similar to the current Mac Pro, as well as SATA connections

Imagine all of that coupled with 2x W6800X Duo cards or whatever is out at the time.
You will have a workstation that will dominate all that come to it.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
...and still have no internal PCIe slots. At the 4-die stage it would be prudent to do some deduplication of the baseline stuff you don't need lots of copies of (TB controllers, SSD controllers, image processors, etc.).

"Tower" for what reason if there are zero slots for anything (RAM , storage , or add-in cards) ? If on that path, that's the quad which can go into some mysterious desktop cube.

Read my post again, it clearly has four PCIe Gen5 expansion slots listed...
 

kvic

macrumors 6502a
Sep 10, 2015
516
460
I'm not convinced the description of the experiment is accurate either. Primarily pointing out that you don't need a BGA package to run some numbers.

[GRAPHICS snipped]

This fall Intel has only talked about up to the H45, but the "Muscle" option is the more obvious match-up to an M1 Max at the top of the product line (not necessarily peak thermal load).

The PL2-PL4 leaked numbers for the HK mobile part suggest that someone just "set" the base power at a "matching" power draw and let the higher power levels off the chain during the benchmark. That makes it more a poorly run experiment than a "couldn't measure" one.

I didn't pay much attention to these leaks until I saw the Reddit thread I quoted above. The methodology used by the YouTuber is quite sound, I'm afraid. He caps the desktop 12900K to 6P+8E (just like the highest mobile Alder Lake die), and then fixes the P cores to 3GHz and the E cores to 2.4GHz.

Surprise Surprise

"Coincidentally", core power comes in at about 35W (and package power at about 43W). This is a good enough simulation that I'll call it a prediction rather than a guess without much ground and justification.

Though I would think it's more likely a best-case-scenario prediction, since laptops won't have desktop-class cooling, perhaps not the same memory speed, and there will be efficiency and performance losses from CPU clocks jumping up and down.

The performance and efficiency gain of Alder Lake isn't unreasonable for Intel. They took two years to catch up with AMD. It's perhaps a coincidence with the M1 Max, whose launch was delayed. It looks like, right after Apple's launch, Intel has caught up.

Why does it have to be UMA? The discrete GPU will be an additional GPU, not the main one.

Why does it have to be UMA? Because Apple (or its fans?) made a big deal of it, the best innovation since bread and butter. So it's likely they will stick to it for a while as a sound bite and prominent feature. When Apple stops emphasising it, or even downplays it a bit, people's wish of seeing discrete GPUs on desktops and eGPUs on laptops will likely be realised. I believe it's only a matter of time.

Don't get me wrong. I think the way Apple does UMA is surely one notch ahead of how poorly PCs have been treating their UMA machines. Apple's way suits laptops and all-in-ones very well, which are the major volume in Mac sales.

--

As a side question, since when has Apple had two Mac Pro chassis in recent history? Perhaps 2022 will be the first: one Smaller Mac Pro (formerly known as the half-sized Mac Pro) and the full-tower Intel Mac Pro. I don't quite get the fantasy of multiple new chassis and different entry price points, really.
 
  • Like
Reactions: singhs.apps

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
Why does it have to be UMA? Because Apple (or its fans?) made a big deal of it, the best innovation since bread and butter. So it's likely they will stick to it for a while as a sound bite and prominent feature. When Apple stops emphasising it, or even downplays it a bit, people's wish of seeing discrete GPUs on desktops and eGPUs on laptops will likely be realised. I believe it's only a matter of time.
There are many ways to skin it... and 93% of the overall PC market isn't going to sit still when collectively they have a lot more at stake than Apple. AIOs and laptops may well follow Apple's lead here... or find ways to cooperate on standards (the best way to maintain compatibility, instead of each company introducing its own).

For a pseudo UMA on desktops, something like CXL may work out well:

And

 
  • Like
Reactions: kvic

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
It comes from the specifications of how DDR5 and DDR4 are implemented. Apple have stated a bus size of 512-bit and 400GB/s bandwidth; this makes it obligatory to be 8-channel memory.

And the bandwidth limitation is more likely a cap so that the CPU can't use the 400GB/s in total, but the GPU is allowed to use the extra bandwidth.
Link

The M1 Pro/Max don't have DDR5 memory; they have LPDDR5 memory. Bus width on an LPDDR5 channel is 16-bit, not 32. So a 128-bit memory controller complex has 8 channels, not 4. So two times 8 is 16, and four times eight is 32.
You are cutting the number of memory channels present in half.
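
To make that channel arithmetic concrete, a back-of-envelope sketch (my own illustration; it assumes JEDEC 16-bit LPDDR5 channels at the 6400 MT/s Apple ships, which lines up with Apple's quoted ~200GB/s and ~400GB/s figures):

```python
# Channel count and peak bandwidth for a given LPDDR5 bus width.
CHANNEL_BITS = 16      # LPDDR5 channel width per JEDEC
TRANSFER_RATE = 6.4e9  # 6400 MT/s (LPDDR5-6400)

def channels(bus_bits: int) -> int:
    return bus_bits // CHANNEL_BITS

def peak_gb_per_s(bus_bits: int) -> float:
    return bus_bits / 8 * TRANSFER_RATE / 1e9  # (bytes per transfer) * rate

for name, bus in (("M1 Pro, 256-bit", 256), ("M1 Max, 512-bit", 512)):
    print(f"{name}: {channels(bus)} channels, ~{peak_gb_per_s(bus):.0f} GB/s")
# M1 Pro, 256-bit: 16 channels, ~205 GB/s
# M1 Max, 512-bit: 32 channels, ~410 GB/s
```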

Apple is NOT using standard off-the-shelf LPDDR5 memory modules here. The modules on the M1 Pro/Max subsume what the two modules coupled to the M1 were doing. On the M1, each of those modules has four independent (i.e., concurrently usable) channels as inputs/outputs. For the Pro/Max, Apple goes even more custom and cranks it up to 8 channels into the package (and likely another whole set of DRAM dies stacked inside).
What Apple has done is pragmatically create a "poor man's" HBM solution. It is cheaper because it doesn't require an interposer (the pinout/traces are incrementally less dense, so Apple can use cheaper packaging technology and largely repurpose more mainstream LPDDR4X/5 dies inside the RAM package).

To get two fans and a Max inside a 14" enclosure they needed to save space. So in part they have made the RAM packages a bit more vertical and have to do more thermal juggling with stacked RAM dies. [Apple's RAM prices should be higher because this isn't generic, high-volume RAM, but they are probably still excessive margins.]

From the classic Mac Pro perspective this is a double-edged-sword approach. Apple has built a custom memory controller that only works with their more-than-semi-custom RAM modules. The upside is that it helps them squeeze out substantively better performance/watt than something like GDDR6 while still having a performant GPU (and CPU) solution. They also have blazing video en/decode for a decent number of mainstream (for Apple) formats.

The downside is that it is fairly unlikely this memory controller can deal with general memory modules in an effective fashion. It is probably only tested and certified against the custom RAM packages built to interface with it. The major issue there is that "Joe Bob" the random DIMM maker isn't going to get access to those RAM modules. Apple also has very little motivation to keep a relatively large inventory of these modules lying around just because someone might later want to buy more RAM. To control the higher semi-custom costs they need to contract-make only as much as they need (units sold plus maintenance/repairs).

So Apple would have to create yet another memory controller for a different-performing set of generic DIMMs with substantively different perf/watt to add in that option. If the iMac (and the rest of the desktop lineup) is only on the semi-custom RAM, then that is a substantive effort for a relatively very, very small number of systems.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Read my post again, it clearly has four PCIe Gen5 expansion slots listed...

Sorry, missed that. However, there is no Gen5 on the M1 dies. There are some x1 PCIe v4 lanes (Apple is more focused on limiting pin "fan out" on the current designs than on a major uplift in general I/O bandwidth).

Gen5 has about zero utility for the rest of the Mac lineup, which makes it less likely that Apple will radically fork off the Mac Pro on this front. Even less likely because Apple will have to spend a substantive effort to come up with a high-performance/watt interchip bandwidth and cache coherence option to make the multi-die M1/Jade solutions work in the first place. If that doesn't work well then the whole SoC is a bust. So it's a working interconnect versus chasing after PCIe v5, and the latter is much more likely to lose that resource allocation discussion.

The third issue is that Apple has allocated so much edge space to memory width that there isn't much left for substantive PCIe lane allocation even at PCIe v4. Putting a 128-core GPU on the die runs contrary to a large PCIe fan-out also being present on the die (there is only so much edge/circumference on the chip, given the other list of "kitchen sink" tasks it also has to do).

Finally, Gen5 without CXL doesn't make much sense (even more so for Gen6). It is a standard that Apple has shown about zero interest in. Pretty good chance its cache coherency doesn't line up 100% with Apple's approach, and again, when it gets to be resource allocation time, Apple is far more likely to do the allocation for their own stuff.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,174
Stargate Command
Sorry, missed that. However, there is no Gen5 on the M1 dies. There are some x1 PCIe v4 lanes (Apple is more focused on limiting pin "fan out" on the current designs than on a major uplift in general I/O bandwidth).

Gen5 has about zero utility for the rest of the Mac lineup, which makes it less likely that Apple will radically fork off the Mac Pro on this front. Even less likely because Apple will have to spend a substantive effort to come up with a high-performance/watt interchip bandwidth and cache coherence option to make the multi-die M1/Jade solutions work in the first place. If that doesn't work well then the whole SoC is a bust. So it's a working interconnect versus chasing after PCIe v5, and the latter is much more likely to lose that resource allocation discussion.

The third issue is that Apple has allocated so much edge space to memory width that there isn't much left for substantive PCIe lane allocation even at PCIe v4. Putting a 128-core GPU on the die runs contrary to a large PCIe fan-out also being present on the die (there is only so much edge/circumference on the chip, given the other list of "kitchen sink" tasks it also has to do).

Finally, Gen5 without CXL doesn't make much sense (even more so for Gen6). It is a standard that Apple has shown about zero interest in. Pretty good chance its cache coherency doesn't line up 100% with Apple's approach, and again, when it gets to be resource allocation time, Apple is far more likely to do the allocation for their own stuff.

At this point, we really know nothing about what multi-SoC / SiP / MCM configuration will be used for the Mac Pro lineup...

I, personally, don't need PCIe slots (okay, I might take one or two if there are some quad M.2 NVMe add-in cards that work with macOS), but there are those who have the need, so I threw in some next-gen PCIe tech...?

The next year is going to be quite interesting for the Mac Pro lineup...!
 

4wdwrx

macrumors regular
Jul 30, 2012
116
26
I don't think Apple will continue with compatibility with industry standard PC components.

I believe they will offer modular upgrades, i.e., proprietary RAM upgrades, SSD upgrades (which they have been doing for a while), graphics (MPX), and I/O.

The main processor module will be like a NUC compute unit, allowing multiple compute units for scalability.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
I, personally, don't need PCIe slots (okay, I might take one or two if there are some quad M.2 NVMe add-in cards that work with macOS), but there are those who have the need, so I threw in some next-gen PCIe tech...?
Why thank you, good sir !!..we are much obliged :D:p:D
 
  • Haha
Reactions: Boil

kvic

macrumors 6502a
Sep 10, 2015
516
460
For a pseudo UMA on desktops, something like CXL may work out well:

And


I brought up CXL (for which PCIe Gen 5 is a prerequisite) in a similar thread in this sub-forum a month or two ago. On the PC side, CXL will be the Next Very Exciting Thing in servers and workstations. I can't see it showing up in the Smaller Mac Pro, which at best may include one or two PCIe Gen 4 root complexes to provide a couple of PCIe slots. In future models, it's hard to tell. CXL has three usage profiles, and hardly any of them fits particularly nicely into Apple's "unified memory architecture" or is necessary for a video/graphics-oriented Smaller/Future Mac Pro. I mentioned in this thread that while the current Intel Mac Pro is still a general-purpose workstation, the Apple silicon replacements likely won't be and need not be.

Apple is NOT using standard off-the-shelf LPDDR5 memory modules here. The modules on the M1 Pro/Max subsume what the two modules coupled to the M1 were doing. On the M1, each of those modules has four independent (i.e., concurrently usable) channels as inputs/outputs. For the Pro/Max, Apple goes even more custom and cranks it up to 8 channels into the package (and likely another whole set of DRAM dies stacked inside).

What Apple has done is pragmatically create a "poor man's" HBM solution. It is cheaper because it doesn't require an interposer (the pinout/traces are incrementally less dense, so Apple can use cheaper packaging technology and largely repurpose more mainstream LPDDR4X/5 dies inside the RAM package).

It's still pretty much LPDDR5. Perhaps the DRAM dies are organized slightly differently and with additional pins. Since they're soldered to the SoC package, it won't be a problem. Also, I think LPDDR DRAM can't be made into DIMMs anyway.

The "poor man's HBM solution" description is spot on. Or improved version of Playstation 4. Or much improved version of PC with integrated GPU. It's okay as long as it works great at cheaper cost. x86-64 (aka AMD64) was also called poor man's Itanium. Yet the latter was a flop. The former flourished. By the same tune, Apple silicon as-is today is one way not the only way. The right way for Apple perhaps. Likely not for PCs. At cheaper cost but 90% of what Apple silicon can do, PCs will flourish and prevail still and no big shift in overall market share.

I don't think Apple will continue with compatibility with industry standard PC components.

I think it's the opposite. Apple pretty much depends on the PC industry for shared supply chains. Apple only needs to present a difference (or perceived difference) to its consumers/end users. They don't have to, and it's not in their interest to, be in a sort of cold war with the PC world and do everything differently down to the nuts and bolts.
 

Tagbert

macrumors 603
Jun 22, 2011
6,261
7,285
Seattle
Kind of wonder though if Jade2C and Jade4C got cancelled in favor of just skipping the mostly-M1 foundation and going on to an M2 (probably TSMC N4 based) one.

Minimally, it is likely a different die layout than what the M1 Max uses. That "Max" is the other issue: how do they go "up" from the name "Max"? What is bigger than the maximum?

Could be M1 Max2 and M1 Max4. Or perhaps they get on a better naming track with the M2 prefix. :)
Perhaps they will just reference the CPU cores, the way the chips in the current Mac Pros do.
  • Mac Pro with M1 Max 20 or M1 Max 40
 

Sophisticatednut

macrumors 68030
May 2, 2021
2,635
2,559
Scandinavia
The M1 Pro/Max don't have DDR5 memory; they have LPDDR5 memory. Bus width on an LPDDR5 channel is 16-bit, not 32. So a 128-bit memory controller complex has 8 channels, not 4. So two times 8 is 16, and four times eight is 32.
You are cutting the number of memory channels present in half.
We already know it's 4 channels from looking at the die shot and the number of memory modules being 2-4. Unless Apple has done some more custom **** for future products, it really doesn't matter.
We can see that two LPDDR5 memory DIMMs have 4 channels maximum and the M1 Max just has two memory slots. 4x2 quad-channel memory.
Apple is NOT using standard off-the-shelf LPDDR5 memory modules here. The modules on the M1 Pro/Max subsume what the two modules coupled to the M1 were doing. On the M1, each of those modules has four independent (i.e., concurrently usable) channels as inputs/outputs. For the Pro/Max, Apple goes even more custom and cranks it up to 8 channels into the package (and likely another whole set of DRAM dies stacked inside).
Unlikely they cranked it up, as the M1 had LPDDR4X memory modules; the M1 Max/Pro have LPDDR5 and currently don't need any dies to stack.
What Apple has done is pragmatically create a "poor man's" HBM solution. It is cheaper because it doesn't require an interposer (the pinout/traces are incrementally less dense, so Apple can use cheaper packaging technology and largely repurpose more mainstream LPDDR4X/5 dies inside the RAM package).
Absolutely, it's just LPDDR5 memory with a wide bus.
The M1 with LPDDR4X was just standard speeds.
To get two fans and a Max inside a 14" enclosure they needed to save space. So in part they have made the RAM packages a bit more vertical and have to do more thermal juggling with stacked RAM dies. [Apple's RAM prices should be higher because this isn't generic, high-volume RAM, but they are probably still excessive margins.]
There is no need to do this; LPDDR5 is twice as dense as its predecessor. And Apple's RAM prices have always been taken out of the ass of customers; they are using standard chips with a unique implementation on the motherboard.
From the classic Mac Pro perspective this is a double-edged-sword approach. Apple has built a custom memory controller that only works with their more-than-semi-custom RAM modules. The upside is that it helps them squeeze out substantively better performance/watt than something like GDDR6 while still having a performant GPU (and CPU) solution. They also have blazing video en/decode for a decent number of mainstream (for Apple) formats.
There is nothing custom about this. It's just standard LPDDR5 with a much wider memory bus.
The downside is that it is fairly unlikely this memory controller can deal with general memory modules in an effective fashion. It is probably only tested and certified against the custom RAM packages built to interface with it.
We have absolutely no reason to believe OWC or others wouldn't be able to sell a DDR5 memory stick with 4 channels, as such exist for DDR4. Apple would just do as they and everyone else normally do and combine two RAM slots for an effective total of 8-channel memory, exactly how they do it now.
The major issue there is that "Joe Bob" the random DIMM maker isn't going to get access to those RAM modules. Apple also has very little motivation to keep a relatively large inventory of these modules lying around just because someone might later want to buy more RAM. To control the higher semi-custom costs they need to contract-make only as much as they need (units sold plus maintenance/repairs).
There is nothing custom about these DIMM modules as far as we can see. It's all standard parts. And prices for DIMMs go down with bulk orders, so that doesn't make sense considering Apple takes a big margin irrespective of the cost.
So Apple would have to create yet another memory controller for a different-performing set of generic DIMMs with substantively different perf/watt to add in that option. If the iMac (and the rest of the desktop lineup) is only on the semi-custom RAM, then that is a substantive effort for a relatively very, very small number of systems.
There is nothing custom to do outside of providing quad-channel slots instead of the normal dual-channel. And the memory would more likely be used as caching for the main unified memory bank, just without the bottleneck: when SoC system memory runs out it won't need to fall back to SSD speeds to read directly from disk.

Considering how many custom things Apple does with small short-term gains but major work:
the silicon transition for the Mac;
64-bit implemented early in iOS with the iPhone 5S release 8 years ago, with 1GB of LPDDR3;
the first iPhone to have more than 4GB of RAM was the iPhone 12 Pro/Max.

The iPad Pro didn't get 6GB of RAM until last year, with the iPhone 12.
 

Itconnects

macrumors 6502
Jan 14, 2020
279
28
Hello, is $4k for a Mac Pro 7,1 with 32 gigs / 256 too much money? Also, if I have the serial number, is it possible to check whether it's legit?
 
Register on MacRumors! This sidebar will go away, and you'll see fewer ads.