They kind of already do. From what we know, the SSD is connected directly to the M1 internal bus and the on-chip controller emulates the NVMe protocol to communicate with the OS. The next logical step is to drop NVMe altogether and go fully custom, potentially exposing the SSD storage as byte-addressable physical RAM in a common address space. This would allow the kernel to map SSD storage directly, eliminating the need for any logical layer and dramatically improving latency. But I have no idea about these things and I don't know whether there are any special requirements that would make such a direct-mapping approach non-viable.
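For anyone wondering what "byte-addressable" mapping would even look like from software, here is a minimal sketch using an ordinary file and mmap as a stand-in. This is a loose analogy only, not a claim about Apple's actual controller; real NAND would still need wear leveling, error correction and much coarser write granularity.

```python
# Rough analogy only: map a plain file so its bytes appear directly in the
# process address space, the way the post imagines the kernel could map
# byte-addressable SSD storage.
import mmap
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "ssd_standin.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)              # one 4 KiB "page" of pretend storage

with open(path, "r+b") as f, mmap.mmap(f.fileno(), 0) as m:
    m[0:5] = b"hello"                    # byte-addressable write, no explicit read()/write() calls
    print(m[0:5])                        # b'hello'

os.remove(path)
```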
That is what Sony did for the PS5; they kept a translation layer, though, so backward-compatible games will still work. It is the only way to load PS5 games with no loading screen at all (or a really, really short one).
 
Nice reply! Do you close Stack Overflow questions for a living?

Actually, the M1 uses LPDDR4X, which is faster than LPDDR4: https://www.anandtech.com/show/16252/mac-mini-apple-m1-tested/3

The 5GB RAM usage limit is for iPadOS, not macOS. The M1 GPU is faster than a Radeon RX 560X and sometimes even as fast as a GeForce GTX 1650 (link above).

TFLOPS is not everything but the rumored 128-core GPU would be crazy fast. It would be faster than any GPU on the market, including GF 3090!!

M1 8 GPU cores 2.6 TFLOPS
M? 16 GPU cores 5.2 TFLOPS
M? 32 GPU cores 10.4 TFLOPS
M? 64 GPU cores 20.8 TFLOPS
M? 128 GPU cores 41.6 TFLOPS

Radeon Pro 5700 6.2 TFLOPS
Radeon Pro 5700 XT 7.7 TFLOPS
Radeon Pro Vega II 14.06 TFLOPS
Radeon Pro Vega II Duo 2x14.06 TFLOPS
GF RTX 3060 14.2 TFLOPS
GF RTX 3060 Ti 16.2 TFLOPS
Radeon RX 6800 16.2 TFLOPS
GF RTX 3070 20.3 TFLOPS
Radeon RX 6800 XT 20.7 TFLOPS
Radeon RX 6900 XT 23 TFLOPS
GF RTX 3080 29.8 TFLOPS
GF RTX 3090 35.6 TFLOPS
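Just to make the arithmetic behind the M-series rows explicit: the 2.6 TFLOPS baseline is Apple's stated FP32 figure for the 8-core M1 GPU, and everything above 8 cores assumes purely linear scaling, which is speculation.

```python
# Hypothetical linear scaling of M1 GPU FP32 throughput with core count.
M1_CORES, M1_TFLOPS = 8, 2.6
for cores in (8, 16, 32, 64, 128):
    print(f"{cores:3d} cores -> {M1_TFLOPS * cores / M1_CORES:4.1f} TFLOPS")
# 2.6, 5.2, 10.4, 20.8, 41.6 -- matching the list above
```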


I extrapolated some gaming benchmarks for the M2 and it looks impressive (1260p is for the 24" iMac):

- M1 GPU 8 cores: Borderlands 3 1080p Ultra 22 fps - medium 30 fps (1260p 19-26, 1440p 15-23)
- M2 GPU 16 cores 1440p 30-46 fps, 32 cores 1440p 60-92 fps

- M1 GPU 8 cores: Deus Ex: Mankind Divided 1080p Ultra 24 fps (1260p 20, 1440p 18)
- M2 GPU 16 cores 1440p 36 fps, 32 cores 72 fps

- M1 GPU 8 cores: Shadow of the Tomb Raider 1080p Medium 24 fps (1260p 20, 1440p 18)
- M2 GPU 16 cores 1440p 36 fps, 32 cores 72 fps

- M1 GPU 8 cores: Metro Exodus 1080p medium 25-45 fps (1260p 21-38, 1440p 19-35)
- M2 GPU 16 cores 1440p 38-70 fps, 32 cores 76-140 fps

A 32-core M2 GPU doing 60 fps at 1440p Ultra in Borderlands 3 (via Rosetta 2) would be on par with a Radeon RX 5700 XT, RTX 2070 Super, RTX 2080 or GTX 1080 Ti.
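For transparency, the extrapolation above is nothing more than multiplying the quoted M1 (8-core) 1440p numbers by the core-count ratio; a back-of-the-envelope sketch, not a benchmark.

```python
# Assume fps scales linearly with GPU core count (optimistic; real scaling
# depends on bandwidth, CPU limits, Rosetta 2 overhead, etc.).
m1_1440p = {                       # quoted M1 8-core results, fps (low, high)
    "Borderlands 3 Ultra": (15, 23),
    "Deus Ex: MD Ultra": (18, 18),
    "SotTR Medium": (18, 18),
    "Metro Exodus Medium": (19, 35),
}
for cores in (16, 32):
    scale = cores / 8
    for game, (lo, hi) in m1_1440p.items():
        print(f"{cores} cores  {game}: ~{lo * scale:.0f}-{hi * scale:.0f} fps")
```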

GPU performance often increases proportionally thanks to parallel computing. If everything else in the architecture stays the same, more cores means you can render more stuff at the same time. I don't know about all games, but many games, especially newer ones, can take advantage of that. It's not always the case in reality, and 4x more cores in theory doesn't always mean 4x the performance, but we can always hope when we're guessing, especially when the M1 GPU has already exceeded our expectations. :)

We know that the M1 with its 8-core GPU at 10W can perform as well as other GPUs with much higher TDPs. So an M2 with a 32-core GPU at 40W could perform like the 2070 Super at 200W. I used the benchmarks in the videos below, where the M1 gets 22 fps at 1080p Ultra in the BL3 built-in benchmark and about 30 in gameplay. An M2 with a 32-core GPU would manage around 60 at 1440p Ultra, while the 2070 Super manages 56-66 at the same settings. I'm not even taking into account that the M2 may have a faster CPU, a higher-clocked GPU, LPDDR5 or other new benefits. It will be very exciting to see what Apple can come up with. :)
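As a rough efficiency comparison under those assumptions (the ~10 W M1 GPU figure and the 215 W board power / ~9 TFLOPS spec for the RTX 2070 Super are approximate, and the 40 W 32-core part is pure speculation):

```python
# Very rough TFLOPS-per-watt comparison; all power figures are approximate.
m1_gpu       = 2.6 / 10        # ~0.26 TFLOPS/W
rtx_2070s    = 9.1 / 215       # ~0.04 TFLOPS/W (FP32 spec / board power)
m2_32c_guess = (4 * 2.6) / 40  # same ratio as M1, if linear scaling holds
for name, val in [("M1 GPU", m1_gpu), ("RTX 2070 Super", rtx_2070s),
                  ("32-core guess", m2_32c_guess)]:
    print(f"{name:15s} ~{val:.2f} TFLOPS/W")
```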

Awesome answer! So much better than some others here who only replied to criticize without answering anything.

If the M2 ends up on the 4nm arch and we get LPDDR5/6 maybe we can compete with Nvidia's 20XX or even 30XX series.
 
Thank you. MR is becoming a cesspool of those who either don't understand computing or couldn't do a Google search. I wish the MR forums were back to the PowerPC glory days, when people always had a good TECHNICAL discussion.

Now this whole forum is "Is iT SafE tO cHarGe My LapTOp OveRnIgHT???"

Unified memory is much more interesting if they pull it off. Would be a game changer at 3060/3070 performance.

Chozes: You truly have no idea what you're saying. Unified memory has been a thing for years... it's just shared with system memory like it always has been... I hate this forum anymore.
 
Sorry, please ignore my posts here; I made some assumptions about GPU performance which were incorrect. It's good to learn, and I appreciate your responses here.
 
Yes, sorry, that's what I was aiming at. As mentioned in #4, I'm hoping for an M2 (not M1X) with a new setup for RAM, perhaps LPDDR5 or 6.
And we are hearing reports of up to 64GB of RAM, aren't we?
LPDDR6 before 5? What are you talking about? You’re too deep in speculation and rumors man. Maybe the M2 will have 5 when it comes out in the next round of entry level Macs, maybe. The upcoming M1X for more Pro level Macs definitely will not.

Also I’m sure the RAM limit will increase for the M1X. I’m not sure why it wouldn’t.
 
Chozes: You truly have no idea what you're saying. Unified memory has been a thing for years........ its just shared with system memory like it always has been....... I hate this forum anymore.
No. It's a new thing for Macs. The CPU and GPU (and everything else) share the same memory and don't need to swap it around. Intel CPUs with integrated graphics needed a hard partition of CPU and GPU memory, like dedicated GPUs.
 
LPDDR6 before 5? What are you talking about? You’re too deep in speculation and rumors man. Maybe the M2 will have 5 when it comes out in the next round of entry level Macs, maybe. The upcoming M1X for more Pro level Macs definitely will not.

Also I’m sure the RAM limit will increase for the M1X. I’m not sure why it wouldn’t.
I only say these things because I've read about them and the possibility of them becoming available within tech products. Samsung has been prepping LPDDR6 for over a year.
 
I only say these things because I've read about them and the possibility of them becoming available within tech products. Samsung has been prepping LPDDR6 for over a year.

LPDDR6 does not exist yet. The most recent low-power DDR standard is LPDDR5 (released February 2019). As far as I know, there is no news about the possible timeframe of an LPDDR6 release or what this future standard might encompass. It will probably take at least another year or two until the spec is out and another four to five years until we see LPDDR6 in commercial products.
 
Intel CPUs with integrated graphics needed a hard partition of CPU and GPU memory, like dedicated GPUs.

This is simply untrue. Intel (and AMD) SoCs are unified memory systems, just like the M1. The big difference is that M1 has more capable memory controllers and higher memory level parallelism.
 
This is simply untrue. Intel (and AMD) SoCs are unified memory systems, just like the M1. The big difference is that M1 has more capable memory controllers and higher memory level parallelism.
Leman, you gotta chill out man. Are you ok?
I'm legit thinking of not using MR forums because of the tone of your comments. We don't need to be attacked. I'm SURE I'm not the only one who feels this way about your replies.
 
Leman, you gotta chill out man. Are you ok?
I'm legit thinking of not using MR forums because of the tone of your comments. We dont need to be attacked. I'm SURE I'm not the only one who feels this way about your replies.

Huh? I am not out to attack anyone. I am merely pointing out that some statements made in this thread are factually wrong and/or misleading. It's hardly my fault that the LPDDR6 standard does not exist or that Intel has been shipping unified memory systems since at least 2012.

If you mean my first reply (#9), which was indeed somewhat snappy, then I apologize. I just thought it was odd that you claimed to have done research on a topic for which so much information is available and yet said that we don't know how the M1 GPU performs or that the NPU runs GPU shaders. There are people in this forum who are looking for information on these systems, and misinformed posts like yours (which sound like authoritative statements) are not helping. So yeah, if I see someone posting misinformation on a topic I care about, I will obviously correct them. Isn't this what we are here for, to exchange information, knowledge and ideas and to learn something new?
 
Leman, you gotta chill out man. Are you ok?
I'm legit thinking of not using MR forums because of the tone of your comments. We dont need to be attacked. I'm SURE I'm not the only one who feels this way about your replies.
What is the tone of his comments? He didn't attack anyone; he just simply pointed out the truth.
Stay on topic please, and stop making a person out to be something he isn't. He's always on topic; stop misleading.
 
I’ve been researching what the MX/2 GPU cores could be comparable to and have a few big questions.
....
2. It also seems pretty accepted that the M1 GPU is sharing the LPDDR4 system RAM.
3. Could the new chip (M2?) include 64GB of LPDDR5/6 for their mobile integrated GPU?
4. If this architecture carries over to the next M chip, it is not a very exciting GPU even at 32 cores.


Thoughts?

The number of memory channels and the shared cache matter.

Integrated memory can mean less copying. That isn't a panacea (e.g., very high refresh rates, large-resolution frame buffers), but it does 'buy' Apple some tradeoffs (supporting fewer monitors, but better).


The die shots of the M1 suggest there are eight DDR controllers on the M1 die ( versus 4 for the A14).

[Image: M1 die photo]

https://www.techinsights.com/blog/two-new-apple-socs-two-market-events-apple-a14-and-m1

If that is an accurate interpretation of the die, then the LPDDR packages that Apple has soldered on are custom. Not generic, off-the-shelf LPDDR4 packages. There is concurrent access to the stacked DRAM dies in the packages. Pragmatically, that means the bandwidth is higher. So comparing to generic LPDDR4 would be the "apples to oranges" aspect. They have a probably cheaper, custom variant on HBM-style RAM modules, using more mainstream DDR4/5 dies as a building block in custom packages.

This is a bit of a double-edged sword. If they double the die area, they can get close to doubling the DDR controllers; triple the die area, and they get about triple the DDR. They can throw a bigger cache at augmenting the bandwidth when they can get accurate memory-access predictions (e.g., AMD's Infinity Cache). They could surround the CPU/GPU/NPU compute die on 3-4 sides with soldered-on RAM. All of that would scale (perhaps not perfectly linearly, but scale). They could increase the supported monitor count and GPU compute "grunt" up from where the M1 sits. It probably would not kill off the high end of the discrete GPUs (AMD RX 6800-6900 or Nvidia RTX 3080-3090), but it would give the mid range (and Intel's upcoming HPG lineup) a serious run for the money... as if buyers had a choice, since it is an iGPU that is soldered into every Mac.
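To put rough numbers on the "double the controllers, double the bandwidth" idea, assuming the M1 baseline of eight 16-bit LPDDR4X-4267 channels (my arithmetic, not a product spec):

```python
# Peak DRAM bandwidth if the M1 memory subsystem were simply replicated.
channel_bits, mt_per_s = 16, 4267e6
per_channel_gbs = (channel_bits / 8) * mt_per_s / 1e9   # ~8.5 GB/s per channel
for channels in (8, 16, 32):
    print(f"{channels:2d} channels -> ~{channels * per_channel_gbs:.0f} GB/s peak")
# 8 -> ~68, 16 -> ~137, 32 -> ~273 GB/s
```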


The downside probably would be that they are not going to provision much PCI-e v3 (or v4) bandwidth, as the outer edge of the die is consumed by all of this RAM I/O. For laptops, that would be a decent trade-off (that is 80+% of what Apple sells, 90+% if you throw in the iPad Pros). For the top-end desktops, that would be backsliding.


LPDDR5 would eventually allow them to further extend this highly custom, "super wide" approach. Apple wouldn't cover the top end of the discrete GPU market (the AMD RX 6800-6900 or Nvidia RTX 3080-90), but they would be solidly into the mid range (and likely cover Intel's HPG). The highly custom RAM modules would get economies of scale if Apple applies them to effectively all of the Mac and iPad Pro volume (up into the 30-90M per year range; multiple modules per system).


IMHO, I suspect they'll have to grow the system cache to keep pace as they expand the number of supported monitors and/or raise display refresh rates. However, that would not be a major change from the baseline design track they are on. The primary objective here probably was to kill off all the laptop iGPUs and dGPUs they were using, while covering as much of the mainstream iMac dGPU usage as they had. If Apple's iGPU design scales to cover that, then that's probably all they were looking for. [And if they can provision a single x16 slot for a top-end design, perhaps call it a day.]
 
This is simply untrue. Intel (and AMD) SoCs are unified memory systems, just like the M1. The big difference is that M1 has more capable memory controllers and higher memory level parallelism.

THANK YOU LEMAN! You took the words right out of my mouth. Intel and AMD SoCs have been unified since *roughly* the discontinuation of the North/Southbridge for I/O, when more components were moved to the die itself.

You aren't rude. Don't listen to the naysayers who complain they won't use MR because of your tone. I say BYE BYE to that user. You are a beacon of knowledge in a sea of users here who have no idea what they're typing.

Going back to my original statement: the MR forums have basically become Reddit, full of morons. Someone should Wayback Machine the forums back to the PPC and early Intel days, when this site was at the forefront of technological discussion.
 
No. It's a new thing for Macs. The CPU and GPU (and everything else) share the same memory and don't need to swap it around. Intel CPUs with integrated graphics needed a hard partition of CPU and GPU memory, like dedicated GPUs.

This is 100% false. Read a white paper for any SoC from roughly the last decade. There is no hard wall limit; it's always floating.

The only time you would be correct would be a niche case where you have a VM (Virtual Machine, google it) locked to a certain VRAM requirement to save RAM.
 
TFLOPS is not everything but the rumored 128-core GPU would be crazy fast. It would be faster than any GPU on the market, including GF 3090!!

M1 8 GPU cores 2.6 TFLOPS
M? 16 GPU cores 5.2 TFLOPS
M? 32 GPU cores 10.4 TFLOPS
M? 64 GPU cores 20.8 TFLOPS
M? 128 GPU cores 41.6 TFLOPS

Radeon Pro 5700 6.2 TFLOPS
Radeon Pro 5700 XT 7.7 TFLOPS
Radeon Pro Vega II 14.06 TFLOPS
Radeon Pro Vega II Duo 2x14.06 TFLOPS
GF RTX 3060 14.2 TFLOPS
GF RTX 3060 Ti 16.2 TFLOPS
Radeon RX 6800 16.2 TFLOPS
GF RTX 3070 20.3 TFLOPS
Radeon RX 6800 XT 20.7 TFLOPS
Radeon RX 6900 XT 23 TFLOPS
GF RTX 3080 29.8 TFLOPS
GF RTX 3090 35.6 TFLOPS

The major assumption behind those scaling M-series GPU core counts is that the memory bandwidth will also scale. They can't compute with data they don't have local on the die (keeping all of those cores "fed" with data becomes a larger and larger issue).

They have already had to go pretty wide with just 8 GPU "cores". When they get to very high double-digit multiples of that, just how wide can they go? A 2-5x bandwidth increase is one thing; going up 12-16x can turn into an issue if the starting point is already wider than normal.

128 GPU cores without LPDDR5 is going to be tough to get to with a linear performance increase on data tasks that aren't ultra embarrassingly parallel and cacheable. It probably would be better than the 64-GPU-core model, but not necessarily double on most real-world apps.
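To illustrate the feeding problem with the thread's own numbers (≈68 GB/s peak for the 8-core M1), here is what holding the M1's bandwidth-per-core ratio constant would demand — a simplification, since big caches, TBDR and compression all reduce the real requirement:

```python
# Bandwidth a scaled-up GPU would "want" at the M1's bandwidth-per-core ratio.
m1_bw_gbs, m1_cores = 68.0, 8
per_core = m1_bw_gbs / m1_cores                 # ~8.5 GB/s per GPU core
for cores in (16, 32, 64, 128):
    print(f"{cores:3d} cores -> ~{cores * per_core:.0f} GB/s")
# 128 cores -> ~1088 GB/s, i.e. HBM2 / top-end GDDR6X territory
```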
 
Integrated memory can mean less copying. That isn't a panacea (e.g., very high refresh rates, large-resolution frame buffers), but it does 'buy' Apple some tradeoffs (supporting fewer monitors, but better).

I am not even sure that this is such a big limitation in practical terms. A single 5K frame is around 60MB uncompressed. To sustain a 60 FPS stream of such frames you need less than 3.6GB/s. M1 can likely support multiple 5K monitors before the bandwidth impact is felt. Not to mention that display image will be compressed in practice, saving you tons of bandwidth.
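Spelling that arithmetic out (assuming an uncompressed 4 bytes per pixel, the simplest case):

```python
# Uncompressed scan-out bandwidth for one 5K display at 60 Hz.
width, height, bytes_per_px, hz = 5120, 2880, 4, 60
frame_mb = width * height * bytes_per_px / 1e6      # ~59 MB per frame
stream_gbs = frame_mb * hz / 1e3                    # ~3.5 GB/s sustained
print(f"~{frame_mb:.0f} MB/frame, ~{stream_gbs:.1f} GB/s at {hz} Hz")
```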

The die shots of the M1 suggest there are eight DDR controllers on the M1 die ( versus 4 for the A14).

That's a great reference and analysis! Just to add to this — M1 uses a 128-bit memory interface with 8 independent 16-bit memory channels. Having this many memory channels is one of the many features that allows M1 to have high memory-level parallelism: multiple memory requests can be in flight simultaneously, so memory is used efficiently (having a wider bus means you might fetch more data than you actually need, wasting the channel).
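For concreteness, the peak numbers work out the same whether you view it as eight narrow channels or two wide ones; the benefit of the narrow layout is more independent transactions in flight (my arithmetic, assuming LPDDR4X-4267):

```python
# Same peak bandwidth either way; narrow channels buy parallelism, not GB/s.
mt_per_s = 4267e6
narrow = 8 * (16 / 8) * mt_per_s / 1e9   # 8 x 16-bit channels
wide   = 2 * (64 / 8) * mt_per_s / 1e9   # 2 x 64-bit channels, same total width
print(f"8 x 16-bit: ~{narrow:.1f} GB/s   2 x 64-bit: ~{wide:.1f} GB/s")  # both ~68.3
```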

If that is an accurate interpretation of the die, then the LPDDR packages that Apple has soldered on are custom. Not generic , off-the-shelf LPDDR4 packages. There is concurrent access to the stacked DRAM dies in the packages. Pragmatically, that means the bandwidth is higher. So comparing to generic LPDDR4 would be the "apples to oranges" aspect.

Careful measurements of the M1's RAM show that it performs just as you'd expect regular LPDDR4X-4267 RAM to perform. Peak bandwidth and latency are identical to Intel's Tiger Lake platform, for example (as measured by AnandTech). It is possible that Apple can use the bandwidth more efficiently by utilizing more, smaller channels; I have no idea. However, on-package RAM does seem to allow Apple to reach ridiculously low power consumption — less than half a watt for 16GB of RAM in average demanding tasks.

LPDDR5 would eventually allow them to further extend this highly custom , "Super wide" approach.

LPDDR5 would give them a nice boost in bandwidth, but they would still need to use multi-channel controllers. That's why the conservative prediction for the upcoming prosumer chip is at least a 256-bit RAM interface (double that of the M1).
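Rough numbers behind that prediction, assuming the top JEDEC LPDDR5 speed grade of 6400 MT/s (the actual width and speed grade of any future chip are, of course, speculation):

```python
# Peak bandwidth: M1's 128-bit LPDDR4X-4267 vs a speculative 256-bit LPDDR5-6400.
m1_gbs       = (128 / 8) * 4267e6 / 1e9   # ~68 GB/s
prosumer_gbs = (256 / 8) * 6400e6 / 1e9   # ~205 GB/s (speculative)
print(f"M1: ~{m1_gbs:.0f} GB/s   256-bit LPDDR5 guess: ~{prosumer_gbs:.0f} GB/s")
```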

Apple wouldn't cover the top end of the discrete GPU market ( the AMD RX 6800-6900 or Nvidia RTX 3080-90 ) , but they would be solidly into the mid range (and likely cover Intel's HPG ).

True, but let's not forget that Apple has large system-level caches — something that traditional GPUs lack. An RTX 3080 has only 5MB of L2 cache, while even the M1 has a 16MB LLC. Large caches help compensate for the limited system RAM bandwidth in both compute and graphical workloads, and on the graphics side Apple also has TBDR. And again, let's not forget compression — the A14/M1 has hardware memory compression between the cache and the RAM for GPU compute workloads, which allows it to save memory bandwidth.

The downside probably would be that they are not going to provision much PCI-e v3 (or v4) bandwidth, as the outer edge of the die is consumed by all of this RAM I/O. For laptops, that would be a decent trade-off (that is 80+% of what Apple sells, 90+% if you throw in the iPad Pros). For the top-end desktops, that would be backsliding.

At the same time, Apple is less reliant on PCI-e lanes. GPU, SSD — all the usual culprits run on some sort of internal chip magic, so you are pretty much left with Thunderbolt for your PCIe requirements. Of course, it remains to be seen how (and whether) they will solve modularity issues for a new Mac Pro. But even if it is modular, I doubt that it will allow much in terms of third-party PCI-e device expansion.
 
You are a beacon of knowledge in a sea of users here who have no idea what they're typing.

I will put this on my CV :D

On a more serious note, there is a really severe issue with how we communicate on the internet. The "everybody is an expert" and "it's your task to disprove my claims" attitudes have become the default. No wonder we are sinking in conspiracy theories and allowing corporations to take advantage of us.
 
Huh? I am not out to attack anyone. I am merely pointing out that some statements made in this thread are factually wrong and/or misleading. It's hardly my fault that the LPDDR6 standard does not exist or that Intel has been shipping unified memory systems since at least 2012.

If you mean my first reply (#9), which was indeed somewhat snappy, then I apologize. I just thought it was odd that you claimed to have done research on a topic for which so much information is available and yet said that we don't know how the M1 GPU performs or that the NPU runs GPU shaders. There are people in this forum who are looking for information on these systems, and misinformed posts like yours (which sound like authoritative statements) are not helping. So yeah, if I see someone posting misinformation on a topic I care about, I will obviously correct them. Isn't this what we are here for, to exchange information, knowledge and ideas and to learn something new?
Thank you for explaining. I only posted here to get insight on my initial research from intelligent people like yourself. Perhaps you can help me understand why you answered the way you did?

LPDDR6 does exist; Samsung has been preparing to use theirs since 2020. However, yeah, I agree we won't be seeing it. I'd be shocked.

I used terms like "pretty accepted" because the Hacker News thread on reverse engineering the M1 said that ~30% of the die hadn't been labelled (or something to that effect), and I haven't found anything newer. Have you? I'd love to see it.

Several people on Reddit had mentioned that the M1 core cannot be compared to other GPU cores of any type — RT, CUDA, etc. — and that, since we're seeing the shared architecture, it's highly likely that shaders and things such as ray tracing could be aided by the Neural Engine. But I haven't read that anywhere else. Could this be true?

All that being said, the shared architecture of using system RAM in LPDDR4X doesn't seem right. It could have that updated feel if we got something special like LPDDR5 or even 6, and it was stated that the GPU shared system RAM. We'd all buy the 64GB version then... That would be special.

And since the shared architecture is reportedly taxing SSDs, we also need to know how this is going to work moving forward.

I'd love your detailed, technical insight, sir.
 
LPDDR6 does exist; Samsung has been preparing to use theirs since 2020. However, yeah, I agree we won't be seeing it. I'd be shocked.

Could you maybe provide a source for this? As I said before, the latest standard is LPDDR5, and I am unable to find any information about Samsung working on LPDDR6. There do seem to be some popular articles about Samsung's LPDDR5 chips that have a typo in their title, which in turn got copied by other articles…



I used terms like "pretty accepted" because the Hacker News thread on reverse engineering the M1 said that ~30% of the die hadn't been labelled (or something to that effect), and I haven't found anything newer. Have you? I'd love to see it.

M1 machines have been disassembled and analyzed at the component level, and the RAM is confirmed to be LPDDR4X.

Several people on Reddit had mentioned that the M1 core cannot be compared to other GPU cores of any type — RT, CUDA, etc. — and that, since we're seeing the shared architecture, it's highly likely that shaders and things such as ray tracing could be aided by the Neural Engine. But I haven't read that anywhere else. Could this be true?

Frankly, I don't even know what these people on Reddit could mean by any of those things. M1 does not have RT cores, that is true. It has GPU compute cores. A single M1 GPU core is capable of 128 floating point operations per cycle, making it an equivalent of 128 CUDA cores. You could say that each M1 GPU core has 128 "CUDA" cores if you want (1024 across the 8-core GPU). Of course, CUDA is Nvidia's marketing term, so there are some subtle and not so subtle differences.
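As a sanity check on that equivalence (the clock is an estimate of roughly 1.28 GHz, not an Apple-published figure):

```python
# FP32 throughput from the ALU count: 8 cores x 128 ALUs x 2 FLOPs per FMA per cycle.
cores, alus_per_core, flops_per_fma = 8, 128, 2
clock_ghz = 1.278                            # estimated M1 GPU clock
tflops = cores * alus_per_core * flops_per_fma * clock_ghz / 1e3
print(f"~{tflops:.1f} TFLOPS FP32")          # ~2.6, matching Apple's stated figure
print(f"CUDA-core-equivalent ALUs: {cores * alus_per_core}")   # 1024
```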

The NPU is a completely different processor that has nothing to do with the GPU itself. You could compare it to Nvidia's tensor cores. Now, Nvidia packs all this machinery into a single product (a GPU) because that's what they sell, but Apple doesn't sell GPUs, so they package all their processors differently (everything in a single chip).

All that being said, the shared architecture of using system RAM in LPDDR4X doesn't seem right. It could have that updated feel if we got something special like LPDDR5 or even 6, and it was stated that the GPU shared system RAM. We'd all buy the 64GB version then... That would be special.

What do you need 64GB of RAM for? I would be more than happy with 32GB or maybe even 16GB, as long as the bandwidth is plentiful.

And since the shared architecture is reportedly taxing SSDs, we also need to know how this is going to work moving forward.

Can you explain what you mean by this? Are you referring to the high amount of writes observed on some M1 SSDs? That was a bug in the OS, and it has been mostly fixed in 11.4.

I'd love your detailed, technical insight, sir.

Well, I was snappy with you, so I assume it's only fair that you are sarcastic with me. Anyway, I hope my answers were to your liking.
 
If the M2 ends up on the 4nm arch and we get LPDDR5/6 maybe we can compete with Nvidia's 20XX or even 30XX series.

The 30XX series goes down into the 3050-3060 range (and the AMD 6xxx series goes "small" also). Probably yes. Is Apple going to make "every GPU for everybody"? Probably no. They don't do that for whole systems, so very likely they will not for a subcomponent either.

Apple set the bar for themselves: "faster iGPU than anyone else". They are out to "take out" some dGPUs. There is little indication they are out to "take out" every dGPU (the entire spectrum).


4nm isn't really going to help at the very high-end GPU level (e.g., the 3090). Apple is as likely to add more large CPU cores (with AMX vector units) as to add GPU cores. It won't necessarily be a 1:1 allocation (8 CPU cores for 8 GPU cores), but there also likely won't be a hard ceiling on the CPU cores (e.g., a cap at 16 or something like that). They are also competing with AMD and Intel (and ARM vendors), which are out past 32 cores. As long as they are committed to competing with just a unified SoC, they will have to take on both with just "one" tool. The SoC is competing on multiple fronts.

4nm will allow them to compete in both directions with a more reasonably sized package. The overall package size is going to matter for Apple (laptops have limited board space; so do 24" iMacs now, and probably Minis also, etc.).


By the time Apple gets the whole M-series onto 4nm, the AMD top-end options could be rolling onto 6nm (and Nvidia could iterate also). It isn't like the competition is stuck. Apple is most likely rolling out 4nm on the small stuff (A15 and M1 evolution) rather than applying it to a very large die over the intermediate term.
 
LPDDR6 does exist; Samsung has been preparing to use theirs since 2020. However, yeah, I agree we won't be seeing it. I'd be shocked.

Are you referring to this?


That is click-bait typo stuff.

DDR6 is in the research labs. It "exists" in the sense that there are prototypes that people can concretely refer to in JEDEC standards discussions.

https://www.techpowerup.com/251968/sk-hynix-fellow-says-pc5-ddr5-by-2020-ddr6-development-underway


But technically the DDR6 standard hasn't been settled. There isn't even a solid date for when they think it will be settled.

The TechPowerUp article above said DDR5 would be "here" by 2020, and in 2021 there still aren't high-volume retail shipments. But Apple on the verge of high-volume shipments? That's probably several years out.

GDDR6 is in high volume, but that doesn't make for good, low-latency, generic CPU computational RAM. I doubt Apple would use it. Consoles can get away with using it because they tend to only run one user-interactive program at a time, skewed toward heavy graphics. Macs need to run several in parallel and are not primarily focused on high-frame-rate display buffers.
 
This is 100% false. Read a white paper for any SoC from roughly the last decade. There is no hard wall limit; it's always floating.

The only time you would be correct would be a niche case where you have a VM (Virtual Machine, google it) locked to a certain VRAM requirement to save RAM.
Then why does Activity Monitor show a hard limit on ram for CPU and GPU? If you have 4GB of RAM then only 2.5GB is allowed for the CPU even if GPU isn’t using close to 1.5GB. It doesn’t seem unified to me.

Was Apple lying when they introduced Apple silicon and said the unified memory was a new thing? Everything I’ve read about this since it was announced says Apple silicon is a fundamental shift in how memory is used. I’ve heard developers talk about unified memory making it faster to do certain things like image editing too.
 
Then why does Activity Monitor show a hard limit on ram for CPU and GPU? If you have 4GB of RAM then only 2.5GB is allowed for the CPU even if GPU isn’t using close to 1.5GB. It doesn’t seem unified to me.

That is just a driver/OS provisioning thing. M1 machines also reserve some memory for GPU operation as well as system operations; it's just that these provisions are not directly advertised. But an Intel GPU can address any of the system memory. That's what ultimately matters on the technical level.

Was Apple lying when they introduced Apple silicon and said the unified memory was a new thing? Everything I’ve read about this since it was announced says Apple silicon is a fundamental shift in how memory is used. I’ve heard developers talk about unified memory making it faster to do certain things like image editing too.

They didn't lie. Neither did they claim that they invented unified memory. Now, unified memory is usually found in two kinds of systems: low-end ones, where it serves to save money and power, and custom supercomputers, where it enables heterogeneous workflows. Apple Silicon's innovation is in bringing unified memory solutions across the entire range of consumer hardware — from entry level to professional workstation. This simplifies the programming model, enables more efficient processing and improves the user experience.
 
@leman, for clarification: RDNA2 doesn't have "dedicated" RT cores either. The ray accelerators are part of the CU, so they cannot accelerate RT and raster graphics at the same time. That provides better hardware utilization, but it suffers at one or the other.

EDIT: I suspect that is a similar route Apple will take if they add hardware RT acceleration to their GPUs. Intel is going the Nvidia route with dedicated RT cores (which is why none of the Xe iGPUs have it yet).
 