AMD is vulnerable. Don't try to pretend that "less vulnerable" is the same as "not vulnerable".

Oh please, you still don't know anything about the Intel CPU security crisis at all.

AMD Ryzen does not have Meltdown and Spectre at all. NO REPORTS ABOUT SPECTRE ON AMD RYZEN.

Why is it related to the Mac Pro? Because it still uses an Intel CPU. Also, the OS updates hurt overall performance. That's why Intel servers suffer a performance drop after the OS updates that mitigate Meltdown and Spectre.
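If you want to see what a particular machine actually reports, here is a minimal sketch (an illustration only, and Linux-specific; macOS does not expose this interface) that prints the kernel's per-issue status from /sys/devices/system/cpu/vulnerabilities, where each entry reads either "Not affected" or names the active mitigation:

[CODE]
/* Minimal sketch (Linux only): print the kernel's reported CPU
 * vulnerability/mitigation status. Each file under this directory is
 * named after an issue (meltdown, spectre_v1, spectre_v2, l1tf, ...)
 * and contains either "Not affected" or the active mitigation. */
#include <stdio.h>
#include <dirent.h>

int main(void) {
    const char *dir = "/sys/devices/system/cpu/vulnerabilities";
    DIR *d = opendir(dir);
    if (!d) { perror("opendir"); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;          /* skip . and .. */
        char path[512], line[256];
        snprintf(path, sizeof path, "%s/%s", dir, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f) continue;
        if (fgets(line, sizeof line, f))
            printf("%-16s %s", e->d_name, line);    /* e.g. "meltdown  Mitigation: PTI" */
        fclose(f);
    }
    closedir(d);
    return 0;
}
[/CODE]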
 
 
Please explain the difference between "architecture" and "micro-architecture" for us, and how it relates to this claim.

All of the current Intel and AMD CPUs implement the x64 instruction set - but there are many different CPUs that implement that instruction set in many different ways. Which are "architecture" and which are "micro-architecture"? Why do you say that a Sandy Bridge and a Skylake are the same architecture?

Skylake, Kaby Lake, Sandy Bridge, Haswell, and Ivy Bridge are microarchitectures. They are NOT architectures, but they all implement the same architecture. Which means they share the same instruction set design but differ in the implementation details.

Do you get it? If not, check Wikipedia. All Intel CPUs from 2011 onward have all kinds of security issues, including Meltdown, Spectre, MeltdownPrime/SpectrePrime, the Intel AMT flaw, Spectre-NG, the L1TF bug, and BranchScope.
Let me hold up a mirror. ;)

You realize that AMD and ARM processors have many of the same Meltdown/Spectre vulnerabilities? Including Ryzen and Threadripper!

https://www.techarp.com/guides/complete-meltdown-spectre-cpu-list/2/

Probably not, or you wouldn't have made such an ignorant post.

Too bad, I already read it, and yet AMD has already patched their CPUs while Intel has been updating slowly, like a lazy person. Officially, there are no Spectre issues on AMD.

https://www.amd.com/en/corporate/security-updates

https://wccftech.com/amd-zen-2-cpus-fix-spectre-exploit/

Since Intel has been using the same architecture for a long time, I seriously doubt their ability to solve this issue, while AMD engaged these issues as soon as possible. Are you even aware that Intel announced Kaby Lake even though they knew about the Meltdown and Spectre issues in 2017? Google reported this issue to Intel in July 2017, and yet Intel ignored it.
 
Why doesn't Apple use Nvidia graphics cards?

1. Expensive, and they don't make customized GPUs like AMD does.

2. Apple and a few other companies hated Nvidia's CUDA ecosystem.

3. Apple created OpenCL, released in 2009. There is no reason to use Nvidia's CUDA, and AMD GPUs are better for OpenCL.
 
This is the Mac Pro thread, and Mac Pros have been using Xeons with ECC support.

Any comments about "Core" processors without ECC support are irrelevant - unless you're pitching a non-Xeon xMac.
Well, for one, the mainstream AM4 platform will offer a "server-grade" 16-core/32-thread CPU. That is the same core count you get from the 7960X. For how much? A third of its price (the top-end Zen 2 CPU for AM4 will cost no more than $499). And you will still get ECC memory.

Threadripper and EPYC CPUs will be made out of the same chiplets that make up the AM4 parts.

AMD will have much better products than Intel next year, most likely also for less money than Intel charges. The current 32-core EPYC costs half as much as the comparable 28-core solution from Intel. The 64-core EPYC based on Zen 2 is rumored to cost no more than $6,000.

For those that need a caption, "Apple tried to bully Nvidia, and Nvidia responded appropriately!"
I do not know if this is irony, but it was the other way around. Nvidia was booted out of Apple hardware because of the legal dispute they tried to force on Apple.
 
1. Expensive, and they don't make customized GPUs like AMD does.

2. Apple and a few other companies hated Nvidia's CUDA ecosystem.

3. Apple created OpenCL, released in 2009. There is no reason to use Nvidia's CUDA, and AMD GPUs are better for OpenCL.

Apple are deprecating OpenCL...

My belief regarding Nvidia is that they stopped using them because pretty much every time they used an Nvidia GPU, they had massively high failure rates. AMD cards have had their issues too - but not to the same extent Nvidia ones have every time they've been put into Macs.

As I said on another thread on this subject, I can only think of one Nvidia card in the last decade Apple used that wasn't subject to a REP.
 
Apple are deprecating OpenCL...

My belief regarding Nvidia is that they stopped using them because pretty much every time they used an Nvidia GPU, they had massively high failure rates. AMD cards have had their issues too - but not to the same extent Nvidia ones have every time they've been put into Macs.

As I said on another thread on this subject, I can only think of one Nvidia card in the last decade Apple used that wasn't subject to a REP.
Which is ironic because Maxwell and Pascal (and now Turing) use so much less power for more performance than Polaris and Vega. GCN is a neat idea, but it hasn't allowed AMD to scale well at all graphically.
Just an aside, Threadripper probably isn't a good fit due to inter-die latency. EPYC would be better as each die has direct access to RAM, but then you still have to deal with NUMA latency issues. Apple hasn't had to deal with that since the old Mac Pro days of multi-socket systems. So I am not sure if they have optimized for UMA.
 
Which is ironic because Maxwell and Pascal (and now Turing) use so much less power for more performance than Polaris and Vega. GCN is a neat idea, but it hasn't allowed AMD to scale well at all graphically.
In gaming, yes. But that is geometry-bound. What happens when you have ALU-bound scenarios, i.e. compute and professional use cases?

The efficiency, in that scenario, is the same. But AMD GPUs are cheaper. That is why AMD is in Apple computers, instead of Nvidia.
 
Apple are deprecating OpenCL...

My belief regarding Nvidia is that they stopped using them because pretty much every time they used an Nvidia GPU, they had massively high failure rates. AMD cards have had their issues too - but not to the same extent Nvidia ones have every time they've been put into Macs.

As I said on another thread on this subject, I can only think of one Nvidia card in the last decade Apple used that wasn't subject to a REP.

Nvidia GPUs are not optimized for Metal, but AMD's are. Like I said, AMD can make customized GPUs, unlike Nvidia, and that's what Apple needs for its Mac computers.
Which is ironic because Maxwell and Pascal (and now Turing) use so much less power for more performance than Polaris and Vega. GCN is a neat idea, but it hasn't allowed AMD to scale well at all graphically.
Just an aside, Threadripper probably isn't a good fit due to inter-die latency. EPYC would be better as each die has direct access to RAM, but then you still have to deal with NUMA latency issues. Apple hasn't had to deal with that since the old Mac Pro days of multi-socket systems. So I am not sure if they have optimized for UMA.

Just for gaming. These days, Polaris and Vega offer a lot of value.

Latency doesn't matter for other uses. For gaming, latency matters, but who cares about that since the Mac is not even optimized for gaming? For work purposes, there is no issue with latency. Also, AMD will have better performance with the 3rd gen.
 
Latency doesn't matter for other uses. For gaming, latency matters, but who cares about that since the Mac is not even optimized for gaming? For work purposes, there is no issue with latency.
I really beg to differ.

NUMA latency has nothing to do with gaming - it refers to latency in the memory system. (NUMA == Non Uniform Memory Access)

"Non Uniform" means that some RAM memory is fast, other RAM memory is slow. (The terms "near" and "far" also show up.) Depending on the difference between "fast" and "slow", it can be a big deal. (Actually, you get "fast" and "slow" with two NUMA nodes - with more than two nodes you can have intermediate cases.)

A simple case is a Xeon dual-socket system. Both sockets have memory controllers, so typically half the DIMM slots are connected to one socket and the other half to the other socket.

If one Xeon CPU needs RAM data that's on DIMMs connected to the other CPU, the data is fetched by the memory controller on the other CPU and transferred (via QPI or UPI) to the requesting CPU. This introduces latency over accessing memory local to the CPU's memory controller.

AMD's multi-die implementations make even single socket systems NUMA.
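To put numbers on "near" versus "far", here is a rough sketch of a latency probe (an illustration only: Linux with libnuma, compiled with -lnuma, and it assumes at least two NUMA nodes; the buffer size and iteration counts are arbitrary). It pins execution to node 0 and times dependent loads through a buffer placed on node 0 versus node 1:

[CODE]
/* Rough sketch: compare access latency to memory on the local vs. a
 * remote NUMA node using libnuma (Linux, link with -lnuma).
 * Assumes at least two NUMA nodes; on a single-node (UMA) machine
 * both timings should come out about the same. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <numa.h>

#define N (64u * 1024u * 1024u)   /* 64M entries * 4 bytes = 256 MB, bigger than any cache */

static double chase(unsigned *next, unsigned steps) {
    struct timespec t0, t1;
    unsigned i = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (unsigned s = 0; s < steps; s++)
        i = next[i];                         /* each load depends on the previous one */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    volatile unsigned sink = i; (void)sink;  /* keep the chase from being optimized away */
    return ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / steps;
}

static void build_cycle(unsigned *next) {
    /* Sattolo's algorithm: a single-cycle random permutation, so the
     * chase visits every slot and the prefetcher can't predict it. */
    for (unsigned i = 0; i < N; i++) next[i] = i;
    for (unsigned i = N - 1; i > 0; i--) {
        unsigned j = rand() % i;
        unsigned t = next[i]; next[i] = next[j]; next[j] = t;
    }
}

int main(void) {
    if (numa_available() < 0 || numa_max_node() < 1) {
        fprintf(stderr, "need a NUMA system with at least two nodes\n");
        return 1;
    }
    numa_run_on_node(0);                       /* pin execution to node 0 */

    for (int node = 0; node <= 1; node++) {
        unsigned *next = numa_alloc_onnode(N * sizeof *next, node);
        if (!next) { perror("numa_alloc_onnode"); return 1; }
        build_cycle(next);
        printf("node %d (%s): %.1f ns per dependent load\n",
               node, node == 0 ? "local" : "remote", chase(next, 20u * 1000u * 1000u));
        numa_free(next, N * sizeof *next);
    }
    return 0;
}
[/CODE]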

[Threadripper 2990WX die/memory topology diagram]

https://www.tomshardware.com/reviews/amd-ryzen-threadripper-2-2990wx-2950x,5725-2.html

Threadripper 2990WX borrows from AMD's EPYC server designs and comes with four active dies. The company fused off PCIe and memory control from two of the dies, creating silicon only useful for computing. Meanwhile, the other two I/O-enabled dies serve up two channels of DDR4 memory support and 32 lanes of PCIe 3.0 each.

Unfortunately, the compute dies suffer from increased latency on every request to main memory and PCIe-attached devices, as those requests always have to traverse the Infinity Fabric.

AMD added more Infinity Fabric channels to connect two more dies. Unfortunately, that has a tremendous impact on fabric bandwidth, which drops from 50 Gb/s on a 16-core Threadripper 2950X to 25 Gb/s in this implementation. And again, AMD measured performance with a 3200 MT/s data rate, meaning throughput at DDR4-2933 will be lower. Even with the benefits of tightly-controlled fabric scheduling magic, the combination of reduced bandwidth and 32 threads that must communicate over the fabric for I/O and memory requests has an impact on performance.

Since the title of this thread is "Why Apple don't use AMD Ryzen/Threadripper CPU?" - perhaps this graphic explains the reason.
 
The current state of information on Zen 2, EPYC 2, and the 3000-series CPUs is that there will be no NUMA ;). Problem solved? ;)
 
As usual, where are the links? :rolleyes:

If the CPU has "chiplets" and Infinity Fabric - how can it not be NUMA?
A YouTuber named AdoredTV said in his leak about EPYC 2 that the CPUs will be based on chiplets, and the same source that said there are chiplets also said there will be no more NUMA.

So far, one part of that information has proven 100% correct. Looking at the I/O die and how it connects everything, it is possible.
 
I really beg to differ.

NUMA latency has nothing to do with gaming - it refers to latency in the memory system. (NUMA == Non Uniform Memory Access)

"Non Uniform" means that some RAM memory is fast, other RAM memory is slow. (The terms "near" and "far" also show up.) Depending on the difference between "fast" and "slow", it can be a big deal. (Actually, you get "fast" and "slow" with two NUMA nodes - with more than two nodes you can have intermediate cases.)

A simple case is a Xeon dual-socket system. Both sockets have memory controllers, so typically half the DIMM slots are connected to one socket and the other half to the other socket.

If one Xeon CPU needs RAM data that's on DIMMs connected to the other CPU, the data is fetched by the memory controller on the other CPU and transferred (via QPI or UPI) to the requesting CPU. This introduces latency over accessing memory local to the CPU's memory controller.

AMD's multi-die implementations make even single socket systems NUMA.

[Threadripper 2990WX die/memory topology diagram]


Since the title of this thread is "Why Apple don't use AMD Ryzen/Threadripper CPU?" - perhaps this graphic explains the reason.

And do you have either a Ryzen or a Threadripper in real life? I guess not. The only thing that you have is a theory, since you don't know anything about architecture, CPU issues, and more.
 
1. Expensive, and they don't make customized GPUs like AMD does.

2. Apple and a few other companies hated Nvidia's CUDA ecosystem.

3. Apple created OpenCL, released in 2009. There is no reason to use Nvidia's CUDA, and AMD GPUs are better for OpenCL.

4. Nvidia refused to allow Apple to use Quadro branding on memory-expanded GTX cards. ;)
 
Consider a Windows or Linux system if you really want to go with a Ryzen or Threadripper build. It is unlikely Apple will go AMD just to use them in the iMac or Mac Pro. I would bet that if they ever switch CPUs, it will be to their own.
 
And do you have either a Ryzen or a Threadripper in real life? I guess not. The only thing that you have is a theory, since you don't know anything about architecture, CPU issues, and more.
Is what Aiden posted wrong? NUMA vs UMA is an issue.
Ryzen only uses one or two CCXs on a single die, so it is UMA. Threadripper has four or eight CCXs across multiple dies, so it is NUMA. First-gen Threadripper allows you to disable half the cores to improve latency and performance; I'm not sure if Zen+ still includes this mode. I haven't looked at the latest version of Ryzen Master, as it kept crashing Windows, so I uninstalled it.
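For what it's worth, you can check how a given box is exposed to the OS. A small sketch with libnuma (Linux, link with -lnuma); the expected node counts below are my assumption and depend on the BIOS memory-access mode, but roughly a single-die Ryzen should report one node (UMA), while a Threadripper in local/NUMA mode reports one node per die:

[CODE]
/* Sketch: report how many NUMA nodes the OS sees and which CPUs belong
 * to each (Linux, libnuma, link with -lnuma). One node means the box is
 * effectively UMA; multiple nodes mean NUMA placement matters. */
#include <stdio.h>
#include <numa.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support in this kernel\n");
        return 1;
    }
    int nodes = numa_num_configured_nodes();
    printf("configured NUMA nodes: %d%s\n", nodes, nodes == 1 ? " (effectively UMA)" : "");

    struct bitmask *cpus = numa_allocate_cpumask();
    for (int n = 0; n < nodes; n++) {
        if (numa_node_to_cpus(n, cpus) < 0) continue;
        printf("node %d, %lld MB local RAM, CPUs:", n, numa_node_size64(n, NULL) >> 20);
        for (unsigned c = 0; c < cpus->size; c++)
            if (numa_bitmask_isbitset(cpus, c)) printf(" %u", c);
        printf("\n");
    }
    numa_free_cpumask(cpus);
    return 0;
}
[/CODE]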
 
Whether we like it or not, Intel is still the essential owner of the x86 architecture.

Yes, AMD actually runs the instructions differently than Intel does, but it isn't a rewrite. Generally you might lose around 10% performance for everyday tasks as-is (assuming that Intel and AMD were able to hit the same core count, frequencies, and size for a given workload; Intel is frighteningly behind). I don't think anyone would notice the difference. The effort to optimize for AMD would be nominal.

Intel DOES have their own specific brand of issues. Just as an example, they are now looking to essentially stitch together 10nm cores with 14nm elements because they can't build a complete single chip at 10nm.

AMD has their own issues, but will continue to thrive.

I'm honestly surprised Apple hasn't bought AMD. Apple loves designing their own chips, and AMD has recently done a better job of this compared to Intel. There would be virtually no need to do an x86-to-ARM migration if they went with AMD and made their own destiny.
 
I'm honestly surprised Apple hasn't bought AMD. Apple loves designing their own chips, and AMD has recently done a better job of this compared to Intel. There would be virtually no need to do an x86-to-ARM migration if they went with AMD and made their own destiny.
Why would they have to buy AMD when AMD has a semi-custom business and can design anything Apple would want?
 
As usual, where are the links? :rolleyes:

If the CPU has "chiplets" and Infinity Fabric - how can it not be NUMA?

Technically, you can make all access "far" for every one of the chiplets; then it is uniformly slower. :)
For a subset of the problem this is the path that AMD took.

[Diagram: AMD Zen 2 chiplets connected to a central 14nm I/O die]


https://www.anandtech.com/show/1356...n-approach-7nm-zen-2-cores-meets-14-nm-io-die


All the memory is hung off the I/O chip in the middle (this is slightly drifting back to the "front side bus" days of yesteryear ... and the same bandwidth problems). The chiplets all come in through the Infinity links (where the Infinity markers are on the I/O chip in the middle). There is either a large crossbar switch or a ring to multiplex the access of all the cores to the DRAM segments of the I/O chip. Passing through that is probably uniform (depending on the internals of the I/O chip; if the I/O chip has a "top" and a "bottom" half, there may be some non-uniformity there).

What that image doesn't really cover, though, is cache swaps. If the latest data is in the L1/L2/L3 cache of chiplet A and chiplet B now needs it, then the transfer will have to traverse the I/O chip. If the data is in an L2 cache of another core in the same chiplet, then it will be a 'local' transfer. Hence, you still have NUMA if you look at the whole memory stack instead of just the RAM DIMMs.

Finally, there is a presumption here that, inside the AMD chiplet, access to the Infinity link to the I/O chip is uniform. There is a pretty good chance that it isn't uniformly a "1 hop" jump to the I/O chip from every single core. So you still have NUMA.
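If anyone wants to see that cache-to-cache cost directly, a rough ping-pong sketch like the one below gives a feel for it (Linux with pthreads, compile with -pthread; CPU_A and CPU_B are placeholder core IDs you would set to two cores on the same CCX, then to two cores on different chiplets, and compare):

[CODE]
/* Rough sketch: measure core-to-core "ping-pong" latency by bouncing a
 * cache line between two pinned threads (Linux, compile with -pthread).
 * CPU_A / CPU_B are placeholders - adjust them to your CPU topology. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdatomic.h>
#include <pthread.h>
#include <sched.h>
#include <time.h>

#define CPU_A 0          /* placeholder core IDs */
#define CPU_B 1
#define ROUNDS 1000000

static _Atomic int turn = 0;   /* the cache line the two threads fight over */

static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof set, &set);
}

static void *pong(void *arg) {
    (void)arg;
    pin_to_cpu(CPU_B);
    for (int i = 0; i < ROUNDS; i++) {
        while (atomic_load_explicit(&turn, memory_order_acquire) != 1) ;  /* wait for ping */
        atomic_store_explicit(&turn, 0, memory_order_release);            /* send pong */
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    pin_to_cpu(CPU_A);
    pthread_create(&t, NULL, pong, NULL);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < ROUNDS; i++) {
        atomic_store_explicit(&turn, 1, memory_order_release);             /* send ping */
        while (atomic_load_explicit(&turn, memory_order_acquire) != 0) ;   /* wait for pong */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    pthread_join(t, NULL);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("cores %d <-> %d: %.1f ns per round trip\n", CPU_A, CPU_B, ns / ROUNDS);
    return 0;
}
[/CODE]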


Xeon E5 had NUMA with just a single CPU package. NUMA isn't necessarily precipitated by sockets.

[Xeon E5 v4 HCC (high core count) die layout diagram]

https://www.anandtech.com/show/10158/the-intel-xeon-e5-v4-review/2

The 12 cores on the left have uniform access to the two DDR channels/memory controllers that are also on the left (all sharing the same ring interconnect). However, to get to the other two, it is a non-uniform hop to the other ring and the other two memory channels.

The Xeon SP (and Xeon W and Core i9) has a mesh instead of a ring, with even larger non-uniformity...

[Skylake-X mesh interconnect diagram]

https://www.anandtech.com/show/1155...-core-i9-7900x-i7-7820x-and-i7-7800x-tested/5

So there, the core in the lower right has to jump through several of those yellow links to get to the memory controller on the left, versus the three cores sitting adjacent to that memory controller.

The upside of the mesh is that if you get to a point where only 1-3 cores are active, you can shrink down to a subset that is not the farthest away from any one memory controller (e.g., avoiding the far four corners). It is still NUMA, but you control the scope and impact.
...Just an aside, Threadripper probably isn't a good fit due to interdie latency. EPYC would be better as each die has direct access to RAM, but then you still have to deal with NUMA latency issues. Apple hasn't had to deal with that since the old Mac Pro days of multi socket systems. So I am not sure if they have optimized for UMA.

NUMA isn't just about sockets. The 12-core Xeon E5 v2 has NUMA.


[Xeon E5 v2 die layout diagram]


https://www.anandtech.com/show/7852/intel-xeon-e52697-v2-and-xeon-e52687w-v2-review-12-and-8-cores

Depending upon how much L3 synchronization traffic gets kicked up, you may not get uniform access to memory. The 12-core die has its rings stretched a bit thinner than the lower-core-count variants. NUMA was still around in the MP 2013, but Apple could afford to mostly ignore it, with a very limited impact.
 