This is the Mac Pro thread, which has been using Xeons with ECC support - and the Xeons also do ECC, which the Core i9, etc. do not.
AMD is vulnerable. Don't try to pretend that "less vulnerable" is the same as "not vulnerable".
Please explain the difference between "architecture" and "micro-architecture" for us, and how it relates to this claim.
All of the current Intel and AMD CPUs implement the x64 instruction set - but there are many different CPUs that implement that instruction set in many different ways. Which are "architecture" and which are "micro-architecture"? Why do you say that a Sandy Bridge and a Skylake are the same architecture?
Let me hold up a mirror.
You realize that AMD and ARM processors have many of the same Meltdown/Spectre vulnerabilities? Including Ryzen and Threadripper!
https://www.techarp.com/guides/complete-meltdown-spectre-cpu-list/2/
Probably not, or you wouldn't have made such an ignorant post.
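For anyone who would rather check than argue: on Linux the kernel reports which speculative-execution issues it considers the current CPU affected by. Here is a minimal sketch (assuming a 4.15+ kernel, which is when the sysfs directory below appeared) that just dumps that report - run it on an AMD box and an Intel box and compare the output.
[CODE]
/* Print the kernel's view of which speculative-execution vulnerabilities
 * affect this CPU. Linux-only; the sysfs path is the stock kernel
 * interface, nothing vendor-specific. */
#include <dirent.h>
#include <stdio.h>

int main(void) {
    const char *dir_path = "/sys/devices/system/cpu/vulnerabilities";
    DIR *dir = opendir(dir_path);
    if (!dir) {
        perror("opendir");
        return 1;
    }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;                          /* skip "." and ".." */
        char path[512], status[256] = {0};
        snprintf(path, sizeof path, "%s/%s", dir_path, entry->d_name);
        FILE *f = fopen(path, "r");
        if (!f)
            continue;
        if (fgets(status, sizeof status, f))
            printf("%-24s %s", entry->d_name, status);  /* status ends with '\n' */
        fclose(f);
    }
    closedir(dir);
    return 0;
}
[/CODE]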
Let me hold up a mirror.
You realize that AMD and ARM processors have many of the same Meltdown/Spectre vulnerabilities? Including Ryzen and Threadripper!
https://www.techarp.com/guides/complete-meltdown-spectre-cpu-list/2/
Probably not, or you wouldn't have made such an ignorant post.
Why doesn't Apple use Nvidia graphics cards?
Well, for one, the mainstream AM4 platform will offer a "server-grade" 16-core/32-thread CPU. That is the same core count you get from the 7960X - for how much? A third of its price (the top-end Zen 2 CPU for AM4 will cost no more than $499). And you will still get ECC memory.
This is the Mac Pro thread, which has been using Xeons with ECC support.
Any comments about "Core" processors without ECC support are irrelevant - unless you're pitching a non-Xeon xMac.
I do not know if this is irony, but it was the other way around: Nvidia was booted out of Apple hardware because of the legal dispute it tried to force on Apple.
For those who need a caption: "Apple tried to bully Nvidia, and Nvidia responded appropriately!"
1. Expensive, and they don't make customized GPUs the way AMD does.
2. Apple and a few other companies disliked Nvidia's CUDA ecosystem.
3. Apple created OpenCL in 2009. There is no reason to use Nvidia's CUDA, and AMD GPUs are better for OpenCL (see the sketch below).
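Point 3 is easy to see in code, for what it's worth. A hedged sketch that just enumerates OpenCL platforms and devices - the calls are the standard Khronos ones (clGetPlatformIDs, clGetDeviceIDs, clGetDeviceInfo); on macOS you'd link with -framework OpenCL, elsewhere with -lOpenCL. The same source runs on AMD, Intel, or Nvidia hardware, which is exactly the vendor neutrality CUDA doesn't give you.
[CODE]
/* List every OpenCL platform and device visible to the runtime. */
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif
#include <stdio.h>

int main(void) {
    cl_platform_id platforms[8];
    cl_uint num_platforms = 0;
    if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS)
        return 1;

    for (cl_uint p = 0; p < num_platforms; ++p) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           8, devices, &num_devices) != CL_SUCCESS)
            continue;
        for (cl_uint d = 0; d < num_devices; ++d) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof name, name, NULL);
            printf("platform %u, device %u: %s\n",
                   (unsigned)p, (unsigned)d, name);
        }
    }
    return 0;
}
[/CODE]
(Keep in mind Apple has since deprecated OpenCL in favor of Metal, as noted below.)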
Which is ironic because Maxwell and Pascal (and now Turing) use so much less power for more performance than Polaris and Vega. GCN is a neat idea, but it hasn't allowed AMD to scale well at all graphically.
Apple are deprecating OpenCL...
My belief regarding Nvidia is that they stopped using them because pretty much every time they used an Nvidia GPU, they had massively high failure rates. AMD cards have had their issues too - but not to the same extent Nvidia ones have every time they've been put into Macs.
As I said on another thread on this subject, I can only think of one Nvidia card in the last decade Apple used that wasn't subject to a REP.
In gaming - yes. But gaming is geometry bound. What happens when you have ALU-bound scenarios, i.e. compute and professional use cases?
Just an aside, Threadripper probably isn't a good fit due to inter-die latency. EPYC would be better, as each die has direct access to RAM, but then you still have to deal with NUMA latency issues. Apple hasn't had to deal with that since the old Mac Pro days of multi-socket systems, so I am not sure they have optimized for anything other than UMA.
I really beg to differ.
Latency doesn't matter for other uses. For gaming latency matters, but who cares about that, since the Mac isn't even optimized for gaming? For work purposes there is no issue with latency.
https://www.tomshardware.com/reviews/amd-ryzen-threadripper-2-2990wx-2950x,5725-2.html
Threadripper 2990WX borrows from AMD's EPYC server designs and comes with four active dies. The company fused off PCIe and memory control from two of the dies, creating silicon only useful for computing. Meanwhile, the other two I/O-enabled dies serve up two channels of DDR4 memory support and 32 lanes of PCIe 3.0 each.
Unfortunately, the compute dies suffer from increased latency on every request to main memory and PCIe-attached devices, as those requests always have to traverse the Infinity Fabric.
AMD added more Infinity Fabric channels to connect two more dies. Unfortunately, that has a tremendous impact on fabric bandwidth, which drops from 50 Gb/s on a 16-core Threadripper 2950X to 25 Gb/s in this implementation. And again, AMD measured performance with a 3200 MT/s data rate, meaning throughput at DDR4-2933 will be lower. Even with the benefits of tightly-controlled fabric scheduling magic, the combination of reduced bandwidth and 32 threads that must communicate over the fabric for I/O and memory requests has an impact on performance.
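To make the quoted point concrete: the usual workaround on a 2990WX-style topology is to keep a latency-sensitive thread and its working set on a single die so requests don't have to cross the Infinity Fabric. A minimal sketch, assuming Linux with libnuma (link with -lnuma); node 0 is just an illustrative choice - on a 2990WX you would pick one of the nodes that actually has memory attached.
[CODE]
/* Pin the current thread and a buffer to one NUMA node. */
#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }
    int node = 0;                               /* hypothetical target die */
    numa_run_on_node(node);                     /* restrict scheduling to that node's CPUs */

    size_t size = 64UL * 1024 * 1024;
    char *buf = numa_alloc_onnode(size, node);  /* back the buffer with node-local RAM */
    if (!buf) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    for (size_t i = 0; i < size; i += 4096)     /* touch pages so they are actually placed */
        buf[i] = 1;

    printf("thread and %zu MiB buffer pinned to node %d of %d\n",
           size >> 20, node, numa_max_node() + 1);
    numa_free(buf, size);
    return 0;
}
[/CODE]
Whether macOS would ever expose this kind of control is exactly the open question in this thread.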
As usual, where are the links?
The current state of information on Zen 2, EPYC 2, and the 3000-series CPUs is that there will be no NUMA. Problem solved?
A YouTuber named AdoredTV said in his leak about EPYC 2 that the CPUs will be based on chiplets, and the same source that says there are chiplets says there will be no more NUMA.
As usual, where are the links?
If the CPU has "chiplets" and Infinity Fabric - how can it not be NUMA?
I really beg to differ.
NUMA latency has nothing to do with gaming - it refers to latency in the memory system. (NUMA == Non Uniform Memory Access)
"Non Uniform" means that some RAM memory is fast, other RAM memory is slow. (The terms "near" and "far" also show up.) Depending on the difference between "fast" and "slow", it can be a big deal. (Actually, you get "fast" and "slow" with two NUMA nodes - with more than two nodes you can have intermediate cases.)
A simple case is a Xeon dual-socket system. Both sockets have memory controllers, so typically half the DIMM slots are connected to one socket and the other half to the other socket.
If one Xeon CPU needs RAM data that's on DIMMs connected to the other CPU, the data is fetched by the memory controller on the other CPU and transferred (via QPI or UPI) to the requesting CPU. This introduces latency over accessing memory local to the CPU's memory controller.
AMD's multi-die implementations make even single-socket systems NUMA.
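If you want to see the "near"/"far" numbers for yourself on Linux, the kernel exposes a node distance matrix: 10 means node-local memory, and anything larger is memory reached over QPI/UPI or Infinity Fabric. A small sketch using libnuma (link with -lnuma); more nodes and more hops simply show up as more distinct values in the matrix.
[CODE]
/* Print the NUMA distance matrix the kernel reports. */
#include <numa.h>
#include <stdio.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }
    int nodes = numa_max_node() + 1;
    printf("node ");
    for (int j = 0; j < nodes; ++j)
        printf("%4d", j);
    printf("\n");
    for (int i = 0; i < nodes; ++i) {
        printf("%4d ", i);
        for (int j = 0; j < nodes; ++j)
            printf("%4d", numa_distance(i, j));  /* relative memory access cost */
        printf("\n");
    }
    return 0;
}
[/CODE]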
[attachment 810831 - graphic]
Since the title of this thread is "Why Apple don't use AMD Ryzen/Threadripper CPU?" - perhaps this graphic explains the reason.
Is what Aiden posted wrong? NUMA vs. UMA is an issue.
And do you have either a Ryzen or a Threadripper in real life? I guess not. The only thing you have is a theory, since you don't know anything about architecture, CPU issues, and more.
The i9 has ECC support, IIRC.
Ryzen is just fine when you don't overclock as well. And it also does ECC, which the Core i9, etc. do not.
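Side note on the ECC claims: whether a given board/CPU combination actually has ECC active is checkable on Linux through the kernel's EDAC interface rather than spec sheets. A hedged sketch - if no mc* entries exist, no memory controller is reporting ECC at all; otherwise ce_count/ue_count are the corrected/uncorrected error counters.
[CODE]
/* Report per-memory-controller ECC error counts via the EDAC sysfs tree. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

static long read_count(const char *mc, const char *file) {
    char path[512];
    long value = -1;
    snprintf(path, sizeof path, "/sys/devices/system/edac/mc/%s/%s", mc, file);
    FILE *f = fopen(path, "r");
    if (f) {
        if (fscanf(f, "%ld", &value) != 1)
            value = -1;
        fclose(f);
    }
    return value;
}

int main(void) {
    DIR *dir = opendir("/sys/devices/system/edac/mc");
    if (!dir) {
        printf("no EDAC memory controllers found (ECC likely inactive)\n");
        return 0;
    }
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strncmp(entry->d_name, "mc", 2) != 0 || entry->d_name[2] == '\0')
            continue;                          /* only mc0, mc1, ... */
        printf("%s: corrected=%ld uncorrected=%ld\n", entry->d_name,
               read_count(entry->d_name, "ce_count"),
               read_count(entry->d_name, "ue_count"));
    }
    closedir(dir);
    return 0;
}
[/CODE]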
Why would they have to buy AMD when AMD has a semi-custom business and can design anything Apple would want?
I'm honestly surprised Apple hasn't bought AMD. Apple loves designing its own chips, and AMD has recently done a better job of this compared to Intel. There is virtually no need to do an x86-to-ARM migration if they went with AMD and made their own destiny.