Technically, you can make all access "far" for every one of the chiplets; then it is uniformly slower. :)
Thank you for the excellent clarification.

It could probably be claimed that no memory is 100% uniform - if you have two DIMMs in a bank the second DIMM would be slower due to light speed delays because the motherboard traces are longer.

So the real issue is the difference in latency/bandwidth between "near" and "far" memory. If the difference is small, then NUMA causes no real problems.

It would be interesting to know what the near/far differences are on these various designs, and how they compare to the 64ns/105ns difference on the Threadripper.
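For anyone who wants to put numbers on "near" vs "far" themselves, here is a rough pointer-chasing sketch, assuming Linux with libnuma (link with -lnuma) and a two-node part where nodes 0 and 1 stand in for local and remote memory; the buffer size and stride are arbitrary choices for illustration, not anything AMD-specific.

[CODE]
/* Rough "near vs far" latency probe: pin a thread to one node, then chase
 * dependent pointers through a buffer allocated on each node in turn.
 * Assumes Linux + libnuma (link with -lnuma); node numbers 0 and 1 are
 * placeholders for local and remote memory on a two-node part.            */
#include <numa.h>
#include <stdio.h>
#include <time.h>

#define N ((size_t)(64 * 1024 * 1024) / sizeof(size_t))  /* 64 MiB: bigger than L3 */

static volatile size_t sink;   /* keeps the compiler from deleting the chase */

static double chase_ns(size_t *buf)
{
    /* Build one big cycle through the buffer with a ~32 KiB stride so most
     * loads miss the caches. Not prefetcher-proof, but good enough to show
     * a local/remote difference.                                           */
    for (size_t i = 0; i < N; i++)
        buf[i] = (i + 4099) % N;

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)
        idx = buf[idx];                      /* dependent loads: latency-bound */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    sink = idx;

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)N;                   /* average ns per load */
}

int main(void)
{
    if (numa_available() < 0) {
        puts("no NUMA support reported by the kernel");
        return 1;
    }
    numa_run_on_node(0);                     /* keep the thread on node 0 */

    for (int node = 0; node < 2; node++) {
        size_t *buf = numa_alloc_onnode(N * sizeof(size_t), node);
        if (!buf)
            continue;
        printf("node 0 -> memory on node %d: ~%.1f ns per load\n",
               node, chase_ns(buf));
        numa_free(buf, N * sizeof(size_t));
    }
    return 0;
}
[/CODE]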
 
With Zen 2 all chiplets have to take one hop to access memory.

Zen 1 Threadripper and Epyc are NUMA because sometimes you have to take one hop, and sometimes none.
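One way to see that difference from software is simply to ask the OS what the firmware reported. A minimal sketch, again assuming Linux + libnuma: a Zen 1 Threadripper typically shows two nodes with unequal distances, while a part whose DIMMs all hang off one controller shows a single node, i.e. uniform as far as the OS is concerned.

[CODE]
/* Sketch: dump the NUMA topology the firmware reports (Linux + libnuma,
 * link with -lnuma).                                                      */
#include <numa.h>
#include <stdio.h>

int main(void)
{
    if (numa_available() < 0) {
        puts("kernel reports no NUMA support");
        return 1;
    }

    int nodes = numa_num_configured_nodes();
    printf("configured NUMA nodes: %d\n", nodes);

    /* numa_distance() returns the ACPI SLIT value: 10 for "local",
     * larger numbers for "farther" memory.                          */
    for (int from = 0; from < nodes; from++) {
        for (int to = 0; to < nodes; to++)
            printf("%4d", numa_distance(from, to));
        putchar('\n');
    }
    return 0;
}
[/CODE]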
 
  • Like
Reactions: throAU
I think everybody is forgetting the most important thing in Zen 2 EPYC CPUs: the I/O Die and its role.

It may be more than just Infinity Fabric links. It may actually be the "master" controller for the "slave" chiplet CPUs.
 
I think everybody is forgetting the most important thing in Zen 2 EPYC CPUs: the I/O Die and its role.

It may be more than just Infinity Fabric links. It may actually be the "master" controller for the "slave" chiplet CPUs.
An on die northbridge...
 
I think everybody is forgetting the most important thing in Zen 2 EPYC CPUs: the I/O Die and its role.

It may be more than just Infinity Fabric links. It may actually be the "master" controller for the "slave" chiplet CPUs.


There is little indication that the I/O die executes anything at the application level. There is a good chance there is a secure-enclave ARM processor there with some boot duties, so it is necessary for the chiplet cores to "start", but post-boot there probably isn't much running on it. It is a "master" in the very narrow sense that if you completely starve a CPU core of all I/O data it can't do much of anything. Those chiplets can't get to any persistent storage (no firmware, no disk, no storage-network data, etc.) without the I/O die... but as a "master" in the computational dimension it is more than a bit lacking.


There may be some coordinating sleep/wake power-management control (depending on how fine-grained the control handed to the OS is). There probably should be some coordinated control over which chiplets are put to sleep in corner-case modes where the computational workload dries up. But back in the full computational dimension... "master"/"slave" is the wrong notion.

There is a pretty high chance they have stuffed the PCH into the I/O chip. There is some internal computation for USB/Ethernet etc., but not at the app level.

If AMD wanted to be really clever there could be some memory compression/decompression there in a future version, but that isn't master/slave either.

The cores in the chiplets need the functionality in the I/O chip, but that isn't a master/slave relationship.
An on die northbridge...

Not exactly "on die" if it is on a different die in the same package. More like "on chip" (chip == package, the physical container the die(s) are in).

And probably a hefty chunk of the southbridge as well, as in the previous Zen implementations.

Probably closer to merging the old classic northbridge and southbridge chips onto a single die and then mounting that die inside the CPU package. Which is why it is on the slippery slope back to "front side bus" problems: you can get into a situation where the ratio of cores to memory channel/controller paths is far too high, with too many memory requests stampeding through too few "doors" to the RAM chips.

The L1/L2/L3 caches are now substantially larger, the branch predictors are much better, and there is multithreading (SMT / "hyperthreads"), so the logjams can be hidden a bit better, but pushing core counts much higher will eventually bring the problems back (unless the workload is constrained to being chopped up into fine-grained pieces).
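To make the core-to-channel ratio point concrete, a back-of-envelope sketch; the DDR4-3200 speed, 8-channel socket, and core counts below are illustrative assumptions, not the specs of any particular product.

[CODE]
/* Back-of-envelope only: shows how raw DRAM bandwidth per core shrinks as
 * core count outruns the number of memory channels.                       */
#include <stdio.h>

int main(void)
{
    const double channel_gbps = 3200e6 * 8 / 1e9;  /* DDR4-3200 x 8 bytes ~ 25.6 GB/s/channel */
    const int channels = 8;                        /* assumed: an 8-channel server socket     */
    const double total = channel_gbps * channels;  /* ~204.8 GB/s peak                        */

    for (int cores = 8; cores <= 64; cores *= 2)
        printf("%2d cores: ~%5.1f GB/s of peak DRAM bandwidth per core\n",
               cores, total / cores);
    return 0;
}
[/CODE]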
 
Without links to support your claims, not buying it. (And links to YouTube usually aren't worth the bandwidth to watch.)

Intel CPUs at 100°C - really? Really? Put the heatsink back on and measure again.

My Intel MacBook goes to 100°C every time I launch a CPU-intensive task. If I disable Turbo Boost, it stays at about 85°C. I have a 15" MacBook Pro Retina, late 2013.
 
Without links to support your claims, not buying it. (And links to YouTube usually aren't worth the bandwidth to watch.)

Intel CPUs at 100°C - really? Really? Put the heatsink back on and measure again.
i7/i9 15" and i7 Minis constantly hit 100 under load.
 
Perhaps instead of "Designed in California" those Apple products should say "Designed to Throttle" ;)
The Mini doesn't throttle though, it stays in turbo even at full load.
Can't say the same for the i9 that I returned.
 
Maybe yes
Perhaps instead of "Designed in California" those Apple products should say "Designed to Throttle" ;)
You're right, but I have never had problems with these temperatures because the area near the trackpad does not get hot. You can feel the warm processor only by placing your fingers between the screen and the keyboard. Moreover, after 6 years, my MacBook works perfectly well and is as fast as the first day I bought it.
 
Perhaps instead of "Designed in California" those Apple products should say "Designed to Throttle" ;)
It happens the same way for EVERY single vendor in the industry, and the reason for the throttling is not "Designed to Throttle in California" but Intel-trademarked throttling.

Running CPUs way out of thermal spec is the sole reason for throttling, and Apple is not the only vendor affected by it.
 
Running CPUs way out of thermal spec is the sole reason for throttling, and Apple is not the only vendor affected by it.
No.

Putting insufficient cooling for the TDP of the CPU is the reason for throttling, and Apple is by far the worst offender at this. (Look up "form over function".)
 
No.

Putting insufficient cooling for the TDP of the CPU is the reason for throttling, and Apple is by far the worst offender at this. (Look up "form over function".)
It's Intel who specs 45W TDP for its mobile CPUs and yet lets them run at clocks and power draws way higher than those specs ;).

Apple designed the chassis way before Intel released these CPUs, and it was designed to dissipate a 45W TDP. It is not Apple's fault that Intel, to be competitive, had to push their CPUs way out of spec. The i9-9900K is, in practice, a 125W-TDP CPU. I can even bring up the case of the "35W" TDP Core i7-8700T, which is actually an 82W-TDP CPU and nowhere near its "specs". The 8700T is only able to maintain its base 2.4 GHz clock speed within a 35W TDP thermal envelope.

Those CPUs will always throttle if you put them in a chassis designed for their specced thermal envelope. But yeah, it's Apple's fault. Not Intel's.
 
It's Intel who specs 45W TDP for its mobile CPUs and yet lets them run at clocks and power draws way higher than those specs ;).

Apple designed the chassis way before Intel released these CPUs, and it was designed to dissipate a 45W TDP. It is not Apple's fault that Intel, to be competitive, had to push their CPUs way out of spec. The i9-9900K is, in practice, a 125W-TDP CPU. I can even bring up the case of the "35W" TDP Core i7-8700T, which is actually an 82W-TDP CPU and nowhere near its "specs". The 8700T is only able to maintain its base 2.4 GHz clock speed within a 35W TDP thermal envelope.

Those CPUs will always throttle if you put them in a chassis designed for their specced thermal envelope. But yeah, it's Apple's fault. Not Intel's.
Do you have any links for claiming that Intel specs 35 watts for 82 watt CPUs?

The 8700T is only able to maintain its base 2.4 GHz clock speed within a 35W TDP thermal envelope.
Isn't this *exactly* the definition of TDP ????
 
Do you have any links for claiming that Intel specs 35 watts for 82 watt CPUs?
https://www.computerbase.de/2018-06/intel-core-i7-8700t-i5-8500t-cpu-test-coffee-lake/3/

Full review of Intel's low-power chips. In a simple Cinebench R15 multithreaded benchmark the i7-8700T platform uses 117W of power. At idle it uses 33W, regardless of the power limit. Do the math yourself on the power delta ;). If you cut out the turbo states, the 8700T will stay within 35W.

All of Intel's CPUs break their power specs unless you force them not to. It is not Apple's, or any other company's, fault that products using Intel CPUs break spec and bloody throttle like hell.
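If you want to check the delta on your own machine rather than trust a review, here is a minimal sketch that samples the Intel RAPL package-energy counter on Linux; it assumes the intel_rapl powercap driver is present, usually needs root, and ignores counter wrap-around for brevity.

[CODE]
/* Sketch: measure actual package power draw under load via Linux RAPL,
 * independent of the sticker TDP. Assumes /sys/class/powercap/intel-rapl:0
 * exists (intel_rapl driver loaded); typically requires root.              */
#include <stdio.h>
#include <unistd.h>

static long long read_uj(void)
{
    FILE *f = fopen("/sys/class/powercap/intel-rapl:0/energy_uj", "r");
    if (!f)
        return -1;
    long long uj = -1;
    if (fscanf(f, "%lld", &uj) != 1)
        uj = -1;
    fclose(f);
    return uj;
}

int main(void)
{
    long long e0 = read_uj();
    if (e0 < 0) {
        puts("RAPL counter not available");
        return 1;
    }
    sleep(5);                     /* run your Cinebench-style load meanwhile */
    long long e1 = read_uj();
    /* Note: the counter wraps at max_energy_range_uj; ignored in this sketch. */
    printf("package power over 5 s: ~%.1f W\n", (e1 - e0) / 1e6 / 5.0);
    return 0;
}
[/CODE]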
 
https://www.computerbase.de/2018-06/intel-core-i7-8700t-i5-8500t-cpu-test-coffee-lake/3/

Full review of Intel's low-power chips. In a simple Cinebench R15 multithreaded benchmark the i7-8700T platform uses 117W of power. At idle it uses 33W, regardless of the power limit. Do the math yourself on the power delta ;). If you cut out the turbo states, the 8700T will stay within 35W.

All of Intel's CPUs break their power specs unless you force them not to. It is not Apple's, or any other company's, fault that products using Intel CPUs break spec and bloody throttle like hell.
Very interesting. Thanks.
 
As usual, where are the links? :rolleyes:

If the CPU has "chiplets" and Infinity Fabric - how can it not be NUMA?

Google what NUMA means.

If every access to memory goes via the IO die (which, rumour has it, is where the memory controller may now reside, and possibly also another level of cache), then guess what? UNIFORM MEMORY ARCHITECTURE.


edit:
Showing a Xeon block diagram to argue that Ryzen may not be uniform is kinda misguided. Ryzen uses Infinity Fabric (i.e., a switched network) between the cores and elsewhere on the die; Xeon does not...
 
Yes, there are real differences in the instruction sets. In addition, there are differences in the microarchitecture that affect optimizations. Code optimized for real (Intel) x64 chips might be less than optimal on Fake x64 (AMD) chips.

I know I am replying to an old thread, but I found this such a hilarious post! I mean, OK, maybe it's a 15-year-old who doesn't remember that AMD designed the x86-64 architecture (originally called AMD64), and that Intel held off as long as they could (they were going with their competing IA-64) but in the end had to adopt it. So, in something of an x86 "first", it is Intel that is producing the "compatible" CPUs :)
But no, there are no "real differences" in the instruction sets either is using. "Compatibility issues" between x86 CPUs have not really been a thing since the '90s; you can just go with whoever has the better CPU for your budget (unless you want a Mac, in which case, tough luck).
Sure, I'd love a Ryzen Mac, but I'd be perfectly fine with a Mac Pro with a modern Intel CPU that is not a non-upgradeable weird cylinder thing. Still hanging on to my ancient 6-core cheese grater until then.
 
I know I am replying to an old thread, but I found this such a hilarious post! I mean, OK, maybe it's a 15-year-old who doesn't remember that AMD designed the x86-64 architecture (originally called AMD64), and that Intel held off as long as they could (they were going with their competing IA-64) but in the end had to adopt it. So, in something of an x86 "first", it is Intel that is producing the "compatible" CPUs :)
But no, there are no "real differences" in the instruction sets either is using. "Compatibility issues" between x86 CPUs have not really been a thing since the '90s; you can just go with whoever has the better CPU for your budget (unless you want a Mac, in which case, tough luck).
Sure, I'd love a Ryzen Mac, but I'd be perfectly fine with a Mac Pro with a modern Intel CPU that is not a non-upgradeable weird cylinder thing. Still hanging on to my ancient 6-core cheese grater until then.
Hilarious, until you notice that AMD doesn't support the SGX instructions, which are common on Intel chips, that Intel and AMD have somewhat different virtualization instruction extensions, ....

Yes, there are real differences - especially if you want to use the latest extensions to get the best performance.
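This is also why portable code probes for features at runtime rather than assuming a vendor. A small sketch using GCC/Clang's <cpuid.h>: CPUID leaf 7 (sub-leaf 0), EBX bit 2 reports SGX support, which an AMD part will simply report as absent.

[CODE]
/* Sketch: runtime feature detection for vendor-specific ISA extensions.
 * Uses the GCC/Clang <cpuid.h> helpers.                                  */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    __get_cpuid(0, &eax, &ebx, &ecx, &edx);   /* leaf 0: vendor string        */
    memcpy(vendor + 0, &ebx, 4);              /* "GenuineIntel"/"AuthenticAMD" */
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    unsigned int sgx = 0;
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        sgx = (ebx >> 2) & 1;                 /* leaf 7, EBX bit 2 = SGX       */

    printf("vendor: %s, SGX: %s\n", vendor, sgx ? "yes" : "no");
    return 0;
}
[/CODE]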
 
Hilarious, until you notice that AMD doesn't support the SGX instructions, which are common on Intel chips, that Intel and AMD have somewhat different virtualization instruction extensions, ....

Yes, there are real differences - especially if you want to use the latest extensions to get the best performance.

You crack me up. After pointing out that Intel is the one with the "compatible" architecture this time around, you throw out irrelevant facts like the different virtualization extensions (both equivalent and supported by every vendor) or the different trusted-execution-environment features, claiming, erroneously, that they have to do with "performance". In fact, the aforementioned Intel Software Guard Extensions turned out to be a liability, as they are susceptible to some variants of the Spectre vulnerability. Unfortunately, the Dunning-Kruger effect is strong with you, so I can't explain things further...

PS. I do like your avatar though ;) Just avoid discussing CPUs with computer scientists...
 
Well, sadly, Intel needs to develop a whole new CPU lineup, since Jim Keller has mentioned making a new architecture like Ryzen.

Though it will take more than 3 years, Apple has to change their CPUs eventually.
 
If Apple went AMD, they would need to go Epyc, not Threadripper.

32-core/64-thread systems can be had for $6,500.
 
"Code optimized for real (Intel) x64 chips might be less than optimal on Fake x64 (AMD) chips." How is the original fake?
 