
Elwe

Given all the great news about second-generation EPYC, or more generally the Zen 2 architecture (maybe this is really about Threadripper 3) . . . and assuming that Apple could have been one of those companies, like Google and HP, to which AMD would have been willing to show its internal roadmaps . . . why would this not have been an ideal time to use AMD's platform for a modern workstation?

They would have immediately gotten so much. So much. The only apparent negatives would be losing AVX-512, and some single-thread performance, given how Apple is apparently cooling and clocking these particular Intel CPUs.

I cannot imagine Apple would introduce a new Mac Pro and think they can move it over to ARM any time soon . . . and Apple does not have an x86 license . . . so they are locked into an Intel platform that is guaranteed to change substantially some time next year. Even then, you will not get the core counts and aggregate performance AMD already has now. It is not clear whether Intel has even decided to embrace PCIe 4.0 next year. And who knows what they will do with pricing.

I hope it is not something as silly as not wanting to feed the hackintosh community by avoiding support for modern AMD CPUs . . . Is it Thunderbolt 3 licensing somehow?
 
I'm not sure what you want to hear, but no one here can answer your question. We don't know.

For now we have to be content with whatever Intel and Apple choose to offer.
 
AMD would be the better bet. As others have pointed out, right now Intel has nothing, and nothing is all that is on their roadmap. Whereas with Zen 3, AMD is looking to go to 4 threads per core.

Thunderbolt is available on X570 boards right now - which is largely irrelevant anyway, since T-bolt is still a solution in search of a problem.

The 7,1 is just like the 6,1, only more expensive and the buyer gets even less for their dollar. It is designed for a very small set of customers - large video production houses. It is overpriced and underspecced for anything else.

I agree that AMD is a better platform. It is why after 19 years, I am leaving Mac and OSX behind. I moved from OS/2 to OSX because it was a better operating system than Windows. That hasn't been the case for a while now.

My next system will either be a Ryzen 9 or a 3rd-gen Threadripper. (I currently have a Z210 that I am using as a test bed; after I move, it will become a hackintosh for my last piece of OSX software, Adobe Acrobat.) Moving everything else will cost me less than $200.

At the end of the day, I actually DO stuff on my computers, and emojis don't actually help me get work done faster.

My biggest holdup right now is designing a system that doesn't have that tacky RGB nonsense.
 
Given all the great news about second-generation EPYC, or more generally the Zen 2 architecture (maybe this is really about Threadripper 3) . . . and assuming that Apple could have been one of those companies, like Google and HP, to which AMD would have been willing to show its internal roadmaps . . . why would this not have been an ideal time to use AMD's platform for a modern workstation?

Google (Facebook, Amazon, MS Azure, Baidu) don't just get some roadmaps. Google has had this EPYC generation for months, and engineering samples for many months longer than that. They get the early production, too.

Expanding on the first-generation EPYC instances in those clouds was far more strategic for AMD than anything Apple could or would provide.

It would have been highly non-ideal for the Mac Pro, because it was already so very late. Trying to hook it to Threadripper (which is the second-to-last product category to roll out in this new Zen generation) would have made the Mac Pro even later. Threadripper probably won't be in volume until the end of the year. All the high-clock chiplets AMD has are being shoveled into Ryzen 3000 and EPYC. Threadripper won't ship until that demand substantially drops, and EPYC is likely to sell extremely well for months.



They would have immediately gotten so much. So much.

You can only get much from something that is shipping, which has been the root issue with the Mac Pro for years. Solve that ... then Apple can move on to other "problems".

The only apparent negatives would be losing AVX-512, and some single-thread performance, given how Apple is apparently cooling and clocking these particular Intel CPUs.

Apple also knows that they aren't giving up any 7nm wafer starts to AMD either. Right now Apple and AMD are competing with each other for TSMC wafers. Apple knows very well that AMD cannot crank out more x86 chiplets than it already has reservations for at this point - in part because Apple is the one sucking up capacity with the new A-series chips for this fall's products.

Not a permanent problem, but a contributing factor to why Threadripper is going to be quite late (and it wouldn't necessarily solve the launch-in-2019 Mac Pro problem).


I cannot imagine Apple would introduce a new Mac Pro and think they can move it over to ARM any time soon . . . and Apple does not have an x86 license . . . so they are locked into an Intel platform that is guaranteed to change substantially some time next year.

First, Apple could do it - if they were willing to use an ARM chip that isn't one of their own architecture-license designs, and to take some lumps just to be OCD about a single architecture. Take ARM's Neoverse N1 as a baseline: https://www.anandtech.com/show/13959/arm-announces-neoverse-n1-platform. Chop it down to 8-20 cores, goose the clock speed, and bleed more wattage. That could be close enough if they were deeply wedded to a "Big Bang" migration of all systems. The much larger missing part is the I/O chips (Thunderbolt, USB 3.2/4, etc.).


Second, they are not particularly locked into the "Intel platform" as much as the x86 one, and that is more "cheaper to keep her" than lock-in. A switch to AMD would again be a bigger I/O-chip hurdle than a CPU one.

Third, Intel will likely get incrementally better next year, but it isn't likely to be a huge leap on generic code. They are going to be behind AMD on many benchmarks for most of 2020 and into 2021.


Even then, you will not get the core counts and aggregate performance AMD already has now. It is not clear whether Intel has even decided to embrace PCIe 4.0 next year. And who knows what they will do with pricing.

Intel really doesn't have any choice about pricing. That dam is cracked and just waiting to break wide open next year.



I hope it is not something as silly as not wanting to feed the hackintosh community by avoiding support for modern AMD CPUs . . . Is it Thunderbolt 3 licensing somehow?

It is not licensing. It is far more that AMD hasn't done its "homework" over the last 2 years, and it is going to show over the next 1-2 years.
 
Google (Facebook, Amazon, MS Azure, Baidu) don't just get some roadmaps. Google has had this EPYC generation for months, and engineering samples for many months longer than that. They get the early production, too.

It is not licensing. It is far more that AMD hasn't done its "homework" over the last 2 years, and it is going to show over the next 1-2 years.


Yeah, by then Zen 3 & 4 will have 4 threads per core, so obviously they are doomed.
 
A few reasons, at least.

First, Apple gets bulk discounts from Intel for purchasing CPUs for desktops and laptops. Those discounts would be diminished by having both Intel and AMD options.

Second, AMD processors have slower per-core performance, which matters when you aren't doing things like rendering. Most people aren't rendering all the time, and a lot of rendering is now done on GPUs or render farms anyway.

Third ... obviously Apple would have to modify the T2 chip to provide Thunderbolt support for AMD processors, which is pointless and would annoy Intel.

There is a fourth reason. Intel always holds back when they have weak competition. Whenever AMD steps up its game, Intel releases everything they were holding back and really starts kicking ass and cutting prices. They did that to AMD in the 486, Pentium and Core eras. They are clever bastards, we have to admit.
 
A few reasons, at least.

First, Apple gets bulk discounts from Intel for purchasing CPUs for desktops and laptops. Those discounts would be diminished by having both Intel and AMD options.

Second, AMD processors have slower per-core performance, which matters when you aren't doing things like rendering. Most people aren't rendering all the time, and a lot of rendering is now done on GPUs or render farms anyway.

Third ... obviously Apple would have to modify the T2 chip to provide Thunderbolt support for AMD processors, which is pointless and would annoy Intel.

There is a fourth reason. Intel always holds back when they have weak competition. Whenever AMD steps up its game, Intel releases everything they were holding back and really starts kicking ass and cutting prices. They did that to AMD in the 486, Pentium and Core eras. They are clever bastards, we have to admit.

I'll make this short. You may want to read up on Rome (Zen 2), with up to 64 cores / 128 threads. Second, TB works fine with AMD; it is just PCIe and DisplayPort rolled up with power delivery in a single cable. The current problem at Intel is their fabrication snafu (10nm) and their failure to foresee the change from monolithic dies to the more flexible chiplet layout.
 
HAHAHAHAHAHAHAHAHAHAHAHA

Every OEM is about to get significant discounts, because Intel certainly can't charge more money for so little performance.

Zen 2 instructions per clock increased by about 15% just from 2nd-gen to 3rd-gen Ryzen. Having an Intel 5GHz chip doesn't matter if an AMD 4.6GHz chip outperforms it. Single-threaded performance isn't very important anymore. Hell, by next year, 8 cores/16 threads will be a common requirement.
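To put rough numbers on that (a back-of-the-envelope sketch, not a benchmark - performance is modeled simply as IPC x clock, and the clocks are the ones mentioned above):

Code:
# Toy comparison: effective performance ~ IPC x clock. The claim that a
# 4.6GHz AMD chip beats a 5GHz Intel chip holds whenever AMD's IPC
# advantage exceeds the clock deficit.
intel_clock = 5.0            # GHz
amd_clock = 4.6              # GHz
required_edge = intel_clock / amd_clock - 1
print(f"AMD needs a >{required_edge:.1%} IPC edge")   # ~8.7%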

Intel isn't "holding back" - look at their roadmap. They've got nothing for the next couple of years. Intel is in the same boat AMD was in about a decade ago. They bet wrong.

I agree about T2 & Tbolt - but only because Tbolt is a solution in search of a problem.

I'd remind you that rendering on GPUs is a WinTel thing. We can't do that on OSX, due to lack of Nvidia support, both now and for the foreseeable future, thanks to Apple's aggressive unwillingness to sign off on Nvidia drivers.

Oh, and since I do 3d art - I do render all day. Give me those cores and give me those threads, because even the free software is driven by core/thread count.

Right now, the biggest problem with going with an AMD solution is finding one that doesn't have that tacky RGB lighting nonsense.
 
HAHAHAHAHAHAHAHAHAHAHAHA

Every OEM is about to get significant discounts, because Intel certainly can't charge more money for so little performance.

Zen 2 instructions per clock increased by about 15% just from 2nd-gen to 3rd-gen Ryzen. Having an Intel 5GHz chip doesn't matter if an AMD 4.6GHz chip outperforms it. Single-threaded performance isn't very important anymore. Hell, by next year, 8 cores/16 threads will be a common requirement.

Intel isn't "holding back" - look at their roadmap. They've got nothing for the next couple of years. Intel is in the same boat AMD was in about a decade ago. They bet wrong.

I agree about T2 & Tbolt - but only because Tbolt is a solution in search of a problem.

I'd remind you that rendering on GPUs is a WinTel thing. We can't do that on OSX, due to lack of Nvidia support, both now and for the foreseeable future, thanks to Apple's aggressive unwillingness to sign off on Nvidia drivers.

Oh, and since I do 3d art - I do render all day. Give me those cores and give me those threads, because even the free software is driven by core/thread count.

Right now, the biggest problem with going with an AMD solution is finding one that doesn't have that tacky RGB lighting nonsense.

Speak like a decent person, please. I'm 64 years old and have had computers since Z80-based systems running CP/M. I read the tech news every day too, so please don't think you're the only person who does this. Millions of people read tech sites too.

These conversations are verbatim the same conversations users had in the 90s, when AMD and Intel had the first of several wars. Every war: same opinions, same insults, same playground behavior. Intel always wins. I'm not defending Intel, but they play poker, and they don't show what they really have on so-called "roadmaps". They know how to use published "roadmaps" to confuse their competition. They know when to price high and when to price low, when to offer bulk discounts to preferred customers, and how much the discount should be. They do this every single time, and it is us gray-haired oldies who remember it. There's a lot more politics and gameplay to this than specs published on a roadmap.

If you're still under the impression that this is incorrect, you can always write a letter to Apple telling them that they, one of the top three biggest corporations in the world with thousands of knowledgeable employees, should take your advice because you know better than them after seeing a roadmap on a tech news site.
 
Ahhh ..... the appeal-to-authority fallacy. As a fellow old person, I am not impressed. This isn't the 1990s - I was also there, and Intel wasn't winning in the '90s. They were getting beaten down on a regular basis.

Andy Grove once pointed out that with a fab process, you won't know whether you've screwed up until 5 years in - that is what happened to AMD with Bulldozer, and it is what is happening with Intel and its 14nm++++ (to infinity and beyond) process.

On a separate note, you might want to learn the difference between the roadmaps that marketing pushes out at conventions and the ones they push to investors. Screw around with the latter, and you are looking at not only shareholder lawsuits, but also inquiries by governmental agencies.

But most of the folks here don't read tech sites - if they did, you wouldn't see all of the threads on how to keep our limping decade old Mac Pros relevant - they would leave.

Most of us are still on OSX due to inertia and emotional ties, we certainly aren't staying around for performance - that went by the wayside once Mr. Cook took over running Apple.

There are already people writing to Mr. Cook, AKA the second coming of John Sculley. Apple doesn't do computing anymore - they are about overpriced phones and emojis. The only reason they still make computers is that we can't use Xcode on an iPad (yet). Once that happens, Apple will exit the computer field.

It has been a fun 20 years, but I am preparing for my transition away from Apple. At the end of the day, the computer is a tool, and I prefer having new, better performing tools rather than constantly accepting limitations that exist for the ease of Apple, as opposed to me.
 
Does anyone think that four threads per core will be useful? Hyper-threading (or SMT or whatever) is usually pretty weak at improving performance. (Four threads per core won't be anywhere near four times faster.)
 
640K should be enough for anybody......

I would kill for 4 threads per core. I know this is hard for some folks to understand, but some of us actually need horsepower. It is why we are running Mac Pros instead of a Mac mini or iMac. Remember, it isn't just what you need today, it is what you will need tomorrow (especially since updates to Macs are so few and far between).

I am a 3d art hobbyist - I have one $500 program (ZBrush) and everything else is bottom of the stack (Daz Studio, Poser, Vue, Photoshop Elements). All of those programs are inexpensive (or free), and all of them will use every CPU cycle and every scrap of RAM I can throw at them. A $900 AMD Ryzen 9 computer will outperform my current box, and a Threadripper system will outperform the incoming Mac Pro 7,1.

4 threads per core will be a back breaker for Intel in the data center. 4 threads per core will be faster than 2 threads per core, and that is what matters.

I have a bit more confidence in AMD's engineers than you seem to have. They aren't working on Zen 3. It's done; they are designing Zen 4.

The future is multi-core performance. Gaming is the last major sector to push for it. The upcoming consoles will be 8 core/16 thread boxen. That is important because all those games will also be showing up on PCs, and that means 8 core/16 thread systems are going mainstream.

Budget systems are 6 core/12 threads and people are already noticing that some games are CPU limited on those systems.
 
Does anyone think that four threads per core will be useful? Hyper-threading (or SMT or whatever) is usually pretty weak at improving performance. (Four threads per core won't be anywhere near four times faster.)

In HPC and the data center it will. POWER9 powers the fastest supercomputer in the world. Is it going to change the desktop workstation? I dunno, but I also don't think that's AMD's target either, since there's no real money in it.
 
Apple gets a discount on CPUs from Intel by shipping exclusively Intel in Macs. Apple would be foolish to go with AMD on their low-volume, high-margin Mac Pro product when it would raise the BOM cost on all of their high-volume, (relatively) low-margin products like the MBA.

Maybe after Apple starts shipping A-series Macs next year they could go with AMD on the Mac Pro but I think Apple has other plans.
 
Does anyone think that four threads per core will be useful? Hyper-threading (or SMT or whatever) is usually pretty weak at improving performance. (Four threads per core won't be anywhere near four times faster.)
Do we REALLY need printing presses? Ink and quill with a sheet of vellum have worked for decades!
 
640K should be enough for anybody......

I would kill for 4 threads per core. I know this is hard for some folks to understand, but some of us actually need horsepower. It is why we are running Mac Pros instead of a Mac mini or iMac.

If you need horsepower, why are you still on a Mac?
 
If you need horsepower, why are you still on a Mac?

Like a lot of folks, I was waiting on the 7,1. Now that we know what it is, I have started to transition away from OSX.

I have a HP Z210 that I am using to get familiar with Windows 10 and see what applications I will need.

AMD hasn't finished with Zen 2. Once 3rd-gen Threadripper is out, I'll either run it or one of the Ryzen 9 CPUs due out later this year.

Jan 1, I'll be on an AMD system. I'll also be looking to replace the rest of my and my family's Apple gear (iPhone, iPad, and Apple TV). Once you make the break, there isn't really a reason to stay with any Apple products.
 
AidenShaw said:
Does anyone think that four threads per core will be useful? Hyper-threading (or SMT or whatever) is usually pretty weak at improving performance. (Four threads per core won't be anywhere near four times faster.)

Do we REALLY need printing presses? Ink and quill with a sheet of vellum have worked for decades!
Look at the MDS performance data (like https://forums.macrumors.com/threads/mp5-1-mds-mitigation-real-world-impacts-results.2182124/ ). For many apps, 2-way hyperthreading is only a minor boost in performance.

4-way hyperthreading is even less likely to scale performance. The only way 4-way could be useful is if the 4-way HT CPU has twice as many execution units as the 2-way. If you're going to throw that many transistors at the problem (like POWER9 does), you could instead double the number of cores and use 2-way HT - and likely get better performance.

Hyperthreading is typically only useful in extremely deeply threaded server loads. Not on workstation loads. (I'll classify render farms as server loads....)

I have three servers with 72 cores and 144 threads, and I can dynamically disable hyperthreading. I tested an ML job that took about 4 hours of clock time. With hyperthreading off (72 threads): about 4 hours. With it on (144 threads): about 3 hr 57 min - roughly a 1% gain.

I'll be skeptical until benchmarks on 4-way HT CPUs show up. If you want to be exuberant without any data, go ahead.
 
Have fun living the i5 life, I guess.

It isn't just what you need today, but what you will need 5 years from now (especially given Apple's update cycle). More cores and threads mean you can have many poorly coded Adobe apps open and running at the same time, not just one or two.

All of my mission-critical, hobbyist-level software is heavily multi-threaded - none of them are server apps, btw.

As for available data - did you even watch the EPYC 2 rollout? If not, roll on over to YouTube and review it.
 
Like a lot of folks, I was waiting on the 7,1. Now that we know what it is, I have started to transition away from OSX.

I was just wondering because Apple really hasn't been competitive in the workstation market since about 2011.

The 7,1 is certainly a welcome, if limited, development. It could be great if the breathless promises of Metal support materialize, even with only 28 CPU cores.

Or it could just as easily die on the vine like the 6,1 did.

Time will tell.
 
If you need horsepower, why are you still on a Mac?

Such a valid statement!
I was just wondering because Apple really hasn't been competitive in the workstation market since about 2011.

The 7,1 is certainly a welcome, if limited, development. It could be great if the breathless promises of Metal support materialize, even with only 28 CPU cores.

Or it could just as easily die on the vine like the 6,1 did.

Time will tell.


I feel like Apple has foolishly overlooked AMD, which suggests to me that the Mac Pro might be short-lived. They might have a contract/deal with Intel - but if they wanted the 7,1 to really be cutting edge and have a long lifespan, they could have gotten that with AMD. Intel's outlook is weak. On top of that, they still hinder the use of Nvidia GPUs and instead developed the proprietary MPX module, which is going to cost an arm and a leg. Also, it might only work in the 7,1 and not future revisions ...
 
Sigh ... there are more than a few AMD hype spins in this thread at this point. I don't have much time to get to all of them, but I'll whittle away a bit at a time as windows open up.


In HPC and the data center it will. POWER9 powers the fastest supercomputer in the world. Is it going to change the desktop workstation ...


The June 2019 list? https://www.top500.org/lists/2019/06/

The List says that #1, Summit, has 2,414,592 cores. Let's do a "deep dive" on that over at Oak Ridge:

https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit/

2 Power 9 CPUs per node, 4,608 nodes: that's 9,216 CPUs, and at 22 cores per CPU, 202,752 cores. Hmmm - so a huge, hefty chunk of that 2.4M is Volta cores ... which don't have multi-way SMT. There are six Volta V100s per node. That is where the "grunt" computation is primarily coming from.
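For anyone who wants to check that arithmetic, a quick sketch (node and core counts from the Oak Ridge page above; the headline figure is from the List):

Code:
# Summit: POWER9 cores vs. the TOP500 headline core count.
nodes = 4608
cpus_per_node = 2
cores_per_cpu = 22
p9_cores = nodes * cpus_per_node * cores_per_cpu
print(p9_cores)                      # 202,752 POWER9 cores
top500_cores = 2_414_592             # the List's figure for Summit
print(top500_cores - p9_cores)       # ~2.2M of the "cores" are GPU cores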

Power 9 comes in two flavors: 4 SMT per core and 8 SMT per core. The 4-SMT variant is used for "scale out" (like these supercomputers). The 8-SMT variant is used for scale up (big shared-memory NUMA boxes and, pragmatically, the z-Series mainframes).

The Power 9s' primary job in those supercomputers is to run the scalar subsections of the code and to keep the GPGPUs fed and the results stored. 4 SMT there probably has a greater impact on file-system I/O than on the "core" matrix number crunching.


If you look at the Rmax/cores ratio (a quick arithmetic check follows the list):

1. 0.0615 (Power 9 / Tesla V100)
2. 0.0602 (Power 9 / Tesla V100)
5. 0.0524 (Xeon SP Platinum with Optane and liquid cooling)
10. 0.0631 (Power 9 / Tesla V100)
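Those ratios are just Rmax (in TFlop/s) divided by the List's core counts. A quick check with the June 2019 figures as I read them off the list (the system names are my reading, so treat them as assumptions):

Code:
# Rmax / cores for the June 2019 TOP500 entries cited above.
systems = {
    "#1  Summit   (P9 / V100)": (148_600.0, 2_414_592),
    "#2  Sierra   (P9 / V100)": (94_640.0, 1_572_480),
    "#5  Frontera (Xeon SP)":   (23_516.4, 448_448),
    "#10 Lassen   (P9 / V100)": (18_200.0, 288_288),
}
for name, (rmax_tflops, cores) in systems.items():
    print(f"{name}: {rmax_tflops / cores:.4f}")
# -> 0.0615, 0.0602, 0.0524, 0.0631 (matches the list above)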


Power 9 can connect to the Tesla V100 via NVLink, so there is less of a bottleneck moving data in and out of the GPGPU accelerators. The V100s have HBM, which is big bandwidth too.

The Xeon SP-powered #5 doesn't quite have the same ratio, but it isn't 2x or 3x behind. It is just ~20% back (compared to the best one, which is actually #10). Optane is probably letting them keep more data local to the node, which means there isn't copious spare time waiting on data for an SMT thread to fill.


#10 has the better Power 9 score in part because scaling out too far drops efficiency. Getting data closer to where it is computed helps.


I dunno, but I also don't think that's AMD's target either, since there's no real money in it.

AMD is going to have a derivative "high core count" processor with better links to their GPGPUs. For example, a specialized EPYC with four future Infinity Fabric links off to four "next, next gen" GPGPUs with Infinity Fabric links, local HBM used more as an L4 cache, and memory shared with the EPYC.


For Intel, whether the Xe GPU gets them to play in top-20 supercomputer land in 2-4 years is a bigger issue than Xeon SP core count in a single CPU package. The interconnects and latencies matter on very large scale-out systems at least as much as the SMT thread count. But if you can avoid latencies (relatively lots of local data), all the better.

SMT thread count is good for dealing with latencies, not for crunching vector math over memory accesses with highly predictable data-caching sequences (e.g., strolling through matrices). 4 SMT for "graphics" pipelines that are 100% stuck in x86 code ... highly likely isn't going to help.

4 SMT would help AMD better crack the web/cloud server market, where there are relatively long latencies between physically remote clients swapping data with servers. Also the "scale up" big-NUMA online transaction processing (OLTP) database systems (where there is lots more random access and deeper storage tiering with relatively slower layers) - an area where AMD is still behind if trying to max out throughput.
 
Simultaneous multithreading (SMT) was invented way before Intel marketing coughed up Hyperthreading.


...
....
Hyperthreading is typically only useful in extremely deeply threaded server loads. Not on workstation loads. (I'll classify render farms as server loads....)

It isn't really threading that is the issue; it is primarily latency. The basic idea grew out of the observation that processors spend lots of time doing nothing while waiting for the much slower parts of the memory/storage hierarchy to come up with the data needed to do a computation.

You don't need a single program with lots of threads to get a high state of waiting. 1,000 people all doing something different tends to shred caches and lead to higher latencies. A single-user workstation doing primarily a single task (even if it can be parallelized) will tend to have problems getting there. All of these "tech porn" benchmarks where folks completely stop everything else and run the benchmark twice (SMT on / off) as the only task on the machine almost entirely miss the point.
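A toy model of that latency-hiding idea (purely illustrative numbers, nothing measured): each hardware thread computes for some fraction of its time and stalls on memory the rest; extra SMT threads can overlap their stalls, but only until the core's execution resources saturate.

Code:
# Toy SMT model: a thread is "busy" (using execution units) for some
# fraction of its time and stalled on memory otherwise. N SMT threads
# can overlap stalls, so core utilization ~ min(1.0, N * busy_fraction).
def utilization(busy_fraction: float, smt_threads: int) -> float:
    return min(1.0, smt_threads * busy_fraction)

for busy in (0.9, 0.5, 0.2):    # compute-bound ... latency-bound
    row = [utilization(busy, n) for n in (1, 2, 4)]
    print(f"busy={busy}: SMT1={row[0]:.2f} SMT2={row[1]:.2f} SMT4={row[2]:.2f}")
# A compute-bound task (busy=0.9) gains almost nothing past 2 threads;
# a latency-bound one (busy=0.2) is still scaling at 4-way SMT.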


As the memory/storage gaps get smaller, that "wasted" time tends to drop.

I have three servers with 72 cores and 144 threads, and I can dynamically disable hyperthreading. I tested an ML job that took about 4 hours of clock time. With hyperthreading off (72 threads): about 4 hours. With it on (144 threads): about 3 hr 57 min - roughly a 1% gain.

If most of those threads are tossing work to blocked ("can't finish work for 2-30 seconds") GPUs, then they're not going to get much done during the wait time.

With ML inference coming from a high number of clients, under tight, low-latency service-level requirements, you'd probably see a different story.


I'll be skeptical until benchmarks on 4-way HT CPUs show up. If you want to be exuberant without any data, go ahead.

Some stuff will, and some won't. Things like AVX-512 (or whatever length future vectors get to) are relatively expensive to replicate. Basic math, load/store, pointer chasing, etc. are being scaled up, and as long as those units can be kept fed with some local data while waiting on the rest, SMT can work.

For example

"... this means a 4-stack processor could be paired up with as much as 1.84TB/sec of memory bandwidth, ..."
https://www.anandtech.com/show/14733/sk-hynix-announces-36-gbps-hbm2e-memory-for-2020
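That 1.84TB/sec headline is just the per-pin rate times the interface width. A quick sanity check (assuming the standard 1024-bit HBM2 interface per stack):

Code:
# HBM2E bandwidth check: 3.6 Gb/s per pin over a 1024-bit stack interface.
gbps_per_pin = 3.6
pins_per_stack = 1024
stacks = 4
gbytes_per_s_stack = gbps_per_pin * pins_per_stack / 8   # 460.8 GB/s
total_tb_per_s = gbytes_per_s_stack * stacks / 1000      # ~1.84 TB/s
print(f"{gbytes_per_s_stack:.1f} GB/s per stack, {total_tb_per_s:.2f} TB/s total")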
 