Actually, the statement says quite the opposite: AMD is not focusing much on delivering powerful cards for gamers (they are far behind on that, even if they will eventually equalise), but as far as investing in ML and AI implementation with GPGPUs is concerned, they believe that the future is based on multiple-card configurations.
Errr, no. Unless AMD has created some obtuse marketing slang for it, MCM stands for "Multi-Chip Module". So if AMD says they are
"... We are looking at the MCM approach ...", they are talking about putting multiple GPU dies on one module. That is how you get to multiple GPUs. It isn't necessarily multiple cards. In fact, quite the opposite: AMD has a track record of putting multiple GPU packages on one card. Using MCM they could do it even more efficiently.
A GPU isn't the whole card. So if they say "multi-GPU is considerably different", you can't leap to the conclusion that it means multiple cards. It can simply be multiple dies.
For example, Intel put together a package/module with an AMD Vega GPU + HBM RAM and an Intel CPU die in a single package. Intel used EMIB to do it, but that is just a variation on the same general concept.
https://www.anandtech.com/show/13242/hot-chips-2018-intel-on-graphics-live-blog
In the Intel case, one of the dies in the EMIB examples above is actually an interposer (the middle example, with the HBM and Vega) coupled to an x86 die, and there is just PCI-e run over the silicon bridge between the two.
The dies don't necessarily have to be heterogeneous. AMD essentially does an MCM when they package multiple 8-core Zen dies into a Threadripper package (to get 16-32 cores with 4 or 8 cores active on each die).
So AMD could easily be following a similar strategy with the next-gen GPUs. If they take two interposers, each with a Vega-like GPU and attached 'local' HBM memory stacks, and then run an AMD 'Infinity Bridge' connection between the GPUs, they could create a relatively very large MCM with the two glued together on the same card. (If it is a 'compute'-targeted card, they could completely toss the silicon chips and logic associated with driving external physical display connectors.)
One of the problems is that vendors can only make an interposer just so big. (It is around the same limit as how big you can make a die, since those are 'printed'/'fabbed' with similar techniques.) Humongous, super-max-sized dies have problems.
What I was wondering is whether Apple will still focus on macOS development to exploit multiple cards,
They already do have substantive support. The new eGPU management stuff added with macOS 10.14 lets you assign a workload to any one of the multiple GPUs present in the system.
If you mean SLI/Crossfire-style attempts to make a flat-memory-space virtual GPU out of 2 (or more) GPUs for generic graphics... then no. They didn't before and probably won't now. Computationally, I think Metal 2 is behind the curve of OpenCL 2+ (and CUDA) on several of the "shared, flat memory address space" characteristics. They have work to do there.
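For concreteness on the per-GPU assignment side, here is roughly what picking among multiple GPUs looks like from Metal. A minimal sketch; the "prefer the eGPU" policy is just an example of mine, not anything Apple prescribes:

[CODE]
import Metal

// List every GPU macOS can see: integrated, discrete, and any eGPUs.
let allGPUs = MTLCopyAllDevices()
for gpu in allGPUs {
    // `isRemovable` flags eGPUs; `isLowPower` flags integrated GPUs.
    print(gpu.name,
          gpu.isRemovable ? "[eGPU]" : "",
          gpu.isLowPower ? "[integrated]" : "")
}

// Example policy: send the heavy compute work to an eGPU if one is
// plugged in, otherwise fall back to the system default device.
let computeDevice = allGPUs.first(where: { $0.isRemovable })
    ?? MTLCreateSystemDefaultDevice()!
let commandQueue = computeDevice.makeCommandQueue()!
[/CODE]

The point being: the app picks a device per workload; nothing here glues two GPUs into one virtual GPU.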
when only the nMP (hopefully) would allow it, or just leave multiple cards to be used only by vertical applications, therefore ending up like the 6,1, which never used both cards at the same time for the same purpose.
The 6,1 did engage multiple cards if you put in the effort. There were multiple issues there. One, that support didn't come to OpenCL until well after the 6,1 shipped. Two, some of the 6,1 GPUs didn't fully support OpenCL 2+ concepts. Three, for compute that will result just in...
They said that one of the mistakes of the 6,1 was believing that the market would move to dual configurations instead of one powerful card.
That isn't quite it. The issue was that there wasn't "one" market. Some folks had workloads that were well aligned with two GPUs. Others didn't. Apple's problem with the 6,1 was two-fold. One, they tried to span both markets with one standard configuration that had two GPUs. Two, they switched bets on the path to enabling multiple GPUs (walked away from OpenCL and went Metal). The first was a problem because they didn't have a system that could be configured well with just one GPU (e.g., a 300W GPU and a 140W CPU sharing the same thermal 'core' would bleed too much heat into the lower-powered of the two). It is also a problem because the value proposition is damaged in selling something costly to those who don't need it (those who want just one, not necessarily powerful, GPU). The second is a problem because even the "two (or more) GPUs" folks will have adoption problems if the path forward is murky (and there are less turbulent paths to follow).
They also got the projected track of GPU TDP growth wrong. The size/capacity and scope of memory being thrown at the cards had some disconnects with some of the particulars of the 6,1 design. The D700 probably came in over some projected power budget. Apple was also stuck with AMD's problems in iterating forward on just one 'mid'-class GPU.
It's kind of confusing to me.
Parts of Apple seemed to be confused themselves.
I agree. I think it will start with one card and offer multiple GPU configurations. Whether they are also going to be from nVidia, PCI-e or soldered, is another discussion.
It would be as dubious to solder the 2nd card as it would be not to offer it at all. Part of the problem was not just that there were two "twin" GPUs; part of the issue was that not all of the "2 or more GPUs" users needed the second GPU Apple had on offer. The primary issue is that Apple won't ("can't" is too strong a phrase, but pretty close) make all of the cards for everybody. What they need in the 2nd, nominally empty, slot is something that other vendors can fill. That slot doesn't have to support boot screens or other Mac-specific stuff. As a computational card it only really needs to crunch numbers (see the sketch below). But it may not be a GPGPU computational card at all.
That is the other fundamental flaw: the notion that the "dual" cards have to be exact twins. That's nice in some contexts, but it seriously should not be necessary.
There is really no need to solder the primary GPU card either. If they want to sell 'twins', even less so. Solder isn't the primary functional issue. Relatively seamless integration with Thunderbolt probably is (whether or not the DisplayPort stream is exported out of the box on an external-facing edge).
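As an illustration of why a "crunch numbers only" card is viable: a headless Metal device (one with no display attached) runs compute passes exactly like any other GPU. A minimal sketch; the doubling kernel is a stand-in of mine for any real workload, while `isHeadless` is the actual Metal flag for display-less GPUs:

[CODE]
import Metal

// Trivial kernel: double every float in a buffer. Stands in for any
// "crunch numbers" workload (simulation, ML, etc.).
let kernelSource = """
kernel void double_values(device float *data [[buffer(0)]],
                          uint id [[thread_position_in_grid]]) {
    data[id] *= 2.0;
}
"""

// Prefer a GPU with no display attached; any device does the same math,
// a headless card just never drives a screen.
let gpu = MTLCopyAllDevices().first(where: { $0.isHeadless })
    ?? MTLCreateSystemDefaultDevice()!

let library = try! gpu.makeLibrary(source: kernelSource, options: nil)
let pipeline = try! gpu.makeComputePipelineState(
    function: library.makeFunction(name: "double_values")!)

var values: [Float] = [1, 2, 3, 4]
let buffer = gpu.makeBuffer(bytes: &values,
                            length: values.count * MemoryLayout<Float>.stride,
                            options: .storageModeShared)!

let queue = gpu.makeCommandQueue()!
let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(
    MTLSize(width: 1, height: 1, depth: 1),
    threadsPerThreadgroup: MTLSize(width: values.count, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()
// buffer.contents() now holds [2, 4, 6, 8]; no display connector involved.
[/CODE]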
That was the 6,1: one card as a GPGPU and the other used for standard graphics. I wish I could use both of them for graphics, or to focus on improving my simulations when I run them, or even just to speed up my Finder.
Neither card was directly coupled to an external display connector. In 2016 GPU cards could only run a couple of displays. The number of displays a modern mid-to-high-end card can run is past good enough for most. Even if you crank up the resolution of a smaller number of displays, things are pretty good.
Getting back to the topic: these "multiple GPU" solutions that AMD is looking into won't necessarily mean more graphics output. The output of Machine Learning / AI / Blockchain isn't graphics data per se.
That needs apps AND an OS that are tuned together, and Apple didn't deliver appropriate libraries (as far as I know) and didn't push developers enough, given that the 6,1 has been on the market since 2014 and Final Cut still runs slower than on a MBP with Quicksync.
One, I think Quicksync is a corner where Apple didn't avoid a single-vendor solution and did put effort into exploiting it. Since all Macs had Intel iGPUs in them, Quicksync worked across the whole product line-up. Having to deal with AMD, Nvidia, Apple, and Intel fixed-function encoders of varying abilities across a fragmented Mac space probably wouldn't have turned out so well.
Second, for video that is outside the scope of the Quicksync-focused codecs, the speed differences aren't so great. Quicksync is only good for a certain subset of video. (It happens to be a popular consumer subset, but a subset nonetheless.)
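For context on how that single-vendor corner works: apps never call Quicksync directly; they ask VideoToolbox for a hardware encoder, and on Intel iGPU Macs Quicksync is what answers. A minimal sketch (the 1080p/H.264 settings are an arbitrary example of mine):

[CODE]
import VideoToolbox

// Ask VideoToolbox for a hardware-accelerated H.264 encoder. On Intel
// iGPU Macs this is backed by Quicksync; the app never names Intel.
let encoderSpec: [CFString: Any] = [
    kVTVideoEncoderSpecification_EnableHardwareAcceleratedVideoEncoder: true,
    kVTVideoEncoderSpecification_RequireHardwareAcceleratedVideoEncoder: true,
]

var session: VTCompressionSession?
let status = VTCompressionSessionCreate(
    allocator: nil,
    width: 1920, height: 1080,            // arbitrary example resolution
    codecType: kCMVideoCodecType_H264,    // inside Quicksync's codec subset
    encoderSpecification: encoderSpec as CFDictionary,
    imageBufferAttributes: nil,
    compressedDataAllocator: nil,
    outputCallback: nil, refcon: nil,
    compressionSessionOut: &session)

// If no fixed-function encoder can take the job (e.g., a codec outside
// the hardware's subset), creation fails rather than silently going slow.
print(status == noErr ? "hardware encoder ready" : "no hardware encoder: \(status)")
[/CODE]

That "Require" flag is also why video outside the Quicksync subset falls back to the (much slower) software path.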
But yeah, it would be fantastic if they realised that they would sell ********s of nMPs if they let us choose which and how many cards to install in our workstations, and mainly let us upgrade them with time and money: not everybody can start from the top configuration.
It is highly doubtful they would sell "*****s of nMPs". It won't pass the iMac in sales, and is probably at least one (if not two) orders of magnitude off of Mac laptop sales. For the next Mac Pro, it is probably a case of whether the Mac Pro is a visible slice of any Mac product pie chart Apple execs look at in their weekly sales-tracking meetings. (1-2% or less is a sliver that won't even show in a reasonably sized pie chart.) If the Mac Pro got to 4% it would be doing well. That isn't huge, though (relative to the rest of the line-up).
AMD's future MCM GPU solution probably would not be the starting/entry configuration at all. In fact, I'd be surprised if it was even put in the nominal boot-GPU 'slot' at all, even if Apple deployed one in certain BTO configs.
The dual GPUs drove the base price of the Mac Pro from the $2.5K up to the $3K range. That part of the "top configuration" (actually "more expensive configuration") problem is true. Apple needs to either come back to something closer to $2.5K or shift the component cost budget to something that "everybody" they are targeting will want more of (e.g., 16GB base RAM, or a bigger SSD, or both). The macOS software stack can easily consume and make good use of both of those with no changes to most currently actively developed programs.
I'm not claiming it's Apple-branded, but it will be available from Apple, like the Blackmagic Pro 580. I discovered it in the Vega driver in the latest beta. Details are in the Vega & Polaris support thread.
There is nothing there that suggests this is targeted at gaming. If it is a Blackmagic eGPU upgraded from the 580 to a Vega 56, then that system is focused on content creation (e.g., as an aid to Blackmagic's Resolve), not gaming.
Gaming would be a side effect of what it could also be useful for, but if the major theme of the Apple dog-and-pony show is creation ("There is more in the making"), then that's probably what they would demo (something being created), either with Resolve (if Blackmagic is doing the demo) or FCPX (if Apple is doing the demo).