Wasn't this Knights Corner? I just feel like we've been here before.
Not particularly. Intel's iGPUs started out in the mostly "unused" extra space on the die. Over time Apple (and other system makers) asked Intel to throw a higher and higher transistor budget at the iGPU. And they did.
The shift to multiple dies is driven in part by the slowing of Moore's Law. As the 7nm, 5nm, 4nm ... processes get more complicated, it gets more painful to make very large dies. As die-size growth slows, growing the transistor budget will shift more toward lashing multiple (pragmatically more affordable) dies together to grow the overall package budget.
The Xeon E7 ('max' core and socket count) series got subsumed into the Xeon SP line (effectively the E5 2600 and E7 x8xx series merged into one group). Pragmatically, what we're going to see with the upcoming "super max" SP variants is Knights Corner being merged into the SP class offerings. The Xeon Phi x100 series started off around 57-61 cores. The current generation of SP tops out at 28. If they did a 2 x 28 mash-up they'd have 56 ... which is pretty close to where Phi started out. It would just be another "black hole" consumption of another product line.
Intel already indicated back in 2017 that they were heading toward multiple chips to get to high core counts (before "chiplet" stuff).
"... On speaking with Diane Bryant, the 'data center gets new nodes first' is going to be achieved by using multiple small dies on a single package. But rather than use a multi-chip package as in previous multi-core products, Intel will be using EMIB as demonstrated at ISSCC: an MCP/2.5D interposer-like design with an Embedded Multi-Die Interconnect Bridge (EMIB). ..."
https://www.anandtech.com/show/1111...n-core-on-14nm-data-center-first-to-new-nodes
Once you're going through the process of putting multiple CPU dies together to get to higher core counts, it is a relatively natural addition to substitute out one of those CPU-focused dies and put in a GPU die. (e.g., the Intel + AMD GPU mash-up that Intel did. A future iteration where they completely toss that AMD GPU would be easy with an Intel GPU die, if they had one.)
Well, I'll certainly keep an eye on what they're doing. But I think it would have to be a heck of a release to pry Apple away from AMD.
(Not being x86 based probably explains how this is different from Knights Corner.)
Pry Apple away? Apple has spent probably 2005-present yearly handing Intel a wish list of what they wanted a GPU to do. The bulk of GPUs bought by Apple over that time span have been Intel GPUs. In terms of numbers, they greatly outnumber the other two vendors; Intel has been dominant over the last 7+ years. Intel has 10 years of "we wish we had blah, blah, blah" lists from Apple. It isn't like they are a complete outsider and don't know. For example, they have been a ground-floor implementer of Metal on the Mac.
If Intel simply grows out their Mac support software stack to cover the new GPUs, doesn't try to piss on Metal, executes on their roadmaps, and manages to get rough parity with AMD performance at an incrementally lower cost ... it won't be a "pry" away. They'll just win the design "bake-off".
Apple does not want single source major suppliers. They don't.
If Intel had spent the McAfee money on GPUs, they wouldn't be another year or so away from finishing Xe and moving to memory buses other than the mainstream CPU ones.
Is any of the new stuff Intel announced using a shared memory space? I was really disappointed when their Vega i7 wasn't a shared memory space. That would have been a slam dunk for Apple.
Apple's GPUs being completely dependent upon shared physical memory is exactly why they are relatively far away from being a viable discrete GPU for a Mac Pro. Discrete memory support could be added, but they don't even have the concept. Metal does, but the GPU doesn't.
Gen 11 is an integrated GPU. Some shared physical RAM is just part of the definition. Is it all shared, flat-addressable? No. As I outlined, they deferred aspects of OpenCL 2+ to the 10+ generations. Did Intel have the requirements for Metal when they started several years ago? Probably. (Metal has been around since 2014, so if they started in 2016-2017 on this Gen 11 GPU they would have had the specs for Metal 2. Metal 2 appeared on Macs in 2017, and obviously Intel would have been told about that way ahead of time since most Macs in that time period came with Intel GPUs.)
Actually not joking. Apple Car seems to be on track to be one of the biggest boondoggles of funds since Apple spent $2+ billion to become a patent troll (the Nortel patents). 100+ folks tossed because they have relatively little to show at this point. Apple had a window to be an electronics supplier to several car companies. Instead they engaged in "Monkey see, Monkey do" with Tesla ... which is almost like the blind leading the blind. Apple has gobs of money, so it won't hurt them, and they can sweep it under the rug over time.
But it's not entirely irrelevant that they would possibly need a bunch of other ARM chip sizes for whatever car/VR/AR headset they're working on. I think, given where they are going, it's not impossible that they'd want an internal strategy for being able to produce A-series chips at many sizes.
Apple strategically needing to do more ARM die variations for substantially higher-volume products than the Mac Pro would be all the more indicative of what a waste of time, resources, and effort it would be for Apple to plow into the discrete GPU space.
And if they ever start building their own in house servers for iCloud/whatever...
They can just buy Data Center level mainstream parts just like everyone else..... just as they are doing now.
Apple needs internally circuit-designed data center solutions like they need a hole in the head. The closest major cloud competitor to that is Amazon, and they simply bought Annapurna Labs "off the shelf". Apple could just buy Ampere's or Cavium's Thunder solutions if they were desperate to be 'green' in their data center setups.
If Apple wanted to attach their own FPGA solution to an x86 server chip, then Intel or AMD would probably be able to do that for them if they had some special "web service specific" hot spots they needed to cover. There is about zero rational reason for Apple to drift into the highly internally designed CPU package space for the core web services they provide.
HDDs aren't primary storage for pro users anymore. Therefore, you have 1, 5, 10, and 40 Gbps options to keep them away from your ears and the cooling system of your mMP, and to leave room for multiple M.2 slots for future-proof scalability.
Away from the ears doesn't mean you aren't using them. The issue is whether they absolutely have to be inside the same physical box. 1-3 drives can't heavily task a USB (gen 2) connection unless you contrive some "rolling downhill with hurricane winds at your back" corner-case "use case".
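A quick back-of-the-envelope sketch of that claim. The throughput numbers here are rough assumptions (roughly 250 MB/s sequential per HDD, which is generous, and a 10 Gb/s USB gen 2 link with an assumed ~20% loss to protocol overhead), not measurements:

```python
# Rough check: can 1-3 HDDs heavily task a 10 Gb/s USB (gen 2) link?
# All figures below are assumed ballpark numbers, not benchmarks.

BYTES_PER_GBPS = 1e9 / 8            # bytes/second in one Gb/s

usb_gen2_raw = 10 * BYTES_PER_GBPS  # 10 Gb/s line rate -> 1.25 GB/s
usb_gen2_usable = usb_gen2_raw * 0.8  # assume ~20% protocol overhead

hdd_sequential = 250e6              # ~250 MB/s per drive (optimistic for HDDs)

for drives in (1, 2, 3):
    aggregate = drives * hdd_sequential
    verdict = "fits" if aggregate <= usb_gen2_usable else "saturated"
    print(f"{drives} drive(s): {aggregate / 1e6:.0f} MB/s aggregate "
          f"vs ~{usb_gen2_usable / 1e6:.0f} MB/s usable -> {verdict}")
```

Even three HDDs running flat-out sequentially land around 750 MB/s against roughly 1 GB/s of usable link, which is the point: for spinning disks, the external connection isn't the bottleneck.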
But yes, for larger budgets where "time is money" and the folks whose time is being sucked up are paid more than moderate amounts, it often pays to just go with multiple SSDs. All the more so if they're sneaker-netted around.