No, it doesn't. When I say dGPU, I mean a separate, dedicated GPU. It could be AMD's, or Apple's, or Nvidia's, or some other party's. I am betting on an Apple internal dGPU, mostly because Apple is run by control freaks who just got away from depending on one unstable supplier (Intel) and don't want to risk depending on others.
While HBMx seems to hold a remarkable fascination for some members of this message board, I note that Nvidia's GPUs don't use HBM, and have no issue kicking AMD's ass all over the landscape, and yes, I am including the HBM-equipped GPU in the Mac Pro (the Mac Pro is very capable, but if there is a weak point in it, it is the GPU).
To my mind, separating the CPU and GPU on the "Pro" machines makes too much sense. There are quite a lot of benefits, from manufacturing (fewer dies that need to be thrown away, smaller die size) to cooling (less heat concentrated in one area) to modularity (the main point of the Mac Pro).

Mac Pro — honestly no idea how Apple is going to approach this. Unified memory systems are particularly attractive for professional workflows, since no data has to be copied between CPU and GPU. But then again, the Mac Pro would benefit from modularity, and it is not clear whether building a monster SoC for the Mac Pro is feasible.
Some kind of multi-CPU-like link to the GPU?
I still don't see HBM as a fit anywhere. I understand the logic behind it, I just don't see where it fits between LPDDR5/6 and GDDR6. It does have lower latency, but not enough to matter. I also don't see Apple's on-SoC GPUs being intended to compete with dGPUs; I see them as just having to significantly outperform the Intel iGPUs.
For GPU performance that is intended to beat dGPUs, there will be off-chip dGPUs, either AMD or Apple in-house designs (probably starting off with AMD, but eventually moving to Apple when they feel they have the performance required). The SoCs used for machines with dGPUs will replace the GPU sections of the SoC with more CPU cores, more/bigger ML/AI cores, and the logic to interface with PCIe slots and other SoCs. In the high-end MBP 16", you get one such SoC, with a dGPU and VRAM; on the mid-range iMac you also get one. On the high-end iMac, you get two such SoCs. On the new Mac Pro, you get four or more.
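To make that lineup concrete, here is a rough sketch (every pairing and count is just the speculation above, nothing announced; Swift used purely for illustration):

```swift
// Hypothetical SoC/dGPU pairings from the post above -- pure speculation.
// The "pro" SoC swaps its GPU section for extra CPU/ML cores and
// PCIe/multi-SoC interface logic.
struct Machine {
    let name: String
    let socCount: Int    // how many of the GPU-less "pro" SoCs
    let dGPUCount: Int   // off-chip dGPUs (AMD at first, Apple later)
}

let proLineup = [
    Machine(name: "MacBook Pro 16\" (high end)", socCount: 1, dGPUCount: 1),
    Machine(name: "iMac (mid range)",            socCount: 1, dGPUCount: 1),
    Machine(name: "iMac (high end)",             socCount: 2, dGPUCount: 1),
    Machine(name: "Mac Pro",                     socCount: 4, dGPUCount: 1),  // 4 or more
]

for m in proLineup {
    print("\(m.name): \(m.socCount) SoC(s) + \(m.dGPUCount) dGPU + VRAM")
}
```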
My point about nVidia not using HBM (and I know they have used it on some Titan models in the past) was that HBM2/3 is NOT necessary to build high-performance GPUs, and just because a particular design uses HBM2/3, that is NOT a guarantee of high performance.
So the evidence really strongly suggests Apple is not using AMD. I would go look at leman's dGPU thread for that discussion. But basically, Apple has all-but directly stated they will use their own graphics solution.
I don't think I'd consider that solution looking like a traditional dGPU likely. It's in the cards, especially for the Mac Pro. But it's less of a stretch to me to say "Apple will use a chiplet or APU design for the MBP16 but use HBM2E as shared memory" than "Apple will design a traditional discrete GPU for the MBP16 with GDDR6."
It would be a real shame if the iMacs were the same as their portable cousins. The iMacs have way more cooling potential and no battery concerns. Just as a base 27” iMac is more powerful than a MacBook Pro now, I would consider that a necessity post-ARM changeover; otherwise there's no point, just buy a portable and a big screen.
Apple WILL use their own graphics solution for the consumer and lower end of the product line. But relying on iGPUs in something like a Mac Pro, or even an iMac Pro (if it is continued) and higher-end MacBook Pros, would be suicidal. It ensures that Pros won't buy those systems, at least for the initial releases, until there is hard evidence that the performance of the iGPU can at least match the current dGPUs. Pros have serious, income-impacting tasks to get done; they will pay for very expensive systems (the fully loaded Mac Pro, for example) if the income-earning potential is there. They will not pay for elegant, "cool" or stylish design statements (see what happened with the trashcan Mac Pro).
Chiplet-type designs (as used by AMD) have not even been hinted at by Apple. They haven't been mentioned anywhere. I don't think Apple will go that way, because it is in direct opposition to SoC-type designs. An argument can be made, I suppose, that Apple's SoCs are in fact chiplet designs in the theoretical sense, but we have never seen multiple Apple-designed silicon dies packaged together.
Apple will eventually design dGPUs, and probably match or exceed the AMD/nVidia dGPUs for specific tasks, like video editing/encoding/decoding, but they will not target 300 fps gamers. And they will NOT do it in the first round of product releases. While Apple has vast resources, those resources are NOT infinite. The IC designers are probably in the process of designing multiple SoCs right now, and probably will not be designing dGPU equivalents, integrated or not. Once the initial line of AS Macs is rolled out, Apple starts iterating, and that is where improvements can be made. That is when Apple dGPUs can be expected to show up with competitive performance. Before then, it will most likely be a third-party dGPU, most likely an AMD chip of some sort, as this is what Apple has the most experience (hardware and software) with.
Yes, I agree a chiplet design becomes problematic when we reach Mac Pro territory. This is very crude, but I think expecting them to want about 5W a core (the most we see a single Lightning core use at peak) with all 16 cores in use is reasonable. Throw in an unknown number of NPUs and a bucket of extra cache and 80-100W sounds right. Creating a giant, Threadripper-sized "chiplet" to accommodate those cores and a GPU is certainly ambitious, although possible.

For the professional machines using dGPUs, the GPUs cannot be on SoC. This is not because I say so, but because there will not be any way to dissipate the heat.
A “pro” SoC will have from 16 to 32 high-performance cores on board. It will have the hardware video encoders/decoders, ML/AI/neural engines (each much more powerful than the one in the A12Z), Thunderbolt controllers, PCIe controllers, on-SoC RAM, and possibly logic to allow multiple such chips to be used together. This is probably an 80-100W SoC.
Now add, on SoC, GPUs to allow Apple’s “Pro” machines to stay competitive with an nVidia RTX 3080 (currently thought to have a 300W TDP). Let's also assume, for the sake of argument, that Apple’s chip designers are better at GPU design than nVidia’s. So they come up with a GPU that is competitive with nVidia’s RTX 3080 and only uses 100W.
Taking the above 80-100W CPU and adding another 100W from the new super-efficient GPU gives a total TDP of 180-200W. That kind of power calls for some extensive cooling. The Mac Pro and iMac can handle that; the other machines (MBP 16”, non-Pro iMac 30”) cannot.
The above assumes that Apple can design a competitive GPU that uses 1/3 the power of nVidia’s offering.
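To make the arithmetic explicit, here is a back-of-the-envelope sketch (every figure is an assumption from this thread, not a confirmed spec):

```swift
// Speculative power budget for a "pro" SoC with an on-SoC GPU.
// All numbers come from the discussion above; none are confirmed.
let wattsPerHPCore = 5.0          // peak draw seen from one Lightning core
let hpCores = 16.0
let uncoreWatts = 0.0...20.0      // NPUs, cache, I/O: the 80-100W spread
let gpuWatts = 300.0 / 3.0        // hypothetical GPU at 1/3 of an RTX 3080's TDP

let socLow  = hpCores * wattsPerHPCore + uncoreWatts.lowerBound + gpuWatts
let socHigh = hpCores * wattsPerHPCore + uncoreWatts.upperBound + gpuWatts
print("Total SoC TDP: \(Int(socLow))-\(Int(socHigh)) W")   // 180-200 W
```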
Not Going To Happen. Apple may be able to reduce power in the GPU cores, but not by 66%.
I think it is far more likely that they use an off-SoC dGPU. It allows for more flexibility and makes the pro-level SoC far easier to cool.
It also opens up a lot of options: things like various dGPU choices, different SoCs (16 or 32 cores), or, in the Mac Pro, multiple SoCs.
As for chiplets, they would be a way to allow customization. But Apple has not, at this point, given any indication that they will be going this way. Chiplets are mainly a way of increasing yields; they are not used for performance purposes. Note that Infinity Fabric, as used by AMD, operates both as an on-die interconnect and as the package-level link between chiplets.
I did try to look at your links, but they did not come up properly for me.
The second problem I have is that you are using an nVidia card as your baseline. You and I both know nVidia's chips are far more power-efficient than AMD's. We also know Apple has used AMD exclusively for years. Apple has never felt the need to offer better performance than nVidia before and they are not going to start now. The Vega 56 in the iMac Pro has a 210W TDP.
Whether it will be 40W or 100W is beside the point, as we are talking about the same thing: a HEP. The important point is that the Mac Pro gets many SoCs in order to scale to highly parallel workflows, and there we are on the same page. You assume the GPGPU paradigm using dGPUs will persist. It is clear that Apple will explore new routes, such as dedicated accelerators: an NPU, en/decoders (Afterburner), and possibly ray tracers, which reduce GPU usage. It would be expensive to create dedicated chips for each of these functions, but a generic HEP might be much cheaper, even if you have to buy too many CPU cores in order to get the other functions you need. How much does an A12Z cost? $100? There is a lot of combined performance in 10 A12Zs, and $1,000 is peanuts.

Apple can do whatever they want. They are one of the few manufacturers to ship a liquid-cooled system (didn't work out over time, but that is a different topic). I think Apple won't develop such a limited-application SoC when they have a substantial part of their product lineup that could take advantage of an SoC that is usable in more products.
The reason I think Apple WON'T use a 100-300W SoC is that I believe Apple will develop ONE "High End Performance" (HEP) SoC. It will be somewhere below 40W. It will be used in the MacBook Pro 16", the higher-end iMacs (perhaps more than one), and Mac Pros (definitely more than one). This HEP SoC may be binned for higher clock speeds in the iMac Pro (if it continues) and Mac Pro. It is going to be a 16/24/32 HP-core SoC (I honestly don't have any idea which way they will go), no GPU on board, with an enhanced ML/AI/Neural Engine, and the logic to interface with an external GPU and PCIe, both for NVMe-type SSDs and PCIe slots, as well as logic to allow multiple SoCs to work together. 16" MacBook Pros get either the highest-performance "consumer" SoC with an iGPU, or the HEP SoC with a dGPU. The 24" iMacs get the same. The 30" iMacs get HEP SoCs with a dGPU, with a possible BTO option for two HEP SoCs (if the iMac Pro continues, or as the top 30" iMac). Mac Pros get 4 HEP SoCs, with at least one dGPU on an MPX card, with the possibility of adding more dGPUs on MPX cards, as well as the Afterburner accelerator.
This is the way I see the Apple SoCs shaking out. They will use higher-performance iGPUs that handily beat any existing iGPU (Intel or AMD). When demands exceed the abilities of the iGPUs, there will be dGPUs for those who need them and are willing to pay for them. This simplifies the SoCs down to two variants, simplifying manufacturing and, more importantly, increasing the volume of the HEP SoC. It would probably be very difficult to get any significant volume out of a Mac Pro-only SoC, which implies extreme costs.
Why the multiple HEP SoCs on some systems? Because the workloads on those systems have higher thread counts, and there are people running out of threads on the current Mac Pro (hyperthreaded 28-core Intel Xeon, 56 threads), so the upcoming Mac Pro had better offer at least as many cores as the present Mac Pro has threads. 64 cores (4 X 16-core HP SoCs, or possibly 2 X 32-core SoCs), 96 cores (4 X 24-core HP SoCs), or even 128 (4 X 32-core HP SoCs) would find very happy buyers.
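Put as arithmetic (the 16/24/32-core configurations are my speculation; the Xeon numbers are not):

```swift
// Core counts for a multi-SoC Mac Pro versus the current Intel model.
let currentThreads = 28 * 2   // hyperthreaded 28-core Xeon = 56 threads

// Speculative HEP SoC configurations; Apple cores run one thread each.
let options = [(socs: 4, cores: 16), (socs: 2, cores: 32),
               (socs: 4, cores: 24), (socs: 4, cores: 32)]

for o in options {
    let total = o.socs * o.cores
    print("\(o.socs) x \(o.cores)-core HEP SoCs -> \(total) cores " +
          "(vs. \(currentThreads) Xeon threads today)")
}
// 64, 64, 96, and 128 cores respectively -- all at or above 56.
```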
No one in their right mind would buy an equivalent Windows ultrabook, unless for very specific software or something, or unless they absolutely hate macOS.
And yet they kept using a 2.5” laptop platter drive for a long time. With 5,400 rpm. That had nothing to do with making sense.
Apple will add accelerators to help with video encode/decode, but that only goes so far, as we have just found out with the HEVC encode/decode accelerators on the iPad Pro; the accelerators do help HEVC processing a lot, BUT THEY ONLY HELP WHEN USED FOR THE PURPOSE THEY WERE DESIGNED FOR. In other words, they are currently a waste of silicon for the AV1 format.
Why am I saying all of this? Because more than likely, the Mac Pro SoC is currently under design, and may even have initial samples for test and debug purposes.
The "consumer" SoC is already being fabbed up, and some volumes are already built. They better be, because manufacturing of the AS Macs will be shortly under way, if it hasn't started already, and those SoCs better be in inventory, or at least available in volume before the system assembly can start. Apple needs to have AS Macs available in inventory on the day that the AS Macs are made available. Also note, there is 3-4 weeks of transit time between China and North America. So assuming that the first AS Macs will be out in mid-Oct. to early-Nov., and taking 3 weeks out, means that volume (my estimate 500k units or so) must be on a boat no later than the end of Sept.-early Oct. Manufacturing, assuming that all goes smoothly, at 100K a week, must start in mid to late August. This makes the assumption that all goes smoothly, with acceptable board yeilds and zero surprises. Allowing for a slower start up and some expected production glitches, and the first week or two of slow production (say 50K a week) means that AS Mac production is startring just about now. Which means that the first AS SoCs are avaiable NOW, and may have been available a few weeks ago.