
Which model are you buying?

  • MacBook

  • Mac mini

  • MacBook Pro

  • Mac Pro



Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
That is a point we disagree on. I think the Professional lineup goes with dGPUs, so no real need for HBMx. The Pro lineup will have a lot of RAM (32/48/64GB, or more) and dGPUs. If I am correct, the memory bandwidth demand from the Professional machines will be less than that from the consumer machines. So the higher amount of memory and the use of dGPUs will pretty much eliminate the need for HBMx. I think Apple will design the Pro SoCs without any on-board GPUs, and use the silicon area for more CPU cores, more/bigger ML cores, more USB4/TB4 ports, and PCIe/dGPU slot interfaces.
 

Boil

macrumors 68040
Original poster
Oct 23, 2018
3,477
3,173
Stargate Command
But that line of thinking might also imply that Apple will continue to use third-party dGPUs, meaning AMD GPUs, and a good portion of the AMD GPUs Apple currently offers in the Mac Pro (and the iMac Pro) come with HBMx memory...
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
No, it doesn't. When I say dGPU, I mean a separate, dedicated GPU. It could be AMD's, or it could be Apple's, Nvidia's, or some other party's. I am betting on an Apple internal dGPU, mostly because Apple is run by control freaks who just got away from depending on one unstable supplier (Intel) and don't want to have to rely on other potentially unreliable suppliers.

While HBMx seems to hold a remarkable fascination for some members of this message board, I note that Nvidia's GPUs don't use HBM and have no issue kicking AMD's ass all over the landscape, and yes, I am including the HBM-equipped GPU in the Mac Pro (the Mac Pro is very capable, but if there is a weak point in it, it is the GPU).
 

leman

macrumors Core
Oct 14, 2008
19,518
19,669
No, it doesn't. When I say dGPU, I mean a separate, dedicated GPU. It could be AMD's, or it could be Apple's, Nvidia's, or some other party's. I am betting on an Apple internal dGPU, mostly because Apple is run by control freaks who just got away from depending on one unstable supplier (Intel) and don't want to have to rely on other potentially unreliable suppliers.

While HBMx seems to hold a remarkable fascination for some members of this message board, I note that Nvidia's GPUs don't use HBM and have no issue kicking AMD's ass all over the landscape, and yes, I am including the HBM-equipped GPU in the Mac Pro (the Mac Pro is very capable, but if there is a weak point in it, it is the GPU).

Depends on what you mean by "Professional" lineup. Mac laptops (including the larger Pros) will certainly use Apple "integrated" GPUs. HBM as system memory would ensure that these GPUs don't have any performance limitations compared to traditional dGPU components (also see the P.S.).

Mac Pro — honestly no idea how Apple is going to approach this. Unified memory systems are particularly attractive for professional workflows, since no data has to be copied between CPU and GPU. But then again, the Mac Pro would benefit from modularity, and it is not clear whether building a monster SoC for it is feasible.

P.S.

Quick remark on the memory discussion: Nvidia does well without HBM because GDDR6 offers comparable bandwidth. The drawback of the GDDR variants is very high latency, which is less of an issue for GPUs. A GPU will simply switch to some other task while it's waiting for that data to arrive from memory — after all, GPUs can have thousands of tasks in flight simultaneously. CPUs, however, are much more sensitive to latency.

Since HBM has comparable latency to normal DDR RAM, it can be used as system memory. But it also has very high bandwidth, which makes it a good fit as memory for a parallel processor. This combination of features would make it an excellent choice for unified system memory shared by both the CPU and GPU. If Apple is serious about high-performance SoCs, they will absolutely have to use some sort of high-bandwidth system memory, or their GPUs won't be able to measure up to the dGPUs.

That's why my prediction goes:

13" models, lower end iMac — LPDDR5 (sufficient to compete with GDDR5)
16" models, higher end iMac — HBM (to compete with mid-range dGPUs)
Mac Pro — honestly no idea
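
To put rough numbers on the bandwidth argument above, here is a quick back-of-the-envelope sketch (peak bandwidth is just per-pin data rate times bus width; the bus widths are illustrative assumptions, not anything Apple has announced):

Code:
# Rough peak-bandwidth comparison: data rate (Gbit/s per pin) x bus width (bits) / 8.
# Bus widths are illustrative guesses for a hypothetical Apple SoC, not confirmed specs.
configs = {
    "LPDDR5, 128-bit":            (6.4,  128),
    "GDDR6, 256-bit":             (14.0, 256),
    "HBM2E, 1 stack (1024-bit)":  (3.2,  1024),
    "HBM2E, 2 stacks (2048-bit)": (3.2,  2048),
}

for name, (rate_gbps, bus_bits) in configs.items():
    bandwidth = rate_gbps * bus_bits / 8          # GB/s
    print(f"{name:<30} ~{bandwidth:.0f} GB/s peak")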
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Mac Pro — honestly no idea how Apple is going to approach this. Unified memory systems are particularly attractive for professional workflows, since no data has to be copied between CPU and GPU. But then again, the Mac Pro would benefit from modularity, and it is not clear whether building a monster SoC for it is feasible.
To my mind, separating the CPU and GPU on the "Pro" machines makes too much sense. There are quite a lot of benefits, from manufacturing (fewer dies that need to be thrown away, smaller die size), to cooling (less heat concentrated in one area), to modularity (the main point of the Mac Pro).

Added to that, Apple could devote more space on the respective dies to ASICs that benefit different workflows. More hardware decode on the GPU and more AI cores on the CPU.

And to be perfectly frank, I don't believe a monster SoC, Mac Pro level anyway, is feasible at all.
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
I still don't see HBM as a fit anywhere. I understand the logic behind it, I just don't see where it fits between LPDDR5/6 and GDDR6. It does have lower latency, but not enough to matter. I also don't see Apple's on-SoC GPUs being intended to compete with dGPUs; I see them as just having to significantly outperform the Intel iGPUs.

For GPU performance that is intended to beat dGPUs, there will be off-SoC dGPUs, either AMD or Apple in-house designs (probably starting off with AMD, but eventually moving to Apple when they feel that they have the performance required). The SoCs used for machines with dGPUs will replace the GPU sections of the SoCs with more CPU cores, more/bigger ML/AI cores, and the logic to interface with PCIe slots and other SoCs. In the high-end MBP 16", you get 1 such SoC, with a dGPU and VRAM; on the mid-range iMac you also get 1. On the high-end iMac, you get 2 such SoCs. On the new Mac Pro, you get 4 or more.

My point about nVidia not using HBM (and I know they have used it on some Titan models in the past) was to point out that HBM2/3 is NOT necessary to build high-performance GPUs, and just because a particular design uses HBM2/3, that is NOT a guarantee of high performance.
 
Last edited:

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
I still don't see HBM as a fit anywhere. I understand the logic behind it, I just don't see where it fits between LPDDR5/6 and GDDR6. It does have lower latency, but not enough to matter. I also don't see Apple's on-SoC GPUs being intended to compete with dGPUs; I see them as just having to significantly outperform the Intel iGPUs.

For GPU performance that is intended to beat dGPUs, there will be off-SoC dGPUs, either AMD or Apple in-house designs (probably starting off with AMD, but eventually moving to Apple when they feel that they have the performance required). The SoCs used for machines with dGPUs will replace the GPU sections of the SoCs with more CPU cores, more/bigger ML/AI cores, and the logic to interface with PCIe slots and other SoCs. In the high-end MBP 16", you get 1 such SoC, with a dGPU and VRAM; on the mid-range iMac you also get 1. On the high-end iMac, you get 2 such SoCs. On the new Mac Pro, you get 4 or more.

My point about nVidia not using HBM (and I know they have used it on some Titan models in the past) was to point out that HBM2/3 is NOT necessary to build high-performance GPUs, and just because a particular design uses HBM2/3, that is NOT a guarantee of high performance.
So the evidence really strongly suggests Apple is not using AMD. I would go look at leman's dGPU thread for that discussion. But basically, Apple has all but directly stated they will use their own graphics solution.

I don't think I'd consider that solution looking like a traditional dGPU likely. It's in the cards, especially for the Mac Pro. But it's less of a stretch to me to say "Apple will use a chiplet or APU design for the MBP16 but use HBM2E as shared memory" than "Apple will design a traditional discrete GPU for the MBP16 with GDDR6."
 

ian87w

macrumors G3
Feb 22, 2020
8,704
12,638
Indonesia
I will definitely be eyeing the new MacBooks. I have always wanted a thin-and-light laptop with good battery life. The current "thin-and-light" laptops on the market are hardly thin and light. The MacBook Air 11" was probably the ideal weight, but the screen is too small. With Apple Silicon, it would be awesome to have a MacBook that is close to the iPad in terms of weight and battery endurance.
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
So the evidence really strongly suggests Apple is not using AMD. I would go look at leman's dGPU thread for that discussion. But basically, Apple has all but directly stated they will use their own graphics solution.

I don't think I'd consider that solution looking like a traditional dGPU likely. It's in the cards, especially for the Mac Pro. But it's less of a stretch to me to say "Apple will use a chiplet or APU design for the MBP16 but use HBM2E as shared memory" than "Apple will design a traditional discrete GPU for the MBP16 with GDDR6."

Apple WILL use their own graphics solution for the consumer and lower end of the product line. But relying on iGPUs in something like a Mac Pro, or even an iMac Pro (if it is continued) and higher-end MacBook Pros, would be suicidal. It ensures that Pros won't be buying those systems, at least for the initial releases, until there is hard evidence to prove that the performance of the iGPU can at least match the current dGPUs. Pros have serious, income-impacting tasks to get done; they will pay for very expensive systems (the fully loaded Mac Pro, for example) if the income-earning potential is there. They will not pay for elegant, "cool" or stylish design statements (see what happened with the trashcan Mac Pro).

Chiplet-type designs (as used by AMD) have not even been hinted at by Apple. They haven't even been mentioned, anywhere. I don't think Apple will go that way, because it is in direct opposition to SoC-type designs. An argument can be made, I suppose, that Apple's SoCs are in fact chiplet designs in the theoretical sense, but we have never seen multiple Apple-designed silicon chips being packaged together.

Apple will eventually design dGPUs, and probably match or exceed the AMD/nVidia dGPUs for specific tasks, like video editing/encoding/decoding, but they will not target 300 fps gamers. And they will NOT do it in the first round of product releases. While Apple has vast resources, those resources are NOT infinite. The IC designers are probably in the process of designing multiple SoCs right now, and probably will not be designing dGPU equivalents, integrated or not. Once the initial line of AS Macs is rolled out, Apple will start iterating, and that is when improvements can be made. It is then that Apple dGPUs can be expected to show up with competitive performance. Before then, it will most likely be a third-party dGPU, most likely an AMD chip of some sort, as this is what Apple has the most experience (hardware and software) with.
 

Mohamed Kamal

macrumors member
Jan 5, 2020
61
35
I know that everyone is excited for the MacBook Pro on ARM, but I think that the MacBook (Air?) is the more exciting model. The MBP will increase in performance and battery life and will be overall better, but the new model won't be a MUST UPGRADE over the current model. However, the MBA will be on a whole other level: very thin and light, completely silent (possibly), killer battery life, no heat whatsoever, extreme build quality, AND performance at least as good as the developer chip. It will completely obliterate the current MBA. It will literally be the perfect ultrabook, and no one in their right mind would buy an equivalent Windows ultrabook, unless for very specific software or something, or they absolutely hate macOS. Now if Apple drops the price to, say, $899 and maybe reduces the bezels and puts a 13" screen on a 12" MacBook body, it would be, dare I say, as revolutionary as the original MacBook Air.
 

Jimmy James

macrumors 603
Oct 26, 2008
5,489
4,067
Magicland
It would be a real shame if the iMacs were the same as their portable cousins. The iMacs have way more cooling potential and no battery concerns. Just as a base 27" iMac is more powerful than a MacBook Pro now, I would consider that a necessity after the ARM changeover; otherwise there's no point, just buy a portable and a big screen.

And yet they kept using a 2.5” laptop platter drive for a long time. With 5,400 rpm. That had nothing to do with making sense.
 

profcutter

macrumors 68000
Mar 28, 2019
1,550
1,296
No disagreement here. But with a real SSD, the 27-inchers outpace the MacBook Pros.
 

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
Apple WILL use their own graphics solution for the consumer and lower end of the product line. But relying on iGPUs in something like a Mac Pro, or even an iMac Pro (if it is continued) and higher-end MacBook Pros, would be suicidal. It ensures that Pros won't be buying those systems, at least for the initial releases, until there is hard evidence to prove that the performance of the iGPU can at least match the current dGPUs. Pros have serious, income-impacting tasks to get done; they will pay for very expensive systems (the fully loaded Mac Pro, for example) if the income-earning potential is there. They will not pay for elegant, "cool" or stylish design statements (see what happened with the trashcan Mac Pro).

Chiplet-type designs (as used by AMD) have not even been hinted at by Apple. They haven't even been mentioned, anywhere. I don't think Apple will go that way, because it is in direct opposition to SoC-type designs. An argument can be made, I suppose, that Apple's SoCs are in fact chiplet designs in the theoretical sense, but we have never seen multiple Apple-designed silicon chips being packaged together.

Apple will eventually design dGPUs, and probably match or exceed the AMD/nVidia dGPUs for specific tasks, like video editing/encoding/decoding, but they will not target 300 fps gamers. And they will NOT do it in the first round of product releases. While Apple has vast resources, those resources are NOT infinite. The IC designers are probably in the process of designing multiple SoCs right now, and probably will not be designing dGPU equivalents, integrated or not. Once the initial line of AS Macs is rolled out, Apple will start iterating, and that is when improvements can be made. It is then that Apple dGPUs can be expected to show up with competitive performance. Before then, it will most likely be a third-party dGPU, most likely an AMD chip of some sort, as this is what Apple has the most experience (hardware and software) with.

So let's start with where we agree.

I agree that Apple's graphics solution for higher end Macs will not come in the first round. I expect it next year, 2H, and maybe even on N5P. This just means the MBP 16 and iMac Pro don't launch until 2H 2021 - and the Mac Pro until 2022. This year and early next year will see launches of the Air, Mac Mini, most likely an MBP 13, and maybe an iMac.

I would also not consider Apple's A# APUs a "chiplet." However, I'd like your opinion on the CC Fabric shown on the A12X diagram on wikichip. This appears to be a proprietary silicon interconnect of some variety - possibly the piece Apple needs to make a chiplet.

Lastly, I agree that Apple will have a powerful graphics solution, and that it won't be anemic or a design-statement copout. But we need to consider what approach Apple will take. They are starting from ground zero. You say we haven't seen any evidence Apple is working on a chiplet design, but we've seen no more evidence they are working on a traditional dGPU.

I think the evidence suggests Apple doesn't favor a traditional dGPU design. Not just because they are purging AMD from their developer docs, but because of their focus on unified memory and GPGPU computing.

I'm unsure why you think a chiplet design is anathema to SoCs. The same SoC can still be used - the only difference is that there are stacks of HBM and another die full of GPU cores on the package. Certainly the graphics parts are redundant on desktop machines, but how is this any different than using an SoC with a dGPU? Whether or not Apple decides to make a die with more CPU cores and no GPU cores for desktop machines is an entirely separate issue.

It's true that a chiplet design has performance limitations a dGPU doesn't have. In particular, it probably can't accommodate the same clock speeds in smaller devices due to heat/area issues. Apple may consider these tradeoffs worth it in order to have a closely integrated GPU optimized for GPGPU computing in all of its devices. A chiplet design is the "middle ground" between an APU and a dGPU that allows Apple to partially mitigate heat/area issues while also keeping the CPU and GPU cores close enough to share memory and communicate efficiently. Is there an issue with this approach that I'm not seeing, or do we just disagree on priorities?
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
For the professional machines, the dGPUs cannot be on the SoC. This is not because I say so, but because there will not be any way to dissipate the heat.

A "pro" SoC will have from 16 to 32 high-performance cores on board. It will have the hardware video encoders/decoders, ML/AI/neural engines (each much more powerful than the one in the A12Z), Thunderbolt controllers, PCIe controllers, on-SoC RAM, and possibly logic to allow multiple such chips to be used together. This is probably an 80-100W SoC.

Now add, on the SoC, GPUs to allow Apple's "Pro" machines to stay competitive with an nVidia RTX 3080 (currently thought to have a 300W TDP). Let's also assume, for the sake of argument, that Apple's chip designers are better at GPU design than nVidia's are. So they come up with a GPU that is competitive with nVidia's RTX 3080 and only uses 100W.

Taking the above 80-100W CPU and adding another 100W from the new super-efficient GPU gives a total TDP of 180-200W. That kind of power will call for some extensive cooling. The Mac Pro and iMac can handle that; none of the other machines (MBP 16", non-Pro 30" iMac) can.

The above assumes that Apple can design a competitive GPU that uses 1/3 the power of nVidia's offering.

Not Going To Happen. Apple may be able to reduce power in the GPU cores, but not by 66%.
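
Putting the assumed numbers side by side (these are the speculative figures from this post, not anything Apple has published):

Code:
# Back-of-the-envelope for the "monster SoC" scenario sketched above.
# All numbers are this post's assumptions, not real Apple specs.
pro_soc_w     = (80, 100)    # assumed CPU/ML/IO portion of a "pro" SoC
rtx3080_tdp_w = 300          # commonly cited RTX 3080 board power
apple_gpu_w   = 100          # hypothetical Apple GPU matching it at 1/3 the power

print(f"Required power reduction vs the RTX 3080: {1 - apple_gpu_w / rtx3080_tdp_w:.0%}")
for soc_w in pro_soc_w:
    print(f"{soc_w}W SoC + {apple_gpu_w}W GPU = {soc_w + apple_gpu_w}W combined TDP")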

I think it is far more likely that you use an off-SoC dGPU. It allows for more flexibility, and makes the pro-level SoC far easier to cool.

It also opens up a lot of options. Things like various dGPU options, or even different SoCs (16 or 32 cores), or, in the Mac Pro, multiple SoCs.

As for chiplets, they would be a way to allow customization. But Apple has not, at this point, given any indication that they will be going this way. Chiplets are only a way of increasing yields; they are not used for performance purposes. Note that things like Infinity Fabric, as used by AMD, are silicon-level/on-chip interconnects, not board or chiplet interconnects.

I did try to look at your links, but they did not come up properly for me.
 
  • Like
Reactions: awesomedeluxe

Nogi Memes

macrumors member
Apr 4, 2020
39
135
iBook

PowerBook

I will buy whichever is released first.

I predict the MacBook name will not be used for AS laptops due to:

1. Low end AS laptop series will outperform base model Intel MBP.

2. Too confusing to have laptops with same name and different chip architectures.
 
Last edited:

awesomedeluxe

macrumors 6502
Jun 29, 2009
262
105
For the professional machines, the dGPUs cannot be on the SoC. This is not because I say so, but because there will not be any way to dissipate the heat.

A "pro" SoC will have from 16 to 32 high-performance cores on board. It will have the hardware video encoders/decoders, ML/AI/neural engines (each much more powerful than the one in the A12Z), Thunderbolt controllers, PCIe controllers, on-SoC RAM, and possibly logic to allow multiple such chips to be used together. This is probably an 80-100W SoC.

Now add, on the SoC, GPUs to allow Apple's "Pro" machines to stay competitive with an nVidia RTX 3080 (currently thought to have a 300W TDP). Let's also assume, for the sake of argument, that Apple's chip designers are better at GPU design than nVidia's are. So they come up with a GPU that is competitive with nVidia's RTX 3080 and only uses 100W.

Taking the above 80-100W CPU and adding another 100W from the new super-efficient GPU gives a total TDP of 180-200W. That kind of power will call for some extensive cooling. The Mac Pro and iMac can handle that; none of the other machines (MBP 16", non-Pro 30" iMac) can.

The above assumes that Apple can design a competitive GPU that uses 1/3 the power of nVidia's offering.

Not Going To Happen. Apple may be able to reduce power in the GPU cores, but not by 66%.

I think it is far more likely that you use an off-SoC dGPU. It allows for more flexibility, and makes the pro-level SoC far easier to cool.

It also opens up a lot of options. Things like various dGPU options, or even different SoCs (16 or 32 cores), or, in the Mac Pro, multiple SoCs.

As for chiplets, they would be a way to allow customization. But Apple has not, at this point, given any indication that they will be going this way. Chiplets are only a way of increasing yields; they are not used for performance purposes. Note that things like Infinity Fabric, as used by AMD, are silicon-level/on-chip interconnects, not board or chiplet interconnects.

I did try to look at your links, but they did not come up properly for me.
Yes, I agree a chiplet design becomes problematic when we reach Mac Pro territory. This is very crude, but I think expecting them to want about 5W a core (the most we see a single Lightning core use at peak) with all 16 cores in use is reasonable. Throw in an unknown number of NPUs and a bucket of extra cache, and 80-100W sounds right. Creating a giant, Threadripper-sized "chiplet" to accommodate those cores and a GPU is certainly ambitious, although possible.

But I have two problems with your analysis. The first is that you seem to be designing Apple's solution in a way that puts the Mac Pro first. You should be taking the opposite approach. The Mac Pro is one of Apple's least important offerings; Apple will design their graphics solution with the MBP 16 and iMac in mind.

The second problem I have is that you are using an nVidia card as your baseline. You and I both know nVidia's chips are far more power-efficient than AMD's. We also know Apple has used AMD exclusively for years. Apple has never felt the need to offer better performance than nVidia before and they are not going to start now. The Vega 56 in the iMac Pro has a 210W TDP.

So, let's start with the MacBook Pro 16. It has a 45W Intel APU and a separate 50W AMD dGPU. Obviously, not all of that is available to you in a chiplet design. You can't fit 95W of power in what was formerly a 45-50W space. But you do have more than 45-50W, because it is easier to cool your part on account of the rest of your system being cool. 60W is probably a good estimate for the MBP16. We'll make a chiplet that looks like this:

[A14X] - [GPU]
. [2x HBM2E]

The A14X is the Bloomberg part: 8 performance cores. You don't need to give all of those cores 5W; two or even four might be allowed that much, but with all 8 on, we'll walk our clocks back until we're using about 2.5W a core. That's 20W, but we'll say 25W since there's also a neural engine and other stuff on there. It has GPU cores too for non-intensive tasks, but since it's never on when the GPU die is in use, it's not relevant here.

Our on-package HBM2E needs a lot of power. Each stack uses 5W; two stacks are 10W. That's a lot of power in a small area; we're at 35W now. This leaves 25W for the GPU.

Now, I don't know where you pulled your GPU power estimate from. The A12Z never uses more than 10W, and it has 8 graphics cores in parallel. I struggle to envision a scenario where these cores are using even 1W apiece, but I'll assume that they do. Maybe you envision these cores being clocked up, but that is not the right approach for a laptop, where many GPU cores in parallel at low speeds is the name of the game. We have a 30% power efficiency boost moving from N7 to N5(P), so 0.7W a core is a good estimate. Just piling on as many as we can, that's 36 cores. It's about 120mm^2, so just the right size for good yield.
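
To make the budget concrete, here is the same arithmetic as a quick Python sketch (every per-component wattage is a guess from this post, not a measured figure):

Code:
# Hypothetical MBP16 chiplet power budget, using this post's assumptions
# (2.5W per CPU core, 5W per HBM2E stack, 0.7W per GPU core on N5P).
package_budget_w = 60        # assumed budget for the whole package

cpu_w = 8 * 2.5 + 5          # 8 cores at 2.5W each, plus ~5W for neural engine etc.
hbm_w = 2 * 5                # two HBM2E stacks at ~5W each
gpu_budget_w = package_budget_w - cpu_w - hbm_w

w_per_gpu_core = 0.7         # assumed per-core power on N5(P)
gpu_cores = round(gpu_budget_w / w_per_gpu_core)

print(f"CPU block {cpu_w:.0f}W, HBM2E {hbm_w}W, GPU budget {gpu_budget_w:.0f}W")
print(f"GPU cores at {w_per_gpu_core}W each: {gpu_cores}")   # ~36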

Now, would an Apple graphics die with 36 cores and access to 32GB of HBM2E beat the Radeon Pro 5600M? Based on what we know from limited benchmarks, I'd say yeah, definitely. Absolutely. I consider this an absolute win. Would it beat a theoretical 2021 7nm nVidia part? No, but Apple was never going to use that part.

This is already enough power for the iMac. We can use 1W per GPU core to boost their clocks ~15%, and give all CPU cores 5W all the time, and ta-da, it's an iMac. We can scale up for the iMac Pro easily. Double the CPU cores, double our HBM2E, and let's increase the GPU count to 56. Our whole system is still using less power than a Vega Pro 56 uses, and would 56 Apple GPU cores, clocked up 15%, with access to 64GB HBM2E outperform the Vega Pro 56? Yes. Yes they would, and if you think they wouldn't, good news: you have ~50W left you can use to bridge the gap.
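
Same sketch, scaled up to the iMac Pro configuration described above (again using my assumed per-core and per-stack numbers):

Code:
# Scaling the hypothetical chiplet up to an "iMac Pro" class part,
# using this post's assumptions: 5W per CPU core, 1W per GPU core, 5W per HBM2E stack.
vega56_tdp_w = 210           # published TDP of the iMac Pro's Vega 56

cpu_w = 16 * 5               # double the CPU cores, full 5W each
hbm_w = 4 * 5                # double the HBM2E stacks
gpu_w = 56 * 1               # 56 GPU cores at ~1W, clocked up ~15%
total_w = cpu_w + hbm_w + gpu_w

print(f"CPU {cpu_w}W + HBM {hbm_w}W + GPU {gpu_w}W = {total_w}W")
print(f"Headroom vs the Vega 56 alone: {vega56_tdp_w - total_w}W")   # ~50W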
 
  • Like
Reactions: iPadified

leman

macrumors Core
Oct 14, 2008
19,518
19,669
Taking the above 80-100W CPU and adding another 100W from the new super-efficient GPU gives a total TDP of 180-200W. That kind of power will call for some extensive cooling. The Mac Pro and iMac can handle that; none of the other machines (MBP 16", non-Pro 30" iMac) can.

The Mac Pro CPU TDP is already breaking 200 watts, and didn't Apple claim that they can run it at almost 300 watts? GPUs in the 300-watt range are also not a rarity. Cooling down a monster SoC won't be a problem. Building one will :)

I don't see any issues for the MBP either; the target combined TDP for the 16" is already known: it's 75-80 watts. More than plenty to deliver a CPU that will blow any contemporary x86 out of the water and a GPU competitive with the current mid-high range.

Where I do see an issue is with high-end GPUs. Apple can compete with others very well in rasterization due to TBDR. But there are no magic tricks with compute. Extrapolating from current data, Apple's compute architecture might actually be 20% more efficient than AMD's or Nvidia's. It still won't be easy to beat high-end workstation cards - Apple will need a GPU with a TDP of 150 watts or more. I just don't see it on an SoC. A chiplet design with a shared memory controller/cache might make sense. I guess we'll have to wait and see.
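
As a rough illustration of that point, with ballpark card wattages that are my assumptions rather than specific products:

Code:
# Even with the assumed ~20% per-watt compute advantage, matching a workstation
# card still implies a triple-digit-wattage GPU. Card TDPs below are ballpark guesses.
efficiency_advantage = 1.20
for card_name, card_tdp_w in [("mid-range workstation card", 180),
                              ("high-end workstation card", 250)]:
    needed_w = card_tdp_w / efficiency_advantage
    print(f"Matching a {card_tdp_w}W {card_name} would still take ~{needed_w:.0f}W")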
 

leman

macrumors Core
Oct 14, 2008
19,518
19,669
The second problem I have is that you are using an nVidia card as your baseline. You and I both know nVidia's chips are far more power-efficient than AMD's. We also know Apple has used AMD exclusively for years. Apple has never felt the need to offer better performance than nVidia before and they are not going to start now. The Vega 56 in the iMac Pro has a 210W TDP.

Just as a side note: it's not that Nvidia GPUs are more power-efficient per se; they are just more power-efficient at the performance level specified by Nvidia. Turing is simply faster. AMD has to catch up, and so they clock their cards outside of their comfort zone. Vega and Navi show great power efficiency at lower clocks, though. Just look at the 5600M Pro.
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
The reason I used the RTX 3080 as a point of comparison is because my concept for a Mac Pro SoC won't see the light of day for at least a year, maybe two. By then, the RTX 3080 will be midrange-level performance, not cutting edge as it is now. You don't have a choice in the performance levels; the people buying the machines do. If a pro, doing real pro work, is looking at buying a new workstation, he (it can be the individual or the generic "he") will not buy a Mac Pro that takes any longer to do a task than an equivalent loaded-up Windows machine. That Windows machine can have multiple, easily changed-out high-end GPUs. So why buy a machine that will limit income? Not happening; save the money by buying a Windows box. This is what you are competing with, and the parameters you need to work with.

None of this applies to the "consumer" AS Macs. The consumer AS Macs have far lower performance expectations; it will be fine to just soundly beat the performance of the current Intel iGPUs and CPUs. That will be easy.

Equalling or even slightly exceeding a Radeon 5600M is a non-starter in 1-2 years for a high-end laptop or iMac Pro/Mac Pro desktop. The Radeon cards are barely adequate now in the Mac Pro, and in 1-2 years, you can bet that more complex CFD, Maya, and video editing of 8K and 12K video will become standard tasks. Apple will add accelerators to help with video encode/decode, but that only goes so far, as we have just found out with the HEVC encode/decode accelerators on the iPad Pro; the accelerators do help HEVC processing a lot; BUT THEY ONLY HELP WHEN USED FOR THE PURPOSE THEY WERE DESIGNED FOR, in other words, they are currently a waste of silicon for the AV1 format.

Why am I saying all of this? Because more than likely, the Mac Pro SoC is currently under design, and may even have initial samples for test and debug purposes.

The "consumer" SoC is already being fabbed up, and some volume has already been built. It better be, because manufacturing of the AS Macs will shortly be under way, if it hasn't started already, and those SoCs better be in inventory, or at least available in volume, before system assembly can start. Apple needs to have AS Macs available in inventory on the day that the AS Macs are made available. Also note, there is 3-4 weeks of transit time between China and North America. So assuming that the first AS Macs will be out in mid-Oct. to early-Nov., and taking 3 weeks out, means that volume (my estimate 500K units or so) must be on a boat no later than the end of Sept. to early Oct. Manufacturing, assuming that all goes smoothly, at 100K a week, must start in mid to late August. This makes the assumption that all goes smoothly, with acceptable board yields and zero surprises. Allowing for a slower start-up and some expected production glitches, and the first week or two of slow production (say 50K a week), means that AS Mac production is starting just about now. Which means that the first AS SoCs are available NOW, and may have been available a few weeks ago.
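
Working that schedule backwards, with the launch date, volume, and run rates all being my guesses:

Code:
# Back-scheduling from an assumed launch window. Dates, volumes and weekly
# run rates are this post's guesses, not known Apple plans.
from datetime import date, timedelta

launch        = date(2020, 10, 26)   # assumed mid-Oct to early-Nov launch
transit_weeks = 3                    # sea transit, China to North America
launch_volume = 500_000              # assumed units wanted at launch
steady_rate   = 100_000              # units per week once ramped
slow_rate     = 50_000               # first two weeks of ramp

on_boat_by = launch - timedelta(weeks=transit_weeks)
weeks_needed = 2 + (launch_volume - 2 * slow_rate) / steady_rate
production_start = on_boat_by - timedelta(weeks=weeks_needed)

print(f"Units on a boat by: {on_boat_by}")
print(f"Production starts:  {production_start} ({weeks_needed:.0f} weeks of build)")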

Obviously, I am attempting to "read the tea leaves", and my concepts may be incorrect. Some of them are going to be right, some will be very wrong. I don't see what I have written as being totally out of the question, just a different way of looking at the same problem and coming up with a different way of addressing it. I think the disagreement here is the actual implementation. Does Apple try to make a "Monster" SoC with everything on board, including the dGPU, or do they go more conservative with a separate SoC and a separate dGPU? There is no reason they couldn't implement a "Monster" SoC, but I contend they will not; it would concentrate a lot of heat into a small area, and there is room inside for separate packages, and even a dGPU on replaceable cards, even in laptops. This also opens up the possibility of options to build systems at various graphics performance levels. You believe that a "Monster" SoC is doable, and can be adequately cooled, even in a laptop. We'll see.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Well, they could take the well-travelled route and use more than one chip for highly parallel workflows. The Vega Duo springs to mind, and if that's not sufficient you use two of those cards. I cannot see the difference with using up to eight 100W AS SoCs for a highly parallel workflow. It is probably cheaper to build one "standard" high-end chip including the CPU, NPU, and GPU than having separate chips for GPU and CPU. That standard chip would be reasonably small for yield but still be a good "module" in a multi-chip setup. 100W might be a good target as that fits a high-end iMac.

I do not believe the Mac Pro was designed solely for Intel and AMD chips, but also for use with AS. The MPX modules are made for adding more AS chips in parallel as needed.

It is difficult to understand why Apple cannot use 100-300W chips when Intel, AMD and NVIDIA can.
 

Kostask

macrumors regular
Jul 4, 2020
230
104
Calgary, Alberta, Canada
Apple can do whatever they want. They are one of the few manufacturers to have shipped a liquid-cooled system (it didn't work out over time, but that is a different topic). I think Apple won't develop such a limited-application SoC when they have a substantial part of their product lineup that could take advantage of an SoC that is usable in more products.

The reason I don't think Apple will use a 100-300W SoC is that I believe Apple will develop ONE "High End Performance" (HEP) SoC. It will be somewhere below 40W. It will be used in the MacBook Pro 16", the higher-end iMacs (perhaps more than 1), and Mac Pros (definitely more than 1). This HEP SoC may be binned for higher clock speeds in the iMac Pro (if it continues) and Mac Pros. This is going to be a 16/24/32 HP-core SoC (I honestly don't have any idea which way they will go), no GPU on board, with an enhanced ML/AI/Neural Engine, and the logic to interface with an external GPU and PCIe, both for NVMe-type SSDs and PCIe slots, as well as logic to allow multiple SoCs to work together. 16" MacBook Pros get either the highest-performance "consumer" SoCs with iGPUs, or the HEP SoC with a dGPU. The iMac 24" gets the same. 30" iMacs get HEP SoCs with a dGPU, with a possible BTO to use two HEP SoCs (if the iMac Pro continues, or as the top 30" iMac). Mac Pros get 4 HEP SoCs, with at least one dGPU on an MPX card, with the possibility of adding more dGPUs on MPX cards, as well as the Afterburner accelerator.

This is the way I see the Apple SoCs shaking out. They will use higher-performance iGPUs that handily beat out any existing iGPUs (Intel or AMD). When demands exceed the abilities of the iGPUs, there will be dGPUs for those who need to use them and are willing to pay for them. This simplifies the SoCs down to two variants, simplifying manufacturing and, more importantly, increasing the volume of the HEP SoC. It would probably be very difficult to have any significant volume of a Mac Pro-only SoC, which implies extreme costs.
Why the multiple HEP SoCs on some systems? Because the workloads on those systems will have higher thread counts, and there are people running out of threads on the current Mac Pro (hyperthreaded 28-core Intel Xeon, 56 threads), so the upcoming Mac Pro better be able to have at least as many cores as there are currently threads on the present Mac Pro. 64 threads (4 x 16 HP SoCs, or possibly 2 x 32 HP-core SoCs), 96 threads (4 x 24 HP SoCs), or even 128 (4 x 32 HP-core SoCs) would find very happy buyers.
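
The core-count arithmetic behind that, spelled out (the SoC configurations are hypothetical):

Code:
# Thread counts of the hypothetical multi-SoC Mac Pro configurations above,
# versus the current 28-core Xeon W with hyperthreading. Core counts are speculative.
current_mac_pro_threads = 28 * 2     # 56 threads

for socs, cores in [(4, 16), (2, 32), (4, 24), (4, 32)]:
    threads = socs * cores           # assuming one thread per HP core, no SMT
    print(f"{socs} x {cores}-core HEP SoCs = {threads} threads "
          f"(current Mac Pro: {current_mac_pro_threads})")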
 
Last edited:

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Apple can do whatever they want. They are one of the few manufacturers to have shipped a liquid-cooled system (it didn't work out over time, but that is a different topic). I think Apple won't develop such a limited-application SoC when they have a substantial part of their product lineup that could take advantage of an SoC that is usable in more products.

The reason I don't think Apple will use a 100-300W SoC is that I believe Apple will develop ONE "High End Performance" (HEP) SoC. It will be somewhere below 40W. It will be used in the MacBook Pro 16", the higher-end iMacs (perhaps more than 1), and Mac Pros (definitely more than 1). This HEP SoC may be binned for higher clock speeds in the iMac Pro (if it continues) and Mac Pros. This is going to be a 16/24/32 HP-core SoC (I honestly don't have any idea which way they will go), no GPU on board, with an enhanced ML/AI/Neural Engine, and the logic to interface with an external GPU and PCIe, both for NVMe-type SSDs and PCIe slots, as well as logic to allow multiple SoCs to work together. 16" MacBook Pros get either the highest-performance "consumer" SoCs with iGPUs, or the HEP SoC with a dGPU. The iMac 24" gets the same. 30" iMacs get HEP SoCs with a dGPU, with a possible BTO to use two HEP SoCs (if the iMac Pro continues, or as the top 30" iMac). Mac Pros get 4 HEP SoCs, with at least one dGPU on an MPX card, with the possibility of adding more dGPUs on MPX cards, as well as the Afterburner accelerator.

This is the way I see the Apple SoCs shaking out. They will use higher-performance iGPUs that handily beat out any existing iGPUs (Intel or AMD). When demands exceed the abilities of the iGPUs, there will be dGPUs for those who need to use them and are willing to pay for them. This simplifies the SoCs down to two variants, simplifying manufacturing and, more importantly, increasing the volume of the HEP SoC. It would probably be very difficult to have any significant volume of a Mac Pro-only SoC, which implies extreme costs.
Why the multiple HEP SoCs on some systems? Because the workloads on those systems will have higher thread counts, and there are people running out of threads on the current Mac Pro (hyperthreaded 28-core Intel Xeon, 56 threads), so the upcoming Mac Pro better be able to have at least as many cores as there are currently threads on the present Mac Pro. 64 threads (4 x 16 HP SoCs, or possibly 2 x 32 HP-core SoCs), 96 threads (4 x 24 HP SoCs), or even 128 (4 x 32 HP-core SoCs) would find very happy buyers.
Whether it is 40W or 100W is beside the point, as we are talking about the same thing: a HEP. The important point is that the Mac Pro gets many SoCs in order to scale to highly parallel workflows, and there we are on the same page. You assume the GPGPU paradigm using a dGPU will persist. It is clear that Apple will explore new routes such as dedicated accelerators, like an NPU, an en/decoder (Afterburner), and possibly ray tracers, which reduce GPU usage. It will be expensive to create dedicated chips for each of these functions, but a generic HEP might be much cheaper, even if you need to buy too many CPU cores in order to get the other function you need. How much does an A12Z cost? $100? There is a lot of combined performance in 10 A12Zs, and $1,000 is peanuts.

The idea of a 40W standard HEP is attractive as it can be used in many products as a single SoC. However, a 100W chip covers the iMacs and Mac Pro, so the volume is not exactly low. I agree that a 40W part is more versatile, but it also requires that many more be linked together efficiently to form a compute cluster. A 1000W system would then be 25 40W SoCs.
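
The arithmetic there, made explicit (the chip cost and TDP figures are the assumptions from these posts, not real prices or specs):

Code:
# The "many cheap SoCs" arithmetic from the two posts above.
# The ~$100 A12Z figure and the 40W / 100W HEP parts are assumptions, not known costs.
a12z_cost_usd = 100
print(f"10 x A12Z at ~${a12z_cost_usd}: ${10 * a12z_cost_usd}")

for hep_tdp_w in (40, 100):
    print(f"1000W budget / {hep_tdp_w}W HEP SoC = {1000 // hep_tdp_w} SoCs")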
 

Waragainstsleep

macrumors 6502a
Oct 15, 2003
612
221
UK
no one in their right mind would buy an equivalent Windows ultrabook, unless for very specific software or something, or they absolutely hate macOS.

No one in their right mind absolutely hates macOS.

And yet they kept using a 2.5” laptop platter drive for a long time. With 5,400 rpm. That had nothing to do with making sense.

I suspect it made economic sense, but they kept it going too long and it was ultimately a stupid move to do so. It made them look very bad, shipping a machine that absolutely sucked for even the lowest-power basic usage when it came to real-world performance.

Apple will add accelerators to help with video encode/decode, but that only goes so far, as we have just found out with the HEVC encode/decode accelerators on the iPad Pro; the accelerators do help HEVC processing a lot; BUT THEY ONLY HELP WHEN USED FOR THE PURPOSE THEY WERE DESIGNED FOR, in other words, they are currently a waste of silicon for the AV1 format.

People seem to forget the A12Z can handle 3 streams of 4K at ~10W. I suspect these accelerators have some real space to grow too. Doesn't the T2 chip handle a fair bit of heavy video and audio lifting? That's basically an A10. Presumably the T3 or T4 will be an A12-based update capable of being much more handy, and then there's whatever dedicated accelerator modules they bolt onto the A14+ SoCs.


Why am I saying all of this? Because more than likely, the Mac Pro SoC is currently under design, and may even have initial samples for test and debug purposes.

The "consumer" SoC is already being fabbed up, and some volume has already been built. It better be, because manufacturing of the AS Macs will shortly be under way, if it hasn't started already, and those SoCs better be in inventory, or at least available in volume, before system assembly can start. Apple needs to have AS Macs available in inventory on the day that the AS Macs are made available. Also note, there is 3-4 weeks of transit time between China and North America. So assuming that the first AS Macs will be out in mid-Oct. to early-Nov., and taking 3 weeks out, means that volume (my estimate 500K units or so) must be on a boat no later than the end of Sept. to early Oct. Manufacturing, assuming that all goes smoothly, at 100K a week, must start in mid to late August. This makes the assumption that all goes smoothly, with acceptable board yields and zero surprises. Allowing for a slower start-up and some expected production glitches, and the first week or two of slow production (say 50K a week), means that AS Mac production is starting just about now. Which means that the first AS SoCs are available NOW, and may have been available a few weeks ago.

They for sure have more than a few samples in testing for future Mac Pros. The total development time per chip is supposedly 3 years and I suspect with a big one like this, it might be a touch longer.

I'm not sure Apple has to ship AS Macs as soon as they announce. Certainly that's the aim, it's something they do with almost everything these days, but I think they know a little leeway will be given with such a big change, so as long as they announce them before Xmas, no one will claim they missed their deadline. That said, I do expect them to ship before Xmas. I'm guessing the iPhone gets first dibs on the fab, so the A14 should be in stock in numbers, but there was a delay, so maybe it's still being churned out now.
You'd be surprised how much air freight Apple uses these days. Tim Cook once famously cornered every available cargo flight out of China one Xmas, which royally screwed Nintendo, who were then unable to ship whatever console they were in the process of launching and were pushed back a few months. So I heard, anyway.
The iMac boxes were slimmed down in 2012 with the thinner models so Apple could fit more of them on planes. CTO times have dropped substantially since 2010 or so as well.


It's definitely fascinating to see what they choose to do with the AS Mac Pro. I do think they will include a GPU with it; that way, professionals who don't need massive graphics don't have to pay for a big card if they don't want to. It might be nothing special, just whatever is in the iPhones should be enough to run an 8K display by then. Frees up a slot for something else. I'm thinking audio and data centre users will appreciate that.
I wonder if they will give it the capacity to work with standard PCI-E cards just in case they can't ramp their own GPUs up sufficiently. That way they can always call AMD last minute and throw in the latest Vega or Navi or whatever those will be in 2022.
For the people talking about the heat of a big SoC: I imagine the RAM will be moved off the Mac Pro SoC; there's no way you squeeze 2TB+ of RAM on there, after all. I'm guessing that frees up some surface area and a fair bit of heat. I know the RAM in some of the cheese grater Pros could cook your dinner for you under any kind of load.
Maybe that allows more room and thermal overhead for a better GPU?
 