And remember, the 16” with Intel still needs a dGPU, which is also power hungry.
My guess is they will release a version this round with just integrated graphics and maybe add a more robust dGPU later. The big question is whether they would ever go to Nvidia/AMD for a dGPU again, or whether they will only scale up their Apple silicon.

No, they will not go to Nvidia or AMD. They’ve been working on a separate GPU for a long time, which will sit in the same package as the CPU die.
The only machine that could conceivably involve a conventional third-party dGPU is the Mac Pro, and even there I seriously doubt it.
Ya, I agree with this, though the main problem is that to match top-tier GPUs they would need to scale up their AS chips by something like 30x for the Mac Pro.
For the 14"/16" though I think they will already be at around mobile 1050 to 1060 levels, not quite 2060/3080 mobile levels so probably near 0% chance they use any 3rd party for the laptops. They are only off by maybe 30-50% for those I think which they can probably hit by just scaling up.
For graphics rendering, I think Apple’s GPU using TBDR with UMA can outperform a lot of IMR dGPUs, as the PCIe bus bottleneck is holding the dGPUs back. That’s the reason why I think most high-end dGPUs have massive VRAM: to somewhat mitigate the bottleneck of the PCIe bus.
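To make the UMA point concrete, here's a minimal Metal sketch (assuming a Mac to run it on): on Apple silicon a shared buffer is directly visible to both CPU and GPU, where a discrete card would need an explicit upload across PCIe into its VRAM:

```swift
import Metal

// On Apple silicon the same MTLBuffer backing store is visible to both the
// CPU and the GPU, so there is no staging upload over PCIe to a card's VRAM.
let device = MTLCreateSystemDefaultDevice()!
print("Unified memory:", device.hasUnifiedMemory)  // true on Apple silicon

var vertices: [Float] = [0, 1, 0, -1, -1, 0, 1, -1, 0]

// .storageModeShared: one allocation, coherent for CPU and GPU.
let buffer = device.makeBuffer(bytes: &vertices,
                               length: vertices.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU can mutate the contents in place and the GPU sees the update with
// no blit/copy. With a dGPU you'd typically use .storageModeManaged or
// .storageModePrivate plus an explicit upload instead.
buffer.contents().assumingMemoryBound(to: Float.self)[0] = 0.5
```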
For the 14"/16" though I think they will already be at around mobile 1050 to 1060 levels, not quite 2060/3080 mobile levels so probably near 0% chance they use any 3rd party for the laptops. They are only off by maybe 30-50% for those I think which they can probably hit by just scaling up.
This new 16" macbook is gonna be ridiculousFor graphics rendering, I think Apple’s GPU using TBDR with UMA can outperform a lot of IMR dGPUs as the PCIe bus bottleneck is holding the dGPUs back. That’s the reason why I think most high end dGPUs have massive VRAMs, to mitigate somewhat the bottleneck of the PCIe bus.
Apple is most definitely upping the UMA bandwidth from the existing 68GB/s for the next Mac SoC, and I think it’ll be at least double what the M1 has. With more GPU cores and a larger pipe, graphics rendering prowess will increase. I see a lot of YouTube videos of games running on M1 Macs at 60 FPS, and many are through Rosetta 2. Quite impressive for an entry-level SoC. I would think most current games can achieve good frame rates with proper optimisation for the M1.
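The 68GB/s figure falls straight out of the memory configuration, so the arithmetic is easy to sketch (the LPDDR5 line is a hypothetical, not a leak):

```swift
// Peak bandwidth = transfer rate (MT/s) x bus width (bytes).
func peakGBps(mtPerSec: Double, busBits: Double) -> Double {
    mtPerSec * 1e6 * (busBits / 8) / 1e9
}

// M1: LPDDR4X-4266 on a 128-bit bus.
print(peakGBps(mtPerSec: 4266, busBits: 128))  // ≈ 68.3 GB/s

// Hypothetical follow-on: LPDDR5-6400 on a 256-bit bus would triple it.
print(peakGBps(mtPerSec: 6400, busBits: 256))  // ≈ 204.8 GB/s
```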
As for GPU compute, it’ll probably be trailing the top-end dGPUs like the RTX 3090, but Apple Silicon has other compute engines, like the NPU, which can already produce 11 TOPS according to Apple. IMHO it is not used at all at the moment by third-party developers, probably due to Apple’s limited APIs for taking advantage of it. From what I read, the NPU is essentially a matrix computation unit, so I imagine the NPU and the GPU of the Apple Silicon could be combined to achieve quite a high level of compute performance, without the overhead of PCIe bus memory copies.
This new 16" MacBook is gonna be ridiculous.

The M1 is already at 1050 mobile levels, and that’s using an 8-core chip.
For the 14"/16" though I think they will already be at around mobile 1050 to 1060 levels, not quite 2060/3080 mobile levels so probably near 0% chance they use any 3rd party for the laptops. They are only off by maybe 30-50% for those I think which they can probably hit by just scaling up.
Apple did release some new 68K models more than a year after the PowerPC transition began and carried them forward another year. It is worth noting, though, that they had scores of models with numbers and trim-code letters and goofy names; the purchase of NeXT brought the simplified naming scheme that did away with all that confusing nonsense.

The Intel transition was a line in the sand: no new PowerPC Macs appeared after the Intel models came out. It may also be worth noting that there were no "Pro" models during the PowerPC era, so they could well decide to drop "Pro" entirely. With Rosetta, I seriously doubt that there will be any new Intel Macs at all.

Agreed. At best they may offer updated configurations on a couple of lingering models (the Mac Pro is the likeliest), but they aren’t going to come out with new Intel machines.
Careful here. The NPU is a specialized processing unit, and its TFLOPS don't mean much in the grand scale of things, unless all you care about is multiplying limited-precision matrices. The NPU cannot be used for general-purpose computation. And 11 TFLOPS is nothing too impressive anyway: an RTX 2060 has over 50 "tensor" (matrix) TFLOPS.
Besides, the NPU is absolutely getting used by developers. It's just hidden behind Apple's APIs, and it's not really clear what it can do. It seems to be specialized for a subset of signal processing and won't run every ML model. Apple also has the AMX accelerators, another (more flexible?) matrix multiplication unit, and they support ML workloads on the GPU as well.
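For reference, this is roughly the extent of the public API surface: you don't program the ANE directly, you just tell Core ML which compute units it may use and it decides per layer. A minimal sketch (the model path is a placeholder):

```swift
import CoreML

// Developers never target the NPU directly; Core ML routes each layer to
// the ANE, GPU, or CPU depending on what the hardware supports.
let config = MLModelConfiguration()
config.computeUnits = .all          // allow ANE + GPU + CPU
// config.computeUnits = .cpuAndGPU // opt out of the ANE for unsupported ops

// Placeholder path; any compiled .mlmodelc works here.
let url = URL(fileURLWithPath: "/path/to/Model.mlmodelc")
do {
    let model = try MLModel(contentsOf: url, configuration: config)
    print(model.modelDescription)
} catch {
    print("Failed to load model:", error)
}
```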
I don’t think Apple is advertising 11 TFLOPS but 11 TOPS, so it could be integer ops as well. Anyway, I’m not familiar with these ops, but I just think the NPU has potential beyond ML-type applications. I understand most neural networks are matrix multiplications, so for applications in graphics or any field that needs high-throughput matrix computation, the NPU would be a good target to exploit.
I understand AMX is the CPU’s co-processor: if developers need low-latency matrix computation, it’ll be useful. But for high-throughput matrix ops it’s probably not suitable, as it’ll tie down the CPU with the matrix work. The NPU is likely useful where we need bulk matrix computation, but there’s a need to set up and program the NPU cores, so it’s not good for low-latency responses.
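The low-latency path really is just an ordinary library call. Accelerate's BLAS is widely reported to dispatch matrix math to the AMX units on Apple silicon (that routing is undocumented, so treat it as an assumption); there is no device setup at all:

```swift
import Accelerate

// Plain BLAS call from the calling CPU thread. On Apple silicon this is
// reported (not documented) to run on the AMX matrix units.
let m: Int32 = 4, n: Int32 = 4, k: Int32 = 4
let a = [Float](repeating: 1, count: Int(m * k))
let b = [Float](repeating: 2, count: Int(k * n))
var c = [Float](repeating: 0, count: Int(m * n))

// C = 1.0 * A * B + 0.0 * C
cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
            m, n, k, 1.0, a, Int32(k), b, Int32(n), 0.0, &c, Int32(n))

print(c.prefix(4))  // each entry is 8.0 (sum of 4 products of 1 * 2)
```

Contrast that with the Core ML sketch above, where work is handed to a framework and a scheduler decides where it runs.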
This makes me think that the NPU lacks flexibility and is only designed to accelerate a limited set of use cases, such as image classification and audio stuff.

It’s an awful waste of die space, though, if the NPU has limited uses.
The NPU is useful in some cases, and where it is useful it is REALLY good.