Using Jaguar cores is more of the limiting factor there, especially comparing the One S to the One X.
They all apparently use Jaguar cores, though, with custom GPUs and GDDR. Though those solutions and Apple's are both geared to be low power, Apple's low-power solution reached that level of fidelity without GDDR, and it did so two years ago. They've continued to improve performance in the A14 series for iPhone and iPad (we'll have a better idea tomorrow) and, since that two-year-old number is better than MOST of the laptops people are buying today, I'm thinking they'll be able to provide a suitable solution without HBM and GDDR.

TBDR significantly decreases the amount of actual math that has to be done by culling hidden geometry before drawing. That's one reason why TBDR solutions don't require as much muscle.
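For what it's worth, here's a minimal Swift/Metal sketch of one way that plays out in practice: on a TBDR GPU, transient attachments like the depth buffer can live entirely in on-chip tile memory and never get written to DRAM at all. The sizes and setup here are just illustrative placeholders, not anything Apple has said about their Macs.

```swift
import Metal

// Sketch only: a depth attachment kept in on-chip tile memory on a TBDR GPU.
// With .memoryless storage and a .dontCare store action, the depth data is
// created, used for hidden-surface removal, and discarded without ever
// consuming system-memory bandwidth.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

let depthDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .depth32Float,
                                                         width: 1920, height: 1080,
                                                         mipmapped: false)
depthDesc.usage = .renderTarget
depthDesc.storageMode = .memoryless          // tile memory only, no DRAM backing

let depthTexture = device.makeTexture(descriptor: depthDesc)

let passDesc = MTLRenderPassDescriptor()
passDesc.depthAttachment.texture = depthTexture
passDesc.depthAttachment.loadAction = .clear
passDesc.depthAttachment.storeAction = .dontCare   // never written back out to memory
```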
 
They all apparently use Jaguar cores, though, with custom GPUs and GDDR. Though those solutions and Apple's are both geared to be low power, Apple's low-power solution reached that level of fidelity without GDDR, and it did so two years ago. They've continued to improve performance in the A14 series for iPhone and iPad (we'll have a better idea tomorrow) and, since that two-year-old number is better than MOST of the laptops people are buying today, I'm thinking they'll be able to provide a suitable solution without HBM and GDDR.

TBDR significantly decreases the amount of actual math that has to be done by culling hidden geometry before drawing. That's one reason why TBDR solutions don't require as much muscle.
Supposedly, since (outside of Fortnite) we still haven't seen anything that actually rivals current-gen consoles in visual quality/detail or rendering techniques.
In the case of Fortnite, it isn't actually rendering the same thing the consoles are, so it isn't even a 1-to-1 comparison.

I suspect Apple will use LPDDR4X in AS Macs because bandwidth won't be at a premium at this time. Maybe the 16" MacBook Pro would require more oomph, but if they are making custom cores, any compute could be bolstered by custom ASICs instead of relying on GPGPU.
 
If you really want to know what kind of memory technology Apple will be using on the AS Macs, it is as simple as looking at the memory technology they will be using on the iPhone 12. While people on this board like to indulge in fantasies of HBM2 and GDDRx, the truth of the matter is that a guiding principle of supply management is to maximize the volume of fewer parts, rather than buying X amount of one part, Y amount of another part, and Z amount of a third part. Buying one part type allows for the cheapest price and flexibility in allocating parts. Tim Cook is a manufacturing guy and a supply management guy. He knows this stuff like he knows how to brush his teeth. Unless there is an overwhelming (not just good or compelling) reason, he is not going to deviate from that. Apple will probably ship 180-220M iPhones this year. The pricing for the RAM in these phones will be the best in the industry, by far. Why would you throw that possible cost advantage out for the AS Mac? And while there is speculation about GPUs being starved of RAM bandwidth, Apple will do what is cost-effective and still performs well, and that is to piggyback RAM technology off of the iPhone.

You are not wrong. I agree with you that Apple will try to optimize component supply over multiple devices. I am quite sure that their most popular laptops (13" MBP etc.) will use the same memory technology as the iPhones and the iPads. My point is, however, that higher-end Apple PCs will need faster RAM. Simple as that.
 
Why would Apple invest so much R&D into the new Mac Pro design, including MPX modules, and create a pro apps team within the last 2-3 years?

Apple spent quite a lot of money on Aperture and seriously promoted it. The product had a good market share and was popular. Then they dropped it. We can't know why they did this.

They could do the same thing with Final Cut Pro X: simply drop it, even if it is popular, and their reason for making the Mac Pro goes away. Seriously, no one buys an MP to run Pages or Safari; the MP is bought to run FCP or maybe Logic. If those apps went away, the MP would go with them.

Again, why spend so much to promote it, then drop it? They did exactly that in the past. My guess is that someone at Apple looked at the cost vs. profit.
 
I suspect Apple will use LPDDR4X in AS Macs because bandwidth won’t be at a premium at this time.

My bet is LPDDR5. The only concern is whether there is enough supply of LPDDR5.

Maybe the 16" MacBook Pro would require more oomph, but if they are making custom cores, any compute could be bolstered by custom ASICs instead of relying on GPGPU.

I doubt it's feasible or reasonable. Custom accelerators are great for specific, narrowly defined tasks, but compute has to cover a lot of ground. Sacrificing GPGPU compute in favor of custom accelerators would also cripple Metal, negatively impact existing applications that rely on compute, and make developing on Macs more difficult.
 
Apple spent quite a lot of money on Aperture and seriously promoted it. The product had a good market share and was popular. Then they dropped it. We can't know why they did this.

They could do the same thing with Final Cut Pro X: simply drop it, even if it is popular, and their reason for making the Mac Pro goes away. Seriously, no one buys an MP to run Pages or Safari; the MP is bought to run FCP or maybe Logic. If those apps went away, the MP would go with them.

Again, why spend so much to promote it, then drop it? They did exactly that in the past. My guess is that someone at Apple looked at the cost vs. profit.

I think that's a bit of a slippery-slope fallacy: you're extrapolating one event to the entire Mac Pro lineup (and, by extension, the professional market).

There are many theories and reasons Apple may have killed off Aperture: high maintenance, low profit, and honestly, most professionals at the time used Adobe Bridge + Photoshop, and later Lightroom. Technically, Apple discontinued iPhoto and Aperture and replaced them with Photos, even though I understand Photos was not a true replacement.


Again, the Mac Pro is not just for FCPX. Professionals use Adobe products, Maya, Blender, Xcode, Media Composer, Logic Pro X, Pro Tools, DaVinci Resolve, AutoCAD, and SketchUp, and graphic design artists use it for massive projects. There are hundreds of pro apps that professionals use on a Mac Pro every day outside of just FCPX.

Again, Apple had their chance to abandon the Mac Pro and their pro market share back in 2014. Many professionals like myself were torn on abandoning the Mac. But then Apple announced their pro apps team (this was after they abandoned Aperture) and a new commitment to the pro community with the iMac Pro, the Mac mini update, the Mac Pro, etc.

 
If you really want to know what kind of memory technology Apple will be using on the AS Macs, it is as simple as looking at the memory technology they will be using on the iPhone 12. While people on this board like to indulge in fantasies of HBM2 and GDDRx, the truth of the matter is that a guiding principle of supply management is to maximize the volume of fewer parts, rather than buying X amount of one part, Y amount of another part, and Z amount of a third part. Buying one part type allows for the cheapest price and flexibility in allocating parts. Tim Cook is a manufacturing guy and a supply management guy. He knows this stuff like he knows how to brush his teeth. Unless there is an overwhelming (not just good or compelling) reason, he is not going to deviate from that. Apple will probably ship 180-220M iPhones this year. The pricing for the RAM in these phones will be the best in the industry, by far. Why would you throw that possible cost advantage out for the AS Mac? And while there is speculation about GPUs being starved of RAM bandwidth, Apple will do what is cost-effective and still performs well, and that is to piggyback RAM technology off of the iPhone.
While they may do that on their lower-end Macs (essentially shipping them with an iPad Pro SoC/memory combination), such a memory subsystem doesn't scale further. An iPad Pro level of graphics performance is great for a 10W power envelope, but current iMacs and even MacBook Pros go far beyond that.
Even if we assume 128-bit wide LPDDR5 for the new iPad Pros, high-end MBP and iMac GPUs already offer 4-5 times more bandwidth. There is no way Apple can match the graphics performance they already offer in their products with an iPad Pro memory subsystem. And the rest of the industry aren't standing still either.

The reason we speculate as to what Apple will do regarding memory is a recognition of this dilemma. Apple's statements about unified memory and high performance levels require new memory solutions. Or, of course, they could confine themselves to graphics power way below the rest of the PC industry. 5nm lithography will let them keep up reasonably well in terms of computational resources while staying within sane power limits, but suffocating those resources by having them breathe through an itty-bitty memory straw doesn't make sense. They either have to limit their graphics performance to a bit better than their iPad Pros, or go with a higher-performance memory subsystem.

Pick one.
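For anyone who wants to sanity-check that "4-5 times" figure, here's the rough back-of-the-envelope math. The transfer rates are typical published numbers I'm assuming for illustration, not anything Apple has confirmed.

```swift
// Rough bandwidth math: GB/s = (bus width in bits / 8) * MT/s / 1000.
// The specific speeds below are assumptions for illustration only.
func bandwidthGBps(busWidthBits: Double, megaTransfersPerSec: Double) -> Double {
    return busWidthBits / 8.0 * megaTransfersPerSec / 1000.0
}

let lpddr4x128 = bandwidthGBps(busWidthBits: 128, megaTransfersPerSec: 4266)   // ≈ 68 GB/s
let lpddr5_128 = bandwidthGBps(busWidthBits: 128, megaTransfersPerSec: 6400)   // ≈ 102 GB/s
let gddr6_256  = bandwidthGBps(busWidthBits: 256, megaTransfersPerSec: 14000)  // ≈ 448 GB/s

print(gddr6_256 / lpddr5_128)   // ≈ 4.4, roughly the "4-5x" gap
```

A 256-bit GDDR6 card of the kind already shipping in higher-end Macs lands at roughly 4.4x a 128-bit LPDDR5 setup, which is exactly where the dilemma comes from.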
 
Even if we assume 128-bit wide LPDDR5 for the new iPad Pros, high-end MBP and iMac GPUs already offer 4-5 times more bandwidth. There is no way Apple can match the graphics performance they already offer in their products with an iPad Pro memory subsystem. And the rest of the industry aren't standing still either.

And even if they can save on required memory bandwidth for graphical applications with TBDR, there is always GPGPU compute, where you can't trick your way around it as much. They absolutely will need more, faster RAM on the higher end of their computers.

The reason we speculate as to what Apple will do regarding memory is a recognition of this dilemma. Apple's statements about unified memory and high performance levels require new memory solutions. Or, of course, they could confine themselves to graphics power way below the rest of the PC industry. 5nm lithography will let them keep up reasonably well in terms of computational resources while staying within sane power limits, but suffocating those resources by having them breathe through an itty-bitty memory straw doesn't make sense. They either have to limit their graphics performance to a bit better than their iPad Pros, or go with a higher-performance memory subsystem.

Spot on!
 
Apple's statements about unified memory and high performance levels require new memory solutions. Or, of course, they could confine themselves to graphics power way below the rest of the PC industry. 5nm lithography will let them keep up reasonably well in terms of computational resources while staying within sane power limits, but suffocating those resources by having them breathe through an itty-bitty memory straw doesn't make sense. They either have to limit their graphics performance to a bit better than their iPad Pros, or go with a higher-performance memory subsystem.
I wouldn't say "way below" the rest of the PC industry, because I'd put an iPad Pro against the top-selling "anything," primarily because "top selling" is likely going to mean "mobile" and "Intel graphics". :) I'd say it's surely way below the top end of the rest of the PC industry, but there's a huge gulf between "everyday" computers and that super high end that leaves a lot of room for Apple's TBDR to swim.
One of Apple's presentation slides says "High Efficiency DRAM", and I've read that the A12X was considered a sort of "poor man's 2.5D HBM" solution that uses LPDDR. So it may well be a new memory solution but, as Apple's in control of the ENTIRE widget and doesn't even have to consider using ANYTHING standard on the motherboard, they could well implement an in-house, HBM-like, power-efficient solution.
 
I wouldn't say "way below" the rest of the PC industry, because I'd put an iPad Pro against the top-selling "anything," primarily because "top selling" is likely going to mean "mobile" and "Intel graphics". :) I'd say it's surely way below the top end of the rest of the PC industry, but there's a huge gulf between "everyday" computers and that super high end that leaves a lot of room for Apple's TBDR to swim.
You're quite right that an iPad Pro level of graphics performance compares quite favourably against any integrated PC graphics. One of the reasons for that is that 128-bit wide LPDDR4X or (hopefully) LPDDR5 in the next iPad Pro provides higher bandwidth than the standard 128-bit wide DDR4 of current PCs. Add a bit of bandwidth savings from the TBDR architecture, and what seems to be a fair bit of on-chip cache as a cherry on top, and you're class-leading!
In the "integrated PC graphics class".
Which is why PCs use discrete graphics for anything but basic needs. Those scale over quite a span, so as you point out, there is a lot of middle ground to consider. LPDDR5 scales nicely to 256-bit wide if need be, and that would imply roughly low-to-mid discrete GPU performance from a hypothetical iMac next autumn. And hey, that's quite decent! But it is also no step up from what Apple already offers from AMD, and even less so from what will be the mainstream offerings from Nvidia and AMD at the same point in time.

I don't know Apple's view on this. I hope that they choose a design that scales further, and if that increases the BOM by $50, then so be it. But I guess there's a case to be made for staying lower cost, lower power, and lower performance. I just don't see that case being compelling for a computer that runs on mains power. For that specific segment of computers, performance is a bigger deal, and staying within a comfortable laptop power envelope seems needlessly confining. That's why we're speculating, isn't it? There is more than one way to skin this particular cat. 😀
 
I don't know Apple's view on this. I hope that they choose a design that scales further, and if that increases the BOM by $50, then so be it. But I guess there's a case to be made for staying lower cost, lower power, and lower performance. I just don't see that case being compelling for a computer that runs on mains power. For that specific segment of computers, performance is a bigger deal, and staying within a comfortable laptop power envelope seems needlessly confining. That's why we're speculating, isn't it? There is more than one way to skin this particular cat. 😀
I think it's more than just a bit of bandwidth savings. To me, HBM and GDDR both appear to be solutions to the problem of "Not only are we doing this outside of the CPU's memory, we have to get a LOT done, what with setting up the geometry AND doing all the math (including math on triangles that, in a later step, we're going to determine we don't need to display) AND shifting it to the display hardware." And this makes sense for modular PCs generally because, unlike with the Apple Silicon Mac, each vendor can focus on their specialty without handing over any market-changing IP. I think what we have the potential to see is efficiencies that were ALWAYS there in the PC form factor but were never adopted due to the requirement that every solution be as "interoperable" as possible.

I don't think they're avoiding performance for the sake of efficiency; I think if a developer is using Metal, they'll get the performance they'd expect from Apple's efficient solution. Sure, they could throw more watts at the problem just because it's plugged in, but, unlike those laptops with desktop GPUs for performance, they won't have to.
 
I think it's more than just a bit of bandwidth savings. To me, HBM and GDDR both appear to be solutions to the problem of "Not only are we doing this outside of the CPU's memory, we have to get a LOT done, what with setting up the geometry AND doing all the math (including math on triangles that, in a later step, we're going to determine we don't need to display) AND shifting it to the display hardware." And this makes sense for modular PCs generally because, unlike with the Apple Silicon Mac, each vendor can focus on their specialty without handing over any market-changing IP. I think what we have the potential to see is efficiencies that were ALWAYS there in the PC form factor but were never adopted due to the requirement that every solution be as "interoperable" as possible.

I don't think they're avoiding performance for the sake of efficiency; I think if a developer is using Metal, they'll get the performance they'd expect from Apple's efficient solution. Sure, they could throw more watts at the problem just because it's plugged in, but, unlike those laptops with desktop GPUs for performance, they won't have to.
How much does being TBDR matter when looking at it from a GPGPU/compute angle? Seems like bandwidth would still be important.
 
I think that's a bit of a slippery-slope fallacy: you're extrapolating one event to the entire Mac Pro lineup (and, by extension, the professional market).

Agreed. There’s a lot of work Apple put in specifically for the 2019 Mac Pro that makes no sense to invest in at all if you are also working on the Apple Silicon switch in parallel and won’t use it.

That said, I am now wondering a bit about the timing. It took a little over 2 years from the announcement that Apple was re-investing in the Mac Pro to shipping the 2019. If we assume they had plans for the Apple Silicon switch by that point, then I'm left wondering if an aspect of the work was also being sunk into the Apple Silicon version: making sure that whatever new stuff they developed for the 2019 was something the Apple Silicon folks could use and would integrate. It'll be interesting to see what comes out in the next 24 months, anyhow.

There’s no real technical reason Apple can’t still support eGPUs or dGPUs on Apple Silicon. Some of the changes described to the boot process make eGPU/dGPU support easier for Apple, not harder.

I'm not discounting that Apple could abandon AMD GPUs for business reasons, but the evidence so far is circumstantial at best. Apple's not helping the FUD situation by not being crystal clear on this topic, though Apple rarely is crystal clear, so who knows.
 
How much does being TBDR matter when looking at it from a GPGPU/compute angle? Seems like bandwidth would still be important.
I don't really think it will matter significantly. The main reason GPGPU is a thing is NOT because it's the best way to accomplish the task; the GPU was just sitting there not doing much, and folks figured out how to structure payloads for it to perform non-graphics work. Since it's their own processor, if they felt they needed high-performance matrix operations over and above what a TBDR architecture would be capable of, they could just include that hardware. Actually, the AI processor does data consumption and computation as well.

Either way, the cool thing about Metal is you define the work and the system figures out the best place for it to execute: the CPU, GPU, AI module, etc.
 
Metal only uses the GPU, unless Apple has changed things recently.

EDIT: To be clear, the Metal driver runs on the CPU, but there is no software fallback for Metal, and it does not use the Neural Engine or other fancy stuff.
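To illustrate: a Metal compute dispatch is always encoded against a GPU device; nothing in the API routes it to the Neural Engine or back to the CPU. Here's a bare-bones Swift sketch ("doubleValues" is a hypothetical kernel name, not a real shader from any library).

```swift
import Metal

// Sketch only: a Metal compute dispatch is encoded against a GPU device;
// the API never routes it to the CPU or the Neural Engine.
// "doubleValues" is a hypothetical kernel assumed to exist in the default library.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue = device.makeCommandQueue(),
      let library = device.makeDefaultLibrary(),
      let function = library.makeFunction(name: "doubleValues"),
      let pipeline = try? device.makeComputePipelineState(function: function)
else { fatalError("Metal setup failed") }

var input = [Float](repeating: 1.0, count: 1024)
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: [])!

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
```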
 
I'm not discounting that Apple could abandon AMD GPUs for business reasons, but the evidence so far is circumstantial at best. Apple's not helping the FUD situation by not being crystal clear on this topic, though Apple rarely is crystal clear, so who knows.
They've been relatively clear, though, haven't they? For example, they've said no discrete GPUs. However, since we don't know exactly how they plan to do that ("They can't outperform AMD!" or "They can't cut out eGPUs!" or "What about professional machines?"), we speculate that their solution will be something we're familiar with from the past, something that implies "no discrete GPUs" doesn't REALLY mean that.

Once it's available sometime in November, and if "no discrete GPUs" really means just that, it's not like we'll have learned anything new. They will have been saying "no discrete GPUs" the entire time; it just felt like they weren't being clear because they didn't tell us how!
 
Metal only uses the GPU, unless Apple has changed things recently.

EDIT: To be clear, the Metal driver runs on the CPU, but there is no software fallback for Metal, and it does not use the Neural Engine or other fancy stuff.
I must have misunderstood something from the presentation! I’ll have to go review and confirm exactly how I got off track, likely Metal for Machine Learning or some such.
 
I must have misunderstood something from the presentation! I’ll have to go review and confirm exactly how I got off track, likely Metal for Machine Learning or some such.

You might have gotten it backwards.

CoreML will route your machine learning models to where it makes sense. It can use Metal, the Neural Engine, or other available accelerators, so you don't have to code all that logic yourself for each back end the model might run on.

Metal itself is a graphics and GPGPU compute API, not a machine learning API.
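In other words, the hardware-picking happens a level up. Here's a quick Swift sketch of what that looks like with CoreML (MyModel is a hypothetical Xcode-generated model class, just for illustration):

```swift
import CoreML

// Sketch: CoreML, not Metal, decides which hardware runs the model.
// `MyModel` is a hypothetical generated model class used for illustration.
let config = MLModelConfiguration()
config.computeUnits = .all          // let CoreML pick CPU, GPU, or Neural Engine
// config.computeUnits = .cpuAndGPU // or restrict it if you need to

do {
    let model = try MyModel(configuration: config)
    // ... run predictions; CoreML routes the work to the chosen back end(s).
} catch {
    print("Failed to load model: \(error)")
}
```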
 
That said, I am now wondering a bit about the timing. It took a little over 2 years from the announcement that Apple was re-investing in the Mac Pro to shipping the 2019. If we assume they had plans for the Apple Silicon switch by that point, then I'm left wondering if an aspect of the work was also being sunk into the Apple Silicon version: making sure that whatever new stuff they developed for the 2019 was something the Apple Silicon folks could use and would integrate. It'll be interesting to see what comes out in the next 24 months, anyhow.

I have been speculating the exact same thing! My guess is Apple has been working on this transition for at least 2-3 years, or maybe even longer. I know with the Intel transition they had been working on it for years, and they weren't even having to design their own silicon. That would explain why it took them so long to come up with a solution for the Mac Pro. I have a lot of theories about it, but so many people wondered why it took them so long to basically create the cheese grater 2.0. Don't get me wrong, it's a great design, but it shouldn't have taken Apple engineers that long.

I think you're right. We all know that Apple Silicon can compete with notebooks and consumer desktops; the iPad Pro and even the iPhone are testament to that. But many like us have always wondered how this scales up to workstation-class workloads, and many have been skeptical that Apple can do this within 2 years' time. I think they've been working on this for much, much longer behind the scenes, and the new Mac Pro was designed with this transition in mind from the very beginning.
 
I don't remember Apple adopting cutting-edge technology that no one else uses (in a given context) unless said tech is user-facing. Yes, Apple was the first to adopt some expensive I/O and display techs, but that's different. Those make immediate selling points.
GDDR/HBM would have to make a big difference in performance for Apple to use them as main RAM, because they're not cheap. Apple isn't ready to lower their margins or reduce their sales for no good reason.
I can give you an example of that: In 2016, Apple was using cutting-edge (and probably customized) SSD technology in its laptops that gave them faster disk I/O than that of any competing stock computer.


And this didn't start in 2016; they were using cutting-edge SSD tech for a few years before that.
 