
mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
If someone tells me that a CPU is RISC or CISC, they don't really tell me anything. Besides, the CPU and the ISA are two different things.

This recent article at Chips and Cheese (arm-or-x86-isa-doesnt-matter) goes into quite a bit of detail, and echoes your comments. It was a good read and I learned a few things.

Basically, the ISA-specific stuff takes up a comparatively tiny area of the CPU, and the performance / efficiency differences between CPUs come down to their implementation rather than the ISA - it just depends on the markets they're optimised for. ARM has a reputation for efficiency due to its popularity in portable devices, but Intel's x86 Atom cores have been shown to be just as efficient as ARM cores when optimised for similar workloads. Both ISAs contain legacy instructions, but it doesn't really matter, as CPU designs are always optimised for the commonly used ones anyway.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
This recent article at Chips and Cheese (arm-or-x86-isa-doesnt-matter) goes into quite a bit of detail, and echoes your comments. It was a good read and I learned a few things.

Basically, the ISA-specific stuff takes up a comparatively tiny area of the CPU, and the performance / efficiency differences between CPUs come down to their implementation rather than the ISA - it just depends on the markets they're optimised for. ARM has a reputation for efficiency due to its popularity in portable devices, but Intel's x86 Atom cores have been shown to be just as efficient as ARM cores when optimised for similar workloads. Both ISAs contain legacy instructions, but it doesn't really matter, as CPU designs are always optimised for the commonly used ones anyway.

Good article, although I do not agree with every conclusion they draw. A smartly designed ISA can help the CPU avoid some work (no matter how you look at it, AArch64 decode is cheaper than x86 decode, contrary to what the article claims) and make other work easier. It has been observed, for example, that AArch64 seems to be designed to facilitate modern out-of-order execution and is particularly suited to optimisation via instruction pre-decoding (e.g. certain common patterns can be trivially detected and optimised). It also helps that AArch64 is a clean-slate redesign built from the ground up with modern processing in mind, rather than an evolution of a decades-old encoding. And sure, these things don't matter if you have an unlimited die area and power budget, but that's never really the case.
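To make the decode argument concrete, here's a toy sketch (in Python, with made-up byte streams and instruction lengths - nothing here is a real AArch64 or x86 encoding): with a fixed-width ISA, every instruction boundary is known up front, so a wide decoder can carve a fetch block into lanes independently; with a variable-length ISA, each boundary depends on decoding the instruction before it, which is inherently serial unless you add predecode or length-prediction hardware.

```python
# Toy illustration of fixed-width vs. variable-length instruction decode.
# The byte stream and the length rule below are invented for illustration;
# they are not real AArch64 or x86 encodings.

def fixed_width_boundaries(code: bytes, width: int = 4) -> list[int]:
    """Fixed-width ISA: every start offset is known immediately,
    so N decode lanes can each grab their own slice in parallel."""
    return list(range(0, len(code), width))

def variable_length_boundaries(code: bytes, length_of) -> list[int]:
    """Variable-length ISA: each start offset depends on the length of the
    previous instruction, so the scan is sequential (real decoders mitigate
    this with predecode bits in the I-cache or length prediction)."""
    starts, pc = [], 0
    while pc < len(code):
        starts.append(pc)
        pc += length_of(code, pc)
    return starts

if __name__ == "__main__":
    blob = bytes(range(32))
    print(fixed_width_boundaries(blob))        # [0, 4, 8, ..., 28]
    # Hypothetical length rule: "opcodes" below 0x80 take 2 bytes, others 5.
    fake_length = lambda b, pc: 2 if b[pc] < 0x80 else 5
    print(variable_length_boundaries(blob, fake_length))
```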

And these differences do show up in practice. The current crop of Intel's E-cores has very similar performance to the current crop of AMD's CPUs, but they still use around twice as much power.
 

maflynn

macrumors Haswell
May 3, 2009
73,682
43,740
What are the odds that Apple will move away from integrated graphics for their Mac Pro and iMac Pro?
Five pages of discussion, so I'm sure I've missed some things that I'll end up bringing up myself - bear with me, and sorry if I'm rehashing stuff; I'm late to the party.

One basic advantage of a discrete GPU is that it's replaceable. If you have a desktop computer that came with a lower-performing GPU, the consumer/hobbyist/professional can easily replace it. As the PC industry aged and Nvidia/AMD invested in R&D, the biggest winners for powerful GPUs were gamers - until very recently, with miners and, of course, offloading processing to Nvidia's CUDA cores.

My point is, what advantage does Apple have if they produce a discrete GPU? Also, where will they plug it in? Their board design lacks the expansion slots PC motherboards have. They'll have to design a PCIe-like slot and have it connect directly to the CPU. Why not keep improving the on-die GPU, which has the advantage of tight interconnection?
 

mi7chy

macrumors G4
Oct 24, 2014
10,622
11,294
I don't have any special relationship with Maier, I just appreciate his advice. I will accept that you didn't mean to offend, and apologize for being overly harsh myself.

I accept your apology for being overly sensitive and protective of Maier when stating facts about his résumé experience.

Keller is the guy who can't stay in one place for more than a few years

Did you copy and paste this from Maier, because he made the exact same statement about Keller?
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
I accept your apology for being overly sensitive and protective of Maier when stating facts about his résumé experience.
You're like the MacRumors mascot, @mi7chy - this place wasn't the same without you when you took your two unplanned vacations.
Did you copy and paste this from Maier, because he made the exact same statement about Keller?
No, that was entirely my statement. He defended Keller when I criticized him.
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
My point is, what advantage does Apple have if they produce a discrete GPU? Also, where will they plug it in? Their board design lacks the expansion slots PC motherboards have. They'll have to design a PCIe-like slot and have it connect directly to the CPU. Why not keep improving the on-die GPU, which has the advantage of tight interconnection?
That's basically been the argument going on since the M1 debuted. Some folks think that Apple will follow the cadence of Apple Silicon thus far, and that the next Mac Pro will basically be a doubling of the M2 Ultra - an M2 "Extreme" with four Max dies - plus a handful of PCIe slots.

From what I gather, the evidence for this is the rumors from Gurman, who doesn't mention dGPUs but says the Mac Pro will follow that same pattern. There's also a poster on here who leaked the Mac Studio a week before it was announced, and who evidently has a friend who receives test boards. He got some hands-on time with an Apple Silicon Mac Pro prototype that had a single PCIe slot on it, and tested an AMD graphics card, but it didn't work.

On the other side are folks who believe that Apple will build a machine to match the 2019 Mac Pro in expandability and features. The evidence I've seen pointed to is the rumor of "Lifuka" from the China Times from two years ago. We don't really know much about it, whether it was a codename for on-die graphics or a separate project. Nor do we know if Apple ever plans on releasing it as a product, or if it even really existed, since this is a single source.

The other bit of proof is the apology tour that Apple did back in 2017. This is used to substantiate how important Apple considers the pro market to be and that they wouldn't abandon them with the next Mac Pro.

A lot of big brain people have been debating this repeatedly on this forum and there's been no consensus. Despite being a small brain person, I have my own thoughts, for what it is worth. The Mac is a much different product than it was five years ago. Apple Silicon has significantly changed things, and Apple has put its weight behind a highly integrated approach. I don't think they are going to invest the resources into the next Mac Pro to make it as expandable as the previous version. The 2019 Mac Pro was a result of Intel's design philosophy, not Apple's. Apple just made the case and MPX modules, while the rest was Intel's high-volume Xeon platform. The Apple Silicon Mac Pro is going to be a niche among a niche.

I don't say that with any joy - I just purchased a Mac Pro three weeks ago and am currently waiting for a 6900 XT to arrive today to put inside it. I just think that's the most likely outcome, and that the 2023 Mac Pro will be an M2 "Extreme", basically a quad-die Max, with a handful of PCIe slots for non-GPU additions.

Again, just my thoughts on the matter, I could very well be wrong.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
The evidence I've seen pointed to is the rumor of "Lifuka" from the China Times from two years ago. We don't really know much about it, whether it was a codename for on-die graphics or a separate project. Nor do we know if Apple ever plans on releasing it as a product, or if it even really existed, since this is a single source.

I'm quite sure that "Lifuka" referred to the A15/M2 GPU. Since there has been no new GPU developer information for the A16, I'd expect it to be a copy of the A15's.

A lot of big brain people have been debating this repeatedly on this forum and there's been no consensus. Despite being a small brain person, I have my own thoughts, for what it is worth. The Mac is a much different product than it was five years ago. Apple Silicon has significantly changed things, and Apple has put its weight behind a highly integrated approach. I don't think they are going to invest the resources into the next Mac Pro to make it as expandable as the previous version. The 2019 Mac Pro was a result of Intel's design philosophy, not Apple's. Apple just made the case and MPX modules, while the rest was Intel's high-volume Xeon platform. The Apple Silicon Mac Pro is going to be a niche among a niche.

There is a lot of merit in that line of thought I think. Chasing after the Mac Pro market will be very expensive and unlikely to generate profit. Would Apple do it merely for prestige? Or maybe they have some other secret plan, who knows...

What I am most curious about is whether the upcoming Apple Silicon will continue the horizontal scaling theme of the M1 series or whether we will see some vertical scaling and feature differentiation. As I mentioned before, Apple has a lot of thermal headroom and could dramatically improve desktop performance just by revisiting their TDP targets. But then again, most of their revenue comes from laptops, so I wouldn't be surprised if that's what they focus on, giving up the high-performance desktop market altogether. Time will tell.

P.S. There is also speculation that Apple might be working on another type of high-power/high-performance core, which would make its debut in their prosumer hardware. That would make some sense to me, but who knows.
 
  • Like
Reactions: Colstan

maflynn

macrumors Haswell
May 3, 2009
73,682
43,740
A lot of big brain people have been debating this repeatedly on this forum and there's been no consensus. Despite being a small brain person
LOL, from one small brain person to another, I think the proof will be in the pudding. Once Apple gets off the schneid and deals with the last vestiges of the Intel Mac line, we'll have a better understanding.
and Apple has put its weight behind a highly integrated approach
That's my thinking as well. Especially given the continued growth towards cloud-based services, there are fewer and fewer reasons to have upgradeable storage. GPU and other components are harder to pin down enough to say the integrated approach is better, but I do think Apple's strength is designing integrated products that are more or less sealed.
 
  • Love
Reactions: Colstan

spaz8

macrumors 6502
Mar 3, 2007
492
91
I think the 4x Max "Extreme" combo is also the most likely option for what the AS Mac Pro 8,1 will be. Part of me wonders why it's taking so long to show up if that's the case, though. Perhaps it's very naive to think you could pretty much stack two Mac Studio Ultras together?

That said... I'm not sure Apple can compete with Nvidia or even AMD GPUs if they go that way and don't have some sort of additional GPU-centric processor. Is AS only in the same zip code as Nvidia, graphics-processing-wise, because Apple is currently ahead in die shrinks? I guess the next question is: does Apple care to compete in that space, or is the graphics compute good enough for 90% of Apple users? I assume Apple can't keep bolting chips together as a strategy to keep up on GPU performance, as I expect the gap to widen.

Then again, Nvidia etc. are getting their gains by upping the wattage each cycle... so perhaps they are going to hit a wall?
 

Colstan

macrumors 6502
Jul 30, 2020
330
711
I'm quite sure that "Lifuka" referred to the A15/M2 GPU. Since there has been no new GPU developer information for the A16, I'd expect it to be a copy of the A15's.
That's what I assumed, but I'm just repeating what others have claimed to support their position.
But then again, most of their revenue comes from laptops, so I wouldn't be surprised if that's what they focus on, giving up the high-performance desktop market altogether.
I think it depends on what you define as "high performance". An RTX 3060 is a perfectly performant graphics card. That's the level that most people buy at, and I don't see why Apple can't continue to match the mid-range without targeting the insane halo products that almost nobody buys. It's not like the M1 Ultra is a slouch, even though it doesn't actually get close to the 3090.

I think this is going to be a simple calculation done by bean counters. It's one thing to shove a Xeon and Radeon cards into a pretty case. (I've got one right next to me, it's really, really pretty.) It's another to depart from their current design strategy just for a single low-volume halo product. My assumption is that they will try to design an Apple Silicon Mac Pro that will hit as many of their target markets as possible, and unfortunately, sacrifice those who need ultra performance and expandability.

I'm just not sure that market is important enough to Apple. While never confirmed, I think that the iMac Pro was originally supposed to replace the Mac Pro, instead of being a failed one-off.

Don't get me wrong, I'd love a pimped out Mac Pro with all the bells and whistles, featuring new and interesting technologies to delight us. What I think we can all agree upon is that it would be nice if Apple would finally announce the damn thing so we can get off this hamster wheel.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
I think it depends on what you define as "high performance". An RTX 3060 is a perfectly performant graphics card. That's the level that most people buy at, and I don't see why Apple can't continue to match the mid-range without targeting the insane halo products that almost nobody buys. It's not like the M1 Ultra is a slouch, even though it doesn't actually get close to the 3090.

Oh, absolutely, and I fully expect the Apple GPU performance to improve faster than other GPUs. Apple is a relative newcomer in this segment after all and there are surely more tricks they can pull.

My assumption is that they will try to design an Apple Silicon Mac Pro that will hit as many of their target markets as possible, and unfortunately, sacrifice those who need ultra performance and expandability.

I'm just not sure that market is important enough to Apple. While never confirmed, I think that the iMac Pro was originally supposed to replace the Mac Pro, instead of being a failed one-off.

Yes, this is a realistic scenario. But then again, given all the resources Apple puts into optimising Blender... they have no chance of positioning Apple Silicon as the machine of choice for 3D artists at the current performance level (not even on mobile), yet they still focus on Blender. It kind of tells me that something big is coming. Hardware RT is a certainty, but maybe there is more.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I assure you that whether it's TSMC N3 or N3E, Apple will be the first to implement it. N4 was still part of the N5 family and not a real node jump like N3E.


N3: there is a decent chance, at least among the large, 'name recognition' players. There are other folks out there who are not looking for spotlights.

N3E is far more wishy-washy. There are a fair number of players who looked at N3 and said "heck no, not going to touch that". If they skip N3, they actually have a quicker path to N3E (and skip some of the extra complexity of N3 - some folks don't want FinFlex). Apple puts some of their SoCs on non-technical, arbitrary release schedules (every September for the iPhone, for no technical reason whatsoever). That's the same reason MediaTek got to N4 at least a quarter before Apple did.

The deeper Apple goes in exploiting the unique features of N3, the higher the chance they'll skip N3E and pick up another N3-family variant that is better aligned with that rabbit hole.
 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
N3: there is a decent chance, at least among the large, 'name recognition' players. There are other folks out there who are not looking for spotlights.

N3E is far more wishy-washy. There are a fair number of players who looked at N3 and said "heck no, not going to touch that". If they skip N3, they actually have a quicker path to N3E (and skip some of the extra complexity of N3 - some folks don't want FinFlex). Apple puts some of their SoCs on non-technical, arbitrary release schedules (every September for the iPhone, for no technical reason whatsoever). That's the same reason MediaTek got to N4 at least a quarter before Apple did.

The deeper Apple goes in exploiting the unique features of N3, the higher the chance they'll skip N3E and pick up another N3-family variant that is better aligned with that rabbit hole.
My understanding is that few companies, if any, are actually going to use N3. It seems it was more of a proving ground; TSMC saw that there were production bottlenecks and moved forward with the N3E family, as it's producing better yields (in testing). Most volume orders will be for one of the N3E family.

Apple MAY be the only one using N3, depending on where they "locked in" on that development timeline.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
That's my thinking as well. Especially given the continued growth towards cloud-based services, there are fewer and fewer reasons to have upgradeable storage.
Yet the iOS devices come with better specs, including larger and larger storage... and oh, all those MBP YouTubers + video pros... might they also move to the cloud?
 

maflynn

macrumors Haswell
May 3, 2009
73,682
43,740
Yet the iOS devices come with better specs, including larger and larger storage
Macs come with larger and larger storage - like the iPhone, it's soldered on. It's not upgradeable, but I'm saying it's less important today than in the past.

those MBP YouTubers + video pros... might they also move to the cloud?
Definitely, or do what many professionals do and use NAS devices.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
Macs come with larger and larger storage - like the iPhone, it's soldered on. It's not upgradeable, but I'm saying it's less important today than in the past.


Definitely, or do what many professionals do and use NAS devices.
Doesn't matter if it's soldered or not. If we're moving to the cloud, why bother with all these CPU speeds and storage increases and GPU configs? Apple might as well make simple front-end devices then.
 

maflynn

macrumors Haswell
May 3, 2009
73,682
43,740
Doesn't matter if it's soldered or not. If we're moving to the cloud, why bother with all these CPU speeds and storage increases and GPU configs? Apple might as well make simple front-end devices then.
Hey, whatever you say.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
That's my thinking as well. Especially given the continued growth towards cloud-based services, there are fewer and fewer reasons to have upgradeable storage. GPU and other components are harder to pin down enough to say the integrated approach is better, but I do think Apple's strength is designing integrated products that are more or less sealed.
Upgradeable storage is useful in data intensive applications, where you sometimes need a lot of fast disk space for temporary files.

I often use AWS i4i.16xlarge instances in my work. If I were in the market for a workstation, I would currently look for something similar. It's effectively half a computer. The CPU is ~30% slower single-threaded and ~30% faster multi-threaded than the M1 Ultra. There is 512 GiB of RAM, 4x 3750 GiB of SSD (4x 4 TB in marketing terms), and 37.5 Gbps of network bandwidth. Storage is probably the component that would need upgrading first, as the largest amount of disk space I've needed for a single job has already been ~12 TB.
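As a small aside on the storage figure (plain unit arithmetic, nothing specific to that instance type): 3750 GiB per drive works out to roughly 4.03 TB in decimal units, which is why 4x 3750 GiB gets marketed as 4x 4 TB.

```python
# GiB is binary (2**30 bytes); the "TB" on a spec sheet is decimal (10**12 bytes).
per_drive_bytes = 3750 * 2**30
print(per_drive_bytes / 10**12)       # ~4.03 TB per drive
print(4 * per_drive_bytes / 10**12)   # ~16.1 TB across all four drives
```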

The price gap between high-end consumer desktops and proper workstations has so far been steep enough that I haven't seriously considered getting a workstation. The Mac Studio was kind of promising, but its RAM and storage capacities are too low for my purposes.
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
My point is, what advantage does Apple have if they produce a discrete GPU? Also, where will they plug it in?

A PCIe slot...?

Their board design lacks the expansion slots PC motherboards have.

So you have seen the board for the as-yet-to-be-revealed ASi Mac Pro...?

They'll have to design a PCIe-like slot and have it connect directly to the CPU. Why not keep improving the on-die GPU, which has the advantage of tight interconnection?

Or, and hear me out here, they could just use a PCIe slot (for an add-in GPGPU card)...

There is a lot of merit in that line of thought I think. Chasing after the Mac Pro market will be very expensive and unlikely to generate profit. Would Apple do it merely for prestige? Or maybe they have some other secret plan, who knows...

What I am most curious about is whether the upcoming Apple Silicon will continue the horizontal scaling theme of the M1 series or whether we will see some vertical scaling and feature differentiation. As I mentioned before, Apple has a lot of thermal headroom and could dramatically improve desktop performance just by revisiting their TDP targets. But then again, most of their revenue comes from laptops, so I wouldn't be surprised if that's what they focus on, giving up the high-performance desktop market altogether. Time will tell.

P.S. There is also speculation that Apple might be working on another type of high-power/high-performance core, which would make its debut in their prosumer hardware. That would make some sense to me, but who knows.

New SoCs for the ASi Mac Pros...

Higher clocks, higher power limit...

A higher ratio of GPU cores to CPU cores...

Yes, this is a realistic scenario. But then again, given all the resources Apple puts into optimising Blender... they have no chance of positioning Apple Silicon as the machine of choice for 3D artists at the current performance level (not even on mobile), yet they still focus on Blender. It kind of tells me that something big is coming. Hardware RT is a certainty, but maybe there is more.

Work on Blender is to show all the other DCC software developers what can be done if the software is optimized for the hardware and the TBDR model...?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Work on Blender is to show all the other DCC software developers what can be done if the software is optimized for the hardware and the TBDR model...?

It's not that simple. First, Apple rolls out a fully featured, state-of-the-art RT API with capabilities that clearly target production renderers. Then they spend significant manpower on the showcase software. It's not just to show that their GPUs are "fast" (because, quite honestly, no matter how they optimize it, the M1 will never match Nvidia's offerings). Apple is clearly after the market itself. And that doesn't make any sense unless they have the hardware.
 
  • Like
Reactions: singhs.apps

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
Yes. There are still benefits to that model due to new GPU techs Apple is likely not going to implement in the near or medium term. Denying it is foolish.
I can tell you still haven't entered Steve Jobs' reality distortion field. There was no benefit in waiting on Intel in the past, and there won't be any benefit in becoming dependent on other tech companies in the future. Apple already showed us how they plan to scale their silicon technology. This is the way!

[Image: Apple M1 chip family lineup]
 

spaz8

macrumors 6502
Mar 3, 2007
492
91
Hmm, looking at the chip lineup above makes me wonder if the M2 "Extreme" will actually be an individual chip/SoC larger than the M1 Max, and we'll see two of those bolted together in the Mac Pro vs. two Ultras (four Maxes)... though I assume 4x Max is the more economical way, production-wise.
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
Also, the Max dies in the Ultra are flipped on the x-axis. For some reason I always pictured them mirrored on the y-axis.

Any particular insight on this?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Also, the Max dies in the Ultra are flipped on the x-axis. For some reason I always pictured them mirrored on the y-axis.

Any particular insight on this?

How would it work out otherwise? You take two of the same chip and align them along the connector. Basically, one chip has to be rotated 180 degrees. You can't just vertically "mirror" it - it wouldn't be the same chip.
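A quick way to see this, with a toy sketch (made-up pad coordinates, nothing to do with the real Max die): a 180° in-plane rotation is a rigid motion (determinant +1), so the very same physical chip can be placed that way with its connector edge facing its twin, whereas a single-axis mirror flips handedness (determinant -1), which no in-plane movement of one die can produce - you'd need a physically different, mirrored chip and a second mask set.

```python
# Toy geometry: rotating one die 180 degrees vs. mirroring it across the y-axis.
# Pad coordinates are invented purely for illustration.
import numpy as np

ROT_180 = np.array([[-1.0, 0.0],
                    [0.0, -1.0]])   # 180-degree rotation, det = +1 (rigid motion)
MIRROR_Y = np.array([[-1.0, 0.0],
                     [0.0,  1.0]])  # mirror across the y-axis, det = -1 (flips handedness)

pads = np.array([[3.0, 1.0], [3.0, 2.0], [3.0, 3.0]])  # connector pads on the right edge

print(np.linalg.det(ROT_180))    # +1.0 -> achievable by physically turning the same die
print(np.linalg.det(MIRROR_Y))   # -1.0 -> would require a different, mirrored die

print(pads @ ROT_180.T)          # pads land on the left edge, in reversed order along it
print(pads @ MIRROR_Y.T)         # pads land on the left edge too, but only a mirrored chip could do this
```

In other words, the two Max dies in an Ultra are the same design, just rotated, and the interconnect edge has to be laid out so its pads still line up with a rotated copy of themselves.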
 