
innerproduct

macrumors regular
Jun 21, 2021
222
353
I remember last WWDC when we collectively absolutely believed we would get a Mac Pro reveal. Lol! How time flies! Apple actually managed to match the lack of updates from 2013 to 2017 with the 2019 Mac Pro. That's bloody amazing considering how messed up that was. But that time they actually apologised for the patheticness (is that a word? Should be trademarked by this fruit company by now).
 
  • Like
Reactions: ZombiePhysicist

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
Having abandoned Mac OS for Windows during roughly the G5 era, returning with C2D Macs, I agree with the spirit of your post. I would push back on a few things though:

No, it's *just* a variation of ARM that reinvents shared graphics memory, which was described by Apple as "vampire graphics memory" when they weren't using it. Just because an attractive hairstyle in a polo-neck shirt says it's "innovative" on a recorded slide deck, and a cottage industry of press-release regurgitators cosplaying as journalists repeats it, doesn't mean it is.

CPUs need very low latency access to main memory, but don't need / get high bandwidth (with only a few cores, they can only crunch through so much data). With thousands of cores, GPUs need massive memory bandwidth (so use on-board GDDR VRAM), but aren't so bothered about RAM latency (the data can just be streamed to VRAM in advance). So in a sense, their needs are polar opposites. Traditional 'vampire graphics' has resided on the CPU, and is therefore bandwidth starved. This is one of the main reasons iGPUs are weak - there would be no point making them powerful, because they'd be choked by memory bandwidth anyway.

Apple's approach is different in that the SoC has a very wide path to nearby LPDDR5 modules, providing similar bandwidth to a graphics card's VRAM. This would be an unnecessarily expensive approach for a traditional CPU, but does mean M-series unified memory is fundamentally different to that of a typical iGPU. It's both high bandwidth and low latency.
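The bandwidth gap is easy to put rough numbers on. A minimal sketch (nominal transfer rates only, ignoring real-world overheads; the M1 Max's advertised 512-bit LPDDR5-6400 interface is used as the wide-path example):

```python
# Theoretical peak memory bandwidth = (bus width in bits / 8) * transfer rate (MT/s).
# Nominal figures only; sustained real-world bandwidth is lower.

def peak_bandwidth_gbs(bus_width_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a given bus width and transfer rate."""
    return bus_width_bits / 8 * mt_per_s / 1000

# Typical dual-channel DDR5-4800 desktop (what a CPU and iGPU share):
print(peak_bandwidth_gbs(128, 4800))   # 76.8 GB/s

# M1 Max-style 512-bit LPDDR5-6400 interface:
print(peak_bandwidth_gbs(512, 6400))   # 409.6 GB/s -- VRAM-class
```

That roughly 5x difference is why a conventional iGPU is bandwidth-starved while the wide-interface approach isn't.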

As for it being "scalable", in comparison to what? IA-64 goes from embedded NAS processors to workstations and servers. "Minimally scalable" is the most accurate description of Apple Silicon at the moment: high-performance cellphone to low-performance computer is not the great breadth of scale you may think.

Low performance computer may be a bit unfair. As has often been the case with Apple, the CPU is strong - it's the GPU that's weak (including on the Ultra, given it's a £4K+ machine).

Except that X86 offers higher absolute performance at the top end, lower power draw for performance on mobile, and wider hardware & software compatibility.

Gods, it sounds like an absolute nightmare to which one is sentenced :eek:.

X86 hasn't caught up in terms of power draw on mobile has it? I was under the impression that most Windows laptops need to be on mains power to allow full performance, and aren't as long-lived on battery?

Why do you think the fate of the Mac will be any different on Apple Silicon to that which it experienced on m68k and PowerPC? It's still a minority platform, one that's now back to requiring an entire dedicated codebase (which never matures, because Apple are always changing things to keep developers "on their toes").

Yes, my concern too. Part of the issue with m68k and PPC was that each generation would start strong, then inexorably slide backwards vs. x86. ASi does have the advantage of being based on the architecture of the iPhone, which rakes in loads of cash and reliably gets an annual update. It also has a long track record of dominating the mobile industry in terms of performance. OTOH, a mobile phone isn't a PC, and Apple will always prioritise their main cash cow when it comes to architectural trade-offs.

Apple's problem during the x86 era was never that their computer processors weren't fast enough, and it was never that they drained their batteries too quickly. Apple's problem was consistently that they designed bad cooling solutions, that they included weak graphics, and made inefficient, low-performance & buggy software.

None of that is going to change as Apple removes the ability to directly compare their products with their competitors'.

Also my concern. ASi's power efficiency may avoid the need for substantial cooling solutions (iOS devices will always be passively cooled), though on the flip side, that may limit performance - especially with GPUs. I suspect the main reason for the Mac's traditionally weak GPUs has been Apple's refusal to compromise on slim / quiet form factors.

Which pro software industry? Graphics, Film & Video, Gamedev, anything to do with content creation, engineering, architecture, 3D, visualisation... none of these industries are tied to macOS or Apple Silicon. None of them were crying out for a new processor architecture to support.

Yes. Aside from FCP and Logic, pretty much everything else is cross-platform, and often faster / better featured on Windows (e.g. the Arnold renderer is GPU accelerated on the Windows version of Maya, but not on macOS).
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
@mattspace I think I understand where you are coming from. This is a certain kind of pragmatic approach that is shared by many users in the Mac Pro forums. It's also very understandable. You are professionals who have your specific needs and established workflows, and you want your equipment to do its job and stay out of your way otherwise. Software and hardware compatibility is the usual argument and "don't change what's not broken" is the primary idea.

The thing is, however, it's not about you or users with similar needs or concerns. It's about viable business strategies for a certain product vision. And this is the sole context I am concerned with. The fact — Apple has decided to pursue a certain computing paradigm which is fairly unique in the consumer market and requires adaptations from the software makers. Whether this paradigm is a good idea or a bad idea, or whether it inconveniences you — that doesn't matter, that's a different question and a matter for a separate discussion. What matters is that they made their decision and so far are committing to it (albeit moving slower than some would wish). That is obviously a very big risk for them, and while the initial response has been overwhelmingly positive, they are not out of the woods yet. The worst thing they can do now is split the platform. Maybe this entire Apple Silicon won't work out, who knows. But if they backtrack on their efforts the modern Mac will go the way of the Windows RT. Why develop for a platform that not even the platform owner takes seriously?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Low performance computer may be a bit unfair. As has often been the case with Apple, the CPU is strong - it's the GPU that's weak (including on the Ultra, given it's a £4K+ machine).

Not to mention that describing IA-64 as scalable is massively bending the truth. Intel's strategy is not feature unification but feature segregation, and this hinders software adoption. AVX-512, for example, was introduced in 2013 and is virtually unused by consumer software, because even the newest Intel consumer chips don't support it. Matrix acceleration instructions? Latest Xeons only. Apple, meanwhile, offers exactly the same hardware capability across the entire family, from the mobile phone to the compact workstation.
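Feature segregation shows up directly as a runtime-dispatch burden on software vendors. A minimal sketch of the pattern (illustrative Python stand-ins for vectorised kernels; real code would query CPUID or use compiler builtins):

```python
# Sketch of the dispatch burden feature segregation imposes: every vendor
# must ship a fallback path, because sharing an ISA family does not
# guarantee sharing its extensions.

def sum_squares_scalar(xs):
    """Portable fallback path."""
    return sum(x * x for x in xs)

def sum_squares_avx512(xs):
    """Stand-in for a hand-vectorised kernel; same result, assumed faster."""
    return sum(x * x for x in xs)

def make_kernel(cpu_has_avx512: bool):
    # Selected once at startup, after probing the CPU's feature flags.
    return sum_squares_avx512 if cpu_has_avx512 else sum_squares_scalar

kernel = make_kernel(cpu_has_avx512=False)  # e.g. a consumer chip without AVX-512
print(kernel([1, 2, 3]))  # 14
```

On a platform where every chip exposes the same features, the fallback path and the dispatch logic simply disappear.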

X86 hasn't caught up in terms of power draw on mobile has it? I was under the impression that most Windows laptops need to be on mains power to allow full performance, and aren't as long-lived on battery?

AMD made some very impressive advances in terms of power efficiency. Zen4 is probably only 20-30% less efficient than Apple Silicon at the same power draw per core. And since AMD uses more high-performance cores you can get an AMD laptop that is faster on multi-core workloads while nominally being in the same power category (the real power draw will obviously be 1.5-2x higher for 15-30% higher performance compared to M1).
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Also my concern. ASi's power efficiency may avoid the need for substantial cooling solutions (iOS devices will always be passively cooled), though on the flip side, that may limit performance - especially with GPUs. I suspect the main reason for the Mac's traditionally weak GPUs has been Apple's refusal to compromise on slim / quiet form factors.

Precisely! I think this is the actual concern (and a very real one!), with the rest being projection. People don't believe that Apple's approach can scale to HPC, which is why they advocate for using the proven x86 platform in the MP.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
Not to mention that describing IA-64 as scalable is massively bending the truth.
Not to mention that the post shows some ignorance about Intel architecture considering IA-64 refers to the failed Itanium and not x86-64 (or AMD64).
 
  • Like
Reactions: Mago

OhSEx

Suspended
May 27, 2023
8
5
Not to mention that describing IA-64 as scalable is massively bending the truth. Intel's strategy is not feature unification but feature segregation, and this hinders software adoption. AVX-512, for example, was introduced in 2013 and is virtually unused by consumer software, because even the newest Intel consumer chips don't support it. Matrix acceleration instructions? Latest Xeons only. Apple, meanwhile, offers exactly the same hardware capability across the entire family, from the mobile phone to the compact workstation.



AMD made some very impressive advances in terms of power efficiency. Zen4 is probably only 20-30% less efficient than Apple Silicon at the same power draw per core. And since AMD uses more high-performance cores you can get an AMD laptop that is faster on multi-core workloads while nominally being in the same power category (the real power draw will obviously be 1.5-2x higher for 15-30% higher performance compared to M1).
I’d be interested in your opinion on where this leaves Apple Silicon?

If traditional x86 machines are catching up in efficiency, and lead on performance overall, while having advantages in terms of flexibility, cost and wide software support, what does this mean for the future of ASi?

If I recall, when the M1 arrived, Apple had a healthy lead in efficiency while being competitive in single-core perf. How has this disappeared? Is it a case of Apple squandering their advantage, or have their competitors over-performed?
 
  • Like
Reactions: mattspace

mattspace

macrumors 68040
Jun 5, 2013
3,344
2,975
Australia
It's about viable business strategies for a certain product vision. And this is the sole context I am concerned with. The fact — Apple has decided to pursue a certain computing paradigm which is fairly unique in the consumer market and requires adaptations from the software makers. Whether this paradigm is a good idea or a bad idea, or whether it inconveniences you — that doesn't matter, that's a different question and a matter for a separate discussion. What matters is that they made their decision and so far are committing to it (albeit moving slower than some would wish). That is obviously a very big risk for them, and while the initial response has been overwhelmingly positive, they are not out of the woods yet. The worst thing they can do now is split the platform. Maybe this entire Apple Silicon won't work out, who knows. But if they backtrack on their efforts the modern Mac will go the way of the Windows RT. Why develop for a platform that not even the platform owner takes seriously?

No, the worst thing they can do, is produce another 2013 Mac Pro - a constrained, sub-standard-performance, locked down proprietary appliance. That's the worst they can do.

Saying "we discovered that for the Mac Pro, which has no typical user, and therefore needs to be able to be reconfigured for any use case, Apple Silicon isn't there yet, so we're keeping that machine on Xeon / Moving to Epyc, and keeping our compilers dual platform fat binary for the next 5+ years" would be literally the best thing they could do for the market who buy the Mac Pro because this market wants a predictable, reliable, consistent product strategy.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I’d be interested in your opinion on where this leaves Apple Silicon?

If traditional x86 machines are catching up in efficiency, and lead on performance overall, while having advantages in terms of flexibility, cost and wide software support, what does this mean for the future of ASi?

If I recall, when the M1 arrived, Apple had a healthy lead in efficiency while being competitive in single-core perf. How has this disappeared? Is it a case of Apple squandering their advantage, or have their competitors over-performed?

I don't think that the base equation has changed much. If you look at the single-core performance of the most recent mobile CPUs (M2 family, AMD Phoenix range, Intel Raptor Lake), they are pretty much the same. It's just that Apple needs 5-6 watts to get there, AMD needs roughly 10-15 watts, and Intel needs 20-25 watts. Apple still has a fairly commanding lead in perf/watt in this segment.
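To put those approximate power figures in perspective: if all three chips hit roughly the same single-core score, efficiency scales inversely with power draw (a hypothetical illustration using midpoints of the ranges quoted above):

```python
# Hypothetical illustration: same single-core score (normalised to 1.0),
# power figures are midpoints of the approximate ranges quoted above.
score = 1.0
power_w = {"Apple": 5.5, "AMD": 12.5, "Intel": 22.5}

perf_per_watt = {cpu: score / w for cpu, w in power_w.items()}

apple_vs_intel = perf_per_watt["Apple"] / perf_per_watt["Intel"]
print(round(apple_vs_intel, 1))  # 4.1 -- roughly a 4x single-core perf/watt lead
```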

There are a few things that promote the idea of x86 CPUs catching up. In no particular order:

- x86 CPU makers tend to be misleading with the power consumption figures; e.g. an Intel 45W CPU or an AMD 35W CPU will often consume significantly more power when running popular benchmarks
- both Intel and AMD have introduced new "mobile performance" power brackets, with significantly higher base TDP; Intel 13980HX for example has the base TDP at the same level as performance-class desktop CPUs just a few years ago, and it can draw over 150W under load; what we are seeing here is essentially desktop CPUs being rebranded as "mobile" to project an illusion of progress
- Intel is aggressively relying on lower-performance, area-efficient CPU cores to claim much-improved multicore performance; AMD uses their energy-efficiency advantage to clock their cores lower, thus achieving high nominal performance/watt; this works well for some applications and less well for others
- Apple's CPU performance gains from M1 to M2 have been rather conservative, suggesting stagnation; it doesn't look good for Apple if Intel claims a 50% improvement in a single generation and Apple only claims 20% (never mind that Intel also increased the TDP by 50%)

There are of course also real technological advances. AMD for example has moved to 5nm, which allowed them to boost the clocks without increasing the power consumption and fix their long-standing issues with subpar single-core performance (that is now on par with Apple or Intel). There are also some rumours that Intel is exceeding expectations with their new Intel 4 process and that the new mobile CPUs (to come out next year) might be very good. So who knows how the landscape will change.

The end result is a bit of a mixed bag (for Apple at least). They are still the leaders in perf/watt, but you can also buy an AMD laptop that will likely deliver better multi-core performance in your workload for cheaper (sure, you'll sacrifice some battery life and build quality, but many users won't care). In high-performance desktop, Apple Silicon is currently not really viable.

Where does it leave Apple Silicon? The way I see it, the thing to watch out for is the next product. There have been voices suggesting that Apple is stagnating and that x86 will outpace Apple Silicon; I don't find these arguments very convincing personally. However, if the next product release does not feature a new microarchitecture with significantly improved performance on both mobile and desktop, Apple might be in trouble.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
No, the worst thing they can do, is produce another 2013 Mac Pro - a constrained, sub-standard-performance, locked down proprietary appliance. That's the worst they can do.

Saying "we discovered that for the Mac Pro, which has no typical user, and therefore needs to be able to be reconfigured for any use case, Apple Silicon isn't there yet, so we're keeping that machine on Xeon / Moving to Epyc, and keeping our compilers dual platform fat binary for the next 5+ years" would be literally the best thing they could do for the market who buy the Mac Pro because this market wants a predictable, reliable, consistent product strategy.

If they are not ready, they can always move the AS MP release to next year. I don't see how that would be worse than throwing the brakes on their platform for 5+ years.
 

mattspace

macrumors 68040
Jun 5, 2013
3,344
2,975
Australia
Apple's approach is different in that the SoC has a very wide path to nearby DDR5 modules, providing similar bandwidth to a graphics card's VRAM. This would be an unnecessarily expensive approach for a traditional CPU, but does mean M-series unified memory is fundamentally different to that of a typical iGPU. It's both high bandwidth and low latency.

And every ARM vendor will have their own take on it within 12-24 months and it'll be just another ARM variant, with no real distinction in performance. Remember when Apple was going to become self-reliant for cellular modems, because they could do everything in-house better than everyone else? Now they have the multi-year customer deal with Broadcom.

If Apple's paradigm actually produced a GPU that was competitive in performance with traditional PCI GPUs, it would be wonderful. But that's not what's going to happen.

Standard PCI GPUs will get full-bandwidth, full-performance access to main system RAM in PC-land, to an extent that negates any advantage Apple's paradigm provides, before Apple builds an on-die GPU that outperforms contemporary mainstream high-performance PCI GPUs.

But that's been Apple's thing forever - very big on promises that become obsolete before the promised land is reached.
 
  • Love
Reactions: prefuse07

mattspace

macrumors 68040
Jun 5, 2013
3,344
2,975
Australia
If they are not ready, they can always move the AS MP release to next year. I don't see how that would be worse than throwing the brakes on their platform for 5+ years.

What do you think is going to change in 12 months, when the problem is likely to be a paradigm (eg the 2013 Mac Pro) that will never work in the market it's supposed to serve?

If they don't have an AS processor that can do standard RAM and GPUs this year, it's likely they won't have one next year, because they already knew that was the paradigm Mac Pro customers had been demanding post-2013. The market isn't going to change to suit Apple. The slotbox is the platonic ideal for a professional workstation, not some begrudging compromise that only persists because it's cheap.

The only reason people who don't want to upgrade their Mac Pros have the option of a Mac Pro at all, is because there are enough upgrade-requiring buyers to provide the critical mass necessary for the product to be viable. For some strange reason, the "studio on steroids" advocates seem to think that's the other way around.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
It's just a variation of ARM
Not even a variation
it's a scalable unified compute architecture that requires substantial software overhaul to be used effectively. Especially if we talk about the pro software.

It's just ARM v8.5a with a few undocumented instructions included for the coprocessors and x86-64 binary compatibility.

Apple M silicon (the actual silicon) IP is based on ARM Neoverse, "downgraded" to the ARM v8.5a instruction set, with a custom L2 cache and a non-ARM GPU (the GPU is not a Neoverse part).

Maybe with the M3 or M4, Apple will finally implement full ARM v9 and drop some of those undocumented extensions.

As a "unified" architecture, it's just a fancy rebrand of the old shared-memory architecture. There is zero architectural difference between Apple's APU and an Intel or AMD APU; both share cache and system RAM resources between the CPU cores and memory controllers, follow the same logical behaviour, etc.

And NO, unified memory is not main memory baked into the SoC die. It's just discrete memory modules, without a socket interface, connected directly to the SoC by copper or gold traces on the PCB. No Mac yet includes its RAM baked into the SoC, and neither does any AMD/Nvidia GPU. While GPU (and some APU) manufacturers include RAM in the SoC complex, that's a manufacturing solution, not an architectural question. In fact, with proper soldering/desoldering tools you can upgrade the memory on an AMD/Nvidia GPU, and likewise on an Apple Silicon system; if it were baked into the same silicon substrate, that would be impossible.

Further, it doesn't even prevent Apple from adopting socketed RAM in the future, even with the current M2 Pro and M2 Max SoCs.

The only thing Apple does differently is its monster, silicon-expensive L2 cache, which mostly remedies the performance drawbacks of shared RAM vs. dedicated RAM. It's not a silver bullet; some algorithms still require fully dedicated RAM to avoid bottlenecks. But for general purposes it works, and it's actually wonderful for cryptography.
 
Last edited:
  • Haha
Reactions: jdb8167

leman

macrumors Core
Oct 14, 2008
19,521
19,678
And every ARM vendor will have their own take on it within 12-24 months and it'll be just another ARM variant, with no real distinction in performance.

In the datacenter market, sure. That's the path both Intel and Nvidia took. But we are talking about chips that alone cost $10k+ (actually $30k+ for Nvidia). I don't see this coming to the consumer market any time soon.

If Apple's paradigm actually produced a GPU that was competitive in performance with traditional PCI GPUs, it would be wonderful. But that's not what's going to happen.

Their GPUs are competitive. They perform better than dGPUs with the same power consumption. And sure, Apple doesn't have a high-performance desktop GPU. Maybe they will make one, maybe they won't. It's not a matter of technology (which they have), it's a matter of making a business decision.

One thing is clear: if Apple does not intend to make desktop-class CPUs and GPUs, they should just give up on the Mac Pro.

Standard PCI GPUs will get full bandwidth full performance access to main system ram in PC-Land to an extent that negates any advantage Apple's paradigm provides, before Apple builds an on-die GPU that outperforms the mainstream contemporary high performance PCI GPUs.

I am puzzled that you believe the PC industry will solve a problem it hasn't even started tackling before a company that already has working technology. What magical PCI GPU technology with full-bandwidth access to main system RAM do you have in mind? PCIe 7 offers 242GB/s for a 16x interface; that's slower than my laptop had a year ago! And the PC industry is barely transitioning to PCIe 5...
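For context on those figures, a back-of-the-envelope sketch: per-lane PCIe transfer rate doubles each generation (32 GT/s at 5.0), so raw unidirectional x16 throughput works out as below. Encoding and protocol overhead shave off a few percent, which is roughly where the ~242GB/s usable figure for PCIe 7 comes from.

```python
# Raw unidirectional PCIe x16 bandwidth per generation.
# PCIe 5.0 runs at 32 GT/s per lane; each subsequent generation doubles it.

def pcie_x16_raw_gbs(gen: int) -> float:
    gt_per_lane = 32 * 2 ** (gen - 5)   # 32 GT/s at gen 5, doubling per gen
    return gt_per_lane * 16 / 8          # 16 lanes, 8 bits per byte

for gen in (5, 6, 7):
    print(gen, pcie_x16_raw_gbs(gen))   # 5: 64.0, 6: 128.0, 7: 256.0 GB/s
```

Even the raw PCIe 7 x16 figure sits well below the 400GB/s-class unified memory interfaces already shipping.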

By the time PCIe reaches speeds that negate any advantage of the UMA paradigm, Apple could have been shipping stacked SoCs with multi-tier hybrid RAM for years.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
It's just ARM v8.5a with a few undocumented instructions included for the coprocessors and x86-64 binary compatibility.

You guys are missing the forest for the trees. I am talking about the system; you keep zooming in on the CPU. Who cares about the CPU? Forget about the GPU. Look at Apple Silicon as a compute platform for solving problems. You have a bunch of compute modules with certain properties and capabilities, a shared memory hierarchy that offers certain guarantees, and a bunch of APIs. Solving a specific problem optimally will often require a different algorithmic approach on an Apple Silicon platform compared, say, to an Intel Xeon (AVX-512) platform with an Nvidia GPU (CUDA). That's what this is all about.
 

mattspace

macrumors 68040
Jun 5, 2013
3,344
2,975
Australia
In datacenter market, sure. That's the path both Intel and Nvidia took. But we are talking about chips that alone cost $10k+ (actually $30k+ for Nvidia). I don't see this coming to the consumer market any time soon.

There are plenty of ARM-based Linux computers out there. But still, AS is not going to be measurably ahead of Intel and AMD CPUs on performance or power consumption going forward. Apple got a jump, but that'll evaporate.

Their GPUs are competitive. They perform better than dGPUs with the same power consumption.

And if power consumption were important for a (high-performance) professional-use desktop system, that would mean something. But electricity is literally a rounding error in the costs of a professionally used system, once you take into account hardware costs, software costs, rent, etc.

They perform worse, in absolute terms, than NVidia and AMD's products for the same price.


And sure, Apple doesn't have a high-performance desktop GPU. Maybe they will make one, maybe they won't. It's not a matter of technology (which they have), it's a matter of making a business decision.

I disagree that they have the technology. I do not believe Apple can produce a GPU with equivalent performance, and sell it for an equivalent price, to those produced by Nvidia and AMD.
One thing is clear: if Apple does not intend to make desktop-class CPUs and GPUs, they should just give up on the Mac Pro.

Best case scenario, Apple is hit with a regulatory slam that breaks every tie that binds macOS to iOS / iPadOS / iCloud, and creatives can ditch Macs for Linux / Windows workstations, without losing any synergies.



I am puzzled that you believe the PC industry will solve a problem it hasn't even started tackling before a company that already has working technology. What magical PCI GPU technology with full-bandwidth access to main system RAM do you have in mind? PCIe 7 offers 242GB/s for a 16x interface; that's slower than my laptop had a year ago! And the PC industry is barely transitioning to PCIe 5...

Trends and forces will always bury the Great Men of history, and that's what will happen here. Microsoft is already doing direct GPU access to storage, and Nvidia is already doing direct CPU access to GPU VRAM.

To believe that one company will be able to maintain a significant performance advantage when arrayed against the entire industry is ahistorical.

While the rest of the industry may not do things the same way Apple does (perhaps an MPX-style PCB addition to PCI, which is just a connector to the system RAM), Apple has never managed to translate a unique new advantage into a long-term dominant position. Their bets on themselves do not, in general, pan out.

By the time PCIe reaches speeds to negate any advantage of the UMA paradigm Apple could shipping stacked SoCs with multi-tier hybrid RAM for years.

Insufficient multi-tier hybrid RAM, whose real-world performance delta over everyone else will be smaller than the inflexibility and engineered-obsolescence costs that come with it. That is Apple's cultural DNA.
 
  • Like
Reactions: prefuse07

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I disagree that they have the technology. I do not believe Apple can produce a GPU with equivalent performance, and sell it for an equivalent price, to those produced by Nvidia and AMD.

Equivalent performance? Not too difficult. Equivalent price? Unlikely; after all, Apple doesn't cut costs on memory interfaces or push the clocks beyond a reasonable thermal limit like gaming GPUs do. Then again, Nvidia's GPUs with wide RAM interfaces aren't cheap either.

Trends and forces will always bury the Great Men of history, and that's what will happen here. Microsoft is already doing direct GPU access to storage, and Nvidia is already doing direct CPU access to GPU VRAM.

Via the regular PCI-e bus, sure. But where should the "full bandwidth" you were mentioning come from?

To believe that one company will be able to maintain a significant performance advantage when arrayed against the entire industry is ahistorical.

There is no such thing as "the entire industry". There are different companies, which pursue different strategies and develop competing products. Apple has an advantage in certain key areas, and I don't see why they should lose those advantages any time soon. Nor do I see how their competitors are supposed to rapidly overcome them.

For example, Nvidia was the first to bring real-time ray tracing to the GPU market, and now, years later, they are still the undisputed leader. The same goes for operations on sparse matrices. Maybe next year someone (AMD, or Apple, or Intel) can catch up or even outmatch them, but that's not something you'd be willing to bet on, right? Apple has a proven track record of designing outstanding silicon, so your distrust of their ability to continue what they have done successfully for over a decade is hardly rational.
 

mattspace

macrumors 68040
Jun 5, 2013
3,344
2,975
Australia
Via the regular PCI-e bus, sure. But where should the "full bandwidth" you were mentioning come from?

Look at infinity fabric bridges on existing MPX modules. Same deal, but plugging into the motherboard.

For example, Nvidia was the first to bring real-time performing RT to the GPU market, and now, years later, they are still the undisputed leader.

Yeah, but Nvidia aren't analogous to Apple - People who might be buying AMD can buy Nvidia without changing their workflows - it's one market ecosystem. An advantage unique to Apple is a significant workflow change, and a separate market ecosystem. The x86/Win/Linux world isn't going to just let Apple walk away with a performance advantage (not that I think anything Apple does will produce an actual real-world performance advantage in the long run), whereas Apple's advances are primarily directed at increasing the lock-in of their ecosystem.
 
Last edited:

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
Look at infinity fabric bridges on existing MPX modules. Same deal, but plugging into the motherboard.

I envision the SoC (M3 Ultra/Extreme/whatever) on a daughtercard, flanked by two ASi (GP)GPUs; all three using this Infinity Fabric bridge on the backplane, with five PCIe slots above the three daughtercards...

Watch Apple tomorrow. I was tipped that the 15" M2 MBA is to launch tomorrow; being honest with you, I give it a 50% chance, while it's a 100% logical move.

If 15" MBA tomorrow, WWDC hardware announcements to be the Mixed Reality Headset & ASi Mac Pro...! ;^p
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
Precisely! I think this is the actual concern (and a very real one!), with the rest being projection. People don't believe that Apple's approach can scale to HPC, which is why they advocate for using the proven x86 platform in the MP.

X86 is not just proven, it's the standard for the vast majority of the computer industry. An industry that is filled with huge, competent, competing players, and one that the world relies on for its computing needs. The x86 ISA itself isn't especially significant either, with no particular technical detriment or advantage (other than massive, entrenched compatibility). Essentially, x86 is too big to stall, let alone fail. One way or another, progress is relentless.

As I see it, the Mac platform has two main problems. 1) It's second-fiddle to iOS, which dwarfs it in terms of revenue, and hence Apple's design priorities. I believe the entire Mac platform brings in less cash than a category called something like "AirPods + other" in the earnings reports. 2) Apple has demonstrated time and time again that they only make what they feel like making. If you want e.g. a reasonably priced, expandable desktop box, you can **** off as far as Apple are concerned. Whereas Dell or HP could never get away with that, Apple have no competition on their platform.

In summary, the concern is that ASi will always be based on an SoC architecture that primarily targets phones and tablets. Its design trade-offs naturally provide awesome power efficiency, but preclude scaling and expansion. And when weighing the choice between consolidating their entire product range on one closed platform, or being able to provide customers with competitive GPU solutions (something Apple wasn't that bothered about even on x86), it's not hard to guess their preference. As the first (and only) Mac that's not mobile or mobile-adjacent, the Mac Pro will be extremely revealing of Apple's future plans for ASi. Next week should be interesting!
 

theluggage

macrumors G3
Jul 29, 2011
8,015
8,449
Yeah, but Nvidia aren't analogous to Apple - People who might be buying AMD can buy Nvidia without changing their workflows - it's one market ecosystem. An advantage unique to Apple is a significant workflow change, and a separate market ecosystem.
NVIDIA are now making CPUs, and they sure aren't x86s for running legacy workflows:


There are two ways of looking at this: it's what Apple could do with Apple Silicon if they felt like investing a fortune on new silicon and breaking into the data centre market (the parallels with Apple Silicon are obvious...)... but they're not going to be able to compete at that level by super-gluing four M2 Pros together.

The question is, where does a $10k-$50k personal super-workstation like the 2019 Mac Pro fit in the marketplace when a $3k laptop can do what your Mac Pro was doing a few years ago and, if you need more, you can just rent an order of magnitude more computing power in the cloud?
 
  • Like
Reactions: NC12

mattspace

macrumors 68040
Jun 5, 2013
3,344
2,975
Australia
The question is, where does a $10k-$50k personal super-workstation like the 2019 Mac Pro fit in the marketplace when a $3k laptop can do what your Mac Pro was doing a few years ago and, if you need more, you can just rent an order of magnitude more computing power in the cloud?

No Apple laptop, nor any cloud solution, can do today what a workstation desktop can do today (or even what a gaming PC can do, and could do years ago) - drive a high-definition, high-frame-rate XR headset.

So there's a market fit - realtime reality simulation.
 