
When will there be an ARM Mac that exceeds a 2019 Mac Pro in performance?

  • 2 Years

    Votes: 102 64.6%
  • 4 Years

    Votes: 31 19.6%
  • 6 Years

    Votes: 6 3.8%
  • 6 Months

    Votes: 11 7.0%
  • 8 Years

    Votes: 8 5.1%

  • Total voters
    158

konqerror

macrumors 68020
Dec 31, 2013
2,298
3,701
The *entire point* of Apple investing in their own silicon designs is that they control the entire stack top to bottom. They’re simply not going to outsource a higher end design.

First, it's flawed to think that Apple doesn't outsource. The majority of Apple's chips today are licensed IP. Things like the video codecs, secure element, AOP, and Lightning controller, and even the Neural Engine, are licensed from third parties.

Fundamentally, the ARM architecture is licensed IP. Apple has to pay good money to use that.

Second, it costs a huge amount of money to engineer these things; I don't think you understand how much. Half of the cost of each Intel chip goes to paying for engineering.

I simply cannot see Apple having the budget to engineer a server-class chip for an insignificant sliver of their sales, particularly when they can buy it.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Probably doesn't. If it already existed there would be no reason for the 2-year transition timeline.
Most likely Apple is going to do the Mac Pro "work" last (just like it came last after the iMac Pro work was done first in the 2016-2018 timeframe). And that would likely mean next year, to roll something out the year after.

Perhaps something faster than the 8-12 core models with standard-configuration RAM capacity. But triple-digit RAM installed and 28+ cores? Probably not now. Maybe a better chance by the end of this year. Something running early in 2021? Yes.

Apple shifted to Apple Silicon in part to chase after the enclosures they wanted to chase the most: thinner laptops, desktop performance in a Mac mini case, perhaps thinner iMacs. The Mac Pro enclosure doesn't have many constraints on it. The power supply feeding it is already at the limit of a normal household plug. macOS can't handle more than 64 threads anyway (and how would more than 64 threads benefit iOS devices? It wouldn't, so probably not high up on the list). The Mac Pro represents the place where there was the least problem with the x86-64 solutions. Put AMD on the table and there's not really much of a problem at all in the 2020-2022 timeframe, CPU-wise.

So since it is the least 'pressed to flip' CPU, it is probably the last one Apple is going to work on. It is far more strategically critical for the Mac ecosystem that Apple gets the others (laptops -> basic iMac) 100% correct than to expand too quickly into the iMac Pro - Mac Pro zone and suffer a miss on one of the other systems.

Pretty good chance the system won't show up until the second half of 2022 (if not Q4 2022) if it goes through the broad spectrum of 3rd-party and integration validations that workstation/server-class hardware typically goes through. Apple's track record over the last 10 years in the Mac Pro space says nothing about super-duper-speed delivery at all.

Even with 2 years out, it’s not hard to imagine A15 chipsets are making their way around a lab today for testing and prototyping.

It’s most definitely not the final design, but more so today than before, hardware needs to exist to start optimizing around, and the like.

I’m almost positive it exists in varying forms of stability.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
First, it's flawed to think that Apple doesn't outsource. The majority of Apple's chips today are licensed IP. Things like the video codecs, secure element, AOP, and Lightning controller, and even the Neural Engine, are licensed from third parties.

Fundamentally, the ARM architecture is licensed IP. Apple has to pay good money to use that.

Second, it costs a huge amount of money to engineer these things; I don't think you understand how much. Half of the cost of each Intel chip goes to paying for engineering.

I simply cannot see Apple having the budget to engineer a server-class chip for an insignificant sliver of their sales, particularly when they can buy it.

That is a fairly flat assumption, especially if you compare real-world performance. Apple may pay for patents and pay to license instruction sets, but they aren’t slapping together “what’s on the truck” into their phones.
 

konqerror

macrumors 68020
Dec 31, 2013
2,298
3,701
That is a fairly flat assumption, especially if you compare real-world performance. Apple may pay for patents and pay to license instruction sets, but they aren’t slapping together “what’s on the truck” into their phones.

Lightning, for example, is nothing but a USB OTG controller licensed from Synopsys. You can see it in iPhone console logs. The Neural Engine was bought from CEVA. I forget who the video codecs were bought from, but someone figured it out at one time.

Don't believe Apple's marketing BS. Most of their chip is licensed IP.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Lightning, for example, is nothing but a USB OTG controller licensed from Synopsys. You can see it in iPhone console logs. The Neural Engine was bought from CEVA. I forget who the video codecs were bought from, but someone figured it out at one time.

Don't believe Apple's marketing BS. Most of their chip is licensed IP.

You mean “Apple uses a third-party controller to implement USB over Lightning”. They certainly didn’t license their in-house designed connector from anybody. And Lightning supports much more than USB. As to the Neural Engine, yes, Apple used to integrate CEVA designs (or at least, that’s what is widely believed), but they replaced them with their own design in the A12. Apple CPUs are entirely their own designs, and the GPUs are based on PowerVR stuff with major parts replaced by Apple’s own designs.

While one can argue that many elements of Apple SoCs are originally based on third-party IP, I don’t see how this is relevant to this discussion. The fact is that Apple designs these things themselves nowadays; they don’t buy them from anybody.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Fundamentally the ARM architecture is licensed IP. Apple has to pay good money to use that.

They pay for the ISA license; they don’t use ARM chip designs, they have much better ones of their own.

Second, it costs a huge amount of money to engineer these things; I don't think you understand how much. Half of the cost of each Intel chip goes to paying for engineering.

I simply cannot see Apple having the budget to engineer a server-class chip for an insignificant sliver of their sales, particularly when they can buy it.

Except that Apple is sitting on a pile of cash bigger than the GDP of some countries. You are forgetting that Apple does not have to compete with anyone. They are not selling chips. They can absolutely afford to blow a ton of money on making a workstation chip just for an “in your face“ moment. It does not have to be profitable. It would be a powerful statement and it would cement the perception of Apple caring about the pro market. That alone would make it worth it for them.

And I highly doubt they would license server ARM chips to sell as their own. Apple has too much of their own proprietary stuff in their SoCs, like their custom DSP and AEM ISA extensions. Not to mention that Apple’s architecture is more performant than that of Marvell or any other ARM chip currently on the market. Finally, workstations and servers have different performance requirements. A server chip is a poor fit for the Mac Pro.

At this time they attempted to position the A12Z as more powerful than the vast majority of laptops (ignoring any GPU comparisons), with only a demo of Shadow of the Tomb Raider running in Rosetta 2 emulation at 1080p to back it up.


The A12Z performs within a 10% margin of the flagship Ice Lake and has significantly faster graphics than any currently shipping iGPU. Given that most laptops use lower-end parts, the statement that the A12Z is faster is technically accurate.
 
Last edited:

Waragainstsleep

macrumors 6502a
Oct 15, 2003
612
221
UK
They've done that with Intel multiple times.

When they did this with Intel it was a collaboration. They told Intel what they wanted in a chip and Intel indulged them, particularly early in their relationship, and hence we got the MacBook Air. I don't think they ever claimed to have designed an Intel chip, just to have participated in the process, which they do with a whole lot of tech: Thunderbolt, USB, MPEG; I don't pretend to know all of them, but it's lots.

First, it's flawed to think that Apple doesn't outsource. The majority of Apple's chips today are licensed IP. Things like the video codecs, secure element, AOP, and Lightning controller, and even the Neural Engine, are licensed from third parties.

Fundamentally, the ARM architecture is licensed IP. Apple has to pay good money to use that.

Everyone outsources (or more accurately, licenses IP); you don't have a choice in this industry. It's so critical that there are plenty of technologies the owners are required to license, because you can't compete without that tech. If you knew how many patents were involved in one Mac these days you'd be stunned. Apple probably owns less than half of them, and that's probably higher than any other manufacturer of computers in the world.
Apple rebrands other tech, but they typically write their own drivers for it and build on it somewhere so it works the way they need it to. Plus, Apple customers don't care about dry 802.XX standards numbers. Give them a summary of what it does and a friendly name so they can remember to ask for it on their next Mac. This is not a bad thing; it's how technologies get adopted more widely.

Second, it costs a huge amount of money to engineer these things; I don't think you understand how much. Half of the cost of each Intel chip goes to paying for engineering.

I simply cannot see Apple having the budget to engineer a server-class chip for an insignificant sliver of their sales, particularly when they can buy it.

Apple has the budget to do literally anything they want. They were looking at building cars, FFS; you don't think they can stretch from small CPUs to bigger CPUs? They could probably lobby the government enough to get permission to buy you if they wanted to. Their cash reserves are monstrous. They could have bought Catalina as marketing for their OS if they wanted, then flown their UFO HQ there and renamed it AppleLand or Jobsylvania. Maybe iCatalina.



Apple is sitting on a pile of cash bigger than the GDP of some countries.

Probably most countries.

You are forgetting that Apple does not have to compete with anyone. They are not selling chips. They can absolutely afford to blow a ton of money on making a workstation chip just for an “in your face“ moment. It does not have to be profitable. It would be a powerful statement and it would cement the perception of Apple caring about the pro market. That alone would make it worth it for them.

If Apple builds a server CPU that smashes anything else on the market, they will gain market share (they'd probably need to allow Linux to run on it). The G5 Xserves did quite well because they offered features no other maker did. The Intel ones never lived up to them because Dell and HP offered the same hardware, updated more often, at half the price.

If Apple makes a new Xserve with their own uniquely specced CPUs, they could break back into that space, even if the primary selling point is power consumption. Data centres would love to cut their power and cooling requirements significantly. If they can do the same job with 4 Mac minis as they can with one 3U monster from Dell or Sun, they may well be happy to do so.

And I highly doubt they would license server ARM chips to sell as their own. Apple has too much of their own proprietary stuff in their SoCs, like their custom DSP and AEM ISA extensions. Not to mention that Apple’s architecture is more performant than that of Marvell or any other ARM chip currently on the market. Finally, workstations and servers have different performance requirements. A server chip is a poor fit for the Mac Pro.

Beyond power consumption, Apple can offer more bespoke features if they choose to. If they adopt the much-hyped chiplet structure, making the components on their dies very modular, it may be that someone will want to build a supercomputer with as much Apple Neural Engine power as possible, so Apple could build a chip with 10x the Neural Engine hardware on it, if they order enough of them. Just an example. I'm sure there are plenty of features researchers would love to request on a CPU. If there is a market, Apple can now build a Mac to fit that market in a way that no other PC maker can. That's why they did Apple Silicon, and it's where they've been spending their mountains of cash for the last decade or so.
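A small aside on what tapping "Neural Engine power" looks like from the software side today: a minimal Core ML sketch, assuming a hypothetical compiled model file ("SomeModel.mlmodelc"); whether the work actually lands on the ANE is decided by the system, not the app.

Code:
import CoreML
import Foundation

// Minimal sketch: Core ML lets an app request any available compute unit
// (CPU, GPU, or Neural Engine); the actual scheduling is up to the OS.
// "SomeModel.mlmodelc" is a hypothetical compiled-model path, not a real file.
let config = MLModelConfiguration()
config.computeUnits = .all   // allow CPU, GPU, and Neural Engine

let modelURL = URL(fileURLWithPath: "SomeModel.mlmodelc")
if let model = try? MLModel(contentsOf: modelURL, configuration: config) {
    print("Loaded model: \(model.modelDescription)")
}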
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Lightning, for example, is nothing but a USB OTG controller licensed from Synopsys. You can see it in iPhone console logs. The Neural Engine was bought from CEVA. I forget who the video codecs were bought from, but someone figured it out at one time.

Don't believe Apple's marketing BS. Most of their chip is licensed IP.

For full transparency, I think it’s important to acknowledge that a lot of this stuff does come from other places. At the same time, it’s also important to take into consideration the fact that even though some of these things may have been licensed in some form or another, it does appear that at some point Apple can and will make modifications to the originally licensed design.

Where they are, and how pronounced they are? I think that’s a much bigger question.

Considering the CPU in my 2018 iPad Pro is about 90% as fast as my 16-inch MacBook Pro, I think that does a good job of showing that. The fact that it crushes the Surface Pro X is also a good indicator, considering those are both ARM.
 
  • Like
Reactions: FriendlyMackle

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Even with 2 years out, it’s not hard to imagine A15 chipsets are making their way around a lab today for testing and prototyping.

A15 for the phone (iPhone 13, or whatever if they skip '13')? Sure. First, because it isn't 2 years out; it's more pragmatically close to 1 year out (they'd have to start production of the A15 in June-July to hit the September release).
Second, it is a substantially simpler SoC. There won't be:

64 (or more) PCI-e v4 lanes on it.
It won't handle 100+ GB DIMMs, or RAM capacities topping out in quad-digit gigabytes.
It won't handle ECC memory.
It won't handle SATA (more PCI-e lanes and a 3rd-party discrete controller, probably).
It won't handle wired Ethernet.
It won't handle discrete Thunderbolt v4 controllers.
etc.


The title of this thread isn't the minimal bar. The first post specifically questions whether this is going to cover the top-end range of the current Mac Pro.

Some render model sized to iPad Pro RAM capacity isn't really the metric. The workload is either a "host cores only, very large render model working space" one (20+ cores & 384+ GB) or one that needs lots of cores to keep 3-6 large discrete GPUs fed with commands. So no, the iPhone 13 phone chip probably isn't going to drive that.

The fact that the A15 (and the future A-series) is on a hard 12-month cycle deadline leads precisely into why Apple probably is not pulling the Mac Pro forward in the transition plan. It is a bigger job but probably doesn't have top priority on resources, so it will get worked on as folks have "extra time". If the A15 or A16 gets off track and Apple needs resources to fill gaps, then more than likely people will be pulled off the super-low-volume Mac Pro SoC if that helps close the gap. The Mac Pro SoC probably isn't going to push the A-series out of the way on physical resources either (sim time, FPGA time, etc.).

Apple has worked on the A1_X chips only on some process shrinks over the last couple of cycles: A12X (skip A13), A14X, etc. The Mac Pro SoC probably would be on an even slower cycle (personally, I wouldn't be shocked if it were every 4 years, but perhaps around every 3 years; if the product isn't being updated every year, then they're not going to work on an SoC every year. The iMac Pro hasn't moved since 2017, about 3 years now; MP 2013-2019 was 6 years; MP 2010-2013 was 3 years. It is highly likely going to be a long-cycle product).



It’s most definitely not the final design, but more so today than before, hardware needs to exist to start optimizing around, and the like.

Optimizing? So what happened with the Mac Pro class CPUs from 2014-2016 that Apple skipped; was there optimizing around those? Does Apple optimize USB output in macOS? Does Apple maximize OpenCL throughput relative to other operating systems' results? Given the foul-ups, bloops, and blunders of the last couple of macOS releases, Apple should be more concerned about 'working' than 'optimized'.

I wouldn't be surprised if there was an iMac SoC 'design mule' board designed to drop into a Mac Pro case for convenient camouflage (along the lines of stuffing a modified iPad Pro board inside a Mac mini case): two 8-pin connectors for aux power, one standard PCI-e slot for an AMD prototype add-in card (so they don't have to embed the pre-production AMD GPU on the board, and one slot gets rid of PCI-e v4 retimer needs when there's a relatively huge gap between the CPU package and the primary GPU slot), and 64-128GB of RAM. That system probably would have a decent chance of covering part of the lower half of the performance spectrum the Mac Pro currently covers.

However, a small-scale board stuffed inside a Mac Pro case is a huge stretch to classify as a Mac Pro prototype.


I’m almost positive it exists in varying forms of stability

The A-series is not the Mac Apple Silicon series. The Mac SoCs are, at minimum, going to have different 'stuff' wrapped around the basic building blocks in the SoC.

To lower R&D costs on the Mac Pro SoC, Apple probably will borrow whatever they can get away with from the A15. But that also means they should finish off the design after the major kinks are worked out of the A15 (since they're leveraging "hand-me-down" design elements). Consequently, that means it will most likely tape out substantially later than the A15 does.

Putting the 3-4x bigger 5nm SoC on more mature fab process tech a year later will also help control costs. It is way more expensive to roll out the biggest dies on the most bleeding-edge fab tech.
 
  • Like
Reactions: BigSplash

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
A15 for the phone (iPhone 13, or whatever if they skip '13')? Sure. First, because it isn't 2 years out; it's more pragmatically close to 1 year out (they'd have to start production of the A15 in June-July to hit the September release).
Second, it is a substantially simpler SoC. There won't be:

64 (or more) PCI-e v4 lanes on it.
It won't handle 100+ GB DIMMs, or RAM capacities topping out in quad-digit gigabytes.
It won't handle ECC memory.
It won't handle SATA (more PCI-e lanes and a 3rd-party discrete controller, probably).
It won't handle wired Ethernet.
It won't handle discrete Thunderbolt v4 controllers.
etc.

The title of this thread isn't the minimal bar. The first post specifically questions whether this is going to cover the top-end range of the current Mac Pro.

Some render model sized to iPad Pro RAM capacity isn't really the metric. The workload is either a "host cores only, very large render model working space" one (20+ cores & 384+ GB) or one that needs lots of cores to keep 3-6 large discrete GPUs fed with commands. So no, the iPhone 13 phone chip probably isn't going to drive that.

The fact that the A15 (and the future A-series) is on a hard 12-month cycle deadline leads precisely into why Apple probably is not pulling the Mac Pro forward in the transition plan. It is a bigger job but probably doesn't have top priority on resources, so it will get worked on as folks have "extra time". If the A15 or A16 gets off track and Apple needs resources to fill gaps, then more than likely people will be pulled off the super-low-volume Mac Pro SoC if that helps close the gap. The Mac Pro SoC probably isn't going to push the A-series out of the way on physical resources either (sim time, FPGA time, etc.).

Apple has worked on the A1_X chips only on some process shrinks over the last couple of cycles: A12X (skip A13), A14X, etc. The Mac Pro SoC probably would be on an even slower cycle (personally, I wouldn't be shocked if it were every 4 years, but perhaps around every 3 years; if the product isn't being updated every year, then they're not going to work on an SoC every year. The iMac Pro hasn't moved since 2017, about 3 years now; MP 2013-2019 was 6 years; MP 2010-2013 was 3 years. It is highly likely going to be a long-cycle product).

Optimizing? So what happened with the Mac Pro class CPUs from 2014-2016 that Apple skipped; was there optimizing around those? Does Apple optimize USB output in macOS? Does Apple maximize OpenCL throughput relative to other operating systems' results? Given the foul-ups, bloops, and blunders of the last couple of macOS releases, Apple should be more concerned about 'working' than 'optimized'.

I wouldn't be surprised if there was an iMac SoC 'design mule' board designed to drop into a Mac Pro case for convenient camouflage (along the lines of stuffing a modified iPad Pro board inside a Mac mini case): two 8-pin connectors for aux power, one standard PCI-e slot for an AMD prototype add-in card (so they don't have to embed the pre-production AMD GPU on the board, and one slot gets rid of PCI-e v4 retimer needs when there's a relatively huge gap between the CPU package and the primary GPU slot), and 64-128GB of RAM. That system probably would have a decent chance of covering part of the lower half of the performance spectrum the Mac Pro currently covers.

However, a small-scale board stuffed inside a Mac Pro case is a huge stretch to classify as a Mac Pro prototype.

The A-series is not the Mac Apple Silicon series. The Mac SoCs are, at minimum, going to have different 'stuff' wrapped around the basic building blocks in the SoC.

To lower R&D costs on the Mac Pro SoC, Apple probably will borrow whatever they can get away with from the A15. But that also means they should finish off the design after the major kinks are worked out of the A15 (since they're leveraging "hand-me-down" design elements). Consequently, that means it will most likely tape out substantially later than the A15 does.

Putting the 3-4x bigger 5nm SoC on more mature fab process tech a year later will also help control costs. It is way more expensive to roll out the biggest dies on the most bleeding-edge fab tech.

Let's kind of realign on what I originally meant, as it was poorly worded.

I expect that there are prototype chips and boards that exist centered around reusable components from the A15 to help build and develop a Mac Pro.

We know that the Macs that ship won't have the same hardware as the iPad Pro. But I don't think we know for certain that it's not going to be an A-series chip.

I do, however, expect that we'll see the components of the A-series reused in whatever chip we end up seeing. I kind of expect a 6x6 split of performance to efficiency cores and substantially more GPU cores to let the line be more competitive with AMD and Nvidia. I largely expect to see the same thing with the Neural Engine.
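For what it's worth, a core split like that would be visible to software. Here's a minimal sketch of reading the performance/efficiency topology, assuming the hw.perflevel* sysctl keys that Apple Silicon Macs ended up exposing (an assumption relative to this discussion; on other machines they simply return nothing).

Code:
import Darwin
import Foundation

// Sketch: read the CPU topology macOS exposes. The hw.perflevel* keys are an
// assumption here; they report the performance (perflevel0) and efficiency
// (perflevel1) clusters on Apple Silicon and are absent elsewhere.
func sysctlInt(_ name: String) -> Int? {
    var value: Int32 = 0
    var size = MemoryLayout<Int32>.size
    guard sysctlbyname(name, &value, &size, nil, 0) == 0 else { return nil }
    return Int(value)
}

let pCores = sysctlInt("hw.perflevel0.logicalcpu") ?? 0   // performance cluster
let eCores = sysctlInt("hw.perflevel1.logicalcpu") ?? 0   // efficiency cluster
let total  = ProcessInfo.processInfo.activeProcessorCount

print("P cores: \(pCores), E cores: \(eCores), total logical CPUs: \(total)")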

What’s the most interesting to me is how Apple will get all of the components on a single die, while reducing bottlenecks.

The current Mac Pro is a beautiful architecture. I hope Apple continues the tradition of offering expandability and flexibility we have today.

A couple of things I'd love to see happen would be a smaller-scale version of their FPGA.

The ability to expand out with additional GPU, FPGA, and Neural Engine expansion cards for the Mac Pro.
 

Alan Wynn

macrumors 68020
Sep 13, 2017
2,385
2,408
Probably doesn't. If it already existed there would be no reason for the 2-year transition timeline.

Based on their previous transitions, I expect this one will not go the full 2 years (although I guess that also depends on whether one starts the clock at WWDC or when the first consumer box ships). I would guess that it will complete late in 2021.

Most likely Apple is going to do the Mac Pro "work" last.

I would expect it will be built on work done for other products, so that makes sense.

The Mac Pro represents the place where there was the least problem with the x86-64 solutions. Put AMD on the table and there's not really much of a problem at all in the 2020-2022 timeframe, CPU-wise.

This is true, but I would also expect it would produce the greatest bragging rights/marketing story. If Apple ships a machine that outclasses anything from AMD/Intel, you can be sure it will be the subject of lots of ads and news stories.
 

burgerrecords

macrumors regular
Original poster
Jun 21, 2020
222
106
Even if the GPU manages to scale, it’s going to be interesting to see how general-purpose memory is used to match existing discrete GPU performance at the workstation graphics level.

We know:

“The GPU and CPU on Apple silicon share memory.”

“Don’t assume a discrete GPU means better performance. The integrated GPU in Apple processors is optimized for high performance graphics tasks.”

I wonder how broad that second statement is at the high/workstation end? It’s hard for me to believe that both AMD and nVidia are just doing it wrong and there’s a pathway to skip using specialized memory. The AMD APU in the PS5 uses GDDR as its shared memory (I assume because it prioritizes rendering above other computing tasks?)

Particularly since AMD makes GPU cores and CPU cores, they could really create a disruption in workstation PCs themselves?
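To make those two quoted statements concrete, here is a minimal Swift/Metal sketch of what "the GPU and CPU share memory" means to an app; hasUnifiedMemory and .storageModeShared are real Metal API, while the buffer size and contents are just placeholders.

Code:
import Metal

// Sketch: on a unified-memory SoC, one .storageModeShared allocation is visible
// to both the CPU and the GPU, so there is no staging copy across a PCIe bus.
guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}
print("GPU: \(device.name), unified memory: \(device.hasUnifiedMemory)")

let count = 1024
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The CPU writes directly into the buffer...
let ptr = buffer.contents().bindMemory(to: Float.self, capacity: count)
for i in 0..<count { ptr[i] = Float(i) }
// ...and a compute or render command encoder can read the same buffer with no blit.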
If Apple ships a machine that outclasses anything from AMD/Intel, you can be sure it will be the subject of lots of ads and news stories.

that’s the big “if” - at that point do we start to see, say, Pixar switch their animators away from x86-64 workstations, and Apple just dominate high-end computing?
 
  • Like
Reactions: vigilant

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Even if the GPU manages to scale, it’s going to be interesting to see how general-purpose memory is used to match existing discrete GPU performance at the workstation graphics level.

We know:

“The GPU and CPU on Apple silicon share memory.”

“Don’t assume a discrete GPU means better performance. The integrated GPU in Apple processors is optimized for high performance graphics tasks.”

I wonder how broad that second statement is at the high/workstation end? It’s hard for me to believe that both AMD and nVidia are just doing it wrong and there’s a pathway to skip using specialized memory. The AMD APU in the PS5 uses GDDR as its shared memory (I assume because it prioritizes rendering above other computing tasks?)

Particularly since AMD makes GPU cores and CPU cores, they could really create a disruption in workstation PCs themselves?


that’s the big “if” - at that point do we start to see, say, Pixar switch their animators away from x86-64 workstations, and Apple just dominate high-end computing?

I completely “hear” you on the topic of Apple being competitive with AMD and NVIDIA.

I think there is a real possibility that part of the new Apple Silicon will be an enhanced memory controller that will help increase the speed of access to memory to help alleviate the differences. The other question is how these GPU cores scale within various hardware devices.

The other big thing that Microsoft and Sony are bringing to the table is hardware-accelerated asset decompression, with Sony appearing to have a leg up on Microsoft, at least from the specs we see. Apple has so much control over the hardware, I’d be curious to see if something similar could happen on Mac hardware. Sufficiently fast access to an SSD or NVMe can help with not having enough memory, as we’ve seen with the demo of Ratchet and Clank. On the face of what they’ve done, it appears as if it’s dynamically loading a whole new level at a moment's notice and purging the previous level from memory. It’s an interesting trick, but it’s hard to see how practical that is in real life. In terms of how we use the devices we have today, I think we are largely there in the case of an iPad workload. Bringing that kind of intentionality to the Mac could be a game changer for what I can only call medium professional workloads. A side benefit is that it would indirectly help games as well.
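As a rough illustration of the asset-decompression idea, here is a sketch using Apple's Compression framework as a software stand-in for the dedicated decompression hardware the consoles have; the function name, file path, sizes, and the LZFSE choice are all illustrative assumptions, not anything Apple has announced.

Code:
import Compression
import Foundation

// Sketch: read a compressed asset from fast storage and expand it on demand.
// A hardware decompression block would do this job without burning CPU cores.
func loadAsset(compressed url: URL, decodedSize: Int) throws -> Data {
    let src = try Data(contentsOf: url)          // e.g. an NVMe read
    var dst = Data(count: decodedSize)

    let written = dst.withUnsafeMutableBytes { dstRaw -> Int in
        src.withUnsafeBytes { srcRaw -> Int in
            compression_decode_buffer(
                dstRaw.bindMemory(to: UInt8.self).baseAddress!, decodedSize,
                srcRaw.bindMemory(to: UInt8.self).baseAddress!, src.count,
                nil,                              // optional scratch buffer
                COMPRESSION_LZFSE)
        }
    }
    guard written > 0 else { throw CocoaError(.fileReadCorruptFile) }
    return dst.prefix(written)
}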
 

burgerrecords

macrumors regular
Original poster
Jun 21, 2020
222
106
I think there is a real possibility that part of the new Apple Silicon will be an enhanced memory controller that will help increase the speed of access to memory to help alleviate the differences. The other question is how these GPU cores scale within various hardware devices.

It’s exciting that, because of Apple's control and consistency in their spaces, perhaps they will be able to get to these places by virtue of being able to shed legacy designs.
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
It’s exciting that, because of Apple's control and consistency in their spaces, perhaps they will be able to get to these places by virtue of being able to shed legacy designs.

Completely agree.

The way I’m trying to think of it is like this. The iPad Pro has 8 GPU cores. It isn’t outside the realm of reason that a MacBook Air could have 12, a MacBook Pro 13 could have 24, and a MacBook Pro 16 could have 36+, depending on configuration.

The memory systems on the iPad Pro, I believe, have always been a substantial step above those of the non-X or non-Z A-series.

To me, that immediately raises the question: if we’re looking at a 6x6 split of performance to efficiency cores and 24 or 36+ GPU cores, not even taking into consideration the Neural Engine or the possibility of an FPGA either on the silicon or external to it, how do you adequately feed these chips in the “spirit” of their enclosure? If we assume some form of cooling it gets really interesting: considering the iPad Pro is 100% passively cooled and gets 10 hours of battery life, what can we do with more battery and active cooling?
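On the "how do you feed these chips" question, these are the kinds of numbers an app can actually query from Metal today; the properties are real MTLDevice API, and the comments are my interpretation, not Apple's.

Code:
import Metal

// Sketch: the GPU-side figures you'd inspect when reasoning about feeding a
// bigger Apple GPU with work and memory.
if let device = MTLCreateSystemDefaultDevice() {
    print("GPU:               \(device.name)")
    print("Unified memory:    \(device.hasUnifiedMemory)")
    // Memory Metal suggests keeping resident for this GPU (macOS only).
    print("Working set (MB):  \(device.recommendedMaxWorkingSetSize / 1_048_576)")
    print("Max buffer (MB):   \(device.maxBufferLength / 1_048_576)")
    let tg = device.maxThreadsPerThreadgroup
    print("Max threadgroup:   \(tg.width) x \(tg.height) x \(tg.depth)")
}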
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
The ability to expand out with additional GPU, FPGA, and Neural Engine expansion cards for the Mac Pro.

I just used that idea to expand my Mac Pro Cube specs...! ;^p

Perhaps dedicated FPGA modules handle audio / video DSP duty (as assigned / programmed), and the TB3 / TB4 ports handle A/V I/O breakout boxes? Rather than dedicated full-length PCIe cards (like Pro Tools HDX cards) from third-party vendors, but third-party vendors can access the FPGA(s), just as FCPX & Logic X will be doing...

Mac Pro Cube - starting at US$4,999.00

48 P cores / 4 E cores / 96 GPU cores - CPU / GPU Chiplets & RAM on interposer / package design
HBM3 Unified Memory Architecture - 128GB / 256GB / 512GB
NVMe RAID 0 (dual NAND blades) 4TB / 8TB / 16TB
Four USB4 (TB3) ports
Four TB4 ports
Two 10Gb Ethernet ports
One HDMI 2.1 port
Three MPX-C slots (for use with asst. MPX-C expansion modules)


Apple MPX-C Expansion Modules - starting at US$499.00

NVMe RAID Storage Module (Quad NAND blades)
GPGPU Module
FPGA Module
Neural Engine Module
 

konqerror

macrumors 68020
Dec 31, 2013
2,298
3,701
Where they are, and how pronounced are they? I think thats a much bigger question.

Which is exactly what I'm saying. The Mac Pro and its CPUs are a completely niche product.

Even Intel doesn't engineer or produce large single-socket CPUs. Look it up: when you buy a Mac Pro, Intel sells you a 28-core Xeon SP die that was originally intended for 4-socket enterprise servers.

If Intel can't justify a separate product for the combined market of Apple, Lenovo, HP, and Dell workstations, Apple sure can't.

Considering the CPU in my 2018 iPad Pro is about 90% as fast as my 16-inch MacBook Pro, I think that does a good job of showing that. The fact that it crushes the Surface Pro X is also a good indicator, considering those are both ARM.

Again, you miss the fact that it's non-trivial to scale to large core counts. Just compare the size of the package on the Mac Pro CPU versus a MacBook.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
If Intel can't justify a separate product for the combined market of Apple, Lenovo, HP, and Dell workstations, Apple sure can't.


As I said before, Apple does not sell the individual chips. And they can certainly afford to subsidize a niche, high-visibility product with strong psychological impact like the Mac Pro.
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
Which is exactly what I'm saying. The Mac Pro, and its CPUs are a completely niche product.

To the larger Intel 'ecosystem', sure. But when it is just Apple making 'CPUs' for their flagship product? Not really a niche, more a halo...

Even Intel doesn't engineer or produce large single-socket CPUs. Look it up: when you buy a Mac Pro, Intel sells you a 28-core Xeon SP die that was originally intended for 4-socket enterprise servers.

If Intel can't justify a separate product for the combined market of Apple, Lenovo, HP, and Dell workstations, Apple sure can't.

Intel has been too busy resting on its laurels to care about any particular market segment: "you want workstation chips or single-socket servers? Then take this Xeon & make do, we ain't got no time for that"...

Apple might just give us a solid workstation that outperforms the current offerings while running cooler & quieter & using much less power. Add in a plethora of custom DSPs, ASICs, FPGAs, GPGPUs, etc., all tightly tied to the appropriate software, and it could be a new era of workstation performance across a multitude of fields...!
 

vigilant

macrumors 6502a
Aug 7, 2007
715
288
Nashville, TN
Which is exactly what I'm saying. The Mac Pro and its CPUs are a completely niche product.

Even Intel doesn't engineer or produce large single-socket CPUs. Look it up: when you buy a Mac Pro, Intel sells you a 28-core Xeon SP die that was originally intended for 4-socket enterprise servers.

If Intel can't justify a separate product for the combined market of Apple, Lenovo, HP, and Dell workstations, Apple sure can't.



Again, you miss the fact that it's non-trivial to scale to large core counts. Just compare the size of the package on the Mac Pro CPU versus a MacBook.

I’ve worked in the data center for years. I fully understand the ramifications of the design choices they’ve made.

Look, no assumption based on what we’ve seen in the past necessarily makes sense for the workloads we could potentially see.

Intel has been trying to reuse their designs the best way they could.

Did they succeed? I guess that depends on who you ask.

I think Apple has shown they understand where those differences are and make accommodations. Will we see that consideration in the first round of products? Hard to tell.

If you look at the backplane for the Mac Pro today, I think it speaks volumes for where they want to go if they weren’t tethered to Intel.

To me the bigger question is can they satisfy what we have today in a meaningful way.

The A-X series has very specific accommodations. I expect them to come back with the thermal envelope and power considerations to try to make something that works. I’ll admit it will probably take time.

Am I the only one that had one of the single generation of 32-bit Intel Macs that got replaced by x64 in one revision?

They have far more control now. What does that look like long term? I don’t know, I don’t think anyone of us do.
 
  • Like
Reactions: Mojo1019

burgerrecords

macrumors regular
Original poster
Jun 21, 2020
222
106
As I said before, Apple does not sell the individual chips. And they can certainly afford to subsidize a niche, high-visibility product with strong psychological impact like the Mac Pro.

Very few potential customers care about this, though; I'm not certain Apple cares about Mac enthusiasts' bragging rights if they can make more money (or not lose money) in what they deem are better ways. Whether Apple brings a workstation-class machine in two years or not, at this point it really appears, based on the inferences that can be made, to be a toss-up.
 
Last edited: