
Unregistered 4U

macrumors G4
Jul 22, 2002
10,612
8,636
On the contrary, everyone has been predicting the Mac transition to ARM.
Not everyone, as there were still those thinking it would be AMD, understanding that Apple couldn’t possibly do that with their A-series. To clarify my statement: there were folks discounting what Apple could potentially do with the CPU, mainly because they didn’t quite understand how, exactly, they’d be able to do it. The same can apply to their GPU.
Having spent years building support for all the GPU you can fit/attach to a system, they're not going to throw all that away.
IF they’re willing to throw even Intel away...
Apple will certainly use their integrated graphics on all their low-end machines / ultra-light laptops, and perhaps all the way up to their top-end MacBook Pros. Smaller iMacs too. Larger iMacs could use a ‘Pro’ 8-core ARM chip featuring only CPU cores (+ related logic), plus a midrange AMD GPU.
Again, you have to watch the presentations to understand that their GPU isn’t a stand-in for more capable graphics, the way Intel’s is. Apple is tying key features and capabilities to the Apple Silicon GPU. So, even in a Mac Pro, even if they DO have a discrete GPU, the Apple Silicon GPU still needs to be there to enable those features and capabilities.

Actually, to expand on the options: I would guess it’s possible that Apple may be working secretly with AMD for them to provide a part that’s as performant with Metal as Apple Silicon’s GPU.
A long time ago there were the PowerVR Kyro graphic cards.
That’s one of the ones I found in my searching. Considering how performant it was then, the fact that the tech is not around now says more about the company to me than the technology. There’s really only one big standalone graphics card vendor left, as ATI couldn’t even survive as a standalone company and had to be purchased by AMD. 3dfx had some TBDR tech that NVIDIA would have gotten access to after buying them.
 
Last edited:

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
There may have been some "folks" somewhere who didn't think it was possible, but it had been widely predicted in the media for some time. Benchmarks have been around for a year or two that show Apple's A-series chips are faster than many of the Intel chips in Mac laptops. So this did not come out of nowhere.

On the other hand, there is nothing to suggest that a low power SoC can rival the performance of a decent GPU. I think you're getting a bit swept up in the Apple Silicon hype. Apple's engineering team certainly has a good track record, but the engineers at AMD and Nvidia aren't clueless. It's not like Apple is somehow going to produce a GPU with equivalent performance to e.g. a 5700XT, whilst using 1/10 the power and / or 1/10 the die size. Do you think the entire GPU industry has missed some massive low-hanging fruit that only Apple has spotted?

I agree that Apple may retain the GPU on mostly-CPU SoCs that are intended to be used with external GPUs. It would make sense to use the exact same chip in both integrated graphics and discrete graphics machines, for the economies of scale. If Apple Silicon has GPU features that AMD GPUs can't substitute for, this would be the solution.

I highly doubt AMD need Apple's help to make a GPU that's as performant with Metal as an A-series GPU, though. I would fully expect a Navi GPU to absolutely crush an A-series in Metal performance. I also doubt AMD will make a custom version of their architecture just for Apple. They could add new functional blocks to incorporate the special GPU features of Apple Silicon, but only if these would be selling points in the wider computer market.
 
Last edited:

teagls

macrumors regular
May 16, 2013
202
101
On the other hand, there is nothing to suggest that a low power SoC can rival the performance of a decent GPU. I think you're getting a bit swept up in the Apple Silicon hype. Apple's engineering team certainly has a good track record, but the engineers at AMD and Nvidia aren't clueless. It's not like Apple is somehow going to produce a GPU with equivalent performance to e.g. a 5700XT, whilst using 1/10 the power and / or 1/10 the die size. Do you think the entire GPU industry has missed some massive low-hanging fruit that only Apple has spotted?

I agree that Apple may retain the GPU on mostly-CPU SoCs that are intended to be used with external GPUs. It would make sense to use the exact same chip in both integrated graphics and discrete graphics machines, for the economies of scale. If Apple Silicon has GPU features that AMD GPUs can't substitute for, this would be the solution.

This is spot on. Nvidia's entire company is built around GPU development; that's their sole focus. For Apple it is not. It's not like Apple is going to bust out a GPU with 2080 Ti-level performance at lower power.

Makes sense Apple would utilize an internal GPU across product lines to maintain feature parity and reduce cost. By having their own GPU in every machine they can guarantee certain OS features. Maybe hardware video decode/encode & ML tasks. But I think if you want more power they will let you use an external AMD GPU.
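
For what it's worth, you can already see the "query, don't assume" shape of this in today's APIs. Here's a minimal sketch using real VideoToolbox calls (nothing Apple Silicon-specific about it): an app asks whether hardware decode exists, because on Intel Macs the answer varies by model. A guaranteed Apple GPU in every Mac would turn answers like these into constants the OS can rely on.

Code:
import VideoToolbox

// Ask the OS whether this machine has hardware decode for a given codec.
// On today's Intel Macs the answer depends on the model; with an Apple
// GPU in every machine it becomes a guarantee rather than a maybe.
let hevc = VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC)
let h264 = VTIsHardwareDecodeSupported(kCMVideoCodecType_H264)
print("Hardware HEVC decode:", hevc)
print("Hardware H.264 decode:", h264)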
 
  • Like
Reactions: ssgbryan

jeanlain

macrumors 68020
Mar 14, 2009
2,464
958
We don't have direct comparisons between Apple's GPUs and discrete GPUs, but we know that the A12Z can sustain 30 fps on Shadow of the Tomb Raider at 1080p, probably with many graphical options disabled.
Only the best iGPUs from AMD (Renoir) and no current iGPU from Intel can pull this off. The Xbox One S plays the game at 900p to ensure 30 fps (but with better graphical fidelity overall, I suppose).
This is not a small feat. The macOS version is a port (of a Windows port) that was never intended to be run on a mobile GPU, and it was demoed running with a CPU translation layer and a GPU validation layer that decrease performance. Yet it ran as well as it does on Windows using GPUs that draw much more power.
I think it bodes quite well for native apps using Apple GPUs specifically designed for Macs. They may not perform as well as the latest Quadros and stuff, but I expect Apple to be leaders in terms of graphical performance per Watt.
 
  • Like
Reactions: AlphaCentauri

Unregistered 4U

macrumors G4
Jul 22, 2002
10,612
8,636
There may have been some "folks" somewhere who didn't think it was possible
“Somewhere” of course, being here on this forum. Then, take a trip back two years and you’ll likely see phrases that age as well as today’s phrases will have aged two years from now!
Apple's engineering team certainly has a good track record, but the engineers at AMD and Nvidia aren't clueless. It's not like Apple is somehow going to produce a GPU with equivalent performance to e.g. a 5700XT, whilst using 1/10 the power and / or 1/10 the die size. Do you think the entire GPU industry has missed some massive low-hanging fruit that only Apple has spotted?
1/10? Probably not, as there’s physics involved. And there likely IS well-known low-hanging fruit. BUT, if it calls for developers to rearchitect their graphics engines or requires the OS to handle graphics calls differently, then it’s not going to be implemented. Neither AMD nor NVIDIA is in a position to force an alteration of the technology stack like that.
This is spot on. Nvidia's entire company is built around GPU development, that's their sole focus.
It’s not so much hype as it is the fact that “a company doing a thing for longer than Apple” has not always had a lot of bearing on Apple’s ability to perform. Apple’s goal isn’t even to one-up or outperform Intel, AMD, OR NVIDIA (even though they may inadvertently do so). It’s to ensure that, say, macOS runs better with more features and Final Cut Pro is more performant when running on Apple Silicon. There are lots of ways to accomplish this goal that don’t require repeating what other companies have done.
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
We don't have direct comparisons between Apple's GPUs and discrete GPUs, but we know that the A12Z can sustain 30 fps on Shadow of the Tomb Raider at 1080p, probably with many graphical options disabled.

Well, by comparison, a 1050 Ti (a pretty lowly GPU by today's standards) gets about 30fps at 1080p - at Maximum settings and with anti-aliasing enabled: https://www.eurogamer.net/articles/digitalfoundry-2019-06-18-geforce-gtx-1650-benchmarks-7001. No idea what it would get on Low, but probably a lot higher, especially if not using any AA. Granted, this is running natively on Windows, with likely a decent desktop CPU. Sure, an Intel iGPU would be worse, but no one ever accused those of being strong GPUs. Likewise, the Xbox One is seven years old and has always had relatively weak graphics - equivalent to a GTX 750 (non-Ti).

It's still an impressive feat, no doubt, especially running through translation. I've got no doubt Apple's iGPUs are performant and would beat most others. I just don't think they'll ever be a substitute for a proper GPU on a PCIe card (nor do Apple intend them to be).

“Somewhere” of course, being here on this forum.

Well, this is only a forum where people casually discuss Apple technology. Not everyone's going to be right all the time.

BUT, if it calls for developers to rearchitect their graphics engines or requires the OS to handle graphics calls differently, then it’s not going to be implemented. Neither AMD nor NVIDIA is in a position to force an alteration of the technology stack like that.

Really? They write their own drivers, and Nvidia in particular works closely with games developers. Nvidia also provides widely used APIs such as CUDA. Microsoft also actively incorporates new developments such as Nvidia's RTX into the DirectX standard: https://www.pcworld.com/article/353...ting-graphics-tricks-across-pcs-and-xbox.html. In addition, the Vulkan API was specifically created to provide lower overhead and more direct access to GPU hardware.

Apple’s goal isn’t even to one-up or outperform Intel, AMD, OR NVIDIA (even though they may inadvertently do so). It’s to ensure that, say, macOS runs better with more features and Final Cut Pro is more performant when running on Apple Silicon.

You're conflating different things here. Outperforming an Intel iGPU is easy, and Apple already does. Outperforming powerful discrete GPUs with an SoC (or even their own discrete GPU) is a whole different ball game. It's also not just about enabling macOS features or letting FCP in particular handle multiple streams of 4K video. For general software, especially cross-platform applications like Adobe CC and Maya, there is no substitute for actual GPU grunt.

There are lots of ways to accomplish this goal that don’t require repeating what other companies have done.

Apple does have more flexibility than most, as it writes the OS. I think the advantage is most likely to manifest in terms of efficiency, as opposed to outright performance. Great for laptops, certainly, but not particularly relevant to a workstation like the one discussed in this forum.
 
Last edited:
  • Like
Reactions: ssgbryan

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
It took 7 months in 2006 (from the MacBook Pro release on January 10 to the release of the Mac Pro on August 7).

The big difference was that Apple was moving to Intel, which had a full and existing product line including Xeon chips. This time Apple is moving to an embryonic product line that they themselves have to create. They probably have the low- to mid-end processors ready or close to ready, but the upper-middle to high end still needs to be made. I would be shocked if this transition is as fast. Past performance is not necessarily a predictor of future returns.
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
They may not have the 'big Apple Silicon' completely taped out, but they would certainly have resolved any obstacles to doing so. There is no way they would announce this transition if they weren't sure that AS could be the basis of the whole range for years to come. They would look ridiculous if the MP had to wind up staying on Intel.

Further, they wouldn't want to be forced to just drop the MP through the lack of a suitable CPU - after all the fanfare of refocussing on the Mac, and the Mac workstation in particular. It would leave macOS looking like a glorified iOS, and would be the final nail in the coffin for high end Mac users.
 

t90

macrumors newbie
Apr 15, 2020
14
4
They may not have the 'big Apple Silicon' completely taped out, but they would certainly have resolved any obstacles to doing so. There is no way they would announce this transition if they weren't sure that AS could be the basis of the whole range for years to come. They would look ridiculous if the MP had to wind up staying on Intel.

Further, they wouldn't want to be forced to just drop the MP through the lack of a suitable CPU - after all the fanfare of refocussing on the Mac, and the Mac workstation in particular. It would leave macOS looking like a glorified iOS, and would be the final nail in the coffin for high end Mac users.

I think this is exactly it. I’m sure they know exactly what they are doing and the challenges involved.

I really just wish Apple would be more transparent and give us more of a roadmap so we could better plan our business purchases.
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
I really just wish Apple would be more transparent and give us more of a roadmap

Lol, you're not alone. Apple seems to have a pathological objection to doing so, however. I think they developed this culture under Jobs when they were the wee upstart that needed to maximise the impact of any PR. They kept everything secret, then released it in big splashes a few times a year - that way journalists would reliably report on it en masse. Can't say it's been a bad strategy for them, I guess.
 

ZombiePhysicist

Suspended
May 22, 2014
2,884
2,794
I think this is exactly it. I’m sure they know exactly what they are doing and the challenges involved.

I really just wish Apple would be more transparent and give us more of a roadmap so we could better plan our business purchases.

like they knew exactly what they were doing with the trash can Mac?
They may not have the 'big Apple Silicon' completely taped out, but they would certainly have resolved any obstacles to doing so. There is no way they would announce this transition if they weren't sure that AS could be the basis of the whole range for years to come. They would look ridiculous if the MP had to wind up staying on Intel.

Further, they wouldn't want to be forced to just drop the MP through the lack of a suitable CPU - after all the fanfare of refocussing on the Mac, and the Mac workstation in particular. It would leave macOS looking like a glorified iOS, and would be the final nail in the coffin for high end Mac users.

I agree they have a plan. I disagree you’ll see an ARM Xeon replacement 7 months after the first ARM Mac ships. They said they’ll be making Intel Macs for years for a reason.
 
  • Like
Reactions: Adult80HD

teagls

macrumors regular
May 16, 2013
202
101
It’s not so much hype as it is the fact that “a company doing a thing for longer than Apple” has not always had a lot of bearing on Apple’s ability to perform. Apple’s goal isn’t even to one-up or outperform Intel, AMD, OR NVIDIA (even though they may inadvertently do so). It’s to ensure that, say, macOS runs better with more features and Final Cut Pro is more performant when running on Apple Silicon. There are lots of ways to accomplish this goal that don’t require repeating what other companies have done.

Apple can't even maintain parity, let alone one-up Nvidia. The best example is machine learning. Look at the effort Apple has poured into machine learning the last few years. CoreML is still massively behind anything Nvidia has to offer, both in software and hardware.

I say this from personal experience. Trying to port anything slightly advanced in ML to iOS is like pulling teeth. Very little is actually supported. Most of the newer stuff cannot run on the Neural Engine because it's too advanced. If you want more advanced things, you have to manually implement them yourself in Metal on the GPU to get any performance. Anything that Apple demos related to ML is child's play compared to Nvidia. It's pitiful.
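
To make that concrete, here's roughly what "implement it yourself in Metal" looks like: a minimal sketch of a custom op (a tanh-approximation GELU activation, picked purely as an illustrative example of something CoreML didn't cover) dispatched as a compute kernel. The kernel, names and toy data are made up for the sketch; the Metal API calls themselves are the real ones.

Code:
import Metal

// A custom activation written as a Metal compute kernel. GELU here is just
// an illustrative stand-in for "an op the Neural Engine / CoreML won't run".
let source = """
#include <metal_stdlib>
using namespace metal;

kernel void gelu(device const float *x [[buffer(0)]],
                 device float *y       [[buffer(1)]],
                 uint i                [[thread_position_in_grid]])
{
    float v = x[i];
    // tanh approximation of GELU
    y[i] = 0.5f * v * (1.0f + tanh(0.7978845608f * (v + 0.044715f * v * v * v)));
}
"""

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "gelu")!)

// Toy input; a real port would be shuttling activations between CoreML layers.
let input = (0..<1024).map { _ in Float.random(in: -3...3) }
let inBuf = device.makeBuffer(bytes: input, length: input.count * MemoryLayout<Float>.stride)!
let outBuf = device.makeBuffer(length: input.count * MemoryLayout<Float>.stride)!

let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(inBuf, offset: 0, index: 0)
enc.setBuffer(outBuf, offset: 0, index: 1)
enc.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

Now multiply that by every unsupported layer in a modern network, plus the CPU-GPU round trips between them, and "pulling teeth" starts to feel generous.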
 
  • Like
Reactions: ssgbryan

ssgbryan

macrumors 65816
Jul 18, 2002
1,488
1,420
We don't have direct comparisons between Apple's GPUs and discrete GPUs, but we know that the A12Z can sustain 30 fps on Shadow of the Tomb Raider at 1080p, probably with many graphical options disabled.
Only the best iGPUs from AMD (Renoir) and no current iGPU from Intel can pull this off. The Xbox One S plays the game at 900p to ensure 30 fps (but with better graphical fidelity overall, I suppose).
This is not a small feat. The macOS version is a port (of a Windows port) that was never intended to be run on a mobile GPU, and it was demoed running with a CPU translation layer and a GPU validation layer that decrease performance. Yet it ran as well as it does on Windows using GPUs that draw much more power.
I think it bodes quite well for native apps using Apple GPUs specifically designed for Macs. They may not perform as well as the latest Quadros and stuff, but I expect Apple to be leaders in terms of graphical performance per Watt.

So, less than a console available today - and sadly pathetic with what is arriving this Xmas. 8 cores/16 threads and 2080Ti performance will be the new baseline for games. The 1080p market is shrinking every day, so it won't be relevant for that much longer.

Right now, we have one data point. An A12Z (8 cores/8 threads) has a Geekbench 5 multicore score of 4615.

That is more performance than the bottom-of-the-stack mini (i3), a 4-core/4-thread CPU that is 2 generations old (3265).

It isn't more performance than the middle Mac mini (i5), which has a 6-core/6-thread CPU that is 2 generations old (4772).

It isn't more performance than the top-of-the-stack Mac mini (i7), which has a 6-core/12-thread CPU that is 2 generations old (5621).

Apple isn't looking to compete in the Personal Computer space - they are looking to expand their iOS userbase.

It isn't about performance.

It is about rebuilding the walled garden. (and getting a 30% cut of all software sold for every Apple device).
 
  • Like
Reactions: blackadde

Pressure

macrumors 603
May 30, 2006
5,182
1,546
Denmark
like they knew exactly what they were doing with the trash can Mac?


I agree they have a plan. I disagree you’ll see an ARM Xeon replacement 7 months after the first ARM Mac ships. They said they’ll be making Intel Macs for years for a reason.

In their defence, Intel's roadmaps at the time painted a different picture. They also expected developers to utilise the dual GPUs better.

I mean, looking at it historically, it does paint a somewhat troubling picture. It doesn't excuse Apple for not upgrading it with what was available, other than that they knew they did not want to support that specific form factor any longer.

Intel's desktop roadmap, for example...

Sandy Bridge (32nm, 2011)
Ivy Bridge (22nm, 2012)
Haswell (22nm, 2013-2014)
Broadwell (14nm, 2015)
Skylake (14nm, 2015)
Kaby Lake (14nm+, 2017)
Coffee Lake (14nm++, 2017)
Coffee Lake refresh (14nm++, 2018)
Comet Lake (14nm++, 2020)
Rocket Lake (14nm++, 2021)
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
like they knew exactly what they were doing with the trash can Mac.

It's a fair point. The Mac Pro was/is a low-selling model (that they didn't particularly want to make), so they probably felt more able to take a risk with it. Making it much smaller meant this slow-selling model took up less space in inventory. They were also trying to promote Thunderbolt expansion as the way forward, which would benefit their laptops too. In the aftermath of Jobs passing, Apple were also a bit too eager to prove to everyone they could still 'innovate'.

Whatever Apple might claim, though, they blatantly designed the shape / concept first, which then dictated the need for two medium-sized graphics cards, rather than one big one. It wasn't because they believed passionately in GPU processing - and if that really was the case, why preclude the use of multiple, large graphics cards? It was a styling exercise, very similar in spirit to the G4 Cube, and about as commercially successful. I look forward to picking one up for my bookcase one day, though.

Of course, Apple may be taking a risk with a low selling model again, only this time with not being able to deliver it at all, if the CPU doesn't materialise. How much of a loss of face would it be? The general public wouldn't notice, but it would make Apple look incompetent / negligent to high-end customers. Apple's still on probation as it is with regard to its commitment to that end of the market.

They said they’ll be making Intel Macs for years for a reason.

They actually said they will have completed the transition within two years. They said they have unreleased Intel Macs, but that likely just means the odd laptop will get a CPU refresh. They will, however, support Intel Macs for years to come. Given that the most popular models are the lower end laptops that will benefit most from the ARM transition, however, I can see them being big sellers. Once a large proportion of the user base is on ARM machines, I'm not sure how much effort Apple will put into the legacy (Intel) version of the OS. PPC only made it as far as Leopard before being left to wither.
 
Last edited:

Boil

macrumors 68040
Oct 23, 2018
3,479
3,175
Stargate Command
So Apple is finally transitioning to arm64-based APUs, and shifting their production orders at TSMC to the 5nm process. I would think all the future Apple Silicon SoCs will be APUs, and only Pro models get the addition of discrete GPUs (MacBook Pro / iMac Pro / Mac Pro): mobile-class dGPUs in the MacBook Pros, desktop-class dGPUs in the iMac Pros, & workstation-class dGPUs in the Mac Pro. The Apple Silicon-based Mac Pro should have an Apple version of those very high core count ARM server chips that are being announced as of late (that 80-core Johnny).

But what I really want to see from the transition to Apple Silicon is a return of the Cube! Could they do a Mac Cube Pro with a 64-core APU & a cut-down high-end dGPU (like a 72CU RDNA3-based GPU w/16GB HBM2e)? I dunno, just kind of excited to see where Apple goes with it all!

Who knows, maybe Apple just makes a monster APU: 80+ assorted (big/little, performance/efficiency, whatever) CPU cores, a buttload of on-die HBM2e, & a massive number of GPU cores!
 

mode11

macrumors 65816
Jul 14, 2015
1,452
1,172
London
The problem with those huge ARM chips is that you just get a massive number of slow cores. That's great for serving up web pages in the cloud, but on a desktop, good single-threaded performance is generally more useful. Massively parallel workstation tasks (e.g. machine learning) are best done with GPUs anyway.

As you might guess, I'm a fan of the Cube, but after the commercial failure and cooling issues of both that and the 2013 MP, I'd be worried if Apple unveiled another one as the 8,1. They would also look schizophrenic, having recently released a machine with a completely opposite design philosophy.

I think an APU might be within Apple's reach for the MacBook Pro. There would surely be big power / space savings to be had by not using discrete graphics if possible. The same chip would also be ideal for the iMac 27" (perhaps clocked a bit higher, due to the available cooling capacity).
 
  • Like
Reactions: ssgbryan

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
So, less than a console available today - and sadly pathetic with what is arriving this Xmas. 8 cores/16 threads and 2080Ti performance will be the new baseline for games. The 1080p market is shrinking every day, so it won't be relevant for that much longer.

The demo isn't meant to showcase graphics capability, but how well an existing codebase runs via Rosetta 2. Also, you are comparing a tablet SoC (<15W) to a console with a TDP well over 200W. And none of the next-gen consoles has 2080Ti-tier performance, so stop exaggerating on that.

Right now, we have one data point. An A12Z (8 cores/8 threads) has a Geekbench 5 multicore score of 4615.

That is more performance than the bottom-of-the-stack mini (i3), a 4-core/4-thread CPU that is 2 generations old (3265).

It isn't more performance than the middle Mac mini (i5), which has a 6-core/6-thread CPU that is 2 generations old (4772).

It isn't more performance than the top-of-the-stack Mac mini (i7), which has a 6-core/12-thread CPU that is 2 generations old (5621).

Apple isn't looking to compete in the Personal Computer space - they are looking to expand their iOS userbase.

It isn't about performance.

It is about rebuilding the walled garden. (and getting a 30% cut of all software sold for every Apple device).

There is a good reason why Apple won't allow people to run benchmarks on the DTK with its A12Z: that is not going to be the SoC used in Apple Silicon Macs.
 
  • Like
Reactions: Mojo1019

t90

macrumors newbie
Apr 15, 2020
14
4
like they knew exactly what they were doing with the trash can Mac?

Touché. Perhaps “know what they are planning” is more apt.

The uncertainty of how they will scale the new architecture up to the Mac Pro tier is really making me consider making the switch to Windows. Never thought I would be saying that but I’m getting fed up with the lack of a roadmap, and having the feeling that Apple could pull the rug from under my feet at any point because they want to change things up.

I was going to buy a 7,1 once the dust settled with the Catalina and driver teething problems. I have stuck with Apple for a long while because it’s supposed to “just work”, and that for me has always justified the Apple tax. But they only get my money when they keep up their “it just works” end of the deal!

Now the 7,1 feels like buying into a dead end. Yes, it will probably work for years to come, but how well it will is uncertain too. If Apple focus on ARM code, Intel optimisation may suffer. And will they really keep updating Navi drivers if they start focusing on their own GPUs? So potentially I go back to being stranded on an old OS. If I want the latest software updates, I need to upgrade my machine on Apple’s schedule, not when I need more power? It’s madness.

As a long-time user of Logic and Final Cut, I’m kind of locked in unless I find alternative software though. That’s the only thing keeping me even considering Apple at this point.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,464
958
So, less than a console available today - and sadly pathetic with what is arriving this Xmas. 8 cores/16 threads and 2080Ti performance will be the new baseline for games. The 1080p market is shrinking every day, so it won't be relevant for that much longer.
Why do you compare the A12Z with next-gen consoles that will draw 20 times more power? The real metric here is perf/W, and the A12Z appears to perform as well (under emulation!) as the current best APUs (AMD Renoir).
Note also that the A13's single-core performance is right up there with the very best desktop CPUs. Add more cores and you have a monster.

EDIT: Here's a relevant comparison.
You have a phone CPU beating a high-frequency 7nm Ryzen in single-core performance. The multicore score is lower (the A13 only has two high-performance cores) but its TDP is ten times less than that of the desktop CPU.
 
Last edited:
  • Like
Reactions: BarbaricCo

ssgbryan

macrumors 65816
Jul 18, 2002
1,488
1,420
Why do you compare the A12Z with next-gen consoles that will draw 20 times more power? The real metric here is perf/W, and the A12Z appears to perform as well (under emulation!) as the current best APUs (AMD Renoir).
Note also that the A13's single-core performance is right up there with the very best desktop CPUs. Add more cores and you have a monster.

I compare it because it is all we have. The data doesn't support your claims.

The best Renoir (4900HS) has a multicore Geekbench 5 result of 7008.
The best A12Z has a multicore Geekbench 5 result of 4615.

Last time I checked - 7008 is a lot more than 4615.

About that A13: single-thread results are a function of clock speed. You may have single-core applications, but I haven't had any in about a decade or so.

A13 multicore Geekbench 5 results - 3,388.

I own a Ryzen 9 3950x - 11,794

TR 3990x multicore Geekbench 5 result - 27,874

Those Ryzens are approaching the end of their life-cycle - Zen 3 by Xmas.

A lot of this boils down to use cases - ARM looks great if you are coming from a laptop or a mini - they are not designed for performance. Media consumption - sure. Content creation? If your time is of no value, you could - although I wouldn't recommend it.

As a Mac Pro user for 15 years, I have no confidence in Apple's ability to deliver anything in the way of high-performance computing. As far as they are concerned, high-performance computing is limited to Final Cut X & Logic. Move outside of that, and yeah..........

They haven't made a current general-purpose workstation in over a decade. The idea that Apple engineers will suddenly be able to outperform AMD on the CPU side, and Nvidia on the GPU side, is ludicrous.

I would be happy if they could just walk and chew gum at the same time.
 

iFan

macrumors regular
Jan 3, 2007
248
723
So, less than a console available today - and sadly pathetic with what is arriving this Xmas. 8 cores/16 threads and 2080Ti performance will be the new baseline for games. The 1080p market is shrinking every day, so it won't be relevant for that much longer.

Right now, we have one data point. An A12Z (8 cores/8 threads) has a Geekbench 5 multicore score of 4615.

That is more performance than the bottom-of-the-stack mini (i3), a 4-core/4-thread CPU that is 2 generations old (3265).

It isn't more performance than the middle Mac mini (i5), which has a 6-core/6-thread CPU that is 2 generations old (4772).

It isn't more performance than the top-of-the-stack Mac mini (i7), which has a 6-core/12-thread CPU that is 2 generations old (5621).

Apple isn't looking to compete in the Personal Computer space - they are looking to expand their iOS userbase.

It isn't about performance.

It is about rebuilding the walled garden. (and getting a 30% cut of all software sold for every Apple device).

Saving this one for "Claim Chowder." Apple was explicit in different convos this week that they will not be putting anything even closely resembling the A12Z in their Macs. That their team "wasn't even trying" with that. That they are creating a "family" of new chips. The fact that you can even COMPARE their non-effort extremely low power iPad chip to mobile Intel chips is amazing as a stand-alone statement.

There's nothing preventing Apple from creating chips that are more powerful than the vast majority of Intel's lineup (that isn't even hard in 2020). Not just per watt, but in actual performance too. The GPU is a separate story, and it will be interesting to see what they decide.

Absolutely pumped to see what Apple launches later this year with their own chips, I think it will shock people even more than their initial 64bit announcement with the A7. This will change the industry. The best of the best have been working on this for years now, and we are about to find out what they are capable of.

Edit added: I agree with some of your points toward the end. They have neglected pro users in certain industries for a long time. They do have a problem multi-tasking with various things. But I still have a LOT more confidence in their chip team than other parts of the company.
 
Last edited:

jeanlain

macrumors 68020
Mar 14, 2009
2,464
958
A13 multicore Geekbench 5 results - 3,388.

I own a Ryzen 9 3950x - 11,794
You're comparing a 6-W iPhone chip to a 105-W CPU. The latter has 16 cores. Sixteen A13 performance cores would still not draw 105 Watts.

The A13 CPU already matches the best desktop CPUs in single-threaded tasks. More performance cores would make a good workstation CPU. Yeah, you need to figure out how to interconnect many cores, but I don't see why Apple could not do it.
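
Back-of-the-envelope, using only numbers already quoted in this thread, and assuming Geekbench multicore scores and vendor TDPs are even roughly comparable across platforms (they never quite are): 3,388 ÷ 6 W ≈ 565 points per watt for the A13, versus 11,794 ÷ 105 W ≈ 112 points per watt for the 3950X, roughly a 5x perf/W gap. And even if you charge the A13's entire 6 W to its two performance cores (≈3 W each), sixteen of them would come in around 48 W, still well under 105 W. Interconnect, memory bandwidth and sustained clocks won't scale that cleanly in practice, so treat this as an upper bound, but the headroom is there.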

A lot of this boils down to use cases - ARM looks great if you are coming from a laptop or a mini - they are not designed for performance. Media consumption - sure. Content creation? If your time is of no value, you could - although I wouldn't recommend it.
There are HPC servers using ARM CPUs.
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
In their defence, Intel's roadmaps at the time painted a different picture. They also expected developers to utilise the dual GPUs better.

Apple still does expect developers to make better use of dual GPUs: eGPU, the Vega Duo, 3-4 x16 slots to put a GPU into. Getting everyone to do it? No. Getting a pretty large fraction to do it? Yes.

I mean, looking at it historically, it does paint a somewhat troubling picture. It doesn't excuse Apple for not upgrading it with what was available, other than that they knew they did not want to support that specific form factor any longer.

On much of the Mac lineup, though, it is more that the form factors Apple is choosing are detached from the rest of the market, which opens the door for Apple to do their own silicon so they can crawl into those corners. Not many others were going down the butterfly-key path. Stuff the power supply inside the Mini (versus an external transformer). Pack as much horsepower into an all-in-one with few vents as they can get away with. Etc.

Intel's desktop roadmap, for example...

Sandy Bridge (32nm, 2011)
Ivy Bridge (22nm, 2012)
Haswell (22nm, 2013-2014)
Broadwell (14nm, 2015)
Skylake (14nm, 2015)
Kaby Lake (14nm+, 2017)
Coffee Lake (14nm++, 2017)
Coffee Lake refresh (14nm++, 2018)
Comet Lake (14nm++, 2020)
Rocket Lake (14nm++, 2021)

Rocket Lake bench specs are starting to leak.


Decent chance that is not a 2021 part. Close, but it might make it out the door before the end of December if there are no major hiccups. Not in super high volume, but out the door from some vendor(s). Depends upon how much Intel will stockpile before releasing. And that probably depends upon where Alder Lake is going in 2021. (Either 2020 or 2021 is going to get "two" desktop CPUs.)

[ Gen 12 iGPU scaled down to 32EUs wasn't going to change the tide. ]


I suspect Apple will keep pointing at Intel, though, as if that was their only possible roadmap or option. "See, that is messed up, so we gotta go." And it is somewhat also about the roadmap of the phone peaking out. More places to run iPhone apps is a contributing part of the puzzle here (though the Mac Pro probably won't add anything substantive to that "growth" in the big picture).
 

ssgbryan

macrumors 65816
Jul 18, 2002
1,488
1,420
You're comparing a 6-W iPhone chip to a 105-W CPU. The latter has 16 cores. Sixteen A13 performance cores would still not draw 105 Watts.

The A13 CPU already matches the best desktop CPUs in single-threaded tasks. More performance cores would make a good workstation CPU. Yeah, you need to figure out how to interconnect many cores, but I don't see why Apple could not do it.


There are HPC servers using ARM CPUs.

When you have the facts on your side - pound on the facts.

When you don't have the facts on your side - pound on the table.
 