
Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
When you have the facts on your side - pound on the facts.

When you don't have the facts on your side - pound on the table.

Your data doesn't support your claims either, because neither the A12Z nor the A13 is going to be used in ARM Macs. Why do you keep bringing them up when they aren't going to be representative of how ARM Macs will perform in the first place? Just because that is the only data we have available doesn't make it a valid comparison.

Can you be skeptical? Sure. But you are not providing any evidence that ARM can't be scaled up to deliver good performance on high-end tasks. Especially when a purely ARM-based supercomputer is sitting at the top of the TOP500 supercomputer chart.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,464
958
When you have the facts on your side - pound on the facts.

When you don't have the facts on your side - pound on the table.
My side:
Apple CPUs will beat Intel's and AMD's desktop CPUs.
Facts:
- An A13 CPU core performs as well as the best Intel/AMD CPU cores.
- A whole A13 CPU draws many times less power than these desktop CPUs.


Your side:
Apple will be unable to compete with AMD/Intel.
Facts:
A 105 W desktop CPU beats a 6 W phone SoC in multicore score.

Anyone here can see which facts are more relevant.
 

iFan

macrumors regular
Jan 3, 2007
248
723
Too much Kool-Aid.

What's the opposite of Kool-aid?

Unlike other parts of Apple that have had failures (slow redesigns, keyboard reliability, AirPower, white MacBook top cases, iPhone 6 batteries, etc.) the chip team has only surpassed expectations. Every. Single. Year. Now we get to see what they can do without the same thermal envelopes or restrictions. You'd have to be intellectually deficient not to get excited about that possibility. You can bet that Intel, AMD, Qualcomm, Samsung, and others will be watching closely.

Either way, we will wait and see. I'd rather be optimistic based off historical performance than the "comic book guy" persona any day of the week. This may be the chip team's "best episode ever."
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
Anyone here can see which facts are more relevant.
We can?

It depends on the workflow and constraints. (For example, there's no value to me in having a 6 watt CPU over a 100 watt CPU in my desktop.)

And it's completely unknown whether an ARM CPU scaled up to 32 cores and Xeon-class performance will be anywhere close to 6 watts.

Put down the Kool-Aid.



 
Last edited:
  • Like
Reactions: ssgbryan

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
We can?

It depends on the workflow and constraints. (For example, there's no value to me in having a 6 watt CPU over a 100 watt CPU in my desktop.)

And it's completely unknown whether an ARM CPU scaled up to 32 cores and Xeon-class performance will be anywhere close to 6 watts.

Put down the Kool-Aid.

Given a 150 W–250 W TDP, at least 48 cores should be doable with A14-class cores, though the question is whether Apple will care enough to do that.
 

jasoncarle

Suspended
Jan 13, 2006
623
460
Minnesota
Here is something I was just thinking of that I haven't seen mentioned in this entire thread. The entire purpose of the Mac Pro was that it can be customized to the user's needs and desires, and that there will be an ecosystem of accessories for it.

I don't see that happening and I think the transition to ARM will ensure it doesn't happen for the Intel Mac Pro.
 
  • Like
Reactions: defean

bobbie424242

macrumors 6502
May 16, 2015
366
696
There is no way Apple can make a dGPU that competes with what AMD and NVIDIA have to offer...
As for using Apple Silicon for workstation grade hardware such as the Mac Pro, I am highly skeptical.
Especially since Intel and AMD are not going to stand still.
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
ARM v9 will be something different than any design we have seen to this day, in terms of performance.

And yes, this is me writing this. A person who was blatantly sceptical about the viability of ARM CPUs.

Any next-gen CPU architecture from Apple will be based on ARM v9, and it is pretty much a game changer for this arch's viability.

The only thing is software. So resident dinosaurs will have to do one of two things. Either they will rewrite their software for ARM's architecture, which will bring more efficient execution (YAY!).

Or they will get an aneurysm because their software will not work on ARM (YAY!).

I changed my mind about ARM's future. Apple did something huge for this arch. They will effectively provide a development platform for ARM, which will bring tons of software to any ecosystem, not only Apple's.

ARM devices will not be constrained by software anymore.

Nobody should resist what Apple is doing. Or at least everybody should discuss the ARM ecosystem with, for example, Jon Masters.
 
  • Like
Reactions: sirio76

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
ARM v9 will be something different than any design we have seen to this day, in terms of performance.

And yes, this is me writing this. A person who was blatantly sceptical about the viability of ARM CPUs.

Any next-gen CPU architecture from Apple will be based on ARM v9, and it is pretty much a game changer for this arch's viability.

The only thing is software. So resident dinosaurs will have to do one of two things. Either they will rewrite their software for ARM's architecture, which will bring more efficient execution (YAY!).

Or they will get an aneurysm because their software will not work on ARM (YAY!).

I changed my mind about ARM's future. Apple did something huge for this arch. They will effectively provide a development platform for ARM, which will bring tons of software to any ecosystem, not only Apple's.

ARM devices will not be constrained by software anymore.

Nobody should resist what Apple is doing. Or at least everybody should discuss the ARM ecosystem with, for example, Jon Masters.
Any links to support those projections?
 
  • Like
Reactions: ssgbryan

blackadde

macrumors regular
Dec 11, 2019
165
242
Given a 150 W–250 W TDP, at least 48 cores should be doable with A14-class cores, though the question is whether Apple will care enough to do that.

What's your napkin math on this? Where are you getting numbers for Apple's target TDP, and how are you deriving 'at least 48' cores from that?

I'm sure it goes without saying that you can't just divide a target desktop TDP by an existing mobile SoC's TDP and arrive at some linear performance multiplier. For example, AMD can't just scale their mobile 4800U (8c/15W) to the TR3990X's TDP (280W) and shove 150 cores into a desktop package (280W/15W*8c = ~150c).
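The non-linearity described above can be made concrete with a quick sketch. All figures are the ones cited in the post; this is illustrative napkin math under those assumptions, not a real power model.

```python
# Naive scaling: divide a desktop TDP budget by a mobile SoC's TDP and
# multiply by its core count (figures from the post: 4800U = 8c/15W,
# TR3990X budget = 280W).
mobile_cores, mobile_tdp_w = 8, 15
desktop_tdp_w = 280

naive_cores = desktop_tdp_w / mobile_tdp_w * mobile_cores
print(f"naive estimate: ~{naive_cores:.0f} cores")  # ~149 cores

# The 3990X actually ships 64 cores in that 280 W budget. The shortfall
# comes from shared uncore power (I/O die, memory controllers, fabric)
# and from desktop cores clocking higher, which costs power roughly as
# f * V^2, i.e. superlinearly in frequency.
actual_cores = 64
print(f"naive math overshoots by ~{naive_cores / actual_cores:.1f}x")
```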
 

hardw0od

macrumors newbie
Jun 22, 2020
11
0
What we are seeing here is the end of Moore's Law in the 2010s, so the performance of desktop computers from the early part of the last decade is still reasonably good compared to the latest models. This wasn't so much the case in the 2000s.

The same thing can be seen in data centres, where there are racks full of Sandy Bridge Xeon powered machines, because they are still good enough for the job.

You are somewhat fortunate with your Mac Pro 5,1, as I think you can run a recent version of macOS if you have a Metal-capable GPU. Others can't upgrade beyond macOS High Sierra, despite RAM upgrades, SSD upgrades and having a decent enough Intel CPU.

Staying with an Intel CPU isn't going to protect our machines from obsolescence.


Oh, I'm definitely lucky. I have Catalina running on my 5,1 with 64GB of RAM, a Radeon 5700 and 2TB of NVMe storage on an OWC card. I can stretch 1-2 more years out of it, and I'm fine with that as it will get me to the next ARM Mac Pro.
 

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
What's your napkin math on this? Where are you getting numbers for Apple's target TDP, and how are you deriving 'at least 48' cores from that?

I'm sure it goes without saying that you can't just divide a target desktop TDP by an existing mobile SoC's TDP and arrive at some linear performance multiplier. For example, AMD can't just scale their mobile 4800U (8c/15W) to the TR3990X's TDP (280W) and shove 150 cores into a desktop package (280W/15W*8c = ~150c).

Of course not. But there is no reason to believe that cores which are power-efficient enough for smartphones will suddenly prove horribly inefficient when scaled up to a many-core design for server-grade hardware.

Amazon's own Graviton and Ampere's Altra processors have proven that ARM has no problem scaling up to many cores while retaining great efficiency. I don't expect it will be much different for Apple.

Plus, Apple's hypothetical Mac Pro ARM processor will be built on 5nm, which is about 1.8x denser and about 30% more power efficient than 7nm.
AMD just about managed to put 64 cores on 7nm. Do you think it would be technically impossible to put at least 48 cores on a 5nm process?

It is not about whether 48+ cores are technically possible; it is about whether Apple is interested in making such a CPU for the Mac Pro.
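A rough die-area sanity check on that claim, using only the figures quoted in this thread (64 cores on 7nm, roughly 1.8x density gain for N5); the numbers are assumptions for illustration:

```python
# If N5 is ~1.8x denser than N7, a 48-core layout on N5 would occupy
# roughly this fraction of the core area AMD spent on 64 cores at 7nm.
cores_7nm, density_gain = 64, 1.8

relative_area = (48 / cores_7nm) / density_gain
print(f"~{relative_area:.0%} of the 7nm 64-core area")  # ~42%
```

On this back-of-the-envelope basis the limit is budget and interest, not silicon area, which matches the post's conclusion.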
 

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
The problem with those huge ARM chips is that you just get a massive amount of slow cores. That's great for serving up web pages in the cloud, but on a desktop good single threaded performance is generally more useful. Massively parallel workstation tasks (e.g. machine learning) are best done with GPUs anyway.

Apple’s tablet ARM cores are actually already significantly faster than Intel’s desktop cores.

The problem is actually the reverse of what you imply. They don’t ship enough cores. So they beat Intel on single thread but suffer on multi thread. But... shipping more cores is an easier problem to solve as you start moving towards larger devices.

Apple’s ARM cores are very different from generic ARM cores in that they are really quite fast per core. Apple basically took the ARM instruction set but designed a desktop/laptop-style core with it.

To make things more complicated, Apple’s cores beat Intel at approximately half the clock speed, or less. So we’re going to be back to clock speed not being a workable measure.
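A back-of-the-envelope way to see what "beats Intel at roughly half the clock speed" implies for per-clock throughput. The clock figures below are approximate numbers assumed purely for illustration (A13 big cores around 2.66 GHz, a high-end Intel desktop part boosting around 5 GHz):

```python
# Equal single-core scores at unequal clocks imply the lower-clocked
# core does proportionally more work per cycle (higher IPC).
a13_clock_ghz = 2.66   # approximate A13 big-core clock (assumed)
intel_clock_ghz = 5.0  # approximate desktop boost clock (assumed)

ipc_ratio = intel_clock_ghz / a13_clock_ghz
print(f"implied per-clock advantage: ~{ipc_ratio:.1f}x")  # ~1.9x
```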
 
Last edited:

Santabean2000

macrumors 68000
Nov 20, 2007
1,886
2,050
I just can’t imagine an ARM CPU replacing a Xeon with any adequacy.

I’d bet the Mini, MacBook (maybe a new MacBook Air), and the lowest-end iMac get Apple CPUs while higher-end iMacs, maybe a Mini Pro, the MacBook Pro and the Mac Pro keep Intel.
I can see Apple Silicon making Intel look weak and impotent within just a few years. Performance will come from the tight integration of all their technologies, not just the raw CPU alone.
 

codehead1

macrumors regular
Oct 31, 2011
117
98
A lot of good discussion on processors—not to rain on anyone's parade, just to add a point of view:

Apple stated, clearly, that they are shooting for both lower power consumption and higher performance (you've seen the chart). The first is pretty much a given and the second is what's being argued. So, some day we'll be able to look back and see if Apple was right or wrong. But their intentions are clear—no one should assume they will be happy with sub-par performance. I think if the first attempts aren't stunning, they will put a lot of money and effort into getting it on target. Sure, it's possible they are overpromising, but I doubt it will be because they didn't have smart enough people in the room making the projections. They know Intel's roadmap.

Anyway, the bottom line is that laptops will see the biggest benefit, and that's a big market for Apple. Not just the CPU but the other systems that go along with it.

Personally, I'm not concerned about having bought a 7,1, for a number of reasons. For one, a Mac Pro killer/replacement will not be high on their priority list. I need to do stuff now.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,464
958
We can?

It depends on the workflow and constraints. (For example, there's no value to me in having a 6 watt CPU over a 100 watt CPU in my desktop.)

And it's completely unknown whether an ARM CPU scaled up to 32 cores and Xeon-class performance will be anywhere close to 6 watts.

Put down the Kool-Aid.
It'd be nice if you quoted the arguments you were supposed to reply to, because I never suggested that a 32-core ARM CPU should draw 6 W.
It'd also be nice if you defended your position that Apple CPUs will not scale up to workstation-class performance instead of posting condescending images. How old are you?
All you can say is "show me that workstation Apple CPU", which obviously is not released yet.
But performance results of the A13 and the fact that many-core ARM CPUs exist and are very efficient, do not point to any roadblock.
 
Last edited:

jeanlain

macrumors 68020
Mar 14, 2009
2,464
958
Apple’s tablet ARM cores are actually already significantly faster than Intel’s desktop cores.
I don't know if they are faster. But they are competitive.
Unfortunately, we don't have power consumption numbers. We don't know the TDP of the A13. Some sources indicate 6 W, but that's apparently a guess from AnandTech's review.
Results from that review are quite confusing. One page shows values in the 0.5 W range, which seems too low even for a single thread.
Then another page shows results in the 5 W range, which makes more sense. But it's still unclear whether the test used multiple threads. If it did, that could explain the 10X increase in power consumption. But then they show results for the Intel/AMD CPUs, which should be much higher given their higher core counts.
I'm not even sure how they measure the power consumption of the SoC. Is it deduced from the power consumption of the whole phone? The testing methodology is pretty obscure.
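For what it's worth, one plausible way reviewers estimate SoC power without board-level instrumentation is to measure whole-device draw at idle and under load and attribute the delta to the SoC. The numbers below are made up purely to show the method and why it is ambiguous:

```python
# Hypothetical whole-phone power readings (illustrative values only).
idle_w = 0.8   # screen on, CPU idle
load_w = 5.9   # same screen state, benchmark running

soc_estimate_w = load_w - idle_w
print(f"estimated SoC power: {soc_estimate_w:.1f} W")  # ~5.1 W

# Caveat: the delta also includes RAM, power-delivery losses, and any
# load-induced display/thermal changes, so it bounds CPU power from
# above rather than pinning down a TDP.
```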
 
Last edited:

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Plus, Apple's hypothetical Mac Pro ARM processor will be built on 5nm, which is about 1.8x denser and about 30% more power efficient than 7nm.
AMD just about managed to put 64 cores on 7nm. Do you think it would be technically impossible to put at least 48 cores on a 5nm process?
Ahem. TSMC's 5 nm process is EUV. Which means: bye-bye, large monolithic dies.

Welcome, chiplets. Which is brilliant if you think about it.

Apple’s tablet ARM cores are actually already significantly faster than Intel’s desktop cores.
Considering that the dual-core MacBook Air is achieving 1000 points single-threaded and a 2000-point multithreaded score, I would not go that far.

And do not bring up the topic of emulation ;).

Apple demoed Shadow of the Tomb Raider in their presentation, running on this very silicon at 1080p with at least 30 FPS. And you know what that means?

A Renoir-based Vega 6 averages 14 FPS in the very same game at 1080p, Medium settings. So Rosetta 2 is pretty darn efficient at translating the code.

If the scores from GB5 are anything to go by, macOS itself hampers a lot of the ARM chips' performance. Or... iOS is extremely well coded and extracts every last bit of performance out of those CPUs. Apple may actually still have a pretty steep hill to climb to overtake x86. Because it's not Intel they have to beat. It's AMD they have to beat. And that will be way harder than good old Intel.
 
Last edited:
  • Like
Reactions: Unregistered 4U

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
Considering that the dual-core MacBook Air is achieving 1000 points single-threaded and a 2000-point multithreaded score, I would not go that far.

And do not bring up the topic of emulation ;).

But uhhhh... it is emulated.

Rosetta won't emulate AVX instructions, which is going to kill performance of things like Geekbench under Rosetta.

If the scores from GB5 are anything to go by, macOS itself hampers a lot of the ARM chips' performance. Or... iOS is extremely well coded and extracts every last bit of performance out of those CPUs. Apple may actually still have a pretty steep hill to climb to overtake x86. Because it's not Intel they have to beat. It's AMD they have to beat. And that will be way harder than good old Intel.

I don't really get this. iOS is built on top of macOS. We know what macOS on ARM looks like. We've been running it for years. Pick up any iPhone or iPad. That's macOS on ARM.

macOS and iOS share the same kernel, same drivers, same file system, same libraries, same threading, same networking, same graphics stack... same everything except for UI. That's why ARM Macs can just run iOS applications _because it's all the same_.

Your benchmarks aren't going to change. Performance isn't going to change. Finder.app is not going to be the straw that breaks ARM's back.
 

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
Ahem. TSMC's 5 nm process is EUV. Which means: bye-bye, large monolithic dies.

Welcome, chiplets. Which is brilliant if you think about it.

Possibly for cost reasons. But I still expect many 5nm designs will remain monolithic until the price becomes too prohibitive.

Considering that the dual-core MacBook Air is achieving 1000 points single-threaded and a 2000-point multithreaded score, I would not go that far.

And do not bring up the topic of emulation ;).

Apple demoed Shadow of the Tomb Raider in their presentation, running on this very silicon at 1080p with at least 30 FPS. And you know what that means?

A Renoir-based Vega 6 averages 14 FPS in the very same game at 1080p, Medium settings. So Rosetta 2 is pretty darn efficient at translating the code.

If the scores from GB5 are anything to go by, macOS itself hampers a lot of the ARM chips' performance. Or... iOS is extremely well coded and extracts every last bit of performance out of those CPUs. Apple may actually still have a pretty steep hill to climb to overtake x86. Because it's not Intel they have to beat. It's AMD they have to beat. And that will be way harder than good old Intel.

75% of native performance is not bad at all for AoT translation. Strictly going by Geekbench scores, if an A14-based core achieves 1600 single-core, you would be looking at a 1200 single-core score under emulation, which would be right in MacBook Pro 16 territory.
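The arithmetic behind that estimate, spelled out; the 1600 single-core figure is the post's hypothetical for an A14-class core, not a measurement:

```python
# ~75% translation efficiency applied to a hypothetical native score.
native_score = 1600            # hypothetical A14-class single-core score
translation_efficiency = 0.75  # the post's estimate for Rosetta 2 AoT

emulated_score = native_score * translation_efficiency
print(emulated_score)  # 1200.0
```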
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
Possibly for the cost reasons. But I still expect many of 5nm designs will remain monolithic until price becomes too prohibitive.
The majority of designs will be monolithic. But for EUV processes in general, the reticle limit is much, much smaller than for "standard" processes.

The reticle limit for TSMC's N7 is 830 mm². The reticle limit for 5 nm EUV may be 500 mm², or something like that.

This is the very reason why Hopper, from Nvidia, is an MCM GPU.

Prepare for housefires with 1 kW GPU devices ;).
 

Kpjoslee

macrumors 6502
Sep 11, 2007
417
269
The majority of designs will be monolithic. But for EUV processes in general, the reticle limit is much, much smaller than for "standard" processes.

The reticle limit for TSMC's N7 is 830 mm². The reticle limit for 5 nm EUV may be 500 mm², or something like that.

This is the very reason why Hopper, from Nvidia, is an MCM GPU.

Prepare for housefires with 1 kW GPU devices ;).

The reticle limit for current EUV is 858 mm², and it will not change for 5nm. It is 3nm that will use second-generation EUV, which will have a smaller reticle limit. At least for the next couple of years, most designs will remain monolithic.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,464
958
Apple demoed Shadow of the Tomb Raider in their presentation, running on this very silicon at 1080p with at least 30 FPS. And you know what that means?

A Renoir-based Vega 6 averages 14 FPS in the very same game at 1080p, Medium settings. So Rosetta 2 is pretty darn efficient at translating the code.
... and the Vega 11, which may be AMD's best iGPU (AFAIK), averages 19 fps.

Here we have a game that was ported from DX12, a port that probably cost performance. It was coded for desktop GPUs, not for TBDR mobile GPUs. This mismatch also costs performance. Then it ran under emulation, which costs about 30% of CPU performance, plus some GPU performance due to the Metal validation layer that ensures the absence of visual artifacts.
And even after all that, the A12Z matches or beats the best iGPUs running the game natively?
So, either:
- The A12Z/Metal combination beats the competition (i.e., mobile GPUs and iGPUs).
- The scene they demoed was much less taxing than the benchmark.
- The quality settings were lower than medium (definitely, but that can't explain everything).
- The WWDC demo did not run on an A12Z but on some other prototype. (We shall know when someone tries to replicate the demo on the dev kit.) EDIT: Federighi confirmed in an interview that the demos were running on the dev kit, hence the A12Z.
- Any combination of the above.
 
Last edited:
  • Like
Reactions: ZombiePhysicist