
raftman

macrumors member
Original poster
Apr 15, 2020
38
53
I’m wondering, is this the end of the iMac getting “desktop” CPUs? I don’t see Apple offering as many CPU options as Intel.

On the other hand, if this is a two-year transition, should we expect Apple Silicon processors in the Mac Pro? That means they will have to outperform a 28-core Xeon in that time.
 

Realityck

macrumors G4
Nov 9, 2015
11,409
17,202
Silicon Valley, CA
I’m wondering, is this the end of the iMac getting “desktop” CPUs? I don’t see Apple offering as many CPU options as Intel.

On the other hand, if this is a two-year transition, should we expect Apple Silicon processors in the Mac Pro? That means they will have to outperform a 28-core Xeon in that time.
Only people who use iPhones and iPads seem to think their devices have that much performance.
 
  • Haha
Reactions: Trusteft

verticalines

macrumors newbie
Mar 12, 2015
28
14
If there's core parity, assuming an Apple core is close enough to an Intel core in IPC, then it's simply a matter of Apple scaling up the core count until it closes or exceeds the gap.

The better question is whether Apple can use 24 cores to outperform what Intel needs 28 to do. Or even 12, with accelerators/co-processors onboard under optimized programs. What if Apple needs more cores but at far less power draw? That's still a big win. It's not cheap to design, say, a 28-core chip for limited use cases, so there's a line where they'll settle. AMD does chiplets, and it's likely Apple and Intel go that route for scaling up.
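
To put rough numbers on that trade-off, here's a quick sketch; the per-core ratio is a made-up placeholder, not a measurement:

```python
import math

# How many cores are needed to match a 28-core part, given an
# assumed per-core throughput ratio? (Placeholder numbers only.)
intel_cores = 28
per_core_ratio = 1.15  # hypothetical: Apple core does 15% more work per clock

apple_cores_needed = math.ceil(intel_cores / per_core_ratio)
print(f"~{apple_cores_needed} cores to match, at {per_core_ratio}x per core")
# -> ~25 cores; at exact parity (1.0x) it's simply 28
```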

And then there's the whole graphics thing. An iGPU can replace low-to-midrange options, but how do you dethrone, say, a Navi flagship? At least Nvidia, as of late last year, has said their cards can run in conjunction with ARM.

I think Apple has something in the works for a powerful GPU-type SoC section, more toward the latter end of the two-year transition period than now. If not, there's always teaming up with others.

[Edit: I doubt Apple is going to dive deep into simultaneous multithreading, as ARM cores aren't designed for that. So we can probably assume they will scale core count and find other ways to speed processes up, like their accelerators.]
 
Last edited:

jgorman

macrumors regular
Jul 16, 2019
186
108
We do not know about retail Apple Silicon chips yet, but an existing ARM CPU, the 32-core Ampere eMAG, compares nicely with Intel's 28-core Xeon Gold 5120 and AMD's 24-core EPYC 7401P in benchmarks in terms of performance and power efficiency.

The Ampere eMAG was designed for servers, but a workstation recently came out that has it.

Ampere has an 80-core ARM CPU set to hit the market by the end of the year.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
[Edit: I doubt Apple is going to dive deep into simultaneous multithreading, as ARM cores aren't designed for that. So we can probably assume they will scale core count and find other ways to speed processes up, like their accelerators.]

Apple’s basic recipe for performance (or at least part of it) seems to be going really wide. Their designs are ridiculously superscalar, with an estimated 13 execution ports (compare that to 8 ports on Skylake and 10 on Ice Lake) and tons of cache to feed them all. If they can scale the frequency of their CPUs, they will probably outperform Intel and AMD by 10-25% in single-threaded applications.
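
A quick illustration of the wide-and-slow idea, with purely hypothetical IPC and clock figures:

```python
# Single-thread performance scales roughly with IPC * clock.
# Both designs below are hypothetical, for illustration only.
designs = {
    "wide, lower clock (IPC 5.0 @ 3.0 GHz)":    5.0 * 3.0,
    "narrow, higher clock (IPC 2.5 @ 5.0 GHz)": 2.5 * 5.0,
}
for name, score in designs.items():
    print(f"{name}: relative score {score:.1f}")
# The wide design wins (15.0 vs 12.5) despite a 2 GHz clock deficit.
```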

And as you say, it’s reasonable to assume that they can stack multiple cores together to scale multithreaded performance. Memory access will be an issue, but I suppose they have some sort of solution there, given how well their unified-memory SoCs perform in an iPad. I wouldn’t be surprised to see some sort of stacked DRAM design with a very wide bus...
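
For a sense of what a very wide bus buys, peak bandwidth is just bus width times data rate; both configurations below are hypothetical:

```python
# Peak DRAM bandwidth in GB/s = (bus width in bits / 8) * GT/s.
def peak_bw_gbs(bus_bits: int, gigatransfers_per_s: float) -> float:
    return bus_bits / 8 * gigatransfers_per_s

print(peak_bw_gbs(128, 4.266))  # ~68 GB/s: 128-bit LPDDR4X-class bus
print(peak_bw_gbs(1024, 2.0))   # 256 GB/s: 1024-bit HBM-style stack
```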

And then there's the whole graphics thing. An iGPU can replace low-to-midrange options, but how do you dethrone, say, a Navi flagship? At least Nvidia, as of late last year, has said their cards can run in conjunction with ARM.

If their GPU scales, they could easily build something that rivals a 2060-2070 under a 50-watt TDP. Of course, this would probably mean using dedicated video memory (something Apple has never done before). Or they could again go for a stacked design with high-bandwidth RAM to feed both the CPU and the GPU (like consoles do). Video RAM is a hack to begin with, and Apple’s GPUs need much less bandwidth because of their design... but at this point I’m probably too far into the realm of wishful thinking. I would love to have a fast TBDR GPU in a desktop; that would allow some really neat applications in games.
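
Linear scaling is easy to sketch, even if feeding the cores is the hard part; the per-core figure below is an assumption, not a spec:

```python
# Naive linear scaling of GPU compute with core count.
tflops_per_core = 0.17  # assumed FP32 TFLOPS per Apple GPU core
for cores in (8, 16, 32):
    print(f"{cores:2d} cores: ~{cores * tflops_per_core:.1f} TFLOPS FP32")
# ~1.4 / ~2.7 / ~5.4 TFLOPS; a 2060 is around 6.5 TFLOPS, but a TBDR
# design does more per FLOP, so raw TFLOPS understate it.
```
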
We do not know about retail Apple Silicon chips yet, but an existing ARM CPU, the 32-core Ampere eMAG, compares nicely with Intel's 28-core Xeon Gold 5120 and AMD's 24-core EPYC 7401P in benchmarks in terms of performance and power efficiency.

The Ampere eMAG was designed for servers, but a workstation recently came out that has it.

Ampere has an 80-core ARM CPU set to hit the market by the end of the year.

Ampere is very good in perf per $, but its raw performance is not too impressive... also, these are server workloads. The Xeon W in a Mac Pro, for example, needs a different performance profile.

Personally, I believe that for professional computing, single-threaded performance is still the way to go. Not all tasks are embarrassingly parallel. You want designs that maximize it while providing a large number of cores for scaling. Incidentally, I think Apple (and ARM) could have an advantage here with an asymmetric CPU design. Say, 4 high-performance cores and 32 high-efficiency ones. Kind of reminds me of the Cell ;) Cell failed since it was too complex to program, but these days its basic design principles could map very well to what the pro market needs.
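
Amdahl’s law makes that point concrete; the parallel fractions below are illustrative:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n) on n equal cores,
# where p is the parallelizable fraction of the workload.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"p={p:.0%}: 4 cores -> {speedup(p, 4):.1f}x, "
          f"32 cores -> {speedup(p, 32):.1f}x")
# p=50%: 1.6x vs 1.9x   (serial work dominates; fast cores matter)
# p=99%: 3.9x vs 24.4x  (only near-embarrassingly-parallel work
#                        rewards a big array of efficiency cores)
```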
 
Last edited:

verticalines

macrumors newbie
Mar 12, 2015
28
14
Apple’s basic recipe for performance...

Thanks for providing some development/technical info. I haven't been keeping up with everything coming out in recent years, except in general terms and ideas.

I think Apple's advantage is being able to take control of their own destiny going forward. Intel and AMD have to account for everything, since they just make chips and someone else does the software. AMD plays both games, in processors and graphics. Nvidia is kind of the same, but they do push CUDA pretty far in terms of support and development. That's probably about as vertically integrated as a player like Nvidia can ask for.

I don't think Apple could realistically stack memory and chips yet. It's taking Intel a long time to do Foveros, and despite their fabbing woes, they're still Intel. Who really knows how the A chips scale; has anyone slapped a cooler on one and run it? Or are the chips hard-limited to not go past a point? Apparently each generation is designed to the node for trade-offs, so given the cooling overhead, who knows what the cores can boost up to. As long as IPC improves, they wouldn't need the highest clock speeds either (Intel's woes right now).

I mentioned Nvidia because it offers an option for Apple to just focus on the SoC for the mainstream and let someone else with far more experience handle the high-power graphics. They can develop essentially everything, like Intel with their iGPU, and then go further via peripherals. That way they're not putting too many eggs in one basket trying to cater to all needs.

I'm hesitant to say a 2060 is easily achievable; there's a reason Nvidia's 1650-class cards sit at 75 W. But replacing, say, the Intel Iris options (which are quite speedy) is a good starting point.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
I'm hesitant to say a 2060 is easily achievable; there's a reason Nvidia's 1650-class cards sit at 75 W. But replacing, say, the Intel Iris options (which are quite speedy) is a good starting point.

The A12Z GPU already outperforms Intel Iris by a healthy margin, even in the iPad form factor. If it scales, a 16-core GPU based on this core could be competitive with upper-midrange offerings from Nvidia or AMD.

There are two things that make Apple GPUs stand out. First, they are tile-based deferred renderers (which means less wasted work overall). Second, they seem to be very good at asynchronous work. And these two things feed each other. Apple already has a clear performance-per-watt advantage over the immediate-mode renderers from Nvidia. The question is whether they can scale.
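
A toy overdraw count shows where the "less wasted work" comes from; this deliberately ignores early-Z and other real-world tricks:

```python
# Three overlapping opaque layers over one 4-pixel tile.
# Immediate mode shades every covered fragment; a tile-based
# deferred renderer resolves visibility first, shading each pixel once.
tile_pixels = 4
layers = 3

immediate_shades = tile_pixels * layers  # 12 fragment-shader invocations
deferred_shades = tile_pixels            # 4, independent of depth complexity

print(f"immediate: {immediate_shades}, deferred: {deferred_shades}")
```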
 
  • Like
Reactions: Moonjumper

verticalines

macrumors newbie
Mar 12, 2015
28
14
I mention Iris because Intel charges a hefty premium for the eDRAM versions, while all the non-eDRAM Iris parts are kind of lacking. Just getting to that eDRAM level of performance without the eDRAM, and at lower power, would be a solid performance baseline across the board. It would also further the efficiency advantage, since a bigger GPU doesn't entirely dissolve the need for an iGPU-type part where power efficiency matters.

Nvidia has been using tile-based rasterization since Maxwell (something that only came to light in 2016), and AMD started with Vega and later. I'm waiting to see what Apple does here, because they have the option of starting fresh. There's a lot more extrapolation in the graphics benchmarks, and I doubt Apple seriously thinks that, with just a few years of graphics experience, they can overcome the two powerhouses. It's one thing to do an SoC; it's another to really push it far enough to surpass a card that pulls 250 W because it's running something like 2,500 cores and 16 GB of its own memory.
 

Pressure

macrumors 603
May 30, 2006
5,178
1,544
Denmark
Apple is already king of performance per watt. The A13 (and the A12X/Z) is desktop class in performance.

Apple Silicon will be much faster once it sheds the restrictions of low thermal design power and tight form factors.

[Chart: SPEC2006 performance, Apple A13 vs. desktop CPUs]
 