
docbot

macrumors newbie
Original poster
Apr 28, 2011
29
3
Considering the great performance of the MacBook Air, and with rumors flying around of 32-core M1 versions and 64-core GPUs, I kinda wonder if Apple will go all out to annihilate Intel in performance per watt, or if they're going to throttle themselves a bit now that they can easily outpace the competition.

I still can't quite imagine what performance their $3,000 pro laptop will deliver if their $900 one is already running circles around their old $3,000 machines in certain conditions.

What do you think? Will we really see 16- or even 32-core MacBook Pros, or will it be more like 10 or 12 cores plus better GPUs?
 

mcnallym

macrumors 65816
Oct 28, 2008
1,210
938
Apple will deliver the performance that they feel they need.
They aren't competing with Intel for sales, and with more specialist silicon, the raw CPU/GPU grunt doesn't have to match.

Apple is about system usability, not performance benchmarks.
 

Raima

macrumors 6502
Jan 21, 2010
400
11
Intel isn't worried about the M1 as long as Apple doesn't allow Windows to run natively on Apple Silicon. Apple can't steal Intel customers that it intentionally blocks out.
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
What do you think? Will we really see 16- or even 32-core MacBook Pros, or will it be more like 10 or 12 cores plus better GPUs?

16" MacBook still has dedicated GPU that runs circles around M1. So in order for Apple to claim "similar GPU performance" as the current-gen 16", the M1X needs like... 3-4x the GPU performance of M1. If Apple wants to claim "2x GPU performance as last generation", they need at least a 64-core GPU, plus dedicated video memory to keep up with demands. That means power consumption of this theoretical M1X chip will be worse than M1. So I'd think M1X will have worse performance per watt than M1.

As performance scales up, performance per watt gets worse, not better. M1 is most likely already the absolute best performance per watt that Apple can achieve. It's all downhill from here... all in pursuit of better performance.
 

ekwipt

macrumors 65816
Jan 14, 2008
1,067
362
AMD is Apple's main competitor ATM; AMD's new laptop chips are nothing to be sneezed at (still beating the Apple M1 in multicore scores).
 

Maconplasma

Cancelled
Sep 15, 2020
2,489
2,215
AMD is Apple's main competitor ATM; AMD's new laptop chips are nothing to be sneezed at (still beating the Apple M1 in multicore scores).
I don't even see how AMD is a competitor. Plain and simple, Apple's chips will run on Macs and AMD's will not, so there's no contest; even if AMD proves to be faster, that wouldn't matter unless a person would prefer Windows to the Mac.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Apple's chips probably don't even worry Intel. Intel is still the king for Windows, and Windows is x64. Businesses will buy what they need to run the software they need...

The M1 is impressive, it's true, but I know I can't buy it for where I work. (But I have Macs at home. :)
 

Madhatter32

macrumors 65816
Apr 17, 2020
1,478
2,949
I doubt Apple views Intel as a competitor one way or the other. They probably simply view them as a component vendor that they relied upon in the past and no longer need. It's not as if they are selling the M1 chip as a standalone product. I think Apple views the Mac as competing against Lenovo, Dell, HP, Acer, MSI, and others in terms of their computer products. They are probably not going to hold back performance-wise, because the winds can change rather quickly in the technology world. If they have an advantage, I think they plan to exploit it.
 

ChrisA

macrumors G5
Jan 5, 2006
12,919
2,172
Redondo Beach, California
What do you think? Will we really see 16- or even 32-core MacBook Pros, or will it be more like 10 or 12 cores plus better GPUs?

It all depends on the software. On low-end Macs, I think most users are only running the web browser most of the time. Four cores work well enough for that.

But as you move up to higher-performance computers, more and more people start running non-Apple software, like maybe the Adobe suite. Apple can ensure that its own software runs on 16 or 32 cores, but third parties are not going to do this. So adding more cores will not make (say) Photoshop run faster.

Adding cores only works if you modify the software to take advantage of them. Apple will have a hard time convincing others to re-write their software.

So if Apple sold a 16-core Arm-based Mac, it might be good for running Final Cut Pro or even Logic but not much else would run faster on 16 cores than on 4 cores.
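To put rough numbers on that intuition, here's a quick Amdahl's-law sketch. The 50% parallel fraction below is just an illustrative guess at how parallel a typical desktop app might be, not a measured figure:

```python
# Amdahl's law: overall speedup is capped by the serial fraction of the task.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# An app that is only 50% parallel barely benefits past a handful of cores:
for n in (4, 16, 32):
    print(f"{n} cores -> {amdahl_speedup(0.5, n):.2f}x")
```

With these numbers, going from 4 to 32 cores only moves the speedup from 1.60x to about 1.94x, which is the point ChrisA is making.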
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
There is so much wrong here.

1. Apple is competing with Intel and AMD. People do choose computers based on value and performance, not just macOS.

2. Apple will try to pummel Intel and AMD into the ground. iOS + faster SoCs is how the iPhone competes against the phone competition. macOS + faster SoCs is how Apple will now compete against the computer competition.

3. Apple does need to try hard to stay ahead of AMD, Intel, and other upstart ARM competitors. Intel isn't going to hold back now that it has a technical CEO again. And AMD is a formidable competitor. Apple's M1 is comfortably beating the competition now, but Apple will need to be aggressive to keep its large lead.

4. Mac market share is small compared to Apple's market share in phones, tablets, wireless headphones, and Watches. In order to gain market share, Apple needs to be ultra aggressive.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Adding cores only works if you modify the software to take advantage of them. Apple will have a hard time convincing others to re-write their software.
I think memory is a bigger issue.

Most tasks don't parallelize well, and the apps for those tasks rarely benefit from more than a few CPU cores. On the other hand, if a task parallelizes well enough to use tens of CPU cores efficiently, it probably runs even better on a GPU. Especially with unified memory.

Systems with high core counts are primarily useful for running many independent tasks in parallel. That may require a lot of memory, because the independent tasks often operate on separate data. As a rule of thumb, 2 GB/core is a low-memory system, 4 GB/core is normal, 8 GB/core is high, and 16 GB/core is very high. Apple could release a 16-core MBP, but then the 32/64 GB memory options would be comparable to the 8/16 GB options on M1 Macs. A 32-core CPU would go to waste in a laptop limited to 64 GB of memory.
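That rule of thumb is easy to turn into a tiny sanity check (the tier cutoffs are taken straight from the numbers above; the 16- and 32-core configurations are the hypotheticals from this post):

```python
# Classify a configuration by GB of RAM per CPU core, per the rule of thumb.
def memory_tier(total_gb, cores):
    gb_per_core = total_gb / cores
    if gb_per_core <= 2:
        return "low"
    if gb_per_core <= 4:
        return "normal"
    if gb_per_core <= 8:
        return "high"
    return "very high"

print(memory_tier(32, 16))  # hypothetical 16-core MBP with 32 GB
print(memory_tier(64, 16))  # same machine with 64 GB
print(memory_tier(64, 32))  # a 32-core CPU capped at 64 GB
```

The 32-core case lands back in the "low" tier even at 64 GB, which is the "would go to waste" point.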
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
Adding cores only works if you modify the software to take advantage of them. Apple will have a hard time convincing others to re-write their software.

So if Apple sold a 16-core Arm-based Mac, it might be good for running Final Cut Pro or even Logic but not much else would run faster on 16 cores than on 4 cores.

And it's not even whether Apple can convince developers to make apps heavily multithreaded, but whether the app itself can benefit from it. And if it can, can it benefit even more from Metal? Final Cut, Affinity Photo, and Lightroom all benefit more from Metal acceleration than from throwing CPU cores at the problem. For things that can be computed on the GPU, it is just going to be easier to get massive scale using the GPU instead of CPU cores. So on that front I agree.

That said, there are some professional use cases where scaling up in CPU cores may be desirable and GPU compute isn't viable. Larger Xcode projects benefit. VMs benefit. Anything you want to let churn in the background while also doing work in the foreground benefits, to a point.

All that said, I do expect to see more cores in a 16" 'M1X' MBP compared to the 13" M1 MBP, but the count of power cores will likely be comparable to the Intel version. So 6-8 power cores, and then 4-8 efficiency cores added in. I'd be surprised if we don't see a "12 core" MBP, but 16 is probably a stretch unless they go heavy on efficiency cores. The thing I wonder about is how Apple will split the cores for these higher-end SoCs. Doing some rough napkin math, power cores scale better for performance vs die space, but efficiency cores scale better for performance vs power. So I'd be interested to see how Apple balances these competing desires.
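That napkin math can be sketched with made-up numbers that merely encode the claim (none of these per-core figures are Apple's real numbers; they just assume efficiency cores win on perf/watt while performance cores win on perf/area):

```python
# Illustrative-only per-core figures (NOT Apple's real numbers):
# performance core: 1.0 perf, 5.0 W, 1.0 units of die area
# efficiency core:  0.3 perf, 0.8 W, 0.4 units of die area
def cluster(p_cores, e_cores):
    perf = p_cores * 1.0 + e_cores * 0.3
    watts = p_cores * 5.0 + e_cores * 0.8
    area = p_cores * 1.0 + e_cores * 0.4
    return perf, perf / watts, perf / area

# Three ways to spend a similar transistor budget:
for p, e in [(8, 4), (6, 8), (4, 12)]:
    perf, per_watt, per_area = cluster(p, e)
    print(f"{p}P+{e}E: perf={perf:.1f} perf/W={per_watt:.3f} perf/area={per_area:.3f}")
```

Under these assumptions, shifting toward efficiency cores raises perf/W but lowers perf/area, which is exactly the balancing act described above.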

And since Apple has to be pushing at least 16 GPU cores just to compete with the 5600M, there’s that to consider as well. Doubling or tripling die size of the GPU cores adds another thing that needs to be balanced with the rest.
 

rui no onna

Contributor
Oct 25, 2013
14,916
13,261
4. Mac market share is small compared to Apple's market share in phones, tablets, wireless headphones, and Watches. In order to gain market share, Apple needs to be ultra aggressive.

iPhone market share is partly subsidized/supported by wireless carriers offering promotions and 2+ year device installment plans.

The majority of x86 computers sold are of the cheap variety, and I don't think Apple has any plans to compete in that tier.
 

Solomani

macrumors 601
Sep 25, 2012
4,785
10,477
Slapfish, North Carolina
Considering the great performance of the MacBook Air, and with rumors flying around of 32-core M1 versions and 64-core GPUs, I kinda wonder if Apple will go all out to annihilate Intel in performance per watt, or if they're going to throttle themselves a bit now that they can easily outpace the competition.

I still can't quite imagine what performance their $3,000 pro laptop will deliver if their $900 one is already running circles around their old $3,000 machines in certain conditions.

What do you think? Will we really see 16- or even 32-core MacBook Pros, or will it be more like 10 or 12 cores plus better GPUs?

When Apple develops products or technologies, they don't think a lot about what the competition is doing. Apple focuses on what Apple thinks is progress and advancement for THEIR OWN ecosystem.

That's why Apple, for decades, has done things that totally break with tradition in the rest of the PC/Windows world... like when Apple decided to stop shipping floppy drives (when they first introduced the iMac), and the rest of the PC world said, "HUH? What the hell are you doing, Apple? That's crazy. Everyone needs and loves floppy drives."

So Apple's moves with its processors are not intended to "destroy" Intel or put it out of business.

IF Intel does go kaput and run itself into the ground, it's because Intel itself failed to compete with whatever Apple (or AMD, etc.) was doing. It would be Intel's own fault, not Apple's.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
16" MacBook still has dedicated GPU that runs circles around M1. So in order for Apple to claim "similar GPU performance" as the current-gen 16", the M1X needs like... 3-4x the GPU performance of M1. If Apple wants to claim "2x GPU performance as last generation", they need at least a 64-core GPU, plus dedicated video memory to keep up with demands. That means power consumption of this theoretical M1X chip will be worse than M1. So I'd think M1X will have worse performance per watt than M1.

As performance scales up, performance per watt gets worse, not better. M1 is most likely already the absolute best performance per watt that Apple can achieve. It's all downhill from here... all in pursuit of better performance.
Apple could "kill" Intel/AMD high end computing if they want on pure core count.The modularity of the new ASi Mac Pro will be interesting. Will we see two-four M processors working together? Will we see Apple branded dedicated GPU and neural engine card? iMac and MBP are less interesting for understanding Apples ambitions in the high end computing space unless of course they use 2-4 M1 in them but that is unlikely. That Mac Pro was designed for a larger purpose than putting Intel chips inside.

Adding more cores scales nearly linearly in performance, as does the power used. So doubling the core count doubles both performance and power, and performance per watt stays essentially the same. Scaling cores will therefore have very little impact on performance per watt, though some overhead to manage all those cores, plus the increased power usage of memory, is expected. If, however, Apple increases the frequency, performance per watt will drop rapidly even as performance goes up. Performance per watt is a difficult concept because it depends on what you include: an individual M1X core at the same frequency will have the same performance per watt as an M1 core, assuming the same fab process; the whole system, including memory, might not.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I kinda wonder if Apple will go all out to annihilate Intel in performance per watt

In terms of performance per watt, Apple has an undisputed lead over pretty much everybody else. But performance per watt is only one part of the equation. There is also absolute performance, and Apple has to tackle that goal in order to complete its hardware transition. Fortunately, the path forward is fairly obvious for them.

What do you think? Will we really see 16- or even 32-core MacBook Pros, or will it be more like 10 or 12 cores plus better GPUs?

I think this can be answered fairly accurately by looking at the state of the art. While Apple doesn't compete with other chip makers directly, what these chipmakers do does matter; after all, Apple wouldn't want to end up with slower machines than the competition. For example, a 32-core MacBook Pro doesn't make much sense: even at 3 watts per core, we are looking at around 100 W of CPU TDP, and it's a waste of silicon in the end.

Projecting what we know about Apple's performance per watt:

- high-end laptops will require 8 performance CPU cores (to compete with 8-core Tiger Lake and Zen 3 Ryzen chips) and 16-32 GPU cores (to compete with the mobile 3050/3060), with a combined TDP of around 70-80 watts
- high-end desktops will require at least 12 performance CPU cores (to compete with 16-core Intel and AMD chips) and somewhere around 40 GPU cores (to compete with high-end dGPUs)
- professional desktops, well, that's a bit more complicated. A 32-core Mac Pro should be able to match 64-core EPYC CPUs, so I guess that is the upper range we will see. Also, a 64-core Apple GPU should be more than performant enough to take on anything Nvidia or AMD can scramble together.
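For what it's worth, the arithmetic behind these projections can be sketched like this (the per-core wattages are assumptions extrapolated from the M1, not Apple specs):

```python
# Rough SoC TDP estimate from assumed per-core budgets:
# ~5 W per performance CPU core, ~1.25 W per GPU core (extrapolated from
# the M1's 8-core GPU). Both numbers are guesses, not Apple figures.
def soc_tdp(cpu_cores, gpu_cores, w_cpu=5.0, w_gpu=1.25):
    return cpu_cores * w_cpu + gpu_cores * w_gpu

print(soc_tdp(8, 32))             # 8 perf cores + 32 GPU cores -> 80.0 W
print(soc_tdp(32, 0, w_cpu=3.0))  # 32 CPU cores even at 3 W each -> 96.0 W
```

Under these assumptions, the 8-CPU/32-GPU configuration lands right inside the 70-80 W laptop budget above, while 32 CPU cores alone blow past what a laptop can cool.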
 

Apple_Robert

Contributor
Sep 21, 2012
35,665
52,472
In a van down by the river
I don't believe Apple is concerned about Intel. Apple has already crushed what Intel was offering. If Microsoft and Apple can work together to get Windows running on the M series, the proverbial writing will be on the wall for Intel, written in tech blood, in my opinion.
 

Falhófnir

macrumors 603
Aug 19, 2017
6,146
7,001
I don't think Apple intends to actively drive Intel out of business, no. Why would they want to, really? I know the Machiavellian saying about men being either caressed or crushed, but I don't think it applies here. Intel's gain isn't directly Apple's loss, or vice versa. OTOH, they might just have started an avalanche that kills off x86 by showing the way for others.
 

UBS28

macrumors 68030
Oct 2, 2012
2,893
2,340
Apple can't annihilate Intel, since Apple Silicon does not support x86.

Heck, I even bought the 13" 2020 MBP (high-end version with the 10nm chip) as I knew it would be the last Intel Mac.

Mac OS X simply has the worst software support, so I don't see myself being pigeonholed into Mac OS X. Some of my hardware doesn't even work with Mac OS X since Apple dropped 32-bit support.

Intel should be worried about AMD, as AMD is the one that has been annihilating Intel for years now.
 

crevalic

Suspended
May 17, 2011
83
98
In terms of performance per watt, Apple has an undisputed lead over pretty much everybody else. But performance per watt is only one part of the equation. There is also absolute performance, and Apple has to tackle that goal in order to complete its hardware transition. Fortunately, the path forward is fairly obvious for them.



I think this can be answered fairly accurately by looking at the state of the art. While Apple doesn't compete with other chip makers directly, what these chipmakers do does matter — after all, Apple wouldn't want to end up with slower machines than the competition. For example, a 32 core MacBook Pro doesn't make much sense — even at 3 watts per core we are looking over 100W CPU TDP, and it's a waste of silicon in the end.

Projecting what we know about Apple's performance per watt:

- high-end laptops will require 8 performance CPU cores (to compete with 8-core Tiger Lake and Ryzen 3) and 16-32 GPU cores (to compete with 3050/3060 mobile), with a combined TDP of around 70-80 watts
- high-end desktops will require at least 12 performance CPU cores (compete with 16-core Intel and AMD chips) and somewhere around 40 GPU cores (to compete with high-end dGPUs)
- professional desktops, well, that's a bit more complicated. A 32-core Mac Pro should be able to match a 64 EPYC CPUs, so I guess that is the upper rang we will see. Also 64-core Apple GPU should be more then performant enough to take on anything Nvidia or AMD can scramble.
So naive. Maybe you should give the clearly incompetent engineers at AMD and Nvidia a call to help them improve their performance and crush their competition; they are sure to offer you an amazing job. Even better, give Intel a call: "just increase the number of your iGPU cores by a factor of 10, the performance will increase linearly, and you will easily beat anything AMD and Nvidia have to offer." I guess the engineers have never thought of this strategy, but thankfully we have the MacRumors couch GPU architects to teach them how to do their job.
I mean, it has totally never happened (except literally every single GPU generation) that scaling up the number of processing units in a GPU led to diminishing or even negative gains in performance. GPU architectures have definitely never been memory-bandwidth starved or had other inefficiencies that only became overwhelming while scaling up. I mean, Intel's Larrabee has been a fantastic success that has dominated the GPU market for the last decade, right?

Just to put things into perspective: an entire M1 SoC has 68.25GB/s memory bandwidth, while a single 3090 has a 936.2 GB/s bandwidth to its memory.


And while we are at it, let's have @iPadified school the CPU designers by explaining how easily Apple could kill the whole CPU industry and how performance scales linearly with the number of cores. It's not like 32-core Threadrippers already get memory starved with their quad RAM channels, and the same goes for high-core-count Epyc CPUs with 8 memory channels. LOL, just the part of the chip dealing with routing data in and out of an Epyc CPU has 5 times the area of a whole M1 SoC.


I'm not saying the M1 is bad; honestly, it's great. But it's a low-power consumer SoC with none of the advanced functionality somebody buying Epyc CPUs or Tesla GPUs would expect. High-end stuff is a whole different world. And Apple might do well there too, but you know, other companies are doing well too, and it's not trivial. It's insane to see how many people here suddenly became hardware experts when it's obvious you've never read an in-depth dive into arch design in your life, which, honestly, isn't surprising among Mac users.
 

LeeW

macrumors 601
Feb 5, 2017
4,342
9,446
Over here
if Apple will go all out to annihilate Intel in performance per watt or if they're going to throttle themselves a bit now that they can easily outpace the competition.

There is no money in Apple going all out. You can be sure that they could deliver today what they have on their CPU roadmap for the next 2-3 years if they wanted to, but they won't; they want everyone focused on that incremental upgrade each refresh, just like the iPhone.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
So naive. Maybe you should give the clearly incompetent engineers at AMD and Nvidia a call to help them improve their performance and crush their competition; they are sure to offer you an amazing job.

Unsure why you're quoting my post; I am not a CPU/GPU engineer. On a more general note, it's fairly clear that Apple's designs outclass the competition in terms of performance per watt by a large margin. In particular, they only need 5 watts to deliver a level of single-core CPU performance that Intel and AMD need 20 watts for.

I mean, it has totally never happened (except literally every single GPU generation) that scaling up the number of processing units in a GPU led to diminishing or even negative gains in performance. GPU architectures have definitely never been memory-bandwidth starved or had other inefficiencies that only became overwhelming while scaling up.

Why, someone seems rather angry :D I assumed it was obvious that Apple would have to scale up memory bandwidth if they are to scale up their processing clusters. Their communication on the matter has been rather transparent: they are investing in wide-memory architectures to achieve high memory bandwidth with low latency.

And talking about scaling, it's pretty much linear, especially in GPU land, as long as you can control all the other factors (bandwidth, ROPs, etc.). It's fairly clear when you look at the performance of GPUs within a single generation. Comparing across generations is much trickier, especially given the misleading marketing of GPU makers (like Nvidia's "CUDA core"). Here, we are talking about increasing the number of Apple GPU clusters (each of which comprises four 32-wide ALUs, local memory, and a dispatch/control unit). If you can ensure that the work is evenly distributed across these clusters (which is not that complicated in a tiled architecture), you get pretty much guaranteed linear scaling. All you need is enough bandwidth, which is solvable.

Just to put things into perspective: an entire M1 SoC has 68.25GB/s memory bandwidth, while a single 3090 has a 936.2 GB/s bandwidth to its memory.

The M1 has bandwidth appropriate to its processor configuration. Future Apple Silicon clusters will have larger caches and more bandwidth. We should also keep in mind that for graphical applications, Apple GPUs need significantly less bandwidth than Nvidia or AMD, because they use bandwidth more efficiently. Compute work can be a different matter, but there Apple Silicon's main selling point is low CPU/GPU communication latency.
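A quick bytes-per-FLOP check backs this up: dividing the bandwidth figures quoted in this thread by approximate public FP32 peak-compute numbers (~2.6 TFLOPS for the M1 GPU, ~35.6 TFLOPS for the RTX 3090; both approximate, not official matched benchmarks) gives roughly similar ratios:

```python
# Bytes of memory bandwidth available per FP32 FLOP, using the bandwidth
# figures from the thread and approximate public peak-compute numbers.
def bytes_per_flop(bandwidth_gb_s, tflops):
    return bandwidth_gb_s / (tflops * 1000.0)

m1 = bytes_per_flop(68.25, 2.6)         # M1 SoC: 68.25 GB/s, ~2.6 TFLOPS
rtx_3090 = bytes_per_flop(936.2, 35.6)  # RTX 3090: 936.2 GB/s, ~35.6 TFLOPS
print(round(m1, 3), round(rtx_3090, 3))
```

Both land around 0.026 bytes per FLOP, i.e. the M1's bandwidth is roughly in proportion to its compute, even though the absolute numbers differ by an order of magnitude.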

And while we are at it, let's have @iPadified school the CPU designers by explaining how easily Apple could kill the whole CPU industry and how performance scales linearly with the number of cores. It's not like 32-core Threadrippers already get memory starved with their quad RAM channels, and the same goes for high-core-count Epyc CPUs with 8 memory channels.

Which is why I believe that Apple will be using something like 16 64-bit memory channels in their pro-desktop chips.

LOL, just the part of the chip dealing with routing data in and out of an Epyc CPU has 5 times the area of a whole M1 SoC.

... and is built on an inferior process. I don't think anyone has any estimate of how big these things will end up when fabricated on TSMC's 5 nm or 3 nm process.

High-end stuff is a whole different world. And Apple might do well there too, but you know, other companies are doing well too, and it's not trivial.

Of course it's not trivial. But you seem to be ignoring a crucial bit: Apple does not need to sell these chips to anyone. The chip itself doesn't have to be market-competitive; the resulting computer does. Intel and AMD need to make all these huge, complex chips and be able to sell them at a profit. For Apple, R&D plus manufacturing just needs to cost around the same as what they paid Intel and AMD.
 