
Bodhitree

macrumors 68020
Original poster
Apr 5, 2021
2,085
2,217
Netherlands
It seems to me that Apple, by starting to design their own desktop CPUs, have a wonderful opportunity to disrupt the industry further by scaling up the core count in desktop machines beyond what Intel or AMD might want to do.

Think on this: Intel and AMD are holding to a certain pattern of core counts because of the way their market works. Low-power chips with 2 or 4 cores, standard desktop chips with 4 or 6 cores, high-end desktops with 8 or 10 cores. Then server chips with up to 32 or 64 cores. This allows them to maximise their revenue by charging what the market will bear in different market segments.

Now, the ARM Neoverse architecture already allows for 64-core ARM server chips, and these are already being made for Amazon Web Services in the form of their Graviton 2 processors, which largely follow ARM's reference design. This means that there is already an example implementation which solves all of the problems associated with putting ARM CPU cores in a large-scale setup.

So in order to maximise the impact of the new machines, we might see processors with 16 performance cores for desktop iMacs, and perhaps even 64 performance cores for a future Mac Pro. After all, Apple doesn't make chips for the server market, and is free to use these technologies to fill out its own product lineup however it sees fit. In one fell swoop Apple could redefine the market so that every iMac could function as a high-class workstation, and the Mac Pro would compete with machines bearing the most expensive Xeon or EPYC processors.

Wouldn’t that put the fear of God into Intel and AMD...
 

tranceking26

macrumors 65816
Apr 16, 2013
1,464
1,650
Good opportunity, yeah. Hopefully some good comes out of a whole M-series Mac range, as then Apple won't neglect the Mac going forwards, unlike a few years back. Fingers crossed there will be new Macs at the event.
 
  • Like
Reactions: Martyimac

Joe The Dragon

macrumors 65816
Jul 26, 2006
1,031
524
It seems to me that Apple, by starting to design their own desktop CPUs, have a wonderful opportunity to disrupt the industry further by scaling up the core count in desktop machines beyond what Intel or AMD might want to do.

Think on this: Intel and AMD are holding to a certain pattern of core counts because of the way their market works. Low-power chips with 2 or 4 cores, standard desktop chips with 4 or 6 cores, high-end desktops with 8 or 10 cores. Then server chips with up to 32 or 64 cores. This allows them to maximise their revenue by charging what the market will bear in different market segments.

Now, the ARM Neoverse architecture already allows for 64-core ARM server chips, and these are already being made for Amazon Web Services in the form of their Graviton 2 processors, which largely follow ARM's reference design. This means that there is already an example implementation which solves all of the problems associated with putting ARM CPU cores in a large-scale setup.

So in order to maximise the impact of the new machines, we might see processors with 16 performance cores for desktop iMacs, and perhaps even 64 performance cores for a future Mac Pro. After all, Apple doesn't make chips for the server market, and is free to use these technologies to fill out its own product lineup however it sees fit. In one fell swoop Apple could redefine the market so that every iMac could function as a high-class workstation, and the Mac Pro would compete with machines bearing the most expensive Xeon or EPYC processors.

Wouldn’t that put the fear of God into Intel and AMD...
The Mac Pro will need storage on cards that users can replace, and what about RAID 1, 5, or 10 as a choice? Not just RAID 0 as the only choice at Apple's markup.
Also, the Mac Pro needs to have good RAM pricing, and not $1,299 to go to 32GB of RAM.
 
  • Like
Reactions: Fawkesguyy

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
There’s a limit to how much can be done in parallel; I’m not sure exactly where that limit is with regard to CPU cores. Nonetheless, I don’t think Apple’s main focus will be to go all-in on the core-count war, given that their strength seems to lie in power efficiency and single-core speed. After all, why would you want 16 cores when 8 complete the same tasks faster and more efficiently?

It will be interesting to see what Apple’s next move is.
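That limit is exactly what Amdahl's law describes: the serial fraction of a workload caps the speedup no matter how many cores you add. A quick sketch (the 90%-parallel figure below is an illustrative assumption, not a measurement):

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# With 90% of the work parallel, going from 8 to 16 cores gains far less
# than 2x, and even infinite cores can never beat 1 / (1 - p) = 10x.
for n in (8, 16, 64):
    print(f"{n} cores: {speedup(0.90, n):.2f}x")
```

Which is why 8 fast cores can beat 16 slower ones on most consumer workloads.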
 
  • Like
Reactions: Fomalhaut

seangrimes590

macrumors member
Jun 21, 2012
81
101
Villanova, PA
ARM is well established in the performance market already, having taken the #1 supercomputer spot in 2020 with the 48-core A64FX; see here: https://www.arm.com/blogs/blueprint/fujitsu-a64fx-arm.

Intel and AMD don't much care what Apple is putting in its low-volume workstations in the context of ARM taking over the top supercomputer and server markets, particularly when Apple has never shown any indication it would sell its own chips to OEMs.
 

Bodhitree

macrumors 68020
Original poster
Apr 5, 2021
2,085
2,217
Netherlands
I don’t think Apple would sell server chips to OEMs, I think they can just use these chips to reinvigorate the Mac desktop range by providing performance that scales way beyond what’s currently available at a given price point. If you look at how they marketed the M1, you can see that the performance metrics came from multithreaded benchmarks — Apple do care about this.

There is quite a lot of software out there that can take advantage of massive parallelism: music creation, 3D rendering, working with 8K video. Yes, games generally don’t scale well past 4 or 8 cores. But in this case having the power available, even if only for multitasking, beats not having it, and it would give Apple the opportunity to score some significant points off Intel and AMD.

Single-core performance stays important, of course, because it gives a lift to the whole lineup, but it’s difficult to see a clear winner there. AMD’s most recent Ryzen processors have set a pretty high bar.
 

Bodhitree

macrumors 68020
Original poster
Apr 5, 2021
2,085
2,217
Netherlands
AMD has 64-core/128-thread desktop CPUs... so their high-end desktop offering is not 8 or 10 cores.

Yes, they have the Threadripper line, which scales to 64 cores / 128 threads, and they sell them for a rather premium couple of thousand euros. I am not disputing that. For a chunk more — but in the same ballpark — you can get a Xeon and stick that in a desktop; in the end a processor is a processor.

But that business is to sell CPUs, which is not the same as what Apple does. Because Apple have vertically integrated their chip design, their cost to make these CPUs is going to be the cost of creating them, and they have the freedom to put them in whatever computers they please, and undercut Xeon and Threadripper workstations by a significant amount.

Look at what they are already doing with machines like the base-spec M1 MacBook Air. It’s a machine for $1,000 for which, if you went looking for a Windows equivalent, you’d end up paying twice that.
 
Last edited:

neinjohn

macrumors regular
Nov 9, 2020
107
70
I think the core-count war and disruption already belong to AMD, and I don't think Apple will do more than match the 16-core 5950X on a 27'' iMac and leave a Threadripper competitor to the Mac Pro or other newer SKUs.

That isn't to say I don't agree they can disrupt the desktop market, but through the whole package, price and availability, the same way they have done with the $999 Air. You can buy AMD laptops with Ryzen 7 U-series chips with 8 big cores for less than $999, but you won't have the best screen, nor the trackpad, speakers, and overall build quality. Adding to the problem is the current market, where you'll need to wait an indefinite time or be always on the lookout to pick up those rare AMD laptops.

Given the prospect that the current chip demand will last until 2023, a 24'' iMac a little more expensive than the current model, at $1,399, with a very good 4K or 5K screen and 8 Firestorm cores with any GPU power, is already a very good product to buy. On the current market, a 5800X or 11th-gen i7 plus a very good 4K screen is already more than half that price, and then you have to find a GPU to buy at exorbitant prices through their mangled sales channels. With Apple, at worst you buy at the announced price and wait 5-6 weeks.
 
  • Like
Reactions: Bodhitree

cvtem

macrumors member
Jun 8, 2016
37
32
This is inevitable.

Nvidia has announced Grace, which is their ARM server CPU.
The likes of Android, rPi, ODroid and the like have been preparing the hobby-scene OSes for years; Linux is only waiting for the hardware.
Microsoft has been held back by crap CPUs as much as by its own lack of effort, though to be honest, if you aren't getting the sales, why bother?
As for the server market, well, the only reason people choose x86 over ARM Graviton on AWS is that they need to.
And don't forget Marvell ThunderX; these are serious CPUs, and since the release of ThunderX2 they have been making a name for themselves.
You may also be surprised how many people in AI fields use things like the Jetson AGX Xavier as their primary machine, instead of the Xeon/Threadripper of the past.

But having said all those positives, you can't underestimate how the software you don't think of can hold back what should be a simple transition. The CAD world is atrocious, for example. Legacy software on Windows will keep x86 alive for a long time, and Microsoft needs to make some Rosetta 2-level efforts for ARM to even have a chance.

For the Apple, Linux, FreeBSD, and similar markets, it's something we have been waiting for!
 

seangrimes590

macrumors member
Jun 21, 2012
81
101
Villanova, PA
You may also be surprised how many people in AI fields use things like the Jetson AGX Xavier as their primary machine, instead of the Xeon/Threadripper of the past.
I work in this field, though in an academic rather than industry setting, and haven't even heard of these things, let alone seen people using them for development (if that's what you mean by "primary machine"?). Pretty cool little machines. I'm very excited for the future with ARM and the specialized tools, hardware, and circuits we're going to see as we all move away from the general-purpose core to keep pushing performance forward.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
I think the core-count war and disruption already belong to AMD, and I don't think Apple will do more than match the 16-core 5950X on a 27'' iMac and leave a Threadripper competitor to the Mac Pro or other newer SKUs.

That isn't to say I don't agree they can disrupt the desktop market, but through the whole package, price and availability, the same way they have done with the $999 Air. You can buy AMD laptops with Ryzen 7 U-series chips with 8 big cores for less than $999, but you won't have the best screen, nor the trackpad, speakers, and overall build quality. Adding to the problem is the current market, where you'll need to wait an indefinite time or be always on the lookout to pick up those rare AMD laptops.

Given the prospect that the current chip demand will last until 2023, a 24'' iMac a little more expensive than the current model, at $1,399, with a very good 4K or 5K screen and 8 Firestorm cores with any GPU power, is already a very good product to buy. On the current market, a 5800X or 11th-gen i7 plus a very good 4K screen is already more than half that price, and then you have to find a GPU to buy at exorbitant prices through their mangled sales channels. With Apple, at worst you buy at the announced price and wait 5-6 weeks.

I'd love to see Apple in the server space, as they were many years ago, but they may not feel that they care to go there with already more than enough work on their plate. At the moment, Nvidia seems the most interested in that space, but it's by no means clear that they can figure out the secret sauce that Apple has used to get so much compute out of so little power.

I'd love to try building a 5900X system to replace my i7-10700 system, but I have my email address on five mailing lists for the part, and the only peep I've received is from the B&H folks, who send me an email every two weeks to tell me that the parts still aren't available. Even if I did get the CPU, getting a decent GPU at MSRP is currently impossible. Fortunately, the i7-10700 meets my current requirements.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
This is inevitable.

Nvidia has announced Grace, which is their ARM server CPU.
The likes of Android, rPi, ODroid and the like have been preparing the hobby-scene OSes for years; Linux is only waiting for the hardware.
Microsoft has been held back by crap CPUs as much as by its own lack of effort, though to be honest, if you aren't getting the sales, why bother?
As for the server market, well, the only reason people choose x86 over ARM Graviton on AWS is that they need to.
And don't forget Marvell ThunderX; these are serious CPUs, and since the release of ThunderX2 they have been making a name for themselves.
You may also be surprised how many people in AI fields use things like the Jetson AGX Xavier as their primary machine, instead of the Xeon/Threadripper of the past.

But having said all those positives, you can't underestimate how the software you don't think of can hold back what should be a simple transition. The CAD world is atrocious, for example. Legacy software on Windows will keep x86 alive for a long time, and Microsoft needs to make some Rosetta 2-level efforts for ARM to even have a chance.

For the Apple, Linux, FreeBSD, and similar markets, it's something we have been waiting for!

I was a VMS developer for a long time, and there are lots of VMS systems still running today in commercial applications. Except that they're all running on x86 systems emulating Alpha. Emulation is viable for legacy software when the performance gain of the new hardware is big enough. I would hope that software companies see the advantages of Apple Silicon, including Apple's commitment to improve their chips every year.
 

cvtem

macrumors member
Jun 8, 2016
37
32
I work in this field, though in an academic rather than industry setting, and haven't even heard of these things, let alone seen people using them for development (if that's what you mean by "primary machine"?). Pretty cool little machines. I'm very excited for the future with ARM and the specialized tools, hardware, and circuits we're going to see as we all move away from the general-purpose core to keep pushing performance forward.
AI is pretty broad, so to narrow the field down: these are cheap development tools for those targeting the Nvidia Drive AGX platform, and the choice of tool is the developer's, not management's. I am a little surprised, though, that these haven't penetrated academia.
 

I7guy

macrumors Nehalem
Nov 30, 2013
35,155
25,261
Gotta be in it to win it
ARM is well established in the performance market already, having taken the #1 supercomputer spot in 2020 with the 48-core A64FX; see here: https://www.arm.com/blogs/blueprint/fujitsu-a64fx-arm.

Intel and AMD don't much care what Apple is putting in its low-volume workstations in the context of ARM taking over the top supercomputer and server markets, particularly when Apple has never shown any indication it would sell its own chips to OEMs.
I read this and at the same time wondered why Intel put AVX-512 into their 11th-gen processors.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
I read this and at the same time wondered why Intel put AVX-512 into their 11th-gen processors.

One thing that I've wanted out of x86 is a zig-zag Huffman encoder/decoder. This is one area in JPEG where you can't do parallel unpacking. I suppose it saved space back when that was a huge issue, but I wish it hadn't gone that way. Apple could certainly do this with their custom silicon, but I've never found a way to do it with x86.

Intel has been moving to improve performance with vectors for a long time, but I think that we're going to see more bang for the buck with known, useful, higher-level optimizations rather than generic building blocks.
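To make the serial-decode point concrete: the zig-zag path itself is trivial to compute, but the Huffman symbols laid along it have variable bit lengths, so you only know where symbol k+1 starts after decoding symbol k. A sketch of the standard 8x8 JPEG scan order, written from scratch here:

```python
def zigzag_order(n: int = 8):
    """Return (row, col) pairs in JPEG zig-zag scan order for an n x n block."""
    order = []
    for d in range(2 * n - 1):                 # anti-diagonal index, d = row + col
        rows = list(range(max(0, d - n + 1), min(d, n - 1) + 1))
        if d % 2 == 0:                          # even diagonals run bottom-left to top-right
            rows.reverse()
        order.extend((r, d - r) for r in rows)
    return order

# The order itself is fixed and cheap; the serial part is that each Huffman
# symbol along this path has a variable bit length, so the bitstream offers
# no safe place to start decoding in parallel.
print(zigzag_order()[:6])   # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```

That dependency chain is why a dedicated hardware block (rather than wider vectors) is the natural way to speed it up.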
 

Bodhitree

macrumors 68020
Original poster
Apr 5, 2021
2,085
2,217
Netherlands
I think the core-count war and disruption already belong to AMD, and I don't think Apple will do more than match the 16-core 5950X on a 27'' iMac and leave a Threadripper competitor to the Mac Pro or other newer SKUs.

That isn't to say I don't agree they can disrupt the desktop market, but through the whole package, price and availability, the same way they have done with the $999 Air. You can buy AMD laptops with Ryzen 7 U-series chips with 8 big cores for less than $999, but you won't have the best screen, nor the trackpad, speakers, and overall build quality. Adding to the problem is the current market, where you'll need to wait an indefinite time or be always on the lookout to pick up those rare AMD laptops.

Given the prospect that the current chip demand will last until 2023, a 24'' iMac a little more expensive than the current model, at $1,399, with a very good 4K or 5K screen and 8 Firestorm cores with any GPU power, is already a very good product to buy. On the current market, a 5800X or 11th-gen i7 plus a very good 4K screen is already more than half that price, and then you have to find a GPU to buy at exorbitant prices through their mangled sales channels. With Apple, at worst you buy at the announced price and wait 5-6 weeks.

In a way what you are saying is that Apple is playing catch-up to AMD on the processor front. It certainly seems like AMD has surpassed Intel in recent years, and really their lack of penetration in the market is down to Intel’s muscle in the distribution channels. From what I’ve seen you can buy a Ryzen 5950X boxed for about a thousand euros, but that’s selling into the enthusiast market. Still, whole machines do now seem to be available from major vendors.

It does make you wonder that AMD have faced such an uphill battle even with superior products, and have had a hard time moving the needle. Apple’s case is different: they have the marketing and a significant share of the desktop market, and if they put out a product with leading-edge performance and a great overall package, they will sell in the first instance to their fanbase and then on to people who are convinced by the use cases.

I agree with you it comes down to the entire package — the screen, the processor, connectivity, peripherals, build quality. By vertically integrating they have the opportunity to source leading edge components at less than the market price, and put together a quality product that can disrupt the market for people who care about the entire package. There are always going to be those who just want a fast processor in a box with a cheap screen, and they will probably continue to buy a 5950X or Threadripper.

But when Apple says, our new iMac is faster than 95% of desktops shipped in 2020, they are basically counting coup over all the cheap desktops with Intel Core i3-type processors. They are also saying, we don’t think the kind of performance you have been getting out of these ‘good enough’ processors is creating a satisfying computing experience, we can create something better.
 
Last edited:
  • Like
Reactions: psychicist

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
In a way what you are saying is that Apple is playing catch-up to AMD on the processor front. It certainly seems like AMD has surpassed Intel in recent years, and really their lack of penetration in the market is down to Intel’s muscle in the distribution channels. From what I’ve seen you can buy a Ryzen 5950X boxed for about a thousand euros, but that’s selling into the enthusiast market. Still, whole machines do now seem to be available from major vendors.

It does make you wonder that AMD have faced such an uphill battle even with superior products, and have had a hard time moving the needle. Apple’s case is different: they have the marketing and a significant share of the desktop market, and if they put out a product with leading-edge performance and a great overall package, they will sell in the first instance to their fanbase and then on to people who are convinced by the use cases.

I agree with you it comes down to the entire package — the screen, the processor, connectivity, peripherals, build quality. By vertically integrating they have the opportunity to source leading edge components at less than the market price, and put together a quality product that can disrupt the market for people who care about the entire package. There are always going to be those who just want a fast processor in a box with a cheap screen, and they will probably continue to buy a 5950X or Threadripper.

But when Apple says, our new iMac is faster than 95% of desktops shipped in 2020, they are basically counting coup over all the cheap desktops with Intel Core i3-type processors. They are also saying, we don’t think the kind of performance you have been getting out of these ‘good enough’ processors is creating a satisfying computing experience, we can create something better.

There's a difference that a lot of people don't appreciate, in that Apple has a better idea of the software that their customers run. And they can add APIs and custom silicon to perform expensive compute pieces, which normally use standard instructions, a lot more efficiently. Intel and AMD can't really do that. They added vectors, which can kind of do that, but those are more building blocks than APIs to do a particular operation. Intel sells software packages that provide APIs to do things more efficiently, but that's still software rather than specialized hardware.

So they could make specific applications or functions faster, and it may or may not show up in specific benchmarks, but it would feel a lot faster for those people who used the software that was optimized.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,679
Think on this: Intel and AMD are holding to a certain pattern of core counts because of the way their market works. Low-power chips with 2 or 4 cores, standard desktop chips with 4 or 6 cores, high-end desktops with 8 or 10 cores. Then server chips with up to 32 or 64 cores. This allows them to maximise their revenue by charging what the market will bear in different market segments.

I don't think that this is the full story. Different markets have different needs. A consumer at home or at the office will care more about responsiveness and versatility (fewer cores with higher peak clocks), while a workstation or server user will care more about multi-threaded throughput (more cores with balanced clocks). The number of cores, the dynamic clock range, the RAM configurations — they all depend heavily on these considerations.

And of course, there is the price issue. Making larger chips is disproportionately more difficult and expensive. And it's not just the chip production itself, it's the things around it. The memory subsystem is already a tricky thing to deal with: to have many cores you need more RAM bandwidth, which means more memory controllers and a wider RAM bus, and all of that is really expensive.

Now, the ARM Neoverse architecture already allows for 64-core ARM server chips, and these are already being made for Amazon Web Services in the form of their Graviton 2 processors, which largely follow ARM's reference design. This means that there is already an example implementation which solves all of the problems associated with putting ARM CPU cores in a large-scale setup.

There are some users here who don't believe in Apple's ability to execute a new Mac Pro, but frankly, this is where I have very little doubt. The energy efficiency of Apple's CPU cores makes them inherently scalable. Intel and AMD have to severely underclock their server CPU cores (relative to the desktop models); Apple does not have to, because their peak power usage is four to five times lower.

One disclaimer though: ARM Neoverse has nothing to do with this since Apple uses custom designs. Performance and thermal characteristics of Apple CPUs are completely different.

So in order to maximise the impact of the new machines, we might see processors with 16 performance cores for desktop iMacs, and perhaps even 64 performance cores for a future Mac Pro. After all, Apple doesn't make chips for the server market, and is free to use these technologies to fill out its own product lineup however it sees fit. In one fell swoop Apple could redefine the market so that every iMac could function as a high-class workstation, and the Mac Pro would compete with machines bearing the most expensive Xeon or EPYC processors.

I have little doubt that the upcoming Mac Pro will offer excellent performance and plenty of cores (Apple doesn't need to go to 64 cores to compete with the likes of EPYC; a 32-core Apple CPU will be more than sufficient), but I am skeptical about 16 cores in the iMac. It is overkill for an iMac user and it makes little sense financially.

Now, I do agree with you that Apple Silicon will disrupt the desktop market, but in a different way. The interesting thing about Apple Silicon is that it brings to the user technology that is usually associated with the datacenter. Look at the new Nvidia Grace warehouse systems, for example: large CPU and GPU, connected with a super-fast fabric, using high-bandwidth unified memory. Does this ring a bell? It's like something straight from Apple's M1 announcement slides. The traditional desktop architecture up to now is

(CPU+large quantity of slow, low latency RAM) ----- slow data link --- (GPU + small quantity of fast, high latency RAM)

Apple Silicon is changing this to

(CPU+GPU+coprocessors) + large quantity of fast, low latency RAM

This is the "true" revolution for the consumer market, as it enables new levels of performance, efficiency, and also completely new applications. On Apple Silicon I can, for example, make a game that adaptively generates geometry on the CPU and immediately renders it on the GPU. Or I can have a professional content-creation app that applies complex effects to a video stream using the GPU and simultaneously does image detection using the ML coprocessor. I can't do any of these things in the traditional architecture because the PCIe link between CPU and GPU is too slow. Why do you think Nvidia is packing ML processors onto the GPU? Exactly: to avoid sending data back and forth...
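The PCIe-bottleneck argument is easy to put rough numbers on. A back-of-envelope sketch (the link throughput and payload size below are my own ballpark assumptions, not figures from this thread):

```python
# Time to shuttle CPU-generated geometry to a discrete GPU over PCIe 4.0 x16,
# versus a unified-memory design where the GPU reads the pages the CPU wrote.
PCIE4_X16_GBPS = 32.0      # assumed ~theoretical PCIe 4.0 x16 throughput, GB/s
payload_gb = 0.1           # assume 100 MB of per-frame geometry

copy_ms = payload_gb / PCIE4_X16_GBPS * 1000.0
frame_budget_ms = 1000.0 / 60.0    # a 60 fps frame budget
print(f"copy: {copy_ms:.2f} ms, frame budget: {frame_budget_ms:.1f} ms")
# One direction alone is ~3 ms; a round trip eats over a third of the frame
# budget before any rendering happens, which is the copy unified memory skips.
```

Real links deliver less than the theoretical figure, so in practice the penalty is worse than this sketch suggests.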

There's a difference that a lot of people don't appreciate in that Apple has a better idea of the software that their customers run. And they can add APIs and custom silicon for expensive compute pieces that normally use standard instructions to perform something used a lot more efficiently. Intel and AMD can't really do that. They added vectors which can kind of do that but those are more building blocks than APIs to do a particular operation. Intel sells software packages that provide APIs to do things more efficiently but that's still in software to specialized hardware.

So they could make specific applications or functions faster and it may or may not show up in specific benchmarks but it would feel a lot faster for those people that used the software that was optimized.

The specialized coprocessors Apple includes in their systems are not that much different from what others are doing. Intel has an ML acceleration block in their CPUs (Intel AMX), and Nvidia and AMD have them in their GPUs. I don't see much difference in principle here. Now, Apple has an obvious advantage, since they have an easier time integrating all these accelerators into the standard APIs; Intel and co. don't really have that luxury, because they have to convince users to use their API first (and that's how unhealthy market phenomena like CUDA are born).

There is also another part of this story, and this is really where Apple is very different, which is about optimizing commonly used patterns. Apple CPUs, for example, include hardware features that accelerate critical Objective-C and Swift APIs — these are not specialized instructions, but simply CPU behavior. For example, Apple's frameworks rely heavily on reference counting, so they have designed their CPUs to do very quick reference counting. I've also seen mention that their CPUs have a hardware predictor for Objective-C method dispatch. These features won't make much sense in code that does not use these software patterns, but on Apple's platforms they can provide a massive increase in performance.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
There is also another part of this story, and this is really where Apple is very different, which is about optimizing commonly used patterns. Apple CPUs, for example, include hardware features that accelerate critical Objective-C and Swift APIs — these are not specialized instructions, but simply CPU behavior. For example, Apple's frameworks rely heavily on reference counting, so they have designed their CPUs to do very quick reference counting. I've also seen mention that their CPUs have a hardware predictor for Objective-C method dispatch. These features won't make much sense in code that does not use these software patterns, but on Apple's platforms they can provide a massive increase in performance.

They could build a complete JPEG block decoder for what's currently done in software. Yes, vectors greatly improve performance (I may have been the first person to implement vector JPEG decoding in browsers, a long time ago), but you could do even more in silicon if you had dedicated cache and registers to work with.
 
  • Like
Reactions: psychicist

Bodhitree

macrumors 68020
Original poster
Apr 5, 2021
2,085
2,217
Netherlands
Now, I do agree with you that Apple Silicon will disrupt the desktop market, but in a different way. The interesting thing about Apple Silicon is that it brings to the user technology that is usually associated with the datacenter. Look at the new Nvidia Grace warehouse systems, for example: large CPU and GPU, connected with a super-fast fabric, using high-bandwidth unified memory. Does this ring a bell? It's like something straight from Apple's M1 announcement slides. The traditional desktop architecture up to now is

(CPU+large quantity of slow, low latency RAM) ----- slow data link --- (GPU + small quantity of fast, high latency RAM)

Apple Silicon is changing this to

(CPU+GPU+coprocessors) + large quantity of fast, low latency RAM

This is the "true" revolution for the consumer market, as it enables new levels of performance, efficiency, and also completely new applications. On Apple Silicon I can, for example, make a game that adaptively generates geometry on the CPU and immediately renders it on the GPU. Or I can have a professional content-creation app that applies complex effects to a video stream using the GPU and simultaneously does image detection using the ML coprocessor. I can't do any of these things in the traditional architecture because the PCIe link between CPU and GPU is too slow. Why do you think Nvidia is packing ML processors onto the GPU? Exactly: to avoid sending data back and forth...

The exact number of cores remains to be seen, I agree; we just don’t know until Apple announces them. But even something like a 12+4 setup would be impressive on the performance front, if you extrapolate from the M1 benchmarks. I think there is room there for Apple to make something impressive happen.

The potentially disruptive effect of unified memory is something that I think will take longer to play out. The developers of third-party software haven’t had very long yet to play with production hardware; a year is not long in the lifetime of a good application. And new applications may take longer to surface: brand-new software takes time to mature, and the concepts are always of variable quality.

Do you suppose we will see data center software making its way to the Mac? If the architecture is similar, it should be an easy port. But I would have to do some research to find out what kind of applications they run and whether it would be interesting to run it on a localised platform like a Mac.

Games would certainly be an application that would benefit from the new model. You have potentially a lot less copying and staging of data that needs to be done; I know some game engines internally spend a lot of CPU cycles moving resources from storage. But I think taking advantage of some of these features in a cross-platform engine like Unreal would be a challenge from a code-architecture point of view.
 
  • Like
Reactions: psychicist

leman

macrumors Core
Oct 14, 2008
19,521
19,679
The exact number of cores remains to be seen, I agree; we just don’t know until Apple announces them. But even something like a 12+4 setup would be impressive on the performance front, if you extrapolate from the M1 benchmarks. I think there is room there for Apple to make something impressive happen.

Well, sure, but having too many cores in a consumer machine is subject to diminishing returns. Why do you need 32 or even 16 fast cores if you don't do anything that can be parallelized to take advantage of those configurations?

The potentially disruptive effect of unified memory is something that I think will take longer to play out. The developers of third-party software haven’t had very long yet to play with production hardware; a year is not long in the lifetime of a good application. And new applications may take longer to surface: brand-new software takes time to mature, and the concepts are always of variable quality.

If I am not mistaken, there is already plenty of software that makes use of it, directly or indirectly. Video editing software comes to mind.

Do you suppose we will see data center software making its way to the Mac? If the architecture is similar, it should be an easy port. But I would have to do some research to find out what kind of applications they run and whether it would be interesting to run it on a localised platform like a Mac.

What kind of software are you thinking about? If you are doing particle physics or weather simulation at a petaflop scale, no, there is probably very little point in your software running on a Mac.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Well, sure, but having too many cores in a consumer machine is subject to diminishing returns. Why do you need 32 or even 16 fast cores if you don't do anything that can be parallelized to take advantage of those configurations?

I use Geekbench 5 as a guideline, and my multicore score is about 8,200. Eight performance cores would probably get the M1X to around 14,000, which would be well more than enough for my needs. The M1 is probably enough for my CPU needs as it is, but I'd want more GPU, ports, RAM, and monitor support. Apple could add small amounts of things to the M1 and it would make a lot of people happy.

I personally can't see a case where I'd need or even want 16 performance cores. The only things I do that could use that are Firefox and Thunderbird builds; they would help on full builds, but you don't do full builds very often.
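For what it's worth, a "~14,000" guess is roughly what naive scaling of M1 figures gives. All the inputs below are my own rough assumptions (ballpark public Geekbench 5 numbers), not measurements:

```python
# Back-of-envelope: double the M1's performance cluster, keep its efficiency
# cluster, and see what multicore score that implies.
m1_multi = 7400                       # assumed M1 multicore score (4P + 4E)
e_share = 0.20                        # assume the E cluster contributes ~20%

p_cluster = m1_multi * (1 - e_share)          # contribution of the 4 P cores
est_8p = 2 * p_cluster + m1_multi * e_share   # 8 P cores, same E cluster
print(round(est_8p))                          # -> 13320, near the ~14,000 guess
```

Real scaling won't be perfectly linear (shared memory bandwidth, thermals), so treat this as an upper-bound sketch.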
 