
leman

macrumors Core
Oct 14, 2008
19,522
19,679
This isn't really fair, since there are only 2 SoCs containing X1 cores in existence, both use a significantly cut-down implementation with a single X1 core per SoC, and they're only used in a couple of smartphones. Even so, they got to within 10-15% of the A13's single-core performance, which was state of the art at the time of the X1 announcement. The Firestorm cores in the A14, six months newer, did have a pretty amazing single-core performance uplift, but I wouldn't be surprised if Apple pushed extra hard because they knew they would be using the same cores in the M1. On the other hand, Qualcomm couldn't care less (probably because there's simply no market); just look at their pathetic showing with the "highest-performance" (lol) 8cx, which they're now "revising" for the third year in a row by slightly increasing the frequency of its four-year-old cores.

Oh, I am not trying to dismiss the achievements of the ARM team. I think that the X1 is a great core which is probably very close to Tiger Lake and Zen 2/3, depending on how high a clock it can sustain. It's just that, looking at the single-core benchmarks, I don't have the confidence that the X1 can rival the A14/M1. It seems to be roughly comparable to the A12, which means the ARM team still has a way to go until they reach the state of the art.

To be honest, I hoped that the X1 would be better — that could give ARM the push it needs to challenge x86. Currently, ARM CPUs are still perceived as low-end parts that excel at power efficiency but lack performance. If ARM manages to change that perception, more customers will be interested in ARM-based computers. But the X1 is still not enough to challenge the latest x86 CPUs...

If memory serves me right, the motivation behind the X1 cores wasn't even really smartphones or other consumer devices but capturing the emerging ARM server market.

Wasn't it supposed to target the laptop market? I was under the impression that Neoverse was the server stuff.

Similarly, I wonder how much of the incredible efficiency of the M1 compared to the latest x86 CPUs comes down to different overall goals of the architecture. Increasing the frequency comes at an exponential increase in power consumption, and I'm really curious to see what a higher-clocked version of the M1 would look like.

Yeah, it's very obvious that Apple deliberately trades the ability to reach high clocks for the ability to execute many instructions simultaneously. This gives them very high baseline performance and (probably more importantly!) predictable power usage, but not much breathing space above. I do wonder whether Firestorm in the M1 is clocked conservatively or whether they could potentially push it higher. AnandTech did report that earlier cores (the A12, I believe) showed a huge spike in power consumption at the end of their frequency curve. Still, if Apple is holding back (and I would expect anything from those sneaky folks) and they could ship a stable Firestorm at 3.5-3.7 GHz while still keeping power consumption to 10-15 W per core, it would look bad for Intel and AMD.

I've recently read about a few experiments where people artificially limited the power draw of their last-gen Ryzen CPUs to a level comparable to the M1's, and the single-core performance only differed by 20-30% — in a scenario AMD has surely not optimized for.

Making an uneducated guess, I'd say that Zen3@5W would run at around 3.5-4.0 GHz. That would indeed make it only 20-30% slower than Firestorm@5W. But those 20% are a huge obstacle to overcome in practical terms: Zen3 needs more than 3x the power to do it. To put it differently, just because they are only 20% slower at low power usage does not mean that they will be able to close that gap any time soon.
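To see why closing that last 20% is so expensive, here is a back-of-the-envelope sketch of my own, with made-up numbers, using the common dynamic-power approximation P ∝ C·V²·f and assuming voltage scales roughly linearly with frequency near the top of the curve, so power grows roughly with f³. The 3.7 GHz @ 5 W baseline is just an assumed operating point for illustration:

```cpp
// Rough sketch: how much frequency does extra per-core power buy under an
// idealized P ~ f^3 model? All numbers are illustrative assumptions.
#include <cmath>
#include <cstdio>

int main() {
    const double base_power_w  = 5.0;  // hypothetical "Zen3 @ 5 W" point
    const double base_freq_ghz = 3.7;  // assumed frequency at that power

    const double powers_w[] = {5.0, 10.0, 15.0, 20.0};
    for (double p : powers_w) {
        // If P ~ f^3, then f scales with the cube root of the power ratio.
        double f = base_freq_ghz * std::cbrt(p / base_power_w);
        std::printf("%5.1f W -> ~%.2f GHz (%+.0f%% frequency)\n",
                    p, f, (f / base_freq_ghz - 1.0) * 100.0);
    }
    // Even this optimistic model gives only ~+26% at 2x power and ~+44% at 3x;
    // real silicon does worse near the top of its voltage/frequency curve.
}
```

So even before accounting for IPC, tripling per-core power buys well under 50% more clock in this toy model, which is why a 20-30% gap at 5 W is so hard to close.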

It does raise an interesting point, though. Modern x86 CPUs can be quite efficient, but the quest for ever-increasing performance, and the fact that their main enthusiast customer was a desktop user, led their evolution towards devices that could be put in overdrive, squeezing out all that performance regardless of the power cost. Turbo Boost and a huge frequency and power range became the important buzzwords. Apple, on the other hand, was always limited by thermals. You only get so much cooling (and battery!) in a phone... so quite naturally, the evolution of their chips was towards devices that try to be as fast and smart as possible at the absolute lowest power consumption. And since they are stubborn, sneaky people with a superiority complex, they really went out of their way to make something that's really fast. ARM, on the other hand, are serving customers. They design what is required, more or less. Now that they are confident in their ability and see trouble in the x86 land, they are smelling blood and starting to challenge Intel where it hurts the most — the server market.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Nvidia has new tricks up its sleeve. I'm no expert, but I say Intel better start thinking ARM.

If you are talking about the newly announced Grace products, Nvidia seems to be using licensed ARM Neoverse cores and pairing them with their own GPU/ML technology (a.k.a. Tensor Cores), while using their interconnect tech to provide fast connectivity between the CPU and the GPU. In other words, Grace is very similar in spirit to what Apple is doing with Apple Silicon — ultra-fast links between processing units and high-bandwidth unified memory.

What is of particular interest to us Mac users is that it gives us a pretty clear idea of how the Mac Pro or even the higher-end iMac might look. The CPU and GPU do not need to be on the same chip. And Apple did recently hire a large number of interconnect engineers to work on high-speed fabric. Just saying...
 
  • Like
Reactions: jdb8167

EdT

macrumors 68020
Mar 11, 2007
2,429
1,980
Omaha, NE
The number of chip designers who can design chips as well as Apple does is a surprisingly limited resource.

You know that, I know that, and there are people at those companies that know that. The execs that run the company frequently have no clue what it takes to make the components that make up their product. And if Apple and AMD and Qualcomm can make processors, then those executives won't see any reason why they can't as well. Yes, there are people that will try to tell them. If they are good executives, then they will listen. If their expectations aren't unrealistic, then maybe they really can do it. But Apple and AMD and Qualcomm all started making their first designs a while ago, and it took time to get things right. And it's never-ending: you need to already be working on the generation that is 3 or 4 years away as you start selling this year's. And if someone else comes up with a seriously better design, you have to somehow react, knowing you need to improve this year's as much as you can and that next year's version needs to come close to equaling that competitor's next-year model. It's less risky for most companies to buy their processors.
 

9927036

Cancelled
Nov 12, 2020
472
460
T2's flaw requires PHYSICAL access to compromise.

Meltdown could be exploited by a bad webpage loaded up in a virtual machine away from the host OS.

It's absolutely possible Apple could drop the ball just as hard at some point, but I think you're delusional if you think the T2 issue is in the same realm as Meltdown, let alone the dozens of flaws disclosed since then.
You are delusional for comparing T2 to Meltdown. I did not compare those two HW problems. You did.
 

9927036

Cancelled
Nov 12, 2020
472
460
What everyone is stuck with is compatibility issues between the ARM and x86 architectures. Apple is willing to dump systems that they feel don't have a future, but most companies aren't. There's way too big of an established user base who will scream if their software now requires a computer with a new architecture. Even if a Rosetta-type program is possible, most companies don't want to develop and support three things: a new ARM version of their program, the Rosetta-type translation program, and the legacy x86 version of every program release, for a couple of years minimum.
Microsoft may well do so.
 

ghanwani

macrumors 601
Dec 8, 2008
4,826
6,154
Even if they do, it will be too little, too late. The only way Apple goes back to Intel is if they screw up big time on execution. At this point, they have better talent than Intel, so I have a hard time seeing how Intel can pull off an upset.
 
  • Like
Reactions: Maconplasma

9927036

Cancelled
Nov 12, 2020
472
460
The thing is, Apple Silicon isn’t an ARM design. It just uses the ARM instruction set, with extra AR bits, video bits, etc. It will be interesting to see how well other ARM SoC manufacturers can compete, both on performance and power/heat.
Agreed. No reason other ARM SoCs cannot match performance and power/heat.
 

Bruninho

Suspended
Mar 12, 2021
354
339
Microsoft may well do so.

Linux is just about to gain native M1 support in its kernel in June. Linux distros are soon coming to the M1 natively.

It's only a matter of time before Microsoft makes the jump, especially since it was shown that the M1 emulates Windows 10 on ARM way faster than their Surface counterparts run it, so we can only imagine the sheer speed of a native version.

x86 is done and dusted. Goodbye. When they presented the first batch of M1 Macs, I'd have laughed out loud if they had thought of doing what Steve Jobs did to OS 9 in 2002: a mocking funeral, this time for x86...
 
  • Haha
Reactions: bobcomer

9927036

Cancelled
Nov 12, 2020
472
460
Linux is just about to gain native M1 support in its kernel in June. Linux distros are soon coming to the M1 natively.

It's only a matter of time before Microsoft makes the jump, especially since it was shown that the M1 emulates Windows 10 on ARM way faster than their Surface counterparts run it, so we can only imagine the sheer speed of a native version.

x86 is done and dusted. Goodbye. When they presented the first batch of M1 Macs, I'd have laughed out loud if they had thought of doing what Steve Jobs did to OS 9 in 2002: a mocking funeral, this time for x86...
I don't agree about x86. Most of the world runs on Windows x86 and the whole gaming industry is not just going to abandon x86.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Oh, I am not trying to dismiss the achievements of the ARM team. I think that the X1 is a great core which is probably very close to Tiger Lake and Zen 2/3, depending on how high a clock it can sustain. It's just that, looking at the single-core benchmarks, I don't have the confidence that the X1 can rival the A14/M1. It seems to be roughly comparable to the A12, which means the ARM team still has a way to go until they reach the state of the art.

To be honest, I hoped that the X1 would be better — that could give ARM the push it needs to challenge x86. Currently, ARM CPUs are still perceived as low-end parts that excel at power efficiency but lack performance. If ARM manages to change that perception, more customers will be interested in ARM-based computers. But the X1 is still not enough to challenge the latest x86 CPUs...



Wasn't it supposed to target the laptop market? I was under the impression that Neoverse was the server stuff.



Yeah, it's very obvious that Apple deliberately trades the ability to reach high clocks for the ability to execute many instructions simultaneously. This gives them very high baseline performance and (probably more importantly!) predictable power usage, but not much breathing space above. I do wonder whether Firestorm in the M1 is clocked conservatively or whether they could potentially push it higher. AnandTech did report that earlier cores (the A12, I believe) showed a huge spike in power consumption at the end of their frequency curve. Still, if Apple is holding back (and I would expect anything from those sneaky folks) and they could ship a stable Firestorm at 3.5-3.7 GHz while still keeping power consumption to 10-15 W per core, it would look bad for Intel and AMD.



Making an uneducated guess, I'd say that Zen3@5W would run at around 3.5-4.0 GHz. That would indeed make it only 20-30% slower than Firestorm@5W. But those 20% are a huge obstacle to overcome in practical terms: Zen3 needs more than 3x the power to do it. To put it differently, just because they are only 20% slower at low power usage does not mean that they will be able to close that gap any time soon.

It does raise an interesting point, though. Modern x86 CPUs can be quite efficient, but the quest for ever-increasing performance, and the fact that their main enthusiast customer was a desktop user, led their evolution towards devices that could be put in overdrive, squeezing out all that performance regardless of the power cost. Turbo Boost and a huge frequency and power range became the important buzzwords. Apple, on the other hand, was always limited by thermals. You only get so much cooling (and battery!) in a phone... so quite naturally, the evolution of their chips was towards devices that try to be as fast and smart as possible at the absolute lowest power consumption. And since they are stubborn, sneaky people with a superiority complex, they really went out of their way to make something that's really fast. ARM, on the other hand, are serving customers. They design what is required, more or less. Now that they are confident in their ability and see trouble in the x86 land, they are smelling blood and starting to challenge Intel where it hurts the most — the server market.

To be fair to the X1, it seems to be hampered by its Qualcomm/Samsung implementations, which lack the full cache it could have had (and being manufactured on Samsung 5nm, which is roughly equivalent to TSMC 7nm, doesn't help). I wouldn't quite say it was "cut down" as @crevalic stated, but they weren't as optimal implementations as they could've been. Maybe the manufacturing is to blame, but it's also possible that it's cost, as cache is expensive and Android SoCs try to keep costs down. A good example of this: supposedly A7x processors can be downclocked to use nearly as little power as A5x processors but with far more performance (like Icestorm cores), but the A5x use so little silicon area that it's cheaper to use them in smartphones.

The situation with what the X1 was designed for is ... complicated. It seems more like a Firestorm core and well suited to laptops, but apparently it lacks features that its A78C cousins are equipped with, which is why the upcoming updates to the Qualcomm Snapdragon laptop chips are supposedly using A78Cs, not X1s.

Also, while one set of ARM server chips is indeed going to be based on the A78C design (I think that's the Neoverse line), another is going to be based on the X1 design (the V1 chips, I think). Again, I don't think all the cores are strictly identical, but they share commonalities and a root design with each other. I don't know if there are any V1 chips out there yet, though.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Most of the world runs on Android on ARM, actually.
For phones, yes, but for desktops and laptops, most of the world runs on x64 Windows, and that's not going to change anytime soon. In 30 years maybe it'll be a minority, but I bet you'll still be able to buy it.

All this 'Intel is finished' crud is ridiculous. People don't run platforms, only us geeks do -- they run software, and they buy whatever machine runs their software.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Microsoft may well do so.

Microsoft is in a weird place with WoA. They seem to be stuck between embracing it and dragging their heels on basic stuff. Even ignoring how long it took them to implement x64 emulation given how long WoA has been a thing, some of their own ARM apps are still ARMv7, meaning ARM chip makers still have to support what is essentially a legacy ISA that they could and should shed. I mean, there aren't a huge number of critical 32-bit ARM applications for Windows like there are on the x86 side, beyond some of Microsoft's own apps - most of which they could trivially update.
 

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
When everyone has a fiber connection then sure.
Nah, we make do with less for video. You use a controller or a directly connected device to send inputs quickly and stream the screen down. It's likely an option for companies to get extra subscription money and stream their stuff down on whatever platform they care about.

Again such a specific use case that is not a future driving factor for a platform (IMHO.)
 
  • Like
Reactions: jdb8167

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
When everyone has a fiber connection then sure.
That's a good point, but even if you have fiber, you might not have a fast connection. We need much faster connections to go all-streaming. I wouldn't mind that happening; it might even make me give up on x86, or at least locally run x86. :)
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Nah, we make do with less for video. You use a controller or a directly connected device to send inputs quickly and stream the screen down. It's likely an option for companies to get extra subscription money and stream their stuff down on whatever platform they care about.

Again such a specific use case that is not a future driving factor for a platform (IMHO.)

Gaming is a pretty big use case - I'm not certain it's enough to keep x86 alive by itself, as most games wouldn't suffer enough in transition - even in translation - to stop a move from x86 to ARM. But I'm also not going to say Intel and x86 are dooomed! Dooomed!

But streaming, even over fiber, has issues with latency, as @bobcomer wrote. Bandwidth isn't everything; you need low latency too. Now, like I said with x86 to ARM, for many games the latency is already low enough that it won't matter. So maybe 90% (obviously a made-up number) could be streamed, but the infrastructure required for streaming is also massive - Nvidia spent a big portion of their keynote on that topic. So doable, but not easy.
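To make the latency point concrete, here is a rough input-to-photon budget for streamed gaming, sketched with purely illustrative per-stage numbers (they vary a lot by service and network); the point is that a fat fiber pipe doesn't remove any of these stages:

```cpp
// Rough latency-budget sketch for cloud game streaming. Every figure below is
// an assumption for illustration, not a measurement of any real service.
#include <cstdio>

int main() {
    struct Stage { const char* name; double ms; };
    const Stage stages[] = {
        {"controller input upload to server",       10.0},
        {"server-side game tick + render (60 fps)", 16.7},
        {"video encode",                             5.0},
        {"network transit back to client",          10.0},
        {"video decode + display",                   8.0},
    };

    double total = 0.0;
    for (const Stage& s : stages) {
        std::printf("%-42s %5.1f ms\n", s.name, s.ms);
        total += s.ms;
    }
    std::printf("%-42s %5.1f ms\n", "total input-to-photon", total);
    // A local machine skips both network hops and the encode/decode stages
    // entirely; extra bandwidth only shrinks part of the transit time.
}
```

Under those assumptions the round trip lands somewhere near 50 ms, which is fine for plenty of games and very noticeable in others.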
 

Bruninho

Suspended
Mar 12, 2021
354
339
I would never go for games through streaming. I like to own the games I buy and play. I do not like subscription-based games and applications; I stopped buying them around 2014.

The only subscription I pay for is Apple Music, and for a good reason: I can quickly have a carefully curated music archive synced across all my devices, and the songs can still be played when I am offline. The day they remove this feature will be the day I cancel my subscription and go back to a carefully curated archive made from the good old MP3 music collection. If that happens one day, the headache of having to sync manually across all my devices will be worth the money I save by cancelling the subscription.
 
  • Like
Reactions: BigPotatoLobbyist

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
I would never go for games through streaming. I like to own the games I buy and play. I do not like subscription-based games and applications; I stopped buying them around 2014.

The only subscription I pay for is Apple Music, and for a good reason: I can quickly have a carefully curated music archive synced across all my devices, and the songs can still be played when I am offline. The day they remove this feature will be the day I cancel my subscription and go back to a carefully curated archive made from the good old MP3 music collection. If that happens one day, the headache of having to sync manually across all my devices will be worth the money I save by cancelling the subscription.
That's the same for Apple arcade though. It's a changing world.
 

Bruninho

Suspended
Mar 12, 2021
354
339
That's the same for Apple arcade though. It's a changing world.
I see. I am not going for Apple Arcade either. I do not like that model.

If I pay for a game, it is mine forever, physically or digitally saved on my computer after downloading it. I do not want to pay a subscription to keep playing it month after month. I am not Elon Musk, able to throw a bucket of cash at a few games every month.

The Netflix subscription is a different thing. If I pay to watch a movie, after I've watched it I have no reason to watch it again. It is pretty much the same as what I used to do when going to the cinema. Therefore it's not the same thing as a game.

But games and applications, things that I will use again over and over: thanks, but no thanks. I prefer the old model for those.
 
  • Like
Reactions: BigPotatoLobbyist

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
As far as I'm aware, the only additional hardware the M1 chips contain to aid Rosetta is an optional x86 memory-model mode; without it, memory ordering could be a pretty big bottleneck for emulation, since the memory models of ARM and x86 are quite different.

Since Rosetta is a static code translator, it could, theoretically, be possible to liberally salt the translated code with STL/LDA instructions to enforce slightly sharper memory ordering at somewhat less of a cost than throwing in a bunch of DMBs. However, in some cases the release/acquire instructions would incur a penalty, because they are register-direct only (indexes, offsets and updates would have to be handled manually). Of course, you would have to have a very sophisticated translator to tease out the places where that would be needed.
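For anyone curious what that trade-off looks like from the software side, here is a minimal C++ sketch of the two orderings being discussed: acquire/release atomics, which compilers lower to LDAR/STLR on AArch64, versus relaxed accesses plus explicit fences, which typically lower to DMB ISH barriers. It's only a toy producer/consumer handoff to illustrate the ordering idea, not anything Rosetta actually emits:

```cpp
// Toy producer/consumer handoff showing two ways to get the ordering an
// x86-translated program expects on AArch64. Illustrative only.
#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                 // plain data written before the flag
std::atomic<bool> ready{false};  // synchronisation flag

// Variant 1: acquire/release atomics. On AArch64 the store below compiles to
// STLR and the load to LDAR -- the "salt the code with STL/LDA" idea.
void producer_acq_rel() {
    payload = 42;
    ready.store(true, std::memory_order_release);   // STLR
}
void consumer_acq_rel() {
    while (!ready.load(std::memory_order_acquire))  // LDAR
        ;
    assert(payload == 42);  // must be visible after the acquire load
}

// Variant 2: relaxed accesses plus standalone fences. Each fence typically
// lowers to a DMB ISH barrier, the blunter (and costlier) alternative.
void producer_fence() {
    payload = 42;
    std::atomic_thread_fence(std::memory_order_release);  // DMB
    ready.store(true, std::memory_order_relaxed);
}
void consumer_fence() {
    while (!ready.load(std::memory_order_relaxed))
        ;
    std::atomic_thread_fence(std::memory_order_acquire);  // DMB
    assert(payload == 42);
}

int main() {
    // Exercise variant 1; variant 2 works the same way with the fence pair.
    std::thread c(consumer_acq_rel), p(producer_acq_rel);
    p.join();
    c.join();
}
```

The acquire/release form pins the ordering to the specific load and store that need it, which is the appeal of salting translated code with STL/LDA; the fence form orders everything around it, which is easier for a translator to apply blindly but costs more.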


Some years ago (probably 25+) I read part of an essay in PC Mag (or some such) entitled "RISC: Unsafe at Any Speed", in which the author argued that coding for CISC was better because small mistakes arising from having to code more laboriously on RISC were a major hazard. Today, obviously, no one hand-codes in assembly language (at least, not to any significant extent), and compilers are parsecs beyond what we used back in the day in terms of converting verbose source into refined object code. Reading cmaier's accounts seems to suggest that, where raw speed is called for, that author had the story totally backwards.
 

9927036

Cancelled
Nov 12, 2020
472
460
That's a good point, but even if you have fiber, you might not have a fast connection. We need much faster connections to go all streaming. I wouldn't mind that happening, it might even make me give up on x86, or at least locally run x86. :)
I'm not sure what you mean. A symmetric 1 Gbps connection with low latency had better be fast. What else could you use?
 

9927036

Cancelled
Nov 12, 2020
472
460
Nah, we do with less for video. You use a controller or direct connected device to send fast input and stream the screen down. It's likely an option for companies to get extra money for subscription and stream their stuff down on whatever platform they care about.

Again such a specific use case that is not a future driving factor for a platform (IMHO.)
I'm not sure what you mean by saying it is not a future driving factor for a platform. Gaming is a huge business. Look at Nvidia as an example.
 