It feels like you are just being an Apple spokesperson here and ignoring a much larger context. Just because Apple focuses on performance per watt does not mean that everyone wants a computer whose focus is that. If that were the entirety of how a computer is judged, then "who cares" makes sense. But if there are more factors than that, then "who cares" is just a dismissal.

Also, any other company that wants to sell a laptop has no choice other than to look to solutions that aren't Apple, since Apple won't sell its chips to anyone. Therefore everyone else will focus on building the best machines they can. If they can't put together a laptop with a processing unit that is as efficient as Apple's, should they just not build computers anymore? Or perhaps they can distinguish themselves in other ways for users who don't want, or can't get, an Apple laptop for any of the hundreds of reasons that make a lot of sense?
PPW comes from the node, not Apple or its “designs”.
 
Apple hasn't been picking up all of the latest ISA updates. For example: SVE2, nested virtualization, etc.
At this point Apple is incrementally drifting away from the ARM ISA. The current intersection is high enough that stuff like Windows 11 (and its apps) still runs, but 5-10 years down the road I wouldn't bet the farm that the contemporary update will.
Oh, I've been surprised Apple has been sticking to the ISA for this long. I expect that at some point down the road Apple may not be using ARM for anything other than making sure its agreements don't violate any patents.
 
Well, Microsoft certainly disagrees with this if you read the article (better than Rosetta), and they have more information than anyone here about the new emulation in Windows 24H2. I guess we'll see, but if the emulation is even just as good as Rosetta, I can see Windows on Arm starting to become mainstream (with some developers eventually making native apps).

Rosetta 2 doesn't really run under Windows 11 on the Mac, so this is likely going to be at least a bit of an "apples to oranges" comparison. It is not going to be too surprising if "raw iron" Windows on Arm apps run faster than virtualized Windows on Arm apps on an M3. If you compare x86 Edge on the Mac to x86 Edge on Windows, you are comparing incrementally different apps (due to variations in the underlying OS libraries); x86 Word/Excel on the Mac versus x86 Word/Excel on Windows even more so, and DaVinci Resolve (a cross-platform comparison) more so again.

The Elite X is also sized a bit between the 'plain' M3 and the M3 Pro. It appears Qualcomm has only one die, used in multiple SKU packages. That is entirely reasonable for a version 1.0 lineup. They need to get something out the door rather than try to match every possible AMD/Intel package.

The 'raw iron' Windows 11 versus virtualized Windows 11 comparison is a bit 'fair' because the hype train for the last two years has been that 'Microsoft essentially needs Apple to further the Windows on Arm cause.' Killing that meme off would be useful for Microsoft.

I suspect the "faster than Rosetta 2" claim is going to be overstated and/or cherry-picked. What Microsoft has is more flexible, which should lead to more corner cases where they can 'win'. If those happen to be cases that align with some hardcore Windows app users, then it would be a win-win demonstration. It is going to boil down to a better Windows outcome (not a better-at-running-Mac-apps outcome).
 
Consumers that "you encounter" are not a statistically meaningful sample. Assuming that someone mandates the use of Windows is pure conspiracy theory.
If I want to game, I need to use Windows. There is other software that is Windows-only, too.
 
Let's see. I am not sure about the efficiency. But, as far as performance goes, from what I have seen, Qualcomm's ARM processors for Windows are still behind Apple's M-series in single-core performance.
 
Let's see. I am not sure about the efficiency. But, as far as performance goes, from what I have seen, Qualcomm's ARM processors for Windows are still behind Apple's M-series in single-core performance.
I think that holds true for literally every non-niche processor out there, no? Apple's single-core performance was the fastest on the market last I looked into it (it's been a while).
 
If I want to game, I need to use Windows. There is other software that is Windows-only, too.
Most people I talk with are Windows users who would like to have a Mac, but price is a barrier. Some of them would specifically like to be able to do all sorts of combos, like a Steam Deck for gaming and a Mac for work, among other mixes, but don't have the money. It really is as simple as that.
 
Hmmm... so let me see if I'm reading all this correctly. Slightly faster than the lowest-end chip that Apple has already released. And that small increase in speed comes at the cost of running hotter, requiring fans, and draining the battery more quickly... Then we look over at the Pro, Max, and soon-to-be-released Ultra versions of the current chip. And then, as we try to stop shaking our heads, we notice the M4 and all of its variants on the verge of being announced as well. Kinda feels like a celebration of getting close.
 
I think the big question here is the potential battery drain of the Snapdragon X Elite. While it may be fast, will it be a major battery hog in the versions they compare with the Apple M3 Pro and M3 Max chips?
 
I think the big question here is the potential battery drain of the Snapdragon X Elite. While it may be fast, will it be a major battery hog in the versions they compare with the Apple M3 Pro and M3 Max chips?
Question for clarification: why are you choosing to compare against the M3 Pro or Max when MS and Qualcomm are explicitly targeting the lower-end M3?
 
What are the TDPs for M3, M3 Pro and M3 Max?

Qualcomm’s is 12 W to 80 W (typical 45 W).
 
...what? The node certainly contributes but this is the first time I've ever seen a claim like this made, anywhere.

Oh jeeze, I just read your post before that...nvm, I don't think we're going to be able to have a productive conversation with such a tenuous grasp on technical matters.
It’s 99.9% the node.

The design part only matters because there's only limited space on the die, so you have to decide how much of it you want to apportion to the CPU, GPU, etc. Adding more CPU cores, for example, will improve performance, but it's not going to change the PPW. That comes from the node.

You also have to consider yield and pricing issues if you make your SoC too big.

Designing chips is an economics game of deciding where on the yield-cost curve you want to land. It's not a technical challenge.

There's a point on the PPW curve where increasing performance causes a disproportionate increase in wattage. Whether Qualcomm wants to play in this area is a design choice, but it won't be hard for them to tone it down and play on the more efficient part of the PPW curve. It's as simple as ordering pizza.
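To make the diminishing-returns idea concrete, here is a minimal toy sketch with entirely made-up numbers and a simplistic dynamic-power model (power roughly tracking f·V², with voltage rising alongside clock); these assumptions are mine, not figures from TSMC, Apple, or Qualcomm. The point is only that on one fixed curve, a "design" picks an operating point, and the higher point pays disproportionately more watts for its extra performance.

```python
# Toy sketch only: a made-up DVFS curve for one fixed process node.
# Assumptions (mine): performance scales with clock, dynamic power scales
# roughly with f * V^2, and voltage must rise with clock on the same silicon.

def operating_point(freq_ghz):
    """Return (relative performance, watts) for a clock speed on this toy node."""
    volts = 0.7 + 0.15 * freq_ghz          # hypothetical voltage needed at that clock
    perf = freq_ghz                         # performance ~ clock for the same core design
    watts = 10.0 * freq_ghz * volts ** 2    # dynamic power ~ C * f * V^2 (C folded into 10.0)
    return perf, watts

for f in (2.0, 3.0, 4.0):
    perf, watts = operating_point(f)
    print(f"{f:.1f} GHz -> perf {perf:.1f}, {watts:.1f} W, perf/W {perf / watts:.3f}")
```

Run it and the perf/W figure falls as the operating point moves right, which is all "playing on the more efficient part of the curve" means here.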
 
Because Apple does not allow graphics cards, I'd like to see how Windows consumers, Microsoft, and Nvidia would react to Windows machines no longer being compatible with dedicated graphics cards once they ditch Intel. You know ARM's newest flagship chips have ray tracing; it's not just Apple using ray tracing, it's across the board in ARM's latest designs. The Windows world is not as religious about shutting out Nvidia GPUs as Apple is.

Right, but your sentence, "With no way to add a graphics card, Apple had no choice but to add Ray Tracing to remain competitive or see themselves go down in flames as it was obvious Windows would get the Nuvia chips or comparable sooner or later." implies something about Nuvia, I thought.
 
What are the TDPs for M3, M3 Pro and M3 Max?

Qualcomm’s is 12 W to 80 W (typical 45 W).
Should be 20W and up. I need to do more testing on it when I get the chance.

More than likely, Qualcomm is fitting a better GPU than Apple, so their wattage may be higher.

Note that PPW is based 100% on the node, not on Apple or Qualcomm. Apple and Qualcomm are only picking where on the PPW curve they want to be. They don't determine the curve's position.

Simply put, the fab does the hard, intellectual work of determining where the PPW curve is located. Designers like Apple and Qualcomm only decide where on that curve they want to be. Apple and Qualcomm's job is intellectually child's play compared to the fab's job.
 
...what? The node certainly contributes but this is the first time I've ever seen a claim like this made, anywhere.

Oh jeeze, I just read your post before that...nvm, I don't think we're going to be able to have a productive conversation with such a tenuous grasp on technical matters.
Wrong. This is common knowledge to anyone with a background in semiconductors. Every time fabs announce a new node, they announce performance and efficiency gains compared to the last node. Where do you think they're getting those figures from? They come from the point where the derivative of performance with respect to wattage equals 1 (the point on the node's PPW curve where it becomes less advantageous to increase wattage to increase performance).

Designing a chip on a fab's node is picking where on the PPW curve you want to be. You cannot alter the position of the PPW curve by "designing" a chip. Based on history, Apple likes being on the left side of the curve, where performance goes up disproportionately with wattage. Qualcomm could easily match Apple if they wanted to, but they're probably aiming for power users and will settle on the other end of the curve, where you get marginal performance gains for more wattage.
Again, this is a DESIGN choice that a three-year-old could make. There's nothing sophisticated about chip design.
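For what it's worth, the "derivative equals 1" idea above can be illustrated numerically on a made-up, concave perf-vs-watts curve; the curve and its normalization are my assumptions for the sketch, not any real node's data. The sketch simply walks up the curve until an extra normalized watt buys less than one extra unit of performance.

```python
# Illustrative only: locating the diminishing-returns point described above,
# where d(perf)/d(watts) drops to 1 on a hypothetical, normalized node curve.
import math

def perf(watts):
    """Made-up concave performance curve for one node, in normalized units."""
    return 12.0 * math.log1p(watts)

def marginal_gain(watts, h=1e-4):
    """Central-difference estimate of d(perf)/d(watts)."""
    return (perf(watts + h) - perf(watts - h)) / (2 * h)

w = 1.0
while marginal_gain(w) > 1.0:  # keep adding power while each extra watt still pays off
    w += 0.1
print(f"knee near {w:.1f} W: perf {perf(w):.1f}, marginal gain {marginal_gain(w):.2f}")
```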
 
The 90s were a very different time. As you said, the ecosystem was quite different. And Apple did not have the brand and status-symbol halo it has now.
Apple is not aiming for the cheap Windows-based market, so they are not competing there.
And in the high-price segment, people buying expensive Windows laptops usually know what they are doing. They want or need Windows. Apple is not competing there either, because Windows is not viable on AS Macs. By the same token, people looking for macOS machines would not consider even a very high-end PC.
Mac and Windows do exactly the same thing. They compute. People use them for productivity apps, surfing the web, email, playing games, etc.

Apple can only continue to charge a premium if they innovate. Otherwise there's no reason for people to pay a premium for inferior hardware, a 20-year-old design, and a stale OS.

Apple is lucky that their two primary OS competitors are so incompetent.
 
Rosetta 2 doesn't really run under Windows 11 on the Mac, so this is likely going to be at least a bit of an "apples to oranges" comparison. It is not going to be too surprising if "raw iron" Windows on Arm apps run faster than virtualized Windows on Arm apps on an M3. If you compare x86 Edge on the Mac to x86 Edge on Windows, you are comparing incrementally different apps (due to variations in the underlying OS libraries); x86 Word/Excel on the Mac versus x86 Word/Excel on Windows even more so, and DaVinci Resolve (a cross-platform comparison) more so again.

The Elite X is also sized a bit between the 'plain' M3 and the M3 Pro. It appears Qualcomm has only one die, used in multiple SKU packages. That is entirely reasonable for a version 1.0 lineup. They need to get something out the door rather than try to match every possible AMD/Intel package.

The 'raw iron' Windows 11 versus virtualized Windows 11 comparison is a bit 'fair' because the hype train for the last two years has been that 'Microsoft essentially needs Apple to further the Windows on Arm cause.' Killing that meme off would be useful for Microsoft.

I suspect the "faster than Rosetta 2" claim is going to be overstated and/or cherry-picked. What Microsoft has is more flexible, which should lead to more corner cases where they can 'win'. If those happen to be cases that align with some hardcore Windows app users, then it would be a win-win demonstration. It is going to boil down to a better Windows outcome (not a better-at-running-Mac-apps outcome).
Honestly, this is all speculation at this point. Of course, what Microsoft meant was that, with the new emulation coming in the next version of Windows, the difference between native and emulated will be smaller than the difference between Rosetta and native Arm on the Mac. Of course nobody in the Apple world believes this, and personally I don't care if they are slightly better, similar, or slightly worse than Rosetta. If they really manage to improve the emulation to the point where emulated stuff on WoA with the Elite X is as fast as or faster than on Intel or AMD, with better battery life (so at similar TDP), that's a big win in my book.
 
Right, but your sentence, "With no way to add a graphics card, Apple had no choice but to add Ray Tracing to remain competitive or see themselves go down in flames as it was obvious Windows would get the Nuvia chips or comparable sooner or later." implies something about Nuvia, I thought.
Nope, about ARM in general, and the need not to follow in Intel's footsteps of complacency, as the competition keeps moving forward and aims to overtake those who don't.
 
The performance and efficiency curve is set by the node, not by "design". Apple's "design" is mostly a marketing stunt. There's actually minimal or no benefit to the end user, except making them think they're getting a super special chip. The most important, hardest, and most intellectual part comes from manufacturing, not "design".
Do you think x86 and ARM have identical efficiency characteristics? Do you think Qualcomm buying Nuvia was stupid, given that they already had knowledge of how to make an ARM design? Do you think the differences between Cortex and other designs are "mostly a marketing stunt"?
 