It has the fastest multi-core performance and runs at a higher frequency, being eight cores vs. Apple's six. But in single-core, Apple is still faster.
The MediaTek chip is the one Apple has to worry about: it runs almost as fast as Apple's in single-core, and MediaTek will be releasing a desktop chip next year with the help of Nvidia.
 
I've never met a single person with that issue. And all of my friends come to me with Apple product questions/issues. I don't think it was as widespread as it seems here.

Most people buy the Pro phones as fashion accessories, and the silicon in their phone sits around doing nothing until they upgrade to the latest fashion accessory. People who actually use their Pro phones for "pro" features (at least the ones that Apple allows) understand how inadequate the thermal solution for the iPhone 15 Pro is.
 
  • Haha
Reactions: BugeyeSTI
I have stated repeatedly that chip "design" is mostly marketing gimmick and has no impact on the final chip's performance. Everything is based on the fabbing process, which is both more important and intellectually harder than designing a chip, which is on par intellectually with ordering a pizza from Domino's.

QC may be using the N3E process, but it is an improved N3E process that TSMC adapted from fabbing the A18 pro. Of course this chip will outperform the A18 pro since the mfg process is more refined.

Apple fanatics are in denial and this is just further proof that Apple "design" is unimportant junk.

P.S. The displays on the iPhones are technology developed by Samsung/LG. Apple just does high-level specs like screen size, shape, resolution, color profile. Apple does low intelligence task. The real engineering magic is done by Samsung/LG, NOT Apple. Apple can't engineer themselves out of a plastic bag. Apple is only good at marketing and sales volume (Which they're slowly losing).
Well, they engineered themselves out of a plastic bag to become the most valued and rich company in the world, so I guess your point is moot.

Thanks for your contribution, whatever point you might have had you lost it when you said such bs, including the comparison of chip design to ordering pizza from dominos, lol
 
  • Like
Reactions: NetMage
I agree it’d be appreciated, but it’d likely age pretty badly. You have to make sure the chip would be capable enough to run the software for no less than six years, and any performance tradeoff just limits your ability to compete in software.

I suppose we'll have to see how Zen 5, the architecture that had minimal performance gains and focused on efficiency, ages.

That being said, most software hasn't outrun the decade-old i5 in my desktop at work.
 
Speaking of gaslighting…

BTW, my 15 Pro has never had any thermal issues. I assume you just watched a Max Tech video or something?
I had severe overheating on my 15 Pro Max and had to replace the battery within a year. I purchased a 16 Pro and it runs warm at times, but it's much better.
 
  • Like
Reactions: steve09090
This brings up a thought I've been having about phones and computers. I think we're about at the point where those who don't need a high-power computer should be able to connect their phone to a dock with a monitor and keyboard and use the phone like a computer. The phone's OS could detect that it's docked and then enable a more desktop/laptop-like user interface.

Have a look at Samsung Dex. It’s something in that direction for Android.

About the new Snapdragon chip, they need 6 P-Cores to call it ‘the fastest’…
 
How can they be 4 years behind when this new chip is the fastest on the market?
Apple were at least two years ahead about two years ago; it was obvious chip competitors were going to catch up.
The Samsung S24 Ultra comes in with Geekbench scores of 2139 (single core) and 6684 (multi core).

The 12 Pro Max with the A14 Bionic comes in at 2119 (single core), and the 14 Pro with the A16 Bionic hits 6676 (multi core).

For reference, the 16 Pro/Pro Max A18 Pro is getting 3409 (single core) and 8492 (multi core).

Whatever “world's fastest chip” is being touted bears no reflection of the capabilities of the chips in the latest and greatest Android phones. This happens every year, and this 2-4 year benchmark lag for Snapdragon chips is not a new phenomenon either.
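For what it's worth, the gap can be put in percentage terms using the scores quoted above — this is plain arithmetic on those figures, nothing more:

```python
# GB6 scores quoted above: Snapdragon 8 Gen 3 (S24 Ultra) vs A18 Pro.
s24_sc, s24_mc = 2139, 6684
a18_sc, a18_mc = 3409, 8492

sc_lead = (a18_sc / s24_sc - 1) * 100  # single-core lead, percent
mc_lead = (a18_mc / s24_mc - 1) * 100  # multi-core lead, percent

print(f"Single-core lead: {sc_lead:.0f}%")  # ~59%
print(f"Multi-core lead:  {mc_lead:.0f}%")  # ~27%
```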
 
In general, Firefox needs to step up like Chrome and the other browsers here
Firefox literally sucks on Android. I don’t even know or understand why it exists. It still has no support for add-ons (i.e. adblockers), browsing is very slow, and there are no theming capabilities (the main reason I use it as my main browser on Mac). Chrome, on the other hand, eats battery in minutes. I absolutely love the Samsung browser on new Galaxy devices. I’ve heard it uses WebKit (correct me if I am wrong), thus it offers almost Safari-like speed. No adblock support either, unfortunately (at least the adblocks they offer do not work properly).

The downside with Qualcomm SoCs is that you don't get 7 years of guaranteed major OS updates (not on my phone, at least, a Pixel 8).
Apple can do whatever they want with their own SoC, and this is a huge advantage: Qualcomm does not sell end products, and the companies selling phones with Qualcomm SoCs do not control as much of the product as Apple does with the iPhone.
Well, as a long-time Apple user I've got to debunk the “myth” of 7+ years of iOS update support: Apple has no other choice.

Android has one major advantage over iOS – it doesn’t require OS updates for apps to function as expected; you can pretty much install most apps from Google Play onto devices running versions as old as 4.4 KitKat. You can't say the same about iOS; many apps require version 12 or 15 nowadays.

On iOS it is not possible to update Safari without updating the OS itself. If the phone hasn't been updated for three years or more, sites become unresponsive or often fail to load. That's rarely a problem even on old Android phones.

And if Qualcomm is the only one getting better, then there is still little competition.
Exynos proved its uselessness, and Chinese mobile chipsets have rather poor app support; sometimes applications fail to load. Qualcomm is just the sort-of-universal choice for now.
 
  • Like
Reactions: amenotef
What's really sad about this arrant nonsense is that some gullible readers may actually be taken in by it. The poster is a master of irony, but not engineering.
I have stated repeatedly that chip "design" is mostly marketing gimmick and has no impact on the final chip's performance. Everything is based on the fabbing process, which is both more important and intellectually harder than designing a chip, which is on par intellectually with ordering a pizza from Domino's.[...]

He sure has stated it repeatedly. First time I remember seeing this insanity was in April. Back then he was spanked hard and didn't show his face for a while, probably in sheer embarrassment. Since it's almost all relevant to his current postings, I'll repeat it here verbatim. Alternatively, click the link in the quote attribution to see the original thread. There's a bunch of good info in the last few pages.

Holy cow. There's been a ton of ignorance and nonsense in this thread (unsurprisingly), but this... is next-level. I thought the poster was joking at first (the username is hilarious), but their arguments with others here who know a bit more show that they're serious. I'm not going to respond to everything - it's not worth engaging with them, as shown by others who have already tried. But for the benefit of anyone reading them, here are some corrections.

Performance and efficiency curve is set by the node, not “design”. Apple “design” is mostly a marketing stunt. There’s actually very minimal or no benefit to the end user except making them think they’re getting a super special chip. The most important, hardest and intellectual part comes from manufacturing, not “design”.

Possibly the dumbest comment ever posted on MR. (Ok, maybe not, that's a very high bar!) That "curve" doesn't exist in a vacuum. The notion that the chip design is meaningless is ... more wrong than mustard on ice cream. It's laughable. For a simple proof of the sheer stupidity of it, consider two different core designs on the SAME process: Apple's P and E cores. There's roughly a factor of 3 difference in performance. Or look at Intel's P & E cores - the difference is even larger. Naturally, in both cases, the P cores are a LOT larger. Design with more transistors, you can get a faster core. Pretty basic.

You could also compare Apple's older N7 cores (A12 or A13) with another vendor's N7 core. The differences are stark.

Lastly, as I mentioned in a previous post, design will determine the highest clock you can run a chip at. In the language of the P-E curve, the curve doesn't extend forever. It cuts off at a certain point, beyond which more power won't get you any more performance, because the design is literally not capable of it.

It’s 99.9% the node.

The design part only matters because there’s only a limited space on the die, so you have to decide how much space you want to apportion to the CPU, GPU, etc. Adding more CPU cores, for example, will improve performance but it’s not going to change the PPW. That comes from the node.

You also have to consider yield and pricing issues if you make your SoC too big.

Designing chips is an economics game or deciding where in the yield-cost curve you want to land on. It’s not a technical challenge.

There’s a point on the PPW curve where increasing performance causes a disproportionate increase in wattage. Whether Qualcomm wants to play in this area is a design choice, but it won’t be hard for them to tone it down and play on the more efficient part of the PPE curve. It’s as simple as ordering pizza.

Nearly everything above is wrong. The two parts that are correct are:
1) Yield and pricing do matter, and are a direct consequence of area
2) The PPW curve is generally as stated. QC *is* playing in both "area"s to some extent already, by selling the chip as useful at both 20ish and 80ish W.

wrong. This is common knowledge to anyone with knowledge in semiconductors. Every time fabs announce a new node, they announce performance and efficiency gains compared to the last node. Where do you think they’re getting these figures from? They’re from the derivative of performance over wattage = 1 (The inflection point on the node’s PPW curve where it becomes less advantageous to increase wattage to increase performance).

Designing a chip using a fab’s node is picking where on the PPW curve you want to be in. You cannot alter the position of the PPW curve by “designing” a chip. Based on history, Apple likes being on the left side of the curve where performance goes up disproportionately with wattage. Qualcomm can easily match Apple if they wanted to, but they’re probably aiming for the power users and will settle on the other end of the curve where you get marginal performance gains with more wattage.
Again, this is a DESIGN choice that a 3-year-old can make. There’s nothing sophisticated about chip design.
This is 99.9% wrong. The flimflam about P-E curves in the first paragraph is irrelevant to the second, and in any case incorrect - when a single area-reduction number is quoted, it's for a "typical" mix of logic, SRAM, and analog, which mix is chosen by the foundry, usually derived from an actual chip design. If you look in more detail, they'll quote specific numbers for each of those. For example, TSMC quoted area improvements of 1.7x for logic going from N5 to N3, but only 1.2x for SRAM and 1.1x for analog. (And it turned out the SRAM improvement wasn't nearly that good, in the end.)
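To make the mixed-scaling point concrete, here's a back-of-envelope sketch using the per-block factors quoted above (logic 1.7x, SRAM 1.2x, analog 1.1x going N5 to N3). The 60/30/10 die mix is an assumption for illustration, not any real chip's floorplan:

```python
# N5 -> N3 area scaling per block type, per the figures quoted above.
scaling = {"logic": 1.7, "sram": 1.2, "analog": 1.1}
# Assumed fractions of the old die devoted to each block type (illustrative only).
mix = {"logic": 0.60, "sram": 0.30, "analog": 0.10}

# Each block's new area is its old fraction divided by its scaling factor.
new_area = sum(mix[b] / scaling[b] for b in mix)
composite = 1 / new_area  # effective whole-die shrink

print(f"Composite shrink: {composite:.2f}x")  # well short of the 1.7x headline
```

The takeaway: once SRAM and analog stop scaling, the whole-die shrink is dragged well below the headline logic number, which is exactly why a single quoted figure per node is misleading.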

As for the choice of where you want to be on the curve... you just choose. You run your design at a higher or lower power (or equivalently, clocks), and that determines where you are on the curve.

HOWEVER, that's not *really* true, because - as I already mentioned above, and at greater lengths in previous posts - the design has a major impact on how fast you can actually run your core (and your uncore, but let's not get too far into the weeds). It will also have a particular part of the frequency curve where you get the best efficiency, which is entirely dependent on the design. So yes, you can pick your clock, but your design constrains you.
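The "design constrains you" point can be sketched as a toy model - made-up numbers, not real silicon data - where performance scales sublinearly with power up to a design-imposed ceiling, past which extra watts buy nothing:

```python
# Toy power-performance model: a sketch, not real silicon data.
# Assume performance scales sublinearly with power, P(W) = k * W**0.6,
# up to a design-imposed ceiling beyond which more watts buy nothing.

def perf(watts, k=1000.0, exponent=0.6, ceiling=4000.0):
    """Score as a function of package power, capped by the design's max clock."""
    return min(k * watts ** exponent, ceiling)

def marginal_gain(watts, delta=0.01):
    """Approximate dP/dW numerically."""
    return (perf(watts + delta) - perf(watts)) / delta

# Sweep the curve: gains per extra watt shrink, then hit zero at the ceiling.
for w in (1, 2, 5, 10, 20):
    print(f"{w:>3} W -> score {perf(w):7.0f}, marginal {marginal_gain(w):6.1f}/W")
```

In this sketch the curve flattens out entirely somewhere between 10 and 20 W: that cutoff is set by the (assumed) design ceiling, not by the node, which is the point being made above.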

They're not. See pic below.

The Wattage is 2-3X higher under their most recent processor because TSMC's 3nm is total ******* and provides almost no PPA gains from their N4P node. It's another proof that design doesn't matter and it's all in the node. To get any form of performance gain, Apple had to move further right in the PPW curve to the inefficient side (Where derivative < 1) which is why you're seeing such terrible PPW on the M3 and A17 Pro when it does anything other than idle. You also notice the battery life + battery health complaints on the iPhone 15 pro? That's because Apple moved to the inefficient side of TSMC's PPW curve (More heat and more watts).

Usually Apple gets first dibs on the best technology from their suppliers, but this backfired on 3nm because TSMC messed that node up badly. The gains on N3B are extremely minimal compared to N4P that Apple had no choice but to play on the right-side of the PPW curve or they risk getting no performance gains from last gen chips. That would've been a marketing and sales disaster.
Yeah, this is all garbage. A bunch of people with short fuses got the idea that N3 was bad when it first came out, and all sorts of nonsense was published. As it turns out, N3 seems to have landed where it was supposed to. The one slightly unexpected shortcoming, as I mentioned earlier, was that SRAM cells only shrank about 5% compared to N5. There were also big concerns about yield at the start. I don't think anyone who actually knows about this is telling, but the general consensus seems to be that it's fine, and within the limits of the info presented in their financial statements, that appears to be true.

Intel and AMD are on older nodes. Intel is on 10nm and about to go down to 7nm while AMD is still on 4-5nm.

The 3nm lineup is FinFlex, so there are manufacturing improvements with each generation. Normally how it works is that you have a manufacturing base process (1st gen N3B) and each subsequent generation (N3E, N3P, N3X, etc.) is a slightly modified/improved manufacturing process that gives you some PPA improvement though at a smaller gain than a full node jump.

Chipmaking is a lucrative sector. I don't downplay the manufacturing aspect. I only say the "designing" part that Apple, Qualcomm, AMD, etc. do is child's play and an intellectual joke.
Calling Intel's process 10nm is arguing about semantics... but is also wrong. They're currently producing the old Intel "7nm", which is now called "Intel 4". The old 10nmSF is now called Intel 7, and that's been up for a while now. You can remark snidely on their need to rename to keep up appearances, and you'd be right, but it's also true that the old names were less dishonest than the names used by other foundries (TSMC, Samsung, etc.). There is no feature in "3nm" chips that gets anywhere near to being 3nm in actual size. Intel 4 is roughly equivalent to TSMC N4, so if you're going to accept one name you should accept the other.

N3 variants (not "generations") (E, P, X, etc.) are indeed smaller changes, but not all of them improve PPA. For example, X is about high power applications, and will likely relax some design rules... which is fine, because such designs can't go that dense anyway.

Calling design "child's play and an intellectual joke" demonstrates complete ignorance, and probably psychological issues I'm not qualified to diagnose.

Apple provides large sales volume. That’s about it. The actual designing part is pretty easy and trivial.

We can see how Apple gave up on microLED and the car that they just suck at engineering. Their strength is in marketing and branding. Tim Cook knows this, which is why he’s pivoting away from engineering and leaving that to their higher IQ suppliers.

Apple will focus on DEI, affirmative action, social justice, marketing political activism and other activities that increase their social clout to get higher sales.

...and now it starts to become clear why this person is so dismissive of Apple. The DEI etc. comment makes it clear that engineering isn't motivating these many posts, but rather politics. Which I could really stand NOT to have to hear about for five frickin' minutes out of my day, please.

Do you have any semiconductor engineering experience (Programming doesn’t count)

You have no background in this topic and your opinion is irrelevant.

No engineer is going to care if someone not educated in his field of expertise believes in science or not.

Wow. Pot, meet kettle. Take some classes, then come back here.
 
For the TL;DR crowd, here's the most important part of the response to the high (hahahaha) IQ person:

[...] The notion that the chip design is meaningless is ... more wrong than mustard on ice cream. It's laughable. For a simple proof of the sheer stupidity of it, consider two different core designs on the SAME process: Apple's P and E cores. There's roughly a factor of 3 difference in performance. Or look at Intel's P & E cores - the difference is even larger. [Note that this was written before Lunar Lake, and it refers to Intel's non-chiplet processors] Naturally, in both cases, the P cores are a LOT larger. Design with more transistors, you can get a faster core. Pretty basic.

You could also compare Apple's older N7 cores (A12 or A13) with another vendor's N7 core. The differences are stark.

Lastly, as I mentioned in a previous post, design will determine the highest clock you can run a chip at. In the language of the P-E curve, the curve doesn't extend forever. It cuts off at a certain point, beyond which more power won't get you any more performance, because the design is literally not capable of it.
 
  • Like
Reactions: steve123
Seems like a Windows issue, as it appears to perform properly on Android and "other platforms". From the link:

Hence my point about CL being a crap compiler.

There are also some problems with NT's architecture around system call latency, which is already a damage multiplier on Windows x86 (look at how slow small atomic operations against the filesystem and kernel-object fiddling are). If I remember from reading the calling conventions years ago, the SWI semantics are terrible on ARM as well. Not sure whether that still applies, but on 32-bit ARM years ago (Windows Phone) it was awful.

Android probably uses (I don't know) LLVM/clang or GCC or something far superior.
 
Firefox literally sucks on Android. I don’t even know or understand why it exists. It still has no support for add-ons (i.e. adblockers)

Actually, Firefox does support ad blockers on Android. It's the only reason I installed it. I use it with uBlock Origin.
 
And lastly, about the chip itself...

Do we know if this is the same core as in the recent SXE? (I'm pretty sure it is - they're using the exact same name, Oryon, rather than "Oryon 2" or something else.) Because if it is, any notion that it's as fast as the A18 in single-core (which is by far what matters most in a phone) is hilariously wrong. The A18 crushes the SXE. The A18's raw GB6 score is superior, but more importantly, it uses less power. That in turn means that you can actually run your core at top speed for a meaningful amount of time without throttling or draining your battery. EDIT: Yes, it's actually two new cores, about which I and many others posted a lot. It's a huge improvement, but it's still a generation or more back from the A18.

For multicore, the situation is a little different. I'd expect an 8-P-core SD8G4 to be faster than the A18P in GB6 multicore. BUT... how often does this matter in a phone? I think the only significant place you're going to see a phone max out multicore for any length of time will be in some games. (There are doubtless less common other cases - but the situation should be the same for them.) And in that case... the phone with *actually* better performance will be the one that can *sustain* peak MC performance. That will again be the A18, because it's way more efficient.
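The sustain-vs-peak argument can be put as a toy model - entirely invented numbers, just to illustrate the mechanism: two chips against the same chassis thermal budget, where the hotter one has the higher peak score but has to throttle.

```python
def sustained_score(peak_score, watts, budget_w, minutes):
    """Total work over a session: the chip runs at peak until its power draw
    exceeds the chassis budget, in which case it throttles (crudely assumed
    to scale score linearly with allowed power)."""
    if watts <= budget_w:
        return peak_score * minutes  # fits the budget, never throttles
    return peak_score * (budget_w / watts) * minutes

BUDGET_W = 6.0  # watts a phone chassis can dissipate (assumed, not measured)

efficient = sustained_score(peak_score=9000, watts=5.5, budget_w=BUDGET_W, minutes=30)
hot = sustained_score(peak_score=9500, watts=12.0, budget_w=BUDGET_W, minutes=30)

print(f"30-min totals: efficient chip {efficient:.0f}, hot chip {hot:.0f}")
```

Under these assumed numbers, the chip with the lower peak score does almost twice the work over half an hour, which is the sense in which efficiency *is* sustained performance in a phone.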

The NPU stuff, AFAIK, is an open question not likely to be resolved soon as I have yet to see any broad cross-platform measurements that are meaningful. There are things that try, but there's no way to know if they're even remotely optimized properly for any particular platform.

GPU is interesting. QC definitely beats Apple by significant margins in some benchmarks. However, the Adreno in the SXE (and presumably in the SD8G4) is much less capable in some ways than the Apple GPU. How this will work out for gamers, I don't know - this is not something I know all that much about. @leman can comment more on this. I do think that over longer timeframes than benchmarks take, heat and power draw will dominate the discussion, and there again Apple will win even for most games - but I don't know that.

This has every sign of being a replay of the SXE rollout. QC talks big, but in the end they're not even close to Apple. However, that doesn't matter so much because they're not really competing with Apple, whatever they're fantasizing about. They're competing with other vendors, and in the market they're actually in, they likely have a quite competitive product.
 
Competition is good, so if true, that’s cool.

And a 45% increase over the previous generation would put this at least in striking distance of the Geekbench scores of Apple’s mobile CPUs.

There are two kind of mind-blowing aspects to this, though:

One is a 45% jump in performance. In this era of CPU development, that's absolutely massive, and extremely rare unless either your previous design kind of sucked or your new one is revolutionary.

The other is that it took until this chip, with a 45% jump in performance, to even get close to where Apple is. The core performance of the A series vs Qualcomm’s was, until this generation, staggeringly unbalanced. As in, three-year-old Apple CPUs were directly competitive with Qualcomm’s top of the line.
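Taking the numbers in this thread at face value - the S24 Ultra's 2139 single-core score quoted upthread as a stand-in for the previous Snapdragon generation, and the 45% jump from this post - "striking distance" checks out:

```python
previous_gen_sc = 2139   # Snapdragon 8 Gen 3 single-core (quoted upthread)
a18_pro_sc = 3409        # A18 Pro single-core (quoted upthread)

projected = previous_gen_sc * 1.45                        # the claimed 45% jump
remaining_gap = (a18_pro_sc - projected) / a18_pro_sc * 100

print(f"Projected single-core: {projected:.0f}")          # ~3102
print(f"Remaining gap vs A18 Pro: {remaining_gap:.0f}%")  # ~9%
```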
They sucked. They made an inferior product compared to Apple's M chips. And they have more or less figured it out, and are catching up. Intel and AMD will figure it out as well, and make inroads for sure. But in the end, Microsoft will have to make a choice: stick with two different architectures (ARM and x86-64), which I can't see happening forever, or one of these chips will win out. How long it takes for that to play out is anyone's guess.

When will we see Nvidia or AMD GPU support for ARM? When can you build a tower (or any kind of desktop) with Qualcomm chips and any third-party add-ons? It's nice in a thin-and-light laptop, but will there be a desktop variant that can push the limits even further? Then of course the server end - when is that coming? As a consumer, what should you buy? As a gamer, there is only one choice, and that is x86 for now.
 
They sucked. They made an inferior product compared to Apple's M chips. And they have more or less figured it out, and are catching up.
Not really... they are catching up, much like Zeno's Achilles is "catching up" to the tortoise.

As I wrote a while back, it's not so hard to get to 80% of Apple's performance (and, more importantly, performance/power). Getting to 90% is really difficult. Getting to 100% is, so far, completely impossible for anyone except Apple.

Things haven't changed much since I wrote that. The M4 shipped (before the SXE, to everyone's surprise!), as did the SXE, Lunar Lake, and Zen 5. Intel especially has improved their position, but not enough. In the larger picture, Apple is still the performance leader, and especially crushing it on perf/power. It doesn't seem likely that anyone's going to catch up to them any time soon, on a tech level. And my 80% (above) turns out to be pretty optimistic for non-Apple players, in fact, if you're looking at perf/power.

On the product marketing level it's a different story. Apple won't keep up with raw MC performance at certain price points, because they choose not to. Obviously, given the size of their E cores and their chips, they could put 12-18 E cores in the M5 if they felt like it. Even 8-12 E cores in the A19. That seems very unlikely though. They obviously believe that certain levels of MC performance are all the market calls for at certain price points. If you're outside that envelope, then Apple chips won't be your best option. But I think that they're generally right, and that most people won't need/want that.
 
This is interesting. In the recent past, Samsung processors, even after a new processor was announced, were comparable to iPhones that were two generations behind. So when the iPhone 14 Pro came out, the Samsung that came out after it had comparable specs to the iPhone 12 Pro Max. Or I guess you can compare it to the non-Pro iPhone 13 (as that was basically an iPhone 12 Pro), but Samsung processors were not comparable to the current-generation top-end iPhone.
 
This is interesting. In the recent past, Samsung processors, even after a new processor was announced, were comparable to iPhones that were two generations behind. So when the iPhone 14 Pro came out, the Samsung that came out after it had comparable specs to the iPhone 12 Pro Max. Or I guess you can compare it to the non-Pro iPhone 13 (as that was basically an iPhone 12 Pro), but Samsung processors were not comparable to the current-generation top-end iPhone.
Exynos (Samsung) tends to be further behind than Snapdragon (Qualcomm), especially on process node.
 
I think in the end, while the Snapdragon 8 Elite is definitely fast, I do worry about how much battery is needed for decent life per charge, and also the amount of heat this SoC could generate. Let's hope Samsung has upgraded their vapor-chamber cooling system so the SoC doesn't run overly hot in high-CPU-demand situations on the Galaxy S25 models.
 