I don't think it was as widespread as it seems here.
It doesn’t even “seem” widespread. Literally a single guy making unsubstantiated claims.
I never met a single person with that issue. And all of my friends come to me with Apple product questions/issues. I don't think it was as widespread as it seems here.
Well, they engineered themselves out of a plastic bag to become the most valuable and richest company in the world, so I guess your point is moot.

I have stated repeatedly that chip "design" is mostly a marketing gimmick and has no impact on the final chip's performance. Everything is based on the fabbing process, which is both more important and intellectually harder than designing a chip, which is on par intellectually with ordering a pizza from Domino's.
QC may be using the N3E process, but it is an improved N3E process that TSMC adapted from fabbing the A18 Pro. Of course this chip will outperform the A18 Pro, since the manufacturing process is more refined.
Apple fanatics are in denial and this is just further proof that Apple "design" is unimportant junk.
P.S. The displays on the iPhones are technology developed by Samsung/LG. Apple just does high-level specs like screen size, shape, resolution, and color profile. Apple does the low-intelligence tasks. The real engineering magic is done by Samsung/LG, NOT Apple. Apple can't engineer themselves out of a plastic bag. Apple is only good at marketing and sales volume (which they're slowly losing).
I agree it’d be appreciated, but it’d likely age pretty badly. You have to make sure the chip is capable enough to run the software for no less than 6 years, and any performance tradeoff just limits your ability to compete on software.
I had severe overheating on my 15 Pro Max and had to replace the battery in a year. I purchased a 16 Pro and it runs warm at times, but it is much better.

Speaking of gaslighting…
BTW, my 15 Pro has never had any thermal issues. I assume you just watched a Max Tech video or something?
This brings up a thought I've been having about phones and computers. I think we're about at the point where those who don't need a high-power computer should be able to connect their phone to a dock that connects a monitor and keyboard and use their phone like a computer. The phone's OS could detect that it's docked and then present a more desktop/laptop-like user interface.
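For what it's worth, the mode switch itself is conceptually tiny. Here is a minimal, purely illustrative sketch (Python; `DockState` and `choose_ui_mode` are invented names, not any real phone-OS API) of the "detect what the dock exposes, then pick a UI shell" idea:

```python
# Hypothetical sketch only, not a real phone-OS API. It illustrates the
# "detect external display + input devices, then switch UI shell" idea as a
# tiny decision function. All names here are invented for illustration.
from dataclasses import dataclass

@dataclass
class DockState:
    external_display: bool = False
    keyboard: bool = False
    pointer: bool = False

def choose_ui_mode(state: DockState) -> str:
    """Pick a UI shell based on what the dock currently exposes."""
    if state.external_display and (state.keyboard or state.pointer):
        return "desktop"   # windowed apps, taskbar, pointer-driven UI
    if state.external_display:
        return "mirrored"  # just cast the phone screen as-is
    return "handheld"      # normal touch UI

if __name__ == "__main__":
    print(choose_ui_mode(DockState()))                                      # handheld
    print(choose_ui_mode(DockState(external_display=True)))                 # mirrored
    print(choose_ui_mode(DockState(external_display=True, keyboard=True)))  # desktop
```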
The Samsung S24 Ultra comes in with Geekbench scores of 2139 (single core) and 6684 (multi core).

How can they be 4 years behind when this new chip is the fastest on the market?
Apple were at least 2 years ahead about 2 years ago; it was obvious chip competitors were going to catch up.
Firefox literally sucks on Android. I don’t even know or understand why it exists. It still has no support for add-ons (e.g. adblockers), has very slow browsing speed and no theming capabilities (the main reason I use it as my main browser on Mac). Chrome, on the other hand, eats battery in minutes. I absolutely love the Samsung browser on new Galaxy devices; I’ve heard it uses WebKit (correct me if I am wrong), thus it offers almost Safari-like speed. No adblock support either, unfortunately (at least the adblockers they offer do not work properly).

In general, Firefox needs to step up like Chrome and other browsers here.
Well, as a long-time Apple user I gotta unveil the “myth” of 7+ years of iOS update support: there is no other way.

The downside with Qualcomm SoCs is that you don't get 7 years of guaranteed major OS updates (at least not if it were the SoC in my phone, a Pixel 8).
Apple can do whatever they want with their own SoC and this is a huge advantage, as Qualcomm does not sell end products, and the companies selling phones with Qualcomm SoCs do not control as much of the product as Apple does with the iPhone.
Exynos proved its uselessness, and Chinese mobile chipsets have rather poor app support and sometimes applications fail to load. Qualcomm is just sort of the universal choice for now.

And if Qualcomm is the only one getting better, then there is still low competition.
I have stated repeatedly that chip "design" is mostly a marketing gimmick and has no impact on the final chip's performance. Everything is based on the fabbing process, which is both more important and intellectually harder than designing a chip, which is on par intellectually with ordering a pizza from Domino's. [...]
Holy cow. There's been a ton of ignorance and nonsense in this thread (unsurprisingly), but this... is next-level. I thought the poster was joking at first (the username is hilarious), but their arguments with others here who know a bit more show that they're serious. I'm not going to respond to everything - it's not worth engaging with them, as shown by others who have already tried. But for the benefit of anyone reading, here are some corrections.
The performance and efficiency curve is set by the node, not “design”. Apple “design” is mostly a marketing stunt. There’s actually very minimal or no benefit to the end user except making them think they’re getting a super special chip. The most important, hardest and most intellectual part comes from manufacturing, not “design”.
Possibly the dumbest comment ever posted on MR. (Ok, maybe not, that's a very high bar!) That "curve" doesn't exist in a vacuum. The notion that chip design is meaningless is... more wrong than mustard on ice cream. It's laughable. For a simple proof of the sheer stupidity of it, consider two different core designs on the SAME process: Apple's P and E cores. There's roughly a factor of 3 difference in performance. Or look at Intel's P & E cores - the difference is even larger. Naturally, in both cases, the P cores are a LOT larger. Design with more transistors and you can get a faster core. Pretty basic.
You could also compare Apple's older N7 cores (A12 or A13) with another vendor's N7 core. The differences are stark.
Lastly, as I mentioned in a previous post, design will determine the highest clock you can run a chip at. In the language of the P-E curve, the curve doesn't extend forever. It cuts off at a certain point, beyond which more power won't get you any more performance, because the design is literally not capable of it.
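To make the "same node, different design" point concrete, here is a toy model (Python; every number is made up for illustration, and the only real physics is the rough dynamic-power relation P ≈ C·V²·f). Two hypothetical core designs share one node's voltage-vs-frequency behavior but differ in IPC, switched capacitance (roughly, size) and the maximum clock the design can close timing at:

```python
# Hedged sketch, not silicon data: two core designs on the SAME process node.
# The node fixes how voltage rises with clock; the design fixes IPC, switched
# capacitance (roughly, core size) and the highest clock it can actually hit.
def voltage(freq_ghz):
    # Same node for both designs: made-up voltage/frequency fit.
    return 0.6 + 0.12 * freq_ghz

def power_w(cap, freq_ghz):
    # Classic dynamic-power relation: P ~ C * V^2 * f.
    return cap * voltage(freq_ghz) ** 2 * freq_ghz

def perf(ipc, freq_ghz):
    return ipc * freq_ghz  # work per second, arbitrary units

designs = {
    # name: (IPC, relative capacitance/size, max clock the design supports)
    "big P-core":   (9.0, 1.00, 4.0),
    "small E-core": (4.5, 0.35, 2.8),
}

for name, (ipc, cap, fmax) in designs.items():
    f = fmax  # run each design at its own ceiling; the node can't push it further
    print(f"{name:13s} perf={perf(ipc, f):5.1f}  power={power_w(cap, f):4.1f} W")
# Same node, roughly 3x apart in performance at very different power: the
# design, not just the process, decides where the curve ends and what it costs.
```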
It’s 99.9% the node.
The design part only matters because there’s only limited space on the die, so you have to decide how much space you want to apportion to the CPU, GPU, etc. Adding more CPU cores, for example, will improve performance but it’s not going to change the PPW. That comes from the node.
You also have to consider yield and pricing issues if you make your SoC too big.
Designing chips is an economics game of deciding where on the yield-cost curve you want to land. It’s not a technical challenge.
There’s a point on the PPW curve where increasing performance causes a disproportionate increase in wattage. Whether Qualcomm wants to play in this area is a design choice, but it won’t be hard for them to tone it down and play on the more efficient part of the PPW curve. It’s as simple as ordering pizza.
Nearly everything above is wrong. The two parts that are correct are:
1) Yield and pricing do matter, and are a direct consequence of area
2) The PPW curve is generally as stated. QC *is* playing in both "area"s to some extent already, by selling the chip as useful at both 20ish and 80ish W.
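For anyone unsure what the "efficient" and "inefficient" sides of a PPW curve mean in practice, here is a minimal numeric sketch (Python; the curve shape, the cutoff and the wattages are all invented, nothing here is measured data). It just finds where the marginal performance per extra watt drops below a chosen threshold:

```python
# Toy PPW sketch: performance grows with power but with diminishing returns.
# knee() finds where the marginal gain per extra watt drops below `cutoff`,
# i.e. the rough boundary between the "efficient" left side and the
# "inefficient" right side of the curve. All numbers are invented.
def perf(watts):
    return 100 * watts ** 0.45  # made-up concave perf-vs-power curve

def knee(cutoff=5.0, step=0.1, max_w=100.0):
    """Return the power level where d(perf)/d(watt) falls below `cutoff`."""
    w = step
    while w < max_w:
        marginal = (perf(w + step) - perf(w)) / step
        if marginal < cutoff:
            return w
        w += step
    return max_w

if __name__ == "__main__":
    w_knee = knee()
    print(f"knee at ~{w_knee:.0f} W (perf ~{perf(w_knee):.0f}, arbitrary units)")
    for w in (5, 20, 80):
        print(f"{w:3d} W -> perf {perf(w):4.0f}, perf/W {perf(w) / w:5.1f}")
```

With this particular made-up curve the knee lands in the mid-50 W range, and perf/W at 80 W is less than a quarter of what it is at 5 W, which is the kind of tradeoff the "20ish and 80ish W" operating points above are about.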
This is 99.9% wrong. The flimflam about P-E curves in the first paragraph is irrelevant to the second, and in any case incorrect - when a single area-reduction number is quoted, it's for a "typical" mix of logic, SRAM, and analog, which mix is chosen by the foundry, usually derived from an actual chip design. If you look in more detail, they'll quote specific numbers for each of those. For example, TSMC quoted area improvements of 1.7x for logic going from N5 to N3, but only 1.2x for SRAM and 1.1x for analog. (And it turned out the SRAM improvement wasn't nearly that good, in the end.)

Wrong. This is common knowledge to anyone with knowledge in semiconductors. Every time fabs announce a new node, they announce performance and efficiency gains compared to the last node. Where do you think they’re getting these figures from? They’re from where the derivative of performance over wattage = 1 (the inflection point on the node’s PPW curve where it becomes less advantageous to increase wattage to increase performance).
Designing a chip using a fab’s node is picking where on the PPW curve you want to be. You cannot alter the position of the PPW curve by “designing” a chip. Based on history, Apple likes being on the left side of the curve where performance goes up disproportionately with wattage. Qualcomm can easily match Apple if they wanted to, but they’re probably aiming for the power users and will settle on the other end of the curve where you get marginal performance gains with more wattage.
Again, this is a DESIGN choice that a 3-year-old can make. There’s nothing sophisticated about chip design.
As for the choice of where you want to be on the curve... you just choose. You run your design at a higher or lower power (or equivalently, clocks), and that determines where you are on the curve.
HOWEVER, that's not *really* true, because - as I already mentioned above, and at greater length in previous posts - the design has a major impact on how fast you can actually run your core (and your uncore, but let's not get too far into the weeds). It will also have a particular part of the frequency curve where you get the best efficiency, which is entirely dependent on the design. So yes, you can pick your clock, but your design constrains you.
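To show why that "typical mix" caveat matters, here is a rough sketch (Python) using the per-block N5 to N3 scaling factors quoted above (about 1.7x logic, 1.2x SRAM, 1.1x analog); the two area mixes themselves are invented purely for illustration:

```python
# Effective die-level shrink depends on the design's mix of logic, SRAM and
# analog, not just the headline number. Scaling factors are the N5->N3 figures
# quoted above; the area mixes are invented for illustration.
scaling = {"logic": 1.7, "sram": 1.2, "analog": 1.1}  # area shrink per block type

def effective_shrink(mix):
    """mix: fraction of the old die's area per block type (sums to ~1.0)."""
    new_area = sum(frac / scaling[block] for block, frac in mix.items())
    return 1.0 / new_area  # overall shrink factor for this particular design

print(effective_shrink({"logic": 0.6, "sram": 0.3, "analog": 0.1}))  # ~1.44x
print(effective_shrink({"logic": 0.3, "sram": 0.6, "analog": 0.1}))  # ~1.30x
# Same node, same marketing slide, but a cache-heavy design shrinks notably
# less than a logic-heavy one, which is why the single quoted number depends
# on the mix that gets assumed.
```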
Yeah, this is all garbage. A bunch of people with short fuses got the idea that N3 was bad when it first came out, and all sorts of nonsense was published. As it turns out, N3 seems to have landed where it was supposed to. The one slightly unexpected shortcoming, as I mentioned earlier, was that SRAM cells only shrank about 5% compared to N5. There were also big concerns about yield at the start. I don't think anyone who actually knows about this is telling, but the general consensus seems to be that it's fine, and within the limits of the info presented in their financial statements, that appears to be true.

They're not. See pic below.
The wattage is 2-3X higher on their most recent processor because TSMC's 3nm is total ******* and provides almost no PPA gains over their N4P node. It's more proof that design doesn't matter and it's all in the node. To get any form of performance gain, Apple had to move further right on the PPW curve to the inefficient side (where the derivative < 1), which is why you're seeing such terrible PPW on the M3 and A17 Pro when it does anything other than idle. You also notice the battery life + battery health complaints on the iPhone 15 Pro? That's because Apple moved to the inefficient side of TSMC's PPW curve (more heat and more watts).
Usually Apple gets first dibs on the best technology from their suppliers, but this backfired on 3nm because TSMC messed that node up badly. The gains on N3B are so minimal compared to N4P that Apple had no choice but to play on the right side of the PPW curve, or they'd risk getting no performance gains over last gen's chips. That would've been a marketing and sales disaster.
Calling Intel's process 10nm is arguing about semantics... but is also wrong. They're currently producing the old Intel "7nm", which is now called "Intel 4". The old 10nmSF is now called Intel 7, and that's been up for a while now. You can remark snidely on their need to rename to keep up appearances, and you'd be right, but it's also true that the old names were less dishonest than the names used by other foundries (TSMC, Samsung, etc.). There is no feature in "3nm" chips that gets anywhere near to being 3nm in actual size. Intel 4 is roughly equivalent to TSMC N4, so if you're going to accept one name you should accept the other.

Intel and AMD are on older nodes. Intel is on 10nm and about to go down to 7nm while AMD is still on 4-5nm.
The 3nm lineup is FinFlex, so there are manufacturing improvements with each generation. Normally how it works is that you have a base manufacturing process (1st-gen N3B) and each subsequent generation (N3E, N3P, N3X, etc.) is a slightly modified/improved manufacturing process that gives you some PPA improvement, though at a smaller gain than a full node jump.
Chipmaking is a lucrative sector. I don't downplay the manufacturing aspect. I only say the "designing" part that Apple, Qualcomm, AMD, etc. do is child's play and an intellectual joke.
N3 variants (not "generations") (E, P, X, etc.) are indeed smaller changes, but not all of them improve PPA. For example, X is about high power applications, and will likely relax some design rules... which is fine, because such designs can't go that dense anyway.
Calling design "child's play and an intellectual joke" demonstrates complete ignorance, and probably psychological issues I'm not qualified to diagnose.
Apple provides large sales volume. That’s about it. The actual designing part is pretty easy and trivial.
We can see from how Apple gave up on microLED and the car that they just suck at engineering. Their strength is in marketing and branding. Tim Cook knows this, which is why he’s pivoting away from engineering and leaving that to their higher-IQ suppliers.
Apple will focus on DEI, affirmative action, social justice, marketing, political activism and other activities that increase their social clout to get higher sales.
...and now it starts to become clear why this person is so dismissive of Apple. The DEI etc. comment makes it clear that engineering isn't motivating these many posts, but rather politics. Which I could really stand NOT to have to hear about for five frickin' minutes out of my day, please.
Do you have any semiconductor engineering experience? (Programming doesn’t count.)
You have no background in this topic and your opinion is irrelevant.
No engineer is going to care if someone not educated in his field of expertise believes in science or not.
Wow. Pot, meet kettle. Take some classes, then come back here.
Seems like a Windows issue, as it appears to perform properly on Android and "other platforms". From the link:
Firefox literally sucks on Android. I don’t even know or understand why it exists. It still has no support for add-ons (e.g. adblockers)
They sucked. They made an inferior product compared to Apple's M chips. And they have more or less figured it out, and are catching up. Intel and AMD will figure it out as well, and make inroads for sure. But in the end, Microsoft will have to make a choice: stick with 2 different architectures (ARM and x86-64), which I can't see happening forever, or one of these chips will win out. But how long it takes for that to play out is anyone's guess.

Competition is good, so if true, that’s cool.
And a 45% increase over the previous generation would put this at least in striking distance of the Geekbench scores of Apple’s mobile CPUs.
There are two kind-of mind-blowing aspects to this, though:
One is a 45% jump in performance. In this era of CPU development, that’s absolutely massive, and extremely rare unless either your previous design kind of sucked, or your new one is revolutionary.
The other is that it took until this chip, with a 45% jump in performance, to even get close to where Apple is. The core performance of the A series vs Qualcomm’s was, until this generation, staggeringly unbalanced. As in, three-year-old Apple CPUs were directly competitive with Qualcomm’s top of the line.
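As a quick back-of-the-envelope check of that 45% figure, here is a small sketch (Python) using the Galaxy S24 Ultra Geekbench numbers quoted earlier in the thread as the previous-generation baseline. Treating those scores as the baseline is my assumption; nothing below is a measured result for the new chip:

```python
# Back-of-the-envelope projection of the claimed ~45% generational uplift,
# using the S24 Ultra Geekbench scores quoted earlier in this thread as the
# previous-generation baseline (an assumption, not a measurement).
baseline = {"single": 2139, "multi": 6684}  # scores quoted above for the S24 Ultra
uplift = 0.45                               # claimed generational gain

for test, score in baseline.items():
    projected = score * (1 + uplift)
    print(f"{test:6s}: {score} -> ~{projected:.0f} projected")
# single: 2139 -> ~3102, multi: 6684 -> ~9692, which is why a 45% jump would
# put the new part "at least in striking distance" of Apple's mobile CPU scores.
```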
Not really... they are catching up, much like Zeno's Achilles is "catching up" to the tortoise.

They sucked. They made an inferior product compared to Apple's M chips. And they have more or less figured it out, and are catching up.
Resolved via software, so not likely the chips.

so do Apple's chips... all the overheating issues with the A17 Pro...
Exynos (Samsung) tends to be further behind than Snapdragon (Qualcomm), especially on process node.

This is interesting. In the recent past, Samsung processors, even after the new processor would be announced, were comparable to iPhones which were 2 generations behind. So when the iPhone 14 Pro came out, the Samsung which came out after the iPhone had comparable specs to the iPhone 12 Pro Max. Or I guess you can compare it to the iPhone 13 non-Pro (as that was basically an iPhone 12 Pro), but Samsung processors were not comparable to the current-generation top-end iPhone.
Phones with this chip come with an optional oven mitt 😁

Ok, what about that heat dissipation, sounds like it will get pretty toasty...