
High IQ Person

macrumors member
Dec 31, 2022
96
60
I said I'm not convinced by your argument; I'm not sure what is unclear about that. Any child would understand what "not being convinced" means.
You have no background in this topic and your opinion is irrelevant.

No engineer is going to care if someone not educated in his field of expertise believes in science or not.
 

romulopb

macrumors newbie
Apr 9, 2024
13
3
You have no background in this topic and your opinion is irrelevant.

No engineer is going to care if someone not educated in his field of expertise believes in science or not.
Great, now you think I am in search of an engineer's validation... What people do nowadays when others simply decide to disagree is almost unbelievable. 😀 Who cares... Ah, yes, you.
 

High IQ Person

macrumors member
Dec 31, 2022
96
60
Great, now you think I am in search of an engineer's validation... What people do nowadays when others simply decide to disagree is almost unbelievable. 😀 Who cares... Ah, yes, you.
Apple sucks at engineering and tech development. They're good at marketing, branding and generating large sales volume.
 
  • Disagree
Reactions: javisan

High IQ Person

macrumors member
Dec 31, 2022
96
60
All of which would be meaningless without actually having a great product to walk the talk with. Or are things like Apple silicon just pure marketing hype?
Don’t get me wrong. How Apple built its cult-like fanbase is amazing, but I’m willing to bet it was more luck-based than anything else.
 
  • Haha
Reactions: mazz0

Abazigal

Contributor
Jul 18, 2011
19,793
22,420
Singapore
Don’t get me wrong. How Apple built its cult-like fanbase is amazing, but I’m willing to bet it was more luck-based than anything else.
Perhaps, but Apple also had to be able to capitalise on that luck and opportunity when it presented itself, and then sustain that momentum and success year after year after year. Nor do I see marketing and advertising as a "dirty word" because its purpose is simply to bring attention to a product said company is selling. At the end of the day, the buyer decides what works best for them and the truth is - plenty of people buy Apple products because that's really what works best for them and again, credit goes back to Apple for making great experiences which people are willing to pay a premium for.
 
  • Like
Reactions: Michael Scrip

romulopb

macrumors newbie
Apr 9, 2024
13
3
Perhaps, but Apple also had to be able to capitalise on that luck and opportunity when it presented itself, and then sustain that momentum and success year after year after year. Nor do I see marketing and advertising as a "dirty word" because its purpose is simply to bring attention to a product said company is selling. At the end of the day, the buyer decides what works best for them and the truth is - plenty of people buy Apple products because that's really what works best for them and again, credit goes back to Apple for making great experiences which people are willing to pay a premium for.
Nooo, please... Don't break people's dreams 😭 Are you saying a child has been trolling all the engineers in the world for so many years?! Even making some come to Mac forums to vent?! NOOOOO!
 
  • Haha
Reactions: NT1440 and Abazigal

High IQ Person

macrumors member
Dec 31, 2022
96
60
Perhaps, but Apple also had to be able to capitalise on that luck and opportunity when it presented itself, and then sustain that momentum and success year after year after year. Nor do I see marketing and advertising as a "dirty word" because its purpose is simply to bring attention to a product said company is selling. At the end of the day, the buyer decides what works best for them and the truth is - plenty of people buy Apple products because that's really what works best for them and again, credit goes back to Apple for making great experiences which people are willing to pay a premium for.
We can see from the disaster that is the Vision Pro that Apple lucked their way to the top.
 

Kan-O-Z

macrumors 6502
Aug 3, 2007
305
2
It feels like you are just being an Apple spokesperson here and ignoring a much larger context. Just because Apple focuses on performance per watt does not mean that everyone wants a computer whose focus is that. If that were the entirety of how a computer is judged, then "who cares" makes sense. But if there are more factors than that, then "who cares" is just a dismissal.

Also, any other company that wants to sell a laptop has no choice other than to look to solutions that aren't Apple, since Apple won't sell their chips to anyone. Therefore everyone else will focus on building the best machines they can. If they can't put together a laptop with a processing unit that is as efficient as Apple's, should they just not build computers anymore? Or perhaps they can distinguish themselves in other ways for users who don't want / can't get an Apple laptop for the hundreds of reasons that make a lot of sense?
It actually matters, and it matters a lot for portable lightweight laptops. If it can't match the 18-20 hours of battery life of the M3, it matters. If it runs hot, the chips start throttling, fans spin up, and the battery drains even faster, then it matters. If it cooks your lap, it matters. This is exactly why Intel is losing out.

Now if we are talking about desktop computers that plug into the wall, then yes, that changes things.
 

romulopb

macrumors newbie
Apr 9, 2024
13
3
20+ years is a very long time to be lucky, that's all I am saying.
Don't be silly... 20+ years is the minimum amount of time a child who knows how to design chips can be lucky 🧐 You know why? Because it is certified by the official engineer from this thread.
 
  • Like
Reactions: mazz0
It actually matters, and it matters a lot for portable lightweight laptops. If it can't match the 18-20 hours of battery life of the M3, it matters. If it runs hot, the chips start throttling, fans spin up, and the battery drains even faster, then it matters. If it cooks your lap, it matters. This is exactly why Intel is losing out.

Now if we are talking about desktop computers that plug into the wall, then yes, that changes things.
But I never said it didn't matter; those are important considerations. What I said was that those aren't the ONLY considerations and use cases for why every person and business on the planet purchases a laptop. The OP literally said "who cares" to anything other than the singular focus. That was the response, and not only does everything you list matter, but other use cases also matter.
 

coredev

macrumors 6502a
Sep 26, 2012
577
1,230
Bavaria
Apple can only continue to charge a premium if they innovate. Otherwise there’s no reason for people to pay a premium for inferior hardware, a 20 year old design, and a stale OS.
How do you figure Apple's hardware is inferior?
The Apple Silicon chips in particular are leading the pack, aren't they? Why else would Microsoft feel the need to claim that a new Snapdragon-based laptop is faster than Apple's entry-level MacBook?
As for the "20 year old design" and "stale OS", I'm not sure where you get that from.
It is a matter of taste what one considers stale and old.
 

Confused-User

macrumors 6502a
Oct 14, 2014
596
655
Yet the speedup from M1 to M3 is almost entirely because of clock increase from 3.2 GHz to 4.05 GHz. Almost no IPC improvement for M series while AMD, Intel and Qualcomm have all increased both frequency and IPC. Would you agree?
I wouldn't answer that question with a simple yes or no, because you are selectively picking your (correct) facts to make a point (in your original post, not here) which is false.

Yes, IPC on Mx hasn't gained much. Yes, all the others have gained significant IPC. But why is that?

It's because their IPC was crap. And it's still not close to Apple's. And gaining IPC gets harder the better it already is, so getting to 80% of Apple's IPC is no big deal these days, but getting to 90% takes real work, and getting to 100%... well, nobody's managed that so far. Except Apple.

For Apple to be able to maintain IPC while driving up clocks by 25% would be a significant accomplishment on its own - and in fact, IPC is up by some 3%, give or take. And it was done (and almost certainly, only possible) by redesigning the cores. So claiming that Apple is standing still is ignorant nonsense. Sure, I wish they'd done more. But what they did is *impressive*. To recap:
Single-core - drove up clocks 25% from M1 to M3 while gaining a little IPC, and ALSO while *increasing* efficiency. Though the latter is probably mostly (NOT entirely) process improvements, the former required a significant redesign.
Multi-core - this is even more impressive. On the Max, just from M2 to M3, scaling efficiency is fantastic, losing very little while bumping up P cores by 50% (along with clocks). Comparisons to the M2 Ultra would make the M3 Max look even more impressive, but that would be an error due to Ultrafusion, which necessarily imposes a cost.

I have high hopes for the M4. I expect to be partially disappointed. It's still going to kick ass, and for most applications that don't require performance at any price (or Windows), it will likely still be the best chip by a significant margin. ...from an engineering standpoint. From a consumer's, it will depend on relative pricing of course, along with many other things that vary from person to person.
 

Confused-User

macrumors 6502a
Oct 14, 2014
596
655
Not sure energy consumption matters.

For example, computer A runs 100 watts and takes 2 seconds to complete a task, it has consumed 200 watts.

Then computer B runs the same task in 1 second but consumes 200 watts, it also only consumed 200 watts.

You are not saving the world or preserving energy with computer A, just delaying the time for the user to complete the task and costing more time and therefore more salary.

This absolute focus on low power is an Apple marketing strategy and market propaganda, unless you never need performance. Which is why Apple targets teenagers, 20-somethings, and other low performance users, and not professional users.
Your fundamental premise is correct (unit errors aside, which are egregious). Your application of it completely fails, because the numbers matter. You really are "preserving" a lot of energy using M chips.

The base M3 is faster than many x86 chips, and slower than others. But it doesn't consume half the energy while doing half the work. It's more like a quarter of the energy, for roughly the same work. (And yes, I'm pulling those numbers out of my ass, because it's unlikely that anyone on this planet can give you average power and performance figures for "all PC laptops" or "all PCs", but they are in fact much closer to accurate than yours.)

For general CPU tasks, the M series chips are the most efficient that exist. Better than Intel, better than AMD, and (according to all published benchmarks so far) better than this new Snapdragon. There are certainly tasks for which more raw power is a worthwhile choice, but most people don't do those tasks, except for one (playing games). Even many professional users.

To understand the difference between power and energy, see for example https://en.wikipedia.org/wiki/Watt#Distinction_between_watts_and_watt-hours
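The watts-versus-joules point is easy to see in a few lines. A minimal sketch: the 100 W / 200 W machines are the quoted post's own hypotheticals, and the 60 W / 15 W laptop figures are purely illustrative, not measurements of any real chip:

```python
# Energy, not power, is what a battery (or an electricity bill) pays for:
# energy (joules) = power (watts) * time (seconds).
def energy_joules(power_watts: float, seconds: float) -> float:
    return power_watts * seconds

# The two hypothetical machines from the quoted post:
a = energy_joules(100, 2)  # 100 W for 2 s -> 200 joules (not "200 watts")
b = energy_joules(200, 1)  # 200 W for 1 s -> also 200 joules
print(a, b)  # 200 200

# Illustrative numbers only: a chip doing the same work in the same time
# at a quarter of the power consumes a quarter of the energy.
x86_run = energy_joules(60, 10)  # 600 J
m3_run = energy_joules(15, 10)   # 150 J
print(m3_run / x86_run)  # 0.25
```

The two hypothetical machines really are a wash, but that's only because the numbers were chosen to cancel out; with realistic figures the energy gap is large.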

I hate to have created an account just for this, but the poster just throws out this line "the Snapdragon X Elite will likely run hotter and require laptops with fans". Where did that come from? It isn't in the article from The Verge. One of the sources says that there will be an 80W variant that requires a fan. That's no surprise. But this line appears to be entirely unsourced. Where did this come from? Is this just a random guess to fill out the post, or did I miss something?
Most posters here don't bother to read the article in full, much less comments with more useful info (like mine and a few others). It's just nonsense.

The chip can run at 20w or 80w or anything in between (and probably a bit higher, we'll see). At the low power, it loses to M3 at everything. At the high power, it's roughly like the M3 Pro, which is a pretty poor showing as the M3 Pro has half the P cores, along with 6 E cores which are roughly equal to 2 P cores. So it uses more power than the M3, and ~50% more cores (where 3E = 1P) to get to roughly the same place.

I do expect that Apple will use the latest ARM CPU core and the newest GPU core as the basis for the M4 SoC. That plus the proper optimization for MacOS could still mean a performance gap over Qualcomm's new Snapdragon X Elite SoC, given we don't know how well Microsoft will take full advantage of the Qualcomm SoC hardware registers in the ARM version of Windows 11 (and eventually 12).
That's a joke, right? I mean, you don't really think Apple uses ARM's cores, do you?

In any case, there's already a performance gap to the M3 (or M3 Pro, depending on power profile) - though as I've said a couple times before, talking about performance without looking at the price is meaningless. If MS sells this thing at the same price as an Air, then it will be compared to the Air.

I think that holds true for literally every non-niche processor out there, no? Apple's single core was the fastest on the market last I looked into it (it's been a while).

No, that was only true for a little while when the M1 came out, IIRC. The Intel cores running at 6+GHz are somewhat faster than M3 cores running at 4.05GHz. Not even close to 50% faster, because their IPC is poor compared to the M3. And they consume... I don't recall, but several times the power the M3 uses. I believe AMD's fastest cores are close to Intel's and therefore also faster than Apple's, though again with worse IPC, and much worse efficiency, though notably better than Intel's.
 

Confused-User

macrumors 6502a
Oct 14, 2014
596
655
Apple hasn't been picking up all of the latest ISA updates. For example SVE2, nested virtual machines, etc.
At this point Apple is incrementally drifting away from the ARM ISA. The current intersection is high enough that stuff like Windows 11 (and its apps) still runs, but 5-10 years down the road I wouldn't bet the farm that the contemporary update will.

This is a very interesting point! I suspect that you're not quite right though - I think that as more custom-designed cores (like QC's) come out, you're going to see them target different levels of the ISA. So I am pretty sure contemporary Windows will run on M9 (or whatever) just fine, and so will most apps, though a few may not.

I suspect the "faster than Rosetta 2" claim is going to be overstated and/or cherry-picked. What Microsoft has is more flexible, which should lead to more corner cases where they can 'win'. If those happen to be cases that align with some hardcore Windows apps users, then it would be a win-win demonstration. It is going to boil down to a better Windows outcome (not a better-at-running-Mac-apps outcome).

This is still great for Mac users who have to run Windows under Parallels/VMWare/whatever. Their windows VMs will benefit from this. So it's a win all around, even if it turns out not to be as fast as Rosetta at some/all things.
 

chucker23n1

macrumors G3
Dec 7, 2014
8,609
11,421
For Apple to be able to maintain IPC while driving up clocks by 25% is a significant accomplishment

OK, but the clock on the M3 is already 4.1 GHz. The M2 was up 9.4% on the M1 (ignoring the M2 Max), then the M3 another 17%.

This approach isn't sustainable. Within two or three generations, they'd be at 5 GHz.
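The extrapolation is easy to check using the clock figures from this thread (3.2 GHz M1, 4.05 GHz M3). The per-generation rate below is just a geometric mean, and the projected M4/M5/M6 clocks are a naive extrapolation, not predictions:

```python
# Compound clock growth from M1 (3.2 GHz) to M3 (4.05 GHz).
m1, m3 = 3.2, 4.05
per_gen = (m3 / m1) ** 0.5  # geometric-mean growth per generation
print(round(per_gen, 3))  # 1.125, i.e. ~12.5% per generation

# Naive extrapolation: at this rate, 5 GHz arrives around the M5.
clock = m3
for gen in ("M4", "M5", "M6"):
    clock *= per_gen
    print(gen, round(clock, 2))  # M4 4.56, M5 5.13, M6 5.77
```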
 
  • Like
Reactions: wanha

wanha

macrumors 68000
Oct 30, 2020
1,560
4,504
Marketing 101. I am guessing you have watched Apple events where new products are announced. The same dog and pony show. Every manufacturer is going to do what is necessary to make their product appear to be the best.

Yes, but there is a way to do this where your claims are at least somewhat credible and not out-and-out lies.

Over the past 12 years or so, Microsoft have promised amazing performance with their ARM-powered Windows laptops but those promises simply haven't panned out.

Fool me once, shame on you...
 

akbarali.ch

macrumors 6502a
May 4, 2011
805
695
Mumbai (India)
It seems "the writing is on the wall" for Intel as more and more MFG's move to ARM based systems
Not going to happen soon, or might not happen at all. Windows is a mammoth not because it's an operating system by Microsoft but because of the third-party hardware and software it carries on its shoulders. It will be interesting to see if those devs, on both the hardware and software sides, are willing to make a version for ARM. It's a huge amount of resources to pour in. Apple pulled it off because there was a clear dead end on the Intel Mac road map. The devs had to change lanes - no option. That compulsion is not there on the Windows side, unless they too put up a visible dead end.
 

Confused-User

macrumors 6502a
Oct 14, 2014
596
655
Holy cow. There's been a ton of ignorance and nonsense in this thread (unsurprisingly), but this... is next-level. I thought the poster was joking at first (the username is hilarious), but their arguments with others here who know a bit more show that they're serious. I'm not going to respond to everything - it's not worth engaging with them, as shown by others who have already tried. But for the benefit of anyone reading them here are some corrections.

Performance and efficiency curve is set by the node, not “design”. Apple “design” is mostly a marketing stunt. There’s actually very minimal or no benefit to the end user except making them think they’re getting a super special chip. The most important, hardest and intellectual part comes from manufacturing, not “design”.

Possibly the dumbest comment ever posted on MR. (Ok, maybe not, that's a very high bar!) That "curve" doesn't exist in a vacuum. The notion that the chip design is meaningless is ... more wrong than mustard on ice cream. It's laughable. For a simple proof of the sheer stupidity of it, consider two different core designs on the SAME process: Apple's P and E cores. There's roughly a factor of 3 difference in performance. Or look at Intel's P & E cores - the difference is even larger. Naturally, in both cases, the P cores are a LOT larger. Design with more transistors, you can get a faster core. Pretty basic.

You could also compare Apple's older N7 cores (A12 or A13) with another vendor's N7 core. The differences are stark.

Lastly, as I mentioned in a previous post, design will determine the highest clock you can run a chip at. In the language of the P-E curve, the curve doesn't extend forever. It cuts off at a certain point, beyond which more power won't get you any more performance, because the design is literally not capable of it.

It’s 99.9% the node.

The design part only matters because there’s only a limited space on the die, so you have to decide how much space you want to apportion to the CPU, GPU, etc. Adding more CPU cores, for example, will improve performance but it’s not going to change the PPW. That comes from the node.

You also have to consider yield and pricing issues if you make your SoC too big.

Designing chips is an economics game or deciding where in the yield-cost curve you want to land on. It’s not a technical challenge.

There’s a point on the PPW curve where increasing performance causes a disproportionate increase in wattage. Whether Qualcomm wants to play in this area is a design choice, but it won’t be hard for them to tone it down and play on the more efficient part of the PPE curve. It’s as simple as ordering pizza.

Nearly everything above is wrong. The two parts that are correct are:
1) Yield and pricing do matter, and are a direct consequence of area
2) The PPW curve is generally as stated. QC *is* playing in both "areas" to some extent already, by selling the chip as useful at both 20ish and 80ish W.

wrong. This is common knowledge to anyone with knowledge in semiconductors. Every time fabs announce a new node, they announce performance and efficiency gains compared to the last node. Where do you think they’re getting these figures from? They’re from the derivative of performance over wattage = 1 (The inflection point on the node’s PPW curve where it becomes less advantageous to increase wattage to increase performance).

Designing a chip using a fab’s node is picking where on the PPW curve you want to be in. You cannot alter the position of the PPW curve by “designing” a chip. Based on history, Apple likes being on the left side of the curve where performance goes up disproportionately with wattage. Qualcomm can easily match Apple if they wanted to, but they’re probably aiming for the power users and will settle on the other end of the curve where you get marginal performance gains with more wattage.
Again, this is a DESIGN choice that a 3-year-old can make. There’s nothing sophisticated about chip design.
This is 99.9% wrong. The flimflam about P-E curves in the first paragraph is irrelevant to the second, and in any case incorrect - when a single area-reduction number is quoted, it's for a "typical" mix of logic, SRAM, and analog, which mix is chosen by the foundry, usually derived from an actual chip design. If you look in more detail, they'll quote specific numbers for each of those. For example, TSMC quoted area improvements of 1.7x for logic going from N5 to N3, but only 1.2x for SRAM and 1.1x for analog. (And it turned out the SRAM improvement wasn't nearly that good, in the end.)
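Those mixed scaling factors are why a single "X% smaller" headline number can mislead. A toy calculation with the N5-to-N3 factors quoted above; the 60/30/10 logic/SRAM/analog mix is an assumed example, not any real chip's floorplan:

```python
# Effective full-die shrink from N5 to N3, using per-block scaling factors
# (logic 1.7x, SRAM 1.2x, analog 1.1x) and an assumed area mix.
shrink = {"logic": 1.7, "sram": 1.2, "analog": 1.1}
mix = {"logic": 0.60, "sram": 0.30, "analog": 0.10}  # fractions of old die

# New area per unit of old area: each block's fraction divided by its shrink.
new_area = sum(mix[b] / shrink[b] for b in mix)
print(round(1 / new_area, 2))  # ~1.44x overall, well below the 1.7x headline
```

The more SRAM and analog a design carries, the further the real shrink falls below the logic-only headline number.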

As for the choice of where you want to be on the curve... you just choose. You run your design at a higher or lower power (or equivalently, clocks), and that determines where you are on the curve.

HOWEVER, that's not *really* true, because - as I already mentioned above, and at greater length in previous posts - the design has a major impact on how fast you can actually run your core (and your uncore, but let's not get too far into the weeds). It will also have a particular part of the frequency curve where you get the best efficiency, which is entirely dependent on the design. So yes, you can pick your clock, but your design constrains you.
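The interaction between the curve and a design's clock ceiling can be sketched with a toy model. The square-root shape and the 60 W ceiling below are invented for illustration; no real chip is being modeled:

```python
# Toy performance/power model: sublinear returns along the curve, plus a
# hard ceiling imposed by the design itself.
def perf(power_w: float, ceiling_w: float = 60.0) -> float:
    p = min(power_w, ceiling_w)  # past the ceiling, more watts buy nothing
    return p ** 0.5              # square root: diminishing returns

for w in (10, 20, 40, 60, 80):
    print(w, round(perf(w), 2))
# 10->20 W gains ~41% performance, 40->60 W only ~22%,
# and 60->80 W gains nothing at all: the design caps out.
```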

They're not. See pic below.

The Wattage is 2-3X higher under their most recent processor because TSMC's 3nm is total ******* and provides almost no PPA gains from their N4P node. It's another proof that design doesn't matter and it's all in the node. To get any form of performance gain, Apple had to move further right in the PPW curve to the inefficient side (Where derivative < 1) which is why you're seeing such terrible PPW on the M3 and A17 Pro when it does anything other than idle. You also notice the battery life + battery health complaints on the iPhone 15 pro? That's because Apple moved to the inefficient side of TSMC's PPW curve (More heat and more watts).

Usually Apple gets first dibs on the best technology from their suppliers, but this backfired on 3nm because TSMC messed that node up badly. The gains on N3B are extremely minimal compared to N4P that Apple had no choice but to play on the right-side of the PPW curve or they risk getting no performance gains from last gen chips. That would've been a marketing and sales disaster.
Yeah, this is all garbage. A bunch of people with short fuses got the idea that N3 was bad when it first came out, and all sorts of nonsense was published. As it turns out, N3 seems to have landed where it was supposed to. The one slightly unexpected shortcoming, as I mentioned earlier, was that SRAM cells only shrank about 5% compared to N5. There were also big concerns about yield at the start. I don't think anyone who actually knows about this is telling, but the general consensus seems to be that it's fine, and within the limits of the info presented in their financial statements, that appears to be true.

Intel and AMD are on older nodes. Intel is on 10nm and about to go down to 7nm while AMD is still on 4-5nm.

The 3nm lineup is FinFlex, so there are manufacturing improvements with each generation. Normally how it works is that you have a manufacturing base process (1st gen N3B) and each subsequent generation (N3E, N3P, N3X, etc.) is a slightly modified/improved manufacturing process that gives you some PPA improvement though at a smaller gain than a full node jump.

Chipmaking is a lucrative sector. I don't downplay the manufacturing aspect. I only say the "designing" part that Apple, Qualcomm, AMD, etc. do is child's play and an intellectual joke.
Calling Intel's process 10nm is arguing about semantics... but is also wrong. They're currently producing the old Intel "7nm", which is now called "Intel 4". The old 10nmSF is now called Intel 7, and that's been up for a while now. You can remark snidely on their need to rename to keep up appearances, and you'd be right, but it's also true that the old names were less dishonest than the names used by other foundries (TSMC, Samsung, etc.). There is no feature in "3nm" chips that gets anywhere near to being 3nm in actual size. Intel 4 is roughly equivalent to TSMC N4, so if you're going to accept one name you should accept the other.

N3 variants (not "generations") (E, P, X, etc.) are indeed smaller changes, but not all of them improve PPA. For example, X is about high power applications, and will likely relax some design rules... which is fine, because such designs can't go that dense anyway.

Calling design "child's play and an intellectual joke" demonstrates complete ignorance, and probably psychological issues I'm not qualified to diagnose.

Apple provides large sales volume. That’s about it. The actual designing part is pretty easy and trivial.

We can see how Apple gave up on microLED and the car that they just suck at engineering. Their strength is in marketing and branding. Tim Cook knows this, which is why he’s pivoting away from engineering and leaving that to their higher IQ suppliers.

Apple will focus on DEI, affirmative action, social justice, marketing political activism and other activities that increase their social clout to get higher sales.

...and now it starts to become clear why this person is so dismissive of Apple. The DEI etc. comment makes it clear that engineering isn't motivating these many posts, but rather politics. Which I could really stand NOT to have to hear about for five frickin' minutes out of my day, please.

Do you have any semiconductor engineering experience (Programming doesn’t count)

You have no background in this topic and your opinion is irrelevant.

No engineer is going to care if someone not educated in his field of expertise believes in science or not.

Wow. Pot, meet kettle. Take some classes, then come back here.
 
Last edited:

Confused-User

macrumors 6502a
Oct 14, 2014
596
655
OK, but the clock on the M3 is already 4.1 GHz. Up 9.4% on the M2 (ignoring the M2 Max), then another 17% on the M3.

This approach isn't sustainable. Within two or three generations, they'd be at 5 GHz.
Of course. And perhaps they'll actually do that for their desktops (I doubt it, but it's not impossible).

No single approach is endlessly sustainable. One year you improve the NoC, another you work on the cache hierarchy, then you tackle the OoO resources, then you're onto prefetchers and branch prediction... etc. Except you tend to do a few of those at once. Everyone thinks more special-purpose engines will be a big factor in the future. Focusing on one approach would be a recipe for failure. You work on something until better ideas come around, then you work on those.

The question is always, what's the smartest way to spend engineering time, transistors, and energy? (Notice those all come down to money in different ways). And slightly more subtly, what is "smartest" in this context?

They decided that for the M3, the smartest way involved allowing for higher clocks. It also involved a LOT of other stuff, in other core types, in the NoC, and the rest of it all- but that alone was a substantial investment in engineering time.

When the M4 comes around, who can say what they'll focus on? We won't know until we see it.
 
  • Like
Reactions: jdb8167

chucker23n1

macrumors G3
Dec 7, 2014
8,609
11,421
Of course. And perhaps they'll actually do that for their desktops (I doubt it, but it's not impossible).

No single approach is endlessly sustainable. One year you improve the NoC, another you work on the cache hierarchy, then you tackle the OoO resources, then you're onto prefetchers and branch prediction... etc. Except you tend to do a few of those at once. Everyone thinks more special-purpose engines will be a big factor in the future. Focusing on one approach would be a recipe for failure. You work on something until better ideas come around, then you work on those.

The question is always, what's the smartest way to spend engineering time, transistors, and energy? (Notice those all come down to money in different ways). And slightly more subtly, what is "smartest" in this context?

They decided that for the M3, the smartest way involved allowing for higher clocks. It also involved a LOT of other stuff, in other core types, in the NoC, and the rest of it all- but that alone was a substantial investment in engineering time.

When the M4 comes around, who can say what they'll focus on? We won't know until we see it.

Right. I guess I'm just a bit surprised they went (mostly) for clock for the second time in a row.
 