> Right. I guess I'm just a bit surprised they went (mostly) for clock for the second time in a row.

But that's really not all they did. At their M2-level IPC, a 3% gain (or 4-5%, depending) isn't negligible, though obviously I'd like more. But beyond that, there were a ton of changes. The most obvious ones, aside from redesigning the core to allow for higher clocks, were:
> And it supports three external 4K displays.

Just been on a two-week trip around the East Coast. In all the cafes and hotels I was in, and on all the trains and flights, I didn't see anyone using more screens than the one on their laptop.
Apple put so much focus on M3 ray tracing, they didn’t leave enough silicon for CPU performance.
> Yes, but there is a way to do this where your claims are at least somewhat credible and not out-and-out lies.

The FTC and SEC (not the sports one) would be all over, with significant penalties, any vendor who outright lied. All vendors do the same thing: they slant tests, descriptions, etc. in a direction that makes them look good. Auto makers, drug companies, flashlight companies all bend the truth. But none of them actually makes false statements or lies. Yes, even Apple does the same thing. Theranos tried, and failed. Liars will get caught.
> Personally, I think designing manufacturing processes is child's play.

Holy cow. There's been a ton of ignorance and nonsense in this thread (unsurprisingly), but this... is next-level. I thought the poster was joking at first (the username is hilarious), but their arguments with others here who know a bit more show that they're serious. I'm not going to respond to everything; it's not worth engaging with them, as others who have already tried have shown. But for the benefit of anyone reading, here are some corrections.
Possibly the dumbest comment ever posted on MR. (Ok, maybe not, that's a very high bar!) That "curve" doesn't exist in a vacuum. The notion that the chip design is meaningless is ... more wrong than mustard on ice cream. It's laughable. For a simple proof of the sheer stupidity of it, consider two different core designs on the SAME process: Apple's P and E cores. There's roughly a factor of 3 difference in performance. Or look at Intel's P & E cores - the difference is even larger. Naturally, in both cases, the P cores are a LOT larger. Design with more transistors, you can get a faster core. Pretty basic.
You could also compare Apple's older N7 cores (A12 or A13) with another vendor's N7 core. The differences are stark.
Lastly, as I mentioned in a previous post, design will determine the highest clock you can run a chip at. In the language of the P-E curve, the curve doesn't extend forever. It cuts off at a certain point, beyond which more power won't get you any more performance, because the design is literally not capable of it.
Nearly everything above is wrong. The two parts that are correct are:
1) Yield and pricing do matter, and are a direct consequence of area
2) The PPW curve is generally as stated. QC *is* already playing in both "areas" to some extent, by selling the chip as useful at both roughly 20 W and roughly 80 W.
This is 99.9% wrong. The flimflam about P-E curves in the first paragraph is irrelevant to the second, and in any case incorrect - when a single area-reduction number is quoted, it's for a "typical" mix of logic, SRAM, and analog, which mix is chosen by the foundry, usually derived from an actual chip design. If you look in more detail, they'll quote specific numbers for each of those. For example, TSMC quoted area improvements of 1.7x for logic going from N5 to N3, but only 1.2x for SRAM and 1.1x for analog. (And it turned out the SRAM improvement wasn't nearly that good, in the end.)
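To see how those per-block numbers blend into one headline figure, here's a toy calculation. The scaling factors are the N5→N3 figures quoted above; the die mix is a made-up example, not any real chip:

```python
# Why a single "area scaling" number is a blend: each block type shrinks
# differently. Scaling factors are the quoted N5 -> N3 figures (~1.7x logic,
# ~1.2x SRAM, ~1.1x analog); the die mix below is a hypothetical example.
logic_x, sram_x, analog_x = 1.7, 1.2, 1.1
mix = {"logic": 0.60, "sram": 0.30, "analog": 0.10}  # hypothetical area mix

# New area per unit of old area: each block's share divided by its shrink.
new_area = mix["logic"] / logic_x + mix["sram"] / sram_x + mix["analog"] / analog_x
print(f"blended shrink: {1 / new_area:.2f}x")  # → blended shrink: 1.44x
```

Note how the weak SRAM and analog scaling drags the blended number well below the 1.7x logic figure, which is why SRAM-heavy designs saw less benefit from N3.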
As for the choice of where you want to be on the curve... you just choose. You run your design at a higher or lower power (or equivalently, clocks), and that determines where you are on the curve.
HOWEVER, that's not *really* true, because, as I already mentioned above and at greater length in previous posts, the design has a major impact on how fast you can actually run your core (and your uncore, but let's not get too far into the weeds). It will also have a particular part of the frequency curve where you get the best efficiency, which is entirely dependent on the design. So yes, you can pick your clock, but your design constrains you.
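A toy sketch of that idea (all numbers invented, not any real core): performance tracks clock until the design's frequency wall, while power keeps climbing super-linearly because voltage has to rise with clock.

```python
# Toy model of a power/performance curve -- all numbers invented, not any
# real core. Dynamic power scales roughly with C * V^2 * f, and voltage has
# to rise with clock, so power grows super-linearly with f. Above the
# design's maximum clock, extra power buys no more performance: the curve
# simply cuts off.

F_MAX_GHZ = 4.0  # hypothetical design limit

def power_watts(f_ghz: float, c: float = 1.0) -> float:
    v = 0.6 + 0.15 * f_ghz          # assumed near-linear voltage/clock relationship
    return c * v**2 * f_ghz

def performance(f_ghz: float) -> float:
    return min(f_ghz, F_MAX_GHZ)    # perf tracks clock until the design limit

for f in (1.0, 2.0, 3.0, 4.0, 5.0):
    print(f"{f:.1f} GHz: perf={performance(f):.1f}, power={power_watts(f):.2f} W")
```

Past 4 GHz in this sketch, power keeps rising but performance flatlines, which is the "curve cuts off" point described above.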
Yeah, this is all garbage. A bunch of people with short fuses got the idea that N3 was bad when it first came out, and all sorts of nonsense was published. As it turns out, N3 seems to have landed where it was supposed to. The one slightly unexpected shortcoming, as I mentioned earlier, was that SRAM cells only shrank about 5% compared to N5. There were also big concerns about yield at the start. I don't think anyone who actually knows about this is telling, but the general consensus seems to be that it's fine, and within the limits of the info presented in their financial statements, that appears to be true.
Calling Intel's process 10nm is arguing about semantics... but is also wrong. They're currently producing on the old Intel "7nm" process, which is now called "Intel 4". The old 10nmSF is now called Intel 7, and that's been up for a while now. You can remark snidely on their need to rename to keep up appearances, and you'd be right, but it's also true that the old names were less dishonest than the names used by other foundries (TSMC, Samsung, etc.). There is no feature in "3nm" chips that gets anywhere near 3nm in actual size. Intel 4 is roughly equivalent to TSMC N4, so if you're going to accept one name you should accept the other.
N3 variants (not "generations") (E, P, X, etc.) are indeed smaller changes, but not all of them improve PPA. For example, X is about high power applications, and will likely relax some design rules... which is fine, because such designs can't go that dense anyway.
Calling design "child's play and an intellectual joke" demonstrates complete ignorance, and probably psychological issues I'm not qualified to diagnose.
...and now it starts to become clear why this person is so dismissive of Apple. The DEI etc. comment makes it clear that engineering isn't motivating these many posts, but rather politics. Which I could really stand NOT to have to hear about for five frickin' minutes out of my day, please.
Wow. Pot, meet kettle. Take some classes, then come back here.
> Your fundamental premise is correct (unit errors aside, which are egregious). Your application of it completely fails, because the numbers matter. You really are "preserving" a lot of energy using M chips.

Please explain. As an electrical engineer, I'm sure my calculations and unit selections are absolutely correct; the rest of your explanation is hijacking my comment using a different concept that is, as your user name implies, spreading confusion. There is nothing in my comments that has anything to do with watt-hours, which is a rate of energy consumption. My comments were about absolute energy consumption.
> Yeah, I don't think this is something we would see in the next few years, but I think by the end of the 2020s we'll see the visible end of the x86 architecture, outside of maybe servers in data centers.

Not going to happen soon, or might not happen at all. Windows is a mammoth not because it's an operating system by Microsoft, but because of the third-party hardware and software it carries on its shoulders. It will be interesting to see if those devs, on both the hardware and software side, are willing to make a version for ARM. It's a huge resource to pour in. Apple pulled it off because there was a clear dead end on the Intel Mac road map: the devs had to change lanes, no option. That compulsion is not there on the Windows side, unless they too put up a visible dead end.
Honestly, this is all speculation at this point. Of course, what Microsoft meant was that with the new emulation coming in the next version of Windows, the difference between native and emulated will be less than the difference between Rosetta and native ARM on the Mac. Of course nobody in the Apple world believes this, and personally I don't care if it's slightly better, similar, or slightly worse than Rosetta. If they really manage to improve the emulation to the point where emulated stuff on WoA with the Elite X is as fast as or faster than on Intel or AMD, with better battery life (so at a similar TDP), that's a big win in my book.
This is a very interesting point! I suspect that you're not quite right though - I think that as more custom-designed cores (like QC's) come out, you're going to see them target different levels of the ISA. So I am pretty sure contemporary Windows will run on M9 (or whatever) just fine, and so will most apps, though a few may not.
This is still great for Mac users who have to run Windows under Parallels/VMWare/whatever. Their windows VMs will benefit from this. So it's a win all around, even if it turns out not to be as fast as Rosetta at some/all things.
Well, yeah. Apple hasn’t improved their core much after so many years. And there are built in architecture limitations. They are dropping the ball big time…
Microsoft will advertise that its upcoming Windows laptops with Qualcomm's Snapdragon X Elite processor are faster than the MacBook Air with Apple's latest M3 chip, according to internal documents obtained by The Verge.
"Microsoft is so confident in these new Qualcomm chips that it's planning a number of demos that will show how these processors will be faster than an M3 MacBook Air for CPU tasks, AI acceleration, and even app emulation," the report says. Microsoft believes its laptops will offer "faster app emulation" than Apple's Rosetta 2.
Introduced in October, the Snapdragon X Elite has Arm-based architecture like Apple silicon. Qualcomm last year claimed that the processor achieved 21% faster multi-core CPU performance than the M3 chip, based on the Geekbench 6 benchmark tool.
There are a few caveats here, including that Microsoft and Qualcomm are comparing to Apple's lower-end M3 chip instead of its higher-end M3 Pro and M3 Max chips. MacBooks with Apple silicon also offer industry-leading performance-per-watt, while the Snapdragon X Elite will likely run hotter and require laptops with fans. Since being updated with the M1 chip in 2020, the MacBook Air has featured a fanless design. Apple can also optimize the performance of MacBooks since it controls both the hardware and macOS software.
Nevertheless, it is clear that Apple's competitors are making progress with Arm-based laptops. Microsoft plans to announce laptops powered by the Snapdragon X Elite later this year, including the Surface Pro 10 and Surface Laptop 6 on May 20.
Article Link: Microsoft Says Windows Laptops With Snapdragon X Elite Will Be Faster Than M3 MacBook Air
> It's because their IPC was crap. And it's still not close to Apple's. And gaining IPC gets harder the better it is to start with, so getting to 80% of Apple's IPC is no big deal these days, but getting to 90% takes real work, and getting to 100%... well, nobody's managed that so far. Except Apple.

You are talking like there is some theoretical maximum IPC Apple has almost reached after decades of development and has nowhere further to go, and others are catching up. That is not true; we are not living in some CPU development "End Times". I bet 10 years ago people also thought that getting 80% of the IPC of the fastest chips was easy but getting past 100% would be nearly impossible. And look where we are now…
> There is nothing in my comments that has anything to do with watt-hours, which is a rate of energy consumption. My comments were about absolute energy consumption.

Erm, no. A watt is a measure of power, i.e. a rate of energy consumption. A watt-hour is a unit of energy.
> The above is more speculation.

Microsoft has an x86-to-Arm binary converter. There is nothing to "speculate" there.
Even architecture implementers have to comply with the ISA to get certification from Arm. As implementers move to new versions (9.2, 9.5, 9.8, 10.2, 10.3, etc.), more features move from optional to mandatory. As long as the implementers keep moving along, they will stay aligned with the ISA.
The only architecture implementer avoiding SVE2 is Apple. The others don't have a problem.
> You are talking like there is some theoretical maximum IPC Apple has almost reached after decades of development and has nowhere further to go, and others are catching up. That is not true; we are not living in some CPU development "End Times". I bet 10 years ago people also thought that getting 80% of the IPC of the fastest chips was easy but getting past 100% would be nearly impossible. And look where we are now…

That was true(ish) in the days when silicon had multiple nodes left. We are actually starting to approach the end of the road for transistors based on silicon. At some point the ballooning costs (getting 3nm up cost almost double, in billions, what 5nm did, and 5nm was nearly double the cost of 7nm) will make further scaling *theoretically* possible, but cost-prohibitive for anyone to continue down this road. I expect a different process is going to be needed to go much further than the 1nm node, which really isn't that far away.
> Even architecture implementers have to comply with the ISA to get certification from Arm. As implementers move to new versions (9.2, 9.5, 9.8, 10.2, 10.3, etc.), more features move from optional to mandatory. As long as the implementers keep moving along, they will stay aligned with the ISA.

I agree with everything you're saying. What I'm suggesting is that the same market forces that put Apple where it is (no SVE2, for example) may well cause similar results in other companies' chip designs.

> The only architecture implementer avoiding SVE2 is Apple. The others don't have a problem.
The issue is that "AI PCs" need SIMD processing. Yeah, it is looking like Microsoft is going to put an NPU requirement on Windows going forward. Extremely likely, there are going to be Windows features that "fall back" to SVE2 over the long term. [Never mind that Apple's NPUs are entirely proprietary.]
The question is what happens if Apple never adopts SVE2 and it becomes mandatory in the ISA. The base ISA isn't going to change much, but SIMD/virtualization/required AI features are. Those are areas where Apple is already way off the "reservation".
Whenever we get to the point that Apple doesn't like the new additions to the ISA, Apple will likely just stop paying for access and make "old stuff". At some point Windows 11 will disappear. So will Windows 12. (Just like Windows 10, 8, etc. did.)
> You are talking like there is some theoretical maximum IPC Apple has almost reached after decades of development and has nowhere further to go, and others are catching up. That is not true; we are not living in some CPU development "End Times". I bet 10 years ago people also thought that getting 80% of the IPC of the fastest chips was easy but getting past 100% would be nearly impossible. And look where we are now…

I am not. I don't think anyone knows what the theoretical limits of IPC are. It's a massively complex problem, in part because it's not just about hardware design: the way software is written (and especially how compilers work, and now to some extent interpreters too) matters a great deal on a practical level. After all, IPC depends on the instruction mix. And of course nothing runs entirely out of cache, so the memory hierarchy is critical as well. And there's lots more.
> Please explain. As an electrical engineer, I'm sure my calculations and unit selections are absolutely correct; the rest of your explanation is hijacking my comment using a different concept that is, as your user name implies, spreading confusion. There is nothing in my comments that has anything to do with watt-hours, which is a rate of energy consumption. My comments were about absolute energy consumption.

Wow. You're an electrical engineer and you don't know what a watt-hour is?
> You are not saving the world or preserving energy with computer A, just delaying the time for the user to complete the task and costing more time and therefore more salary.

That claim was theoretically possible with two imaginary computers, but is not correct in the real world, where Mx chips are either faster than competing x86 chips, or not much slower, while being dramatically more efficient. With real-world Mx chips, you may be delaying the time to complete a task compared to a top-end x86 chip by a small fraction of the total time (12% or less with current top-end chips, give or take), but you're consuming MUCH less energy. It really is better for the environment. Whether or not that should be a deciding factor in any given situation is, well, situational.
> No. I've not got any that I am aware of. What bugs does it have? I googled it and found some alleged crashing bugs, like Contacts freezing when printing. I just tried it and it wouldn't freeze no matter how many contacts I added: 28 pages, and it was flawless. So pray tell, what bugs? Maybe it's your install? Did you drop your Mac? Because that might cause some issues.

For sure.
I totally agree. How people use their computers and what they need is all about context. I've edited 4K video with multiple streams on my 2020 M1 MacBook Air while doing some Photoshop/Lightroom work, without problems. But it's all about use case. Maybe the MacBook Air or its Windows equivalent isn't the right tool for a real power user; they're designed for portability, not Hollywood movies. That said, MKBHD has no problem rendering 1.5-hour videos with multiple 8K streams on his M2 MacBook Pro.
For sure.
I have the last Intel MacBook Pro, and for specifically what I want to do it is orders of magnitude better than any Apple silicon Mac. AS Macs won't talk to many devices used in industrial systems, so in that one context at least, my power-hungry machine that will do the job is more use than one that won't.
> Look at my original post. I was comparing 200 watts for 1 second versus 100 watts for 2 seconds.

Wow. You're an electrical engineer and you don't know what a watt-hour is?
Watts are *rates*. A watt-hour is NOT a rate. It's that rate, sustained across a certain period of time, which produces a number that is a measure of actual energy (3600 joules). Again, I suggest that you read the section of the wikipedia article I linked, which is quite clear.
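The unit identities above, in code form (just the conversions, nothing vendor-specific):

```python
# A watt is a rate (1 W = 1 J/s); a watt-hour is an amount of energy
# (that rate sustained for 3600 seconds).
SECONDS_PER_HOUR = 3600

def watt_hours_to_joules(wh: float) -> float:
    return wh * SECONDS_PER_HOUR  # W * s = J

print(watt_hours_to_joules(1))  # → 3600
```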
So it's hardly surprising that my explanation is confusing to you. You didn't write what you thought you were writing.
In any case, I was addressing your original claim:

> You are not saving the world or preserving energy with computer A, just delaying the time for the user to complete the task and costing more time and therefore more salary.
That claim was theoretically possible with two imaginary computers, but is not correct in the real world, where Mx chips are either faster than competing x86 chips, or not much slower, while being dramatically more efficient. With real-world Mx chips, you may be delaying the time to complete a task compared to a top-end x86 chip by a small fraction of the total time (12% or less, with current top-end chips, give or take), but you're consuming MUCH less energy. It really is better for the environment. Whether or not that should be a deciding factor in any given situation is, well, situational.
> It really is better for the environment. Whether or not that should be a deciding factor in any given situation is, well, situational.

Most people don't factor in the diversity of impacts here. Most high-performance x86 devices out there just ruin the battery quicker, because those machines don't care about sane consumption, which leads to easier cycle, temperature, and charge-percentage abuse, and so on. It is so easy to maintain a MacBook battery nowadays; my device is approaching one year of use and still shows 100% battery health.
> Second example: 2 sec * 100 joules/sec (remember that 1 watt = 1 joule/sec) = 200 joules.

The problem is that, even if that were really true, you picked such an unrealistic scenario. Computers, for the majority of their time, are much more like a car stopped in traffic than a machine doing 100%-efficiency work. Not everybody is waiting for a compiler/video encode/etc., and it's certainly not a massive fraction of their work time. If it were, it would be time to move such a task to a server.
Both tasks consumed the same amount of energy from the world. None was saved. No rate was needed. And again, my premise is that just because a CPU uses less power and runs at a lower clock speed does not simply mean it uses less energy to accomplish the same task. Which also implies that running the same task on the same CPU (like the low-power cores in an Apple laptop) at a reduced clock does not save the world. It only saves the battery.
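The arithmetic in the post above, spelled out. The 200 W / 100 W figures are the poster's hypothetical, not measurements:

```python
# The comparison from the original post: energy = power * time.
# The 200 W / 100 W numbers are hypotheticals, not measurements.
def energy_joules(power_watts: float, seconds: float) -> float:
    return power_watts * seconds  # 1 W sustained for 1 s consumes 1 J

fast = energy_joules(200, 1)  # faster machine: 200 W for 1 s
slow = energy_joules(100, 2)  # slower machine: 100 W for 2 s
print(fast, slow)             # → 200 200 (the same energy either way)
```

With these made-up numbers the two machines consume identical energy, which is exactly the point of contention: real M-series chips win because their actual power draw at a given task is far lower, not because of the clock speed per se.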