
Apple_MIA

macrumors newbie
Jun 16, 2016
8
7
The other issue for MS is that their major market is business, and most security software and infrastructure is built for x86. I guess that would have to catch up as well to these new Snap-thingies.
 

Confused-User

macrumors 6502a
Oct 14, 2014
597
656
Right. I guess I'm just a bit surprised they went (mostly) for clock for the second time in a row.
But that's really not all they did. Starting from their M2-level IPC, a 3% gain (or 4-5%, depending) isn't negligible. Though obviously I'd like more. But beyond that, there were a ton of changes. The most obvious ones, aside from redesigning the core to allow for higher clocks, were:
- new GPU, with substantially better scaling
- new NoC/uncore, with much better scaling for CPUs (and GPUs, but that's mostly not NoC I think)
- better NPU (details unknown)
- probably more stuff I'm not remembering or that we don't know about

The multicore performance - as is especially apparent with the Max - is a very big change, and if I had to guess I'd say they're going to continue this trend in the M4 - both higher clocks and better scaling. (Though really, we may have higher clocks than we realize already. It's not impossible we'll see 4.3-4.5GHz in the M3 Studio.)

BTW, I don't think the M2 + M3 clocks really constitute a trend. The M2 is "special" in that it appears to have been a rush job (inasmuch as a chip can be one) when the "real" M2 couldn't be built due to delays in N3. M3 + M4 may well constitute a trend, but not one I think will last, because clocks were low-hanging fruit, and might still be for another generation, but probably not after that.
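To put rough numbers on the clocks-vs-IPC split, here's a quick back-of-the-envelope sketch in C. The clock figures are approximate public numbers and the IPC gain is the 3-5% range mentioned above, so treat the output as illustrative, not measured:

```c
/* Rough single-core speedup sketch: perf ~ IPC x clock.
 * Clock and IPC numbers below are approximate/assumed, not measurements. */
#include <stdio.h>

int main(void) {
    double m2_clock_ghz = 3.50;  /* ~M2 P-core peak clock (approximate) */
    double m3_clock_ghz = 4.05;  /* ~M3 P-core peak clock (approximate) */
    double ipc_gain     = 0.04;  /* assumed 3-5% IPC improvement, midpoint */

    double clock_ratio = m3_clock_ghz / m2_clock_ghz;
    double total_gain  = (1.0 + ipc_gain) * clock_ratio - 1.0;

    printf("clock gain alone: %.1f%%\n", (clock_ratio - 1.0) * 100.0);
    printf("clock + IPC combined: %.1f%%\n", total_gain * 100.0);
    return 0;
}
```

With those assumptions most of the generational gain comes from clock, which is the point above.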
 
  • Like
Reactions: jdb8167

Jonnod III

macrumors member
Jan 21, 2004
92
51
And it supports three external 4K displays.

Apple put so much focus on M3 ray tracing, they didn’t leave enough silicon for CPU performance.
Just been on a two-week trip around the East Coast. In all the cafes and hotels I was in, and on all the trains and flights, I didn't see anyone using more screens than the one on their laptop.

Oh, and the Mac to Others ratio was about 3:1 which took me by surprise, as I've heard people say how niche Apple stuff is.
 

raythompsontn

macrumors 6502a
Feb 8, 2023
602
805
Yes, but there is a way to do this where your claims are at least somewhat credible and not out-and-out lies.
The FTC and SEC (not the sports one) would be all over any vendor who outright lied, with significant penalties. All vendors do the same thing: they slant tests, descriptions, etc. in a direction that makes them look good. Auto makers, drug companies, flashlight companies all bend the truth, but none of them actually make false statements or outright lies. Yes, even Apple does the same thing. Theranos tried it and failed. Vendors who lie get caught.
 

mazz0

macrumors 68040
Mar 23, 2011
3,163
3,619
Leeds, UK
Holy cow. There's been a ton of ignorance and nonsense in this thread (unsurprisingly), but this... is next-level. I thought the poster was joking at first (the username is hilarious), but their arguments with others here who know a bit more show that they're serious. I'm not going to respond to everything - it's not worth engaging with them, as shown by others who have already tried. But for the benefit of anyone reading them, here are some corrections.



Possibly the dumbest comment ever posted on MR. (OK, maybe not, that's a very high bar!) That "curve" doesn't exist in a vacuum. The notion that the chip design is meaningless is... more wrong than mustard on ice cream. It's laughable. For a simple proof of the sheer stupidity of it, consider two different core designs on the SAME process: Apple's P and E cores. There's roughly a factor of 3 difference in performance. Or look at Intel's P & E cores - the difference is even larger. Naturally, in both cases, the P cores are a LOT larger. Design with more transistors and you can get a faster core. Pretty basic.

You could also compare Apple's older N7 cores (A12 or A13) with another vendor's N7 core. The differences are stark.

Lastly, as I mentioned in a previous post, design will determine the highest clock you can run a chip at. In the language of the P-E curve, the curve doesn't extend forever. It cuts off at a certain point, beyond which more power won't get you any more performance, because the design is literally not capable of it.
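For anyone wondering why that curve bends the way it does: dynamic power scales roughly with C·V²·f, and pushing frequency higher generally requires more voltage, so power grows much faster than performance. A toy sketch with invented voltage/frequency points:

```c
/* Toy performance/power curve: dynamic power ~ C * V^2 * f.
 * The voltage/frequency operating points are invented for illustration only. */
#include <stdio.h>

int main(void) {
    /* {frequency in GHz, supply voltage in V} -- hypothetical points */
    double points[][2] = { {2.0, 0.70}, {3.0, 0.85}, {3.5, 0.95}, {4.0, 1.10} };
    double c = 1.0;  /* arbitrary capacitance constant; only ratios matter here */

    for (int i = 0; i < 4; i++) {
        double f = points[i][0], v = points[i][1];
        double power = c * v * v * f;  /* relative dynamic power */
        printf("%.1f GHz @ %.2f V -> relative power %.2f, perf/W %.2f\n",
               f, v, power, f / power);
    }
    return 0;
}
```

The last few hundred MHz cost disproportionately more power, and past whatever the design can close timing at, no amount of extra voltage buys more frequency.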



Nearly everything above is wrong. The two parts that are correct are:
1) Yield and pricing do matter, and are a direct consequence of area
2) The PPW curve is generally as stated. QC *is* playing in both "areas" of the curve to some extent already, by selling the chip as useful at both 20-ish and 80-ish W.


This is 99.9% wrong. The flimflam about P-E curves in the first paragraph is irrelevant to the second, and in any case incorrect - when a single area-reduction number is quoted, it's for a "typical" mix of logic, SRAM, and analog, which mix is chosen by the foundry, usually derived from an actual chip design. If you look in more detail, they'll quote specific numbers for each of those. For example, TSMC quoted area improvements of 1.7x for logic going from N5 to N3, but only 1.2x for SRAM and 1.1x for analog. (And it turned out the SRAM improvement wasn't nearly that good, in the end.)
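To make the "one number hides the mix" point concrete, here's a rough sketch that blends the per-category factors quoted above. The 60/30/10 logic/SRAM/analog split is an invented example mix, not TSMC's reference design:

```c
/* Blended die-area shrink from per-category density factors (N5 -> N3).
 * The 60/30/10 area mix is hypothetical; the 1.7x/1.2x/1.1x factors are
 * the publicly quoted ones mentioned above. */
#include <stdio.h>

int main(void) {
    const char *name[] = { "logic", "SRAM", "analog" };
    double frac[]   = { 0.60, 0.30, 0.10 };  /* share of die area on N5 (assumed) */
    double factor[] = { 1.7,  1.2,  1.1 };   /* quoted density improvement */

    double new_area = 0.0;
    for (int i = 0; i < 3; i++) {
        double shrunk = frac[i] / factor[i];  /* that block's area after the shrink */
        new_area += shrunk;
        printf("%-6s: %2.0f%% of old die -> %4.1f%% of old die\n",
               name[i], frac[i] * 100.0, shrunk * 100.0);
    }
    printf("blended: new die is %.1f%% of old area (~%.2fx overall density)\n",
           new_area * 100.0, 1.0 / new_area);
    return 0;
}
```

With that mix you only get roughly 1.4x chip-level density out of a "1.7x" node, which is why the single headline number is misleading on its own.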

As for the choice of where you want to be on the curve... you just choose. You run your design at a higher or lower power (or equivalently, clocks), and that determines where you are on the curve.

HOWEVER, that's not *really* true, because - as I already mentioned above, and at greater length in previous posts - the design has a major impact on how fast you can actually run your core (and your uncore, but let's not get too far into the weeds). It will also have a particular part of the frequency curve where you get the best efficiency, which is entirely dependent on the design. So yes, you can pick your clock, but your design constrains you.


Yeah, this is all garbage. A bunch of people with short fuses got the idea that N3 was bad when it first came out, and all sorts of nonsense was published. As it turns out, N3 seems to have landed where it was supposed to. The one slightly unexpected shortcoming, as I mentioned earlier, was that SRAM cells only shrank about 5% compared to N5. There were also big concerns about yield at the start. I don't think anyone who actually knows about this is telling, but the general consensus seems to be that it's fine, and within the limits of the info presented in their financial statements, that appears to be true.


Calling Intel's process 10nm is arguing about semantics... but is also wrong. They're currently producing the old Intel "7nm", which is now called "Intel 4". The old 10nmSF is now called Intel 7, and that's been shipping for a while now. You can remark snidely on their need to rename to keep up appearances, and you'd be right, but it's also true that the old names were less dishonest than the names used by other foundries (TSMC, Samsung, etc.). There is no feature in "3nm" chips that gets anywhere near to being 3nm in actual size. Intel 4 is roughly equivalent to TSMC N4, so if you're going to accept one name you should accept the other.

N3 variants (not "generations") (E, P, X, etc.) are indeed smaller changes, but not all of them improve PPA. For example, X is about high power applications, and will likely relax some design rules... which is fine, because such designs can't go that dense anyway.

Calling design "child's play and an intellectual joke" demonstrates complete ignorance, and probably psychological issues I'm not qualified to diagnose.



...and now it starts to become clear why this person is so dismissive of Apple. The DEI etc. comment makes it clear that engineering isn't motivating these many posts, but rather politics. Which I could really stand NOT to have to hear about for five frickin' minutes out of my day, please.





Wow. Pot, meet kettle. Take some classes, then come back here.
Personally, I think designing manufacturing processes is child's play.
 
  • Haha
Reactions: Confused-User

chucker23n1

macrumors G3
Dec 7, 2014
8,609
11,421
Just been on a two-week trip around the East Coast. In all the cafes and hotels I was in, and on all the trains and flights, I didn't see anyone using more screens than the one on their laptop.

Sure, but a ton of laptop use is ultimately at a desk.

 

nt5672

macrumors 68040
Jun 30, 2007
3,459
7,375
Midwest USA
Your fundamental premise is correct (unit errors aside, which are egregious). Your application of it completely fails, because the numbers matter. You really are "preserving" a lot of energy using M chips.
Please explain. As an electrical engineer, I'm sure my calculations and unit selections are absolutely correct, but the rest of your explanation is hijacking my comment using a different concept that is spreading, as your user name implies, confusion. There is nothing in my comments that have anything to do with watt-hours, which is a rate of energy consumption. My comments were about absolute energy consumption.
 
Not going to happen soon, or it might not happen at all. Windows is a mammoth not because it's an operating system by Microsoft but because of the third-party hardware and software it carries on its shoulders. It will be interesting to see if those devs, on both the hardware and software sides, are willing to make a version for ARM. It's a huge resource to pour in. Apple pulled it off because there was a clear dead end on the Intel Mac roadmap. The devs had to change lanes, no option. That compulsion is not there on the Windows side, unless they too put up a visible dead end.
Yeah, I don't think this is something we would see in the next few years, but I think by the end of the 2020s we'll see the visible end of the x86 architecture outside of maybe servers in data centers.

In the 70's and 80's it was all about compute power. In the 90's we got our first real glimpse of mobile computing as laptops started to become more popular. The 2000s were when laptops boomed. But that one event in 2007 changed everything. The iPhone made the 2010s about "pocket" computing, or truly "personal" computing, since it is a device that is typically very personal to people and something they always have with them. I think if "personal computers" in the sense of a desktop or laptop are going to survive, which I have my doubts about, they will likely be on the same ecosystem as our mobile devices. Apple has already started this transition with the interoperability between macOS and its mobile OSes. I think Windows is maybe a decade behind, and whether that comes in the form of compatibility with Android or their own platform is yet to be seen.

Now, everything outside of general consumers still moved along. However, the focus has shifted to smaller and smaller computing technology so that it can be more easily incorporated into different areas of business. For a simple example, one of the newest GIS systems for ground-penetrating radar in the mining industry is using iPad Pros going forward for its entire solution lineup and ditching laptops. Meanwhile, the landscape of a computer at the office and a computer in a server room hasn't seen real dramatic change, and for me that is also a very big sign of change to come. I think as we see personal computing shift it will draw a larger and larger line between what a user has in the office and what sits in a rack in the back. I think x86 will lose the industrial computing space to smaller, more power-efficient systems, I think it will lose office computing to the direction of "personal computing", and I think it will be relegated to the server racks. From there, who knows, maybe it even becomes an entry in the history books.

But certainly, as you said, not soon....and who knows...maybe not at all, I may be delusional :)
 
  • Like
Reactions: akbarali.ch

deconstruct60

macrumors G5
Mar 10, 2009
12,385
3,945
Honestly this is all speculation at this point. Of course what Microsoft meant was that with the new emulation that is coming with the next version of Windows the difference between native and emulated will be less than the difference between Rosetta and native ARM on Mac

The above is more of a speculation. Microsoft has an x86-to-Arm binary converter. There is nothing to 'speculate' there - it has existed for years in 32-bit mode and has had decent 64-bit app coverage for about a year. There is likely no 'new' one coming. It is just running on far more capable hardware now; the weaker hardware was a very substantial contributor to end users' complaints about its performance.

There are differences in how soon/aggressively Microsoft's version does the recompile. The highest-performing Rosetta translation depends upon some proprietary features in Apple Silicon, but for Windows those do not come into play. (That isn't speculative: you need custom code on both sides of the 'special mode' to work right, which only Apple does, and it makes about zero sense for Windows to weave in, since it isn't targeting Apple Silicon.)

. Of course nobody in the Apple world believes this, and personally I don't care if they are slightly better, similar or slightly worse than Rosetta. If they really manage to improve the emulation to a point where emulated stuff on WoA is, with Elite X, as fast or faster than on Intel or AMD with better battery life (so at similar TDP), that's a big win in my book.

The recompiled Arm binaries can only run as fast as the underlying Arm implementation. Rosetta 2 on an Apple A9 would also be 'bad' relative to contemporary AMD/Intel SoCs.

It likely isn't going to be 'as fast' as the current top-end Intel/AMD models across a broad range. But faster than older ones, and running at 'good enough' speeds? Probably yes.

Microsoft's solution allows apps with a mix of x86 and Arm code in them. It isn't about "fastest with every other aspect secondary". Getting a wider set of apps to run at various levels of conversion is about as equal an objective. Apple just tells folks their apps are dead and moves on (e.g., 32-bit Mac apps). Microsoft has to do more 'cat herding' of various developers evolving more slowly over time.
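Side note on the mixed x86/Arm point: a Windows app can check at runtime whether it's x86/x64 code being translated on an Arm host. A minimal, simplified sketch using the documented IsWow64Process2 API (ARM64EC binaries blur this distinction, so treat it as the general shape rather than a complete answer):

```c
/* Minimal check: is this binary running translated on a Windows-on-Arm machine?
 * Uses the documented IsWow64Process2 API (Windows 10 1709+). Simplified sketch. */
#include <windows.h>
#include <stdio.h>

int main(void) {
    USHORT processMachine = 0, nativeMachine = 0;

    if (!IsWow64Process2(GetCurrentProcess(), &processMachine, &nativeMachine)) {
        printf("IsWow64Process2 failed: %lu\n", GetLastError());
        return 1;
    }

#if defined(_M_ARM64)
    const USHORT buildMachine = IMAGE_FILE_MACHINE_ARM64;
#elif defined(_M_X64)
    const USHORT buildMachine = IMAGE_FILE_MACHINE_AMD64;
#else
    const USHORT buildMachine = IMAGE_FILE_MACHINE_I386;
#endif

    /* Host is Arm64 but this binary wasn't built for Arm64 -> it's being translated. */
    if (nativeMachine == IMAGE_FILE_MACHINE_ARM64 && buildMachine != IMAGE_FILE_MACHINE_ARM64)
        printf("x86/x64 code being translated on an Arm64 host\n");
    else
        printf("running natively (host machine: 0x%04x)\n", nativeMachine);
    return 0;
}
```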
 

deconstruct60

macrumors G5
Mar 10, 2009
12,385
3,945
This is a very interesting point! I suspect that you're not quite right though - I think that as more custom-designed cores (like QC's) come out, you're going to see them target different levels of the ISA. So I am pretty sure contemporary Windows will run on M9 (or whatever) just fine, and so will most apps, though a few may not.

Even architecture implementers have to comply with the ISA to get certification from Arm. As implementers move to new versions - 9.2, 9.5, 9.8, 10.2, 10.3, etc. - more stuff moves from optional to mandatory. As long as the architecture implementers keep moving along they will stay aligned with the ISA.

The only architecture implementer avoiding SVE2 is Apple. The others don't have a problem.

The issue is that "AI PCs" need SIMD processing. Yeah, it is looking like Microsoft is going to put an NPU requirement on Windows going forward. Extremely likely, there are going to be Windows features that 'fall back' to SVE2 over the long term. [Never mind that Apple's NPUs are entirely proprietary.]


This is still great for Mac users who have to run Windows under Parallels/VMWare/whatever. Their windows VMs will benefit from this. So it's a win all around, even if it turns out not to be as fast as Rosetta at some/all things.

The question is what happens if Apple never adopts SVE2 and it gets to be mandatory in the ISA. The basic ISA isn't going to change much, but SIMD/virtualization/required-AI features probably are. Those are areas where Apple is already way off the 'reservation'.

Whenever we get to the point that Apple doesn't like the new additions to the ISA, Apple will likely just stop paying for access and make "old stuff". At some point Windows 11 will disappear. So will Windows 12 (just like Windows 10, 8, etc. did).
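On the 'fall back to SVE2' point: code that wants SVE2 typically probes for it at runtime and takes a NEON path otherwise. A minimal Linux/aarch64-flavoured sketch of that pattern (the Windows-on-Arm detection API differs, so this just shows the general shape):

```c
/* Runtime SVE2 probe with a NEON fallback -- Linux/aarch64 flavour.
 * Other OSes expose this differently; this only illustrates the dispatch pattern. */
#include <stdio.h>
#include <sys/auxv.h>

#ifndef HWCAP2_SVE2
#define HWCAP2_SVE2 (1UL << 1)  /* bit from <asm/hwcap.h> on arm64 kernels */
#endif

int main(void) {
    unsigned long hwcap2 = getauxval(AT_HWCAP2);

    if (hwcap2 & HWCAP2_SVE2)
        printf("SVE2 present: dispatch to the SVE2 kernel\n");
    else
        printf("no SVE2: fall back to the NEON kernel\n");
    return 0;
}
```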
 

ikramerica

macrumors 68000
Apr 10, 2009
1,566
1,864


Microsoft will advertise that its upcoming Windows laptops with Qualcomm's Snapdragon X Elite processor are faster than the MacBook Air with Apple's latest M3 chip, according to internal documents obtained by The Verge.

[Image: Qualcomm Snapdragon X Elite laptop]

"Microsoft is so confident in these new Qualcomm chips that it's planning a number of demos that will show how these processors will be faster than an M3 MacBook Air for CPU tasks, AI acceleration, and even app emulation," the report says. Microsoft believes its laptops will offer "faster app emulation" than Apple's Rosetta 2.

Introduced in October, the Snapdragon X Elite has Arm-based architecture like Apple silicon. Qualcomm last year claimed that the processor achieved 21% faster multi-core CPU performance than the M3 chip, based on the Geekbench 6 benchmark tool.

There are a few caveats here, including that Microsoft and Qualcomm are comparing to Apple's lower-end M3 chip instead of its higher-end M3 Pro and M3 Max chips. MacBooks with Apple silicon also offer industry-leading performance-per-watt, while the Snapdragon X Elite will likely run hotter and require laptops with fans. Since being updated with the M1 chip in 2020, the MacBook Air has featured a fanless design. Apple can also optimize the performance of MacBooks since it controls both the hardware and macOS software.

Nevertheless, it is clear that Apple's competitors are making progress with Arm-based laptops. Microsoft plans to announce laptops powered by the Snapdragon X Elite later this year, including the Surface Pro 10 and Surface Laptop 6 on May 20.

Article Link: Microsoft Says Windows Laptops With Snapdragon X Elite Will Be Faster Than M3 MacBook Air
Well, yeah. Apple hasn’t improved their core much after so many years. And there are built in architecture limitations. They are dropping the ball big time…
 
  • Disagree
Reactions: jdb8167

Houpla

macrumors member
Jan 1, 2018
70
84
It's because their IPC was crap. And it's still not close to Apple's. And gaining IPC gets harder the better it is to start, so getting to 80% of Apple's IPC is no big deal these days, but getting to 90% takes real work, and getting to 100%... well, nobody's managed that so far. Except Apple.
You are talking like there is some theoretical maximum IPC Apple has almost reached after decades of development and has nowhere further to go, and others are catching up. That is not true; we are not living in some CPU development "End Times". I bet 10 years ago people also thought that getting to 80% of the IPC of the fastest chips was easy but getting past 100% would be nearly impossible. And look where we are now…
 

boak

macrumors 68000
Jun 26, 2021
1,511
2,446
There is nothing in my comments that have anything to do with watt-hours, which is a rate of energy consumption. My comments were about absolute energy consumption.
Erm, no. A watt is a measure of power, i.e. a rate of energy consumption. A watt-hour is a unit of energy.
 
  • Like
Reactions: chabig

chucker23n1

macrumors G3
Dec 7, 2014
8,609
11,421
The above is more of a speculation. Microsoft has an x86-to-Arm binary converter. There is nothing to 'speculate' there

There is. Microsoft's assertions are just vague enough that they kind of suggest that they may or may not have improved XTA.



Even architecture implementers have to comply with the ISA to get certification from Arm. As implementers move to new versions - 9.2, 9.5, 9.8, 10.2, 10.3, etc. - more stuff moves from optional to mandatory. As long as the architecture implementers keep moving along they will stay aligned with the ISA.

The only architecture implementer avoiding SVE2 is Apple. The others don't have a problem.

The thing is, Apple really couldn't care less if ARM considers them "certified".

 

NT1440

macrumors G5
May 18, 2008
14,815
21,507
You are talking like there is some theoretical maximum IPC Apple has almost reached after decades of development and has nowhere further to go, and others are catching up. That is not true; we are not living in some CPU development "End Times". I bet 10 years ago people also thought that getting to 80% of the IPC of the fastest chips was easy but getting past 100% would be nearly impossible. And look where we are now…
That was true(ish) in the days when there were multiple nodes left for silicon. We are actually starting to approach the end of the road for transistors based on silicon. At some point the ballooning costs (it's been almost double the cost, in billions, to get 3nm up compared to 5nm, and 5nm was nearly double the cost of 7nm) will leave further scaling *theoretically* possible but cost-prohibitive for anyone to continue down this road. I expect a different process is going to be needed to go much further than the 1nm node, which really isn't that far away.

Not to mention that other components are getting harder and harder to scale down (I think SRAM barely budged at 3nm).

Basically that’s all to say that generational leaps, when all factors are considered, are getting ludicrously expensive and more difficult to continue generation to generation. Apple had a long game going on here because their architecture is so advanced at its low power consumption that they’re going to have headroom to play with power adjustments when competitors simply won’t.
 

Confused-User

macrumors 6502a
Oct 14, 2014
597
656
Even architecture implementers have to comply with the ISA to get certification from Arm. As implementers move to new versions - 9.2, 9.5, 9.8, 10.2, 10.3, etc. - more stuff moves from optional to mandatory. As long as the architecture implementers keep moving along they will stay aligned with the ISA.

The only architecture implementer avoiding SVE2 is Apple. The others don't have a problem.

The issue is that "AI PCs" need SIMD processing. Yeah, it is looking like Microsoft is going to put an NPU requirement on Windows going forward. Extremely likely, there are going to be Windows features that 'fall back' to SVE2 over the long term. [Never mind that Apple's NPUs are entirely proprietary.]

The question is what happens if Apple never adopts SVE2 and it gets to be mandatory in the ISA. The basic ISA isn't going to change much, but SIMD/virtualization/required-AI features probably are. Those are areas where Apple is already way off the 'reservation'.

Whenever we get to the point that Apple doesn't like the new additions to the ISA, Apple will likely just stop paying for access and make "old stuff". At some point Windows 11 will disappear. So will Windows 12 (just like Windows 10, 8, etc. did).
I agree with everything you're saying. What I'm suggesting is that the same market forces that put Apple where it is (no SVE2 for example) may well cause similar results in other companies' chip designs.

Your counterargument, to perhaps say it more plainly, is that MS will likely mandate certain feature levels. And that's a good point too. I can see it happening both ways.
 

Confused-User

macrumors 6502a
Oct 14, 2014
597
656
You are talking like there is some theoretical maximum IPC Apple has almost reached after decades of development and has nowhere further to go, and others are catching up. That is not true; we are not living in some CPU development "End Times". I bet 10 years ago people also thought that getting to 80% of the IPC of the fastest chips was easy but getting past 100% would be nearly impossible. And look where we are now…
I am not. I don't think anyone knows what the theoretical limits of IPC are. It's a massively complex problem, in part because it's not just about hardware design- the way software is written (and especially how compilers work, and now to some extent interpreters too) is very important for this, on a practical level. After all, IPC depends on the instruction mix. And of course nothing runs entirely out of cache, so memory hierarchy is critical as well. And there's lots more.

But there is one thing we DO know, and that is that it's getting harder every year. And then once in a while a new concept comes along, and there's a big advance... but then after that you're back to chasing smaller wins. Will there be any more such advances? Definitely, but maybe not inside the traditional core. For example, in the recent past, we have the advent of accelerators for specific jobs (like codecs in hardware, or NPUs, or matrix units like AMX). In the near future, perhaps, we'll start to see in- or near-memory compute. (Notably, these advances aren't much help in logic-heavy, branchy code.)

So while performance is likely to continue to advance for many years to come, in the more restricted problem space of traditional CPU core design, I don't think anyone can say if we're in your "end times" or not. It's entirely plausible that we are.

I wasn't making that claim, in any case. I did say that things are getting asymptotically harder, and that appears to generally be true. More importantly, it's obviously true that people are having a very hard time chasing the industry leader in IPC... which is Apple.
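To illustrate the "nothing runs entirely out of cache" point, here's the standard back-of-the-envelope model: effective CPI = base CPI + misses-per-instruction × miss penalty. Every number below is invented, chosen only to show the shape of the effect:

```c
/* Effective IPC once memory misses are counted:
 * CPI_eff = CPI_base + misses_per_instruction * miss_penalty.
 * All numbers here are hypothetical. */
#include <stdio.h>

int main(void) {
    double cpi_base       = 0.25;   /* an ideal wide core hitting in cache (IPC 4) */
    double miss_per_instr = 0.01;   /* 1% of instructions miss the last-level cache */
    double miss_penalty   = 100.0;  /* cycles to reach DRAM */

    double cpi_eff = cpi_base + miss_per_instr * miss_penalty;
    printf("ideal IPC: %.2f\n", 1.0 / cpi_base);
    printf("effective IPC with misses: %.2f\n", 1.0 / cpi_eff);
    return 0;
}
```

Which is why better prefetchers, bigger caches, and (eventually) near-memory compute can move "IPC" as much as anything inside the core does.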
 
  • Like
Reactions: romulopb

Confused-User

macrumors 6502a
Oct 14, 2014
597
656
Please explain. As an electrical engineer, I'm sure my calculations and unit selections are absolutely correct, but the rest of your explanation is hijacking my comment using a different concept that is spreading, as your user name implies, confusion. There is nothing in my comments that have anything to do with watt-hours, which is a rate of energy consumption. My comments were about absolute energy consumption.
Wow. You're an electrical engineer and you don't know what a watt-hour is?

Watts are *rates*. A watt-hour is NOT a rate. It's that rate, across a certain period of time, which produces a number which is a measure of actual energy (3600 joules). Again, I suggest that you read the section of the wikipedia article I linked, which is quite clear.

So it's hardly surprising that my explanation is confusing to you. You didn't write what you thought you were writing.

In any case, I was addressing your original claim:
You are not saving the world or preserving energy with computer A, just delaying the time for the user to complete the task and costing more time and therefore more salary.
That claim was theoretically possible with two imaginary computers, but is not correct in the real world, where Mx chips are either faster than competing x86 chips, or not much slower, while being dramatically more efficient. With real-world Mx chips, you may be delaying the time to complete a task compared to a top-end x86 chip by a small fraction of the total time (12% or less, with current top-end chips, give or take), but you're consuming MUCH less energy. It really is better for the environment. Whether or not that should be a deciding factor in any given situation is, well, situational.
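To put hypothetical numbers on that: suppose the x86 box finishes a task in 100 s at 60 W of package power, while the M-series machine takes 112 s at 20 W. Both figures are invented; the shape of the result is the point:

```c
/* Energy = average power x time. The power/time figures are invented to
 * illustrate the "slightly slower, far less energy" case, not measured. */
#include <stdio.h>

int main(void) {
    double x86_watts = 60.0, x86_seconds = 100.0;   /* hypothetical */
    double m_watts   = 20.0, m_seconds   = 112.0;   /* hypothetical: 12% slower */

    double x86_joules = x86_watts * x86_seconds;
    double m_joules   = m_watts   * m_seconds;

    printf("x86: %.0f J, M-series: %.0f J\n", x86_joules, m_joules);
    printf("M-series takes %.0f%% longer but uses %.0f%% of the energy\n",
           (m_seconds / x86_seconds - 1.0) * 100.0,
           m_joules / x86_joules * 100.0);
    return 0;
}
```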
 
Last edited:

H2SO4

macrumors 603
Nov 4, 2008
5,689
6,960
No, I've not got any that I'm aware of. What bugs does it have? I googled it and found some (apparent) alleged crashing bugs, like Contacts freezing when printing. I just tried it and it wouldn't freeze no matter how many contacts I added. 28 pages and it was flawless. So pray tell, what bugs? Maybe it's your install? Did you drop your Mac? Because that might cause some issues.


I totally agree. How people use their computers and what they need is all about context. I've edited 4K video with multiple streams on my 2020 M1 MacBook Air whilst doing some PS/Lightroom work without problems. But it's all about use case. Maybe the MacBook Air or the Windows equivalent isn't the right tool for a real power user. They're designed for portability, not Hollywood movies. That said, MKBHD has no problems rendering 1.5-hour videos with multiple 8K streams on his M2 MacBook Pro.
For sure.
I have the last Intel MacBook Pro, and for specifically what I want to do it is orders of magnitude better than any Apple Silicon Mac. AS Macs won't talk to many devices used in industrial systems, so in that one context at least, my power-hungry machine that will do the job is more use than one that won't.
 
  • Like
Reactions: steve09090

chucker23n1

macrumors G3
Dec 7, 2014
8,609
11,421
For sure.
I have the last Intel MacBook Pro, and for specifically what I want to do it is orders of magnitude better than any Apple Silicon Mac. AS Macs won't talk to many devices used in industrial systems, so in that one context at least, my power-hungry machine that will do the job is more use than one that won't.

Have you tried? ARM Macs are fairly fast, so even in a VM, it might still be faster than your x86 Mac.

Unless, of course, the physical connection you need doesn't pass through to the VM.
 

nt5672

macrumors 68040
Jun 30, 2007
3,459
7,375
Midwest USA
Wow. You're an electrical engineer and you don't know what a watt-hour is?

Watts are *rates*. A watt-hour is NOT a rate. It's that rate, across a certain period of time, which produces a number which is a measure of actual energy (3600 joules). Again, I suggest that you read the section of the wikipedia article I linked, which is quite clear.

So it's hardly surprising that my explanation is confusing to you. You didn't write what you thought you were writing.

In any case, I was addressing your original claim:

That claim was theoretically possible with two imaginary computers, but is not correct in the real world, where Mx chips are either faster than competing x86 chips, or not much slower, while being dramatically more efficient. With real-world Mx chips, you may be delaying the time to complete a task compared to a top-end x86 chip by a small fraction of the total time (12% or less, with current top-end chips, give or take), but you're consuming MUCH less energy. It really is better for the environment. Whether or not that should be a deciding factor in any given situation is, well, situational.
Look at my original post. I was comparing 200 watts for 1 second versus 100 watts for 2 seconds.

Let me do the math for you. First example: 1 sec * 200 joules/sec (remember that 1 watt = 1 joule/sec) = 200 joules. That is energy; it is not a rate at all. Yep, my original post used the wrong units for the result (watts), which you correctly pointed out was wrong and should have been joules. Mark that up to too much other stuff going on.

Second example: 2 sec * 100 joules/sec (remember that 1 watt = 1 joule/sec) = 200 joules.

Both tasks consumed the same amount of energy from the world. None was saved. No rate was used. And again, my premise is that just because a CPU uses less power and runs at a lower clock speed does not mean it uses less energy to accomplish the same task. Which also implies that running the same task on the same CPU (like the low-power CPUs in an Apple laptop) with a reduced clock does not save the world. It only saves the battery.

Now you might say: but we were talking about different CPUs (Apple Arm vs. Snapdragon Arm). You cannot compare different instruction sets using overall CPU energy consumption, because the instruction set affects both the task duration and the energy consumption. Most of this talk about saving energy is marketing BS in a lot of cases. Simple uses that do not load the CPU fully are not saving anything substantial. If all a person does is email, then they are not saving the world with Apple devices, even though Apple marketing makes them feel that way.
 

romulopb

macrumors newbie
Apr 9, 2024
13
3
It really is better for the environment. Whether or not that should be a deciding factor in any given situation is, well, situational.
Most people don't factor in the diversity of impacts on this subject. Most high-performance x86 devices out there just ruin the battery quicker, because those machines don't care about sane consumption, which leads to more cycle, temperature, and charge-level abuse, and so on. It is so easy to maintain a MacBook battery nowadays; my device is coming up on 1 year of use and still has 100% battery health.

Meanwhile, the median laptop, or its battery, goes in the garbage. Or after ~2 years, due to inertia or lack of money, the device destroys productivity, because the battery is in such a state that it is easy to end up in a situation where it accidentally turns off (you forgot it doesn't survive off the plug, it doesn't last a normal work session, etc.).
 

romulopb

macrumors newbie
Apr 9, 2024
13
3
Seconds example: 2 sec * 100 joule / sec (remember that 1 watt = 1 joule /sec) = 200 joules.

Both tasks consumed the same amount of energy from the world. None was saved. No rate was used. And again my premise is that just because A CPU uses less energy and runs at a lower clock speed, does not simply meant it uses less energy to accomplish the same task. Which also implies that running the same task on the same CPU (like low energy CPUs in an Apple laptop) with reduced clock does not save the world. It only saves the battery.
The problem is that, even if that were really true, you picked a very unrealistic scenario. Computers, for the majority of their time, are much more like a car stopped in traffic than a machine doing 100%-efficiency work. Not everybody is waiting on a compiler/video encode/etc., and for sure it is not a massive fraction of their work time. If it were, it would be time to move such a task to a server.
 