
falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
It was a corner case because in the past no powerful laptop could really be used as a true mobile device. Apple has changed all that with Apple silicon MacBooks. As others here have noted, I use my M2 MacBook Air all day at my current contract doing software dev and I never bother to bring a charger because it isn’t necessary. I don’t use a larger monitor at work because they are mostly PC centric and the monitors there suck (think 1920x1080). I’m not an employee so I can’t ask for a modern monitor and I’m not going to bring my own.
I do software design too, and I use my 38" monitor (plus the laptop's screen). My next monitor will be bigger. Doing software design on a laptop screen alone can be done, but it shouldn't be, except in extreme circumstances or for the simplest software projects.
 

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
That's a far different picture from what you describe above, as there is a big difference between 80% of a market and under 50% of the same market.

Notebook, desktop PC, and tablet shipments worldwide from 2010 to 2025

2020
79.8 million desktops (26.4%)
222.5 million laptops (73.6%)

Note that this statistic also regards tablets as a form of personal computers, because of their larger screen size compared to smartphones. If you see it that way, desktops are well below 20%. One could also argue that laptops and desktops have a multi-user OS and only smartphones and tablets are truly personal.

Maybe that 80/20 split which I remember vividly was only with regard to Macs? 🤔 🖥️ > 💻

Nonetheless, battery-powered devices will only grow in importance for Apple. And energy efficiency also benefits Apple's thin and small desktop designs.
 
  • Like
Reactions: AlphaCentauri

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
Notebook, desktop PC, and tablet shipments worldwide from 2010 to 2025

2020
79.8 million desktops (26.4%)
222.5 million laptops (73.6%)

Note that this statistic also regards tablets as a form of personal computers, because of their larger screen size compared to smartphones. If you see it that way, desktops are well below 20%. One could also argue that laptops and desktops have a multi-user OS and only smartphones and tablets are truly personal.

Maybe that 80/20 split which I remember vividly was only with regard to Macs? 🤔 🖥️ > 💻

Nonetheless, battery-powered devices will only grow in importance for Apple. And energy efficiency also benefits Apple's thin and small desktop designs.
There are many new types of laptops that are just nominally battery powered: gaming laptops, desktop replacement laptops, mobile workstations. But even "normal" laptops typically are not used on battery power for more than an hour or two. The only place where I do see people using laptops unplugged is at airports, but even there the situation is changing. Airplanes already offer power outlets, and airport seating arrangements increasingly have them too.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
So let me get this straight—in a world where Apple is now making very powerful laptops that can literally go all day off charger even with some heavier workflows and even without throttling anything if you don’t want them to, and you are justifying that Windows PCs can’t do that by saying why would you want to? Come on. You have to have your entire body in the sand to think that’s a logical defense.
My M1 MBA got 6 hours on a charge and throttled HEAVILY with just my normal workload. A good Windows laptop would get about half that, say 2.5-3 hours, so the Apple is better, but I don't work unplugged, ever, unless it's a power-outage situation, and then I can hardly do anything but browse the web since my servers and LAN are down.
 
  • Haha
Reactions: jdb8167

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
There are many new types of laptops that are just nominally battery powered: gaming laptops, desktop replacement laptops, mobile workstations.
Windows trash. 😁

If you fold your laptop often enough, you get an origami swan.

swan.png

But even "normal" laptops typically are not used on battery power for more than an hour or two.
That was and still is true for normal Intel laptops. But the dark ages are over! Early laptops were neither powerful nor mobile, nor could they handle their own heat. To fulfill the promise of what the laptop form factor could be, you needed to increase energy efficiency by orders of magnitude. Intel failed to do it; Apple Silicon did.
 
  • Like
Reactions: AlphaCentauri

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
Windows trash. 😁

If you fold your laptop often enough, you get an origami swan.

swan.png

That was and still is true for normal Intel laptops. But the dark ages are over! Early laptops were neither powerful nor mobile, nor could they handle their own heat. To fulfill the promise of what the laptop form factor could be, you needed to increase energy efficiency by orders of magnitude. Intel failed to do it; Apple Silicon did.
As I said already, in most cases, to be productive one needs to plug in the laptop and work with a proper monitor. The power then comes as a bonus. People whose job does not require a monitor usually do work that does not require a powerful computer either.
 

mi7chy

macrumors G4
Oct 24, 2014
10,619
11,293
So let me get this straight—in a world where Apple is now making very powerful laptops that can literally go all day off charger even

You're confusing idle battery life with battery life under load. Idle workloads like browsing the internet can go all day, but under load the CPU alone can consume ~31W. That means if you have an M1 Pro and the 99Wh battery in the 16", you get 99Wh / 31W = ~3.2 hours of battery life. If you also put the M1 Pro GPU under load, that's an additional ~30W, so 99Wh / (31W + 30W) = ~1.6 hours. Either way you'll need to plug in, since that's short of even half a workday.
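
As a quick sketch of that arithmetic (a toy Python calculation; the 99Wh capacity and the ~31W/~30W draws are the figures quoted above, not measurements):

```python
def runtime_hours(battery_wh: float, load_watts: float) -> float:
    """Estimated hours of runtime at a constant power draw."""
    return battery_wh / load_watts

BATTERY_WH = 99.0   # 16" MacBook Pro battery capacity
CPU_LOAD_W = 31.0   # sustained CPU power under load (figure from the post above)
GPU_LOAD_W = 30.0   # additional GPU power under load (figure from the post above)

print(f"CPU load only : {runtime_hours(BATTERY_WH, CPU_LOAD_W):.1f} h")               # ~3.2 h
print(f"CPU + GPU load: {runtime_hours(BATTERY_WH, CPU_LOAD_W + GPU_LOAD_W):.1f} h")  # ~1.6 h
```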
 
Last edited:

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
You're confusing idle battery life with battery life under load. Idle workloads like browsing the internet can go all day, but under load the CPU alone can consume ~31W. That means if you have an M1 Pro and the 99Wh battery in the 16", you get 99Wh / 31W = ~3.2 hours of battery life. If you also put the M1 Pro GPU under load, that's an additional ~30W, so 99Wh / (31W + 30W) = ~1.6 hours. Either way you'll need to plug in, since that's short of even half a workday.
I have a feeling that most MacBook users never do anything but web browsing with their laptops.
 
  • Haha
Reactions: bobcomer

mi7chy

macrumors G4
Oct 24, 2014
10,619
11,293
I have a feeling that most MacBook users never do anything but web browsing with their laptops.

That's what I do on my M1 MacBook, since software availability is limited. The few games that do run drain the battery fast.
 

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
As I said already, in most cases, to be productive one needs to plug in the laptop and work with a proper monitor.
Or you just buy an iMac. 🤷
The power then comes as a bonus.
It's not a bonus when you return to the advertised clock speed after throttling.
People whose job does not require a monitor usually do work that does not require a powerful computer either.
Again, "usually" and "normal" always refer back to how things were when Intel-based laptops couldn't be as powerful as desktops, because of their horrible energy efficiency. The times are over, when laptop and desktop were not only different form factors, but also different performance classes. The M1 goes into everything from iPad to iMac. You don't limit yourself anymore by choosing a less-powerful form factor or by unplugging froom the power grid. That's the old normal, not the new normal.
 

spiderman0616

Suspended
Aug 1, 2010
5,670
7,499
As I said already, in most cases, to be productive one needs to plug in the laptop and work with a proper monitor. The power then comes as a bonus. People whose job does not require a monitor usually do work that does not require a powerful computer either.
Ah so this is the part of every thread where we start telling everyone what’s proper productivity and what isn’t. I’m out. This is where things always get extra dumb.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
Why would I want to work anywhere but at the proper desk? That's for slackers who do not want to work or do not know what productive work is. And if I work at the desk, why would I want to use anything but the computer with the highest performance? To slow me down? Personally, I would prefer to use a desktop but most companies want you to work from home too so desktops are rarely an option.

Today I learned that I'm a slacker because I don't care much for working at a desk 😂

Well, you do you. Fortunately, there are options. And I’m happy that there are computers which allow me to go about my business the way it suits me rather than what some people I don’t know claim is the “proper way”.
 

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
Newsflash - most PC owners don’t do anything but web browsing, email, and occasional word document with their PCs either. The average user doesn’t need anything more.
But we are not talking about such users and their laptops. For these users, performance does not matter (at least in the context of a discussion about which processors are more powerful).
 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
But we are not talking about such users and their laptops. For these users, performance does not matter (at least in the context of a discussion about which processors are more powerful).

Exactly, so why are you mentioning them? "Most MacBook users only need web browsing" is not the same as "no MacBook user needs performance". It's a classic appeal-to-probability fallacy.

The funny thing is that even the passively cooled M1 is probably a better computer for light development work than most other thin and light laptops in the same price range. It will likely perform non-parallelised incremental builds and tests faster than a desktop 13600K.
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
The funny thing is that even the passively cooled M1 is probably a better computer for light development work than most other thin and light laptops in the same price range. It will likely perform non-parallelised incremental builds and tests faster than a desktop 13600K.

Most compilers, AIUI, are designed to work on the thing you just changed without fiddling around with the other things you did not change. Incremental builds are just not that costly, and it would be awesome if Xcode provided a way to do a mixed debug/production build so that you could commit the stuff that you know is working and only have a debug build of the stuff you are still tweaking.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Incremental builds are just not that costly,
True, but you underestimate just how big an impact the original M1 had on incremental builds for developers. For example, large TypeScript projects are notoriously slow to rebuild and show changes on screen. The lag between saving a file and seeing the change makes a big difference in developer happiness. The M1 was a massive leap in Apple's ecosystem. And at the time of release, the M1 had the fastest single-thread performance in the world, which made it the best way to develop in JavaScript/TypeScript. It still is, along with the M2, if you factor in everything from speed, battery life, and quietness to the *nix-style OS.
 
  • Like
Reactions: jdb8167

mi7chy

macrumors G4
Oct 24, 2014
10,619
11,293
The funny thing is that even the passively cooled M1 is probably a better computer for light development work than most other thin and light laptops in the same price range. It will likely perform non-parallelised incremental builds and tests faster than a desktop 13600K.

Examples? It's usually the other way around. Here's the M2 toward the bottom, while the 13600K is about 8x faster and the 6800U is 4.7x faster.

https://openbenchmarking.org/test/pts/build-linux-kernel
 
Last edited:

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Examples? It's usually the other way around. Here's the M2 toward the bottom, while the 13600K is about 8x faster and the 6800U is 4.7x faster.

https://openbenchmarking.org/test/pts/build-linux-kernel
Not only did you entirely miss the point, you also took Phoronix Test Suite results seriously.

The key phrase you glossed over: @leman was talking about "non-parallelised incremental builds". For most developers, the typical work cycle looks like edit source file -> compile -> link -> test -> repeat. But since you're typically editing only one or two source files before each compile/etc pass, and since compilation results are saved per source file, you're never waiting on compiling the whole project, only one or two files. That's what those of us who actually understand something about software and/or hardware mean when we say "incremental" build.

In turn, since compilers are typically single threaded on the scope of a single source file, 1 file to compile translates to 1 CPU core being tasked with work. M2 should be competitive with a 13600K for 1 to 4 files compiled.
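
For illustration, the per-file bookkeeping behind an incremental build looks roughly like this; a toy Python sketch of the timestamp check a tool like make performs, with made-up file names:

```python
# Only recompile sources whose file is newer than its cached object file.
# Real build systems (make, ninja, Xcode's build system) also track header
# dependencies, compiler flags, etc.; this is just the core idea.
import os

def needs_rebuild(source: str, obj: str) -> bool:
    """True if the object file is missing or older than its source."""
    if not os.path.exists(obj):
        return True
    return os.path.getmtime(source) > os.path.getmtime(obj)

sources = ["main.c", "parser.c", "lexer.c"]   # hypothetical project
stale = [s for s in sources if needs_rebuild(s, s.replace(".c", ".o"))]

# After editing one file, typically only that file ends up in `stale`,
# so one compiler invocation (and roughly one core) does the work.
print("files to recompile:", stale)
```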

The results you linked are compiling a whole Linux kernel from scratch. This will use all the cores on whatever CPU you happen to have. Of course the M2 loses to the 13600K here - the 13600K has six high performance cores plus eight throughput cores, while M2 has 4 high performance plus 4 efficiency (read: slow, but use almost no power). It's also likely that the M2 system tested was an Air, and M1/M2 Air machines cannot sustain full clock speed over the course of compiling an entire Linux kernel.

As for my jab at Phoronix... an essential part of any attempt to objectively benchmark two systems is to make sure each one is doing the same amount of work. This test fails to do that! It just downloads Linux kernel source code and does "make defconfig". When you run it on an Arm system (like Asahi Linux on a M2), you compile whatever is default in that kernel tarball for Arm. When you run it on an x86 system, you compile whatever is default for x86. Since the set of source files compiled is a common base plus architecture-specific sources, there is no guarantee here that the number of files compiled is the same. In fact, it's quite likely to be more for Arm because there's been a lot of fragmentation in Arm SoCs (leading to lots of extra driver code).

To do the kind of comparison you're attempting to make here in a proper, controlled way, at minimum you'd need to:

1. Decide on a target CPU architecture and kernel version (e.g. we are going to compile an x86 kernel)
2. Set up a specific configuration rather than relying on 'make defconfig'
3. Install a cross compiler if necessary (so, if you've decided to test compilation of a x86 kernel, when compiling on Arm you need to install a cross compiler to emit x86 code)
4. Make sure that the compiler version is identical across all the systems/CPUs you want to compare

Phoronix does none of this. PTS is not a good benchmark suite. It never has been, it never will be as long as it's run by the Phoronix guy, who simply doesn't understand anything about benchmarking above the level of automating his bad ideas by writing scripts.
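
As a rough illustration of the checklist above, a controlled run could be scripted along these lines. This is a sketch only; the kernel version, the LLVM=1 toolchain choice, the cross-compiler prefix, and the config target are assumptions rather than a tested recipe:

```python
# Sketch of a controlled kernel-build comparison: same source tree, same
# target architecture, same config, and same toolchain on every machine,
# then time the identical build.
import os
import subprocess
import time

KERNEL_SRC = "linux-6.1"                         # same tarball unpacked on every machine
MAKE_VARS = ["ARCH=x86_64",                      # always build an x86_64 kernel
             "CROSS_COMPILE=x86_64-linux-gnu-",  # cross-compile on the Arm hosts
             "LLVM=1"]                           # same clang/LLVM toolchain everywhere

def kmake(*targets: str) -> None:
    subprocess.run(["make", "-C", KERNEL_SRC, *MAKE_VARS, *targets], check=True)

kmake("mrproper")             # start from a clean tree
kmake("x86_64_defconfig")     # one explicit config, not whatever bare 'defconfig' picks

start = time.monotonic()
kmake(f"-j{os.cpu_count()}")  # full parallel build on all cores
print(f"wall-clock build time: {time.monotonic() - start:.1f} s")
```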
 

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
Not only did you entirely miss the point, you also took Phoronix Test Suite results seriously.

The key phrase you glossed over: @leman was talking about "non-parallelised incremental builds". For most developers, the typical work cycle looks like edit source file -> compile -> link -> test -> repeat. But since you're typically editing only one or two source files before each compile/etc pass, and since compilation results are saved per source file, you're never waiting on compiling the whole project, only one or two files. That's what those of us who actually understand something about software and/or hardware mean when we say "incremental" build.

In turn, since compilers are typically single threaded on the scope of a single source file, 1 file to compile translates to 1 CPU core being tasked with work. M2 should be competitive with a 13600K for 1 to 4 files compiled.

The results you linked are compiling a whole Linux kernel from scratch. This will use all the cores on whatever CPU you happen to have. Of course the M2 loses to the 13600K here - the 13600K has six high performance cores plus eight throughput cores, while M2 has 4 high performance plus 4 efficiency (read: slow, but use almost no power). It's also likely that the M2 system tested was an Air, and M1/M2 Air machines cannot sustain full clock speed over the course of compiling an entire Linux kernel.

As for my jab at Phoronix... an essential part of any attempt to objectively benchmark two systems is to make sure each one is doing the same amount of work. This test fails to do that! It just downloads Linux kernel source code and does "make defconfig". When you run it on an Arm system (like Asahi Linux on a M2), you compile whatever is default in that kernel tarball for Arm. When you run it on an x86 system, you compile whatever is default for x86. Since the set of source files compiled is a common base plus architecture-specific sources, there is no guarantee here that the number of files compiled is the same. In fact, it's quite likely to be more for Arm because there's been a lot of fragmentation in Arm SoCs (leading to lots of extra driver code).

To do the kind of comparison you're attempting to make here in a proper, controlled way, at minimum you'd need to:

1. Decide on a target CPU architecture and kernel version (e.g. we are going to compile an x86 kernel)
2. Set up a specific configuration rather than relying on 'make defconfig'
3. Install a cross compiler if necessary (so, if you've decided to test compilation of a x86 kernel, when compiling on Arm you need to install a cross compiler to emit x86 code)
4. Make sure that the compiler version is identical across all the systems/CPUs you want to compare

Phoronix does none of this. PTS is not a good benchmark suite. It never has been, it never will be as long as it's run by the Phoronix guy, who simply doesn't understand anything about benchmarking above the level of automating his bad ideas by writing scripts.
You are narrowing the notion of an incremental build down to one particular scenario. Editing one file may require recompiling all or many files if the edited file is an include file. But that's the C/C++ universe, which is what a minority of developers work in nowadays. If you are a web developer, you might be using parallel-webpack, and the number of cores could work magic for you. Here is one specific example:

 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
Examples? It's usually the other way around. Here's the M2 toward the bottom, while the 13600K is about 8x faster and the 6800U is 4.7x faster.

https://openbenchmarking.org/test/pts/build-linux-kernel

I was talking about incremental builds and you are bringing up clean Linux kernel build benchmarks? Besides, those particular results are clearly nonsensical; surely even you can see that. They contradict other compile benchmarks done by Phoronix. Either they do a single-core build on the M2, or maybe they have some weird Docker setup.
 

Schnort

macrumors regular
Oct 24, 2013
204
61
Why are you focused on incremental builds vs. clean builds?

Why would it matter? Compiling either way is friendly to multiple cores and parallelism. A clean build takes longer, so there's more opportunity for a multicore CPU and fast storage to shine.
 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
Why are you focused on incremental builds vs. clean builds?

Because that is where the discussion went

Why would it matter? Compiling either way is friendly to multiple cores and parallelism. A clean build takes longer, so there's more opportunity for a multicore CPU and fast storage to shine.

Absolutely, no doubt about it. Which again shows how unreliable openbenchmarking.org is. In the Linux kernel build benchmark linked here, the 6800U is on par with a desktop 5900HX (same for LLVM builds). In the Apache server build it's as fast as a 32-core Threadripper or a 64-core EPYC (although that might be plausible, as it's a simpler build with probably less parallelism).
 

Sydde

macrumors 68030
Aug 17, 2009
2,563
7,061
IOKWARDI
Why are you focused on incremental builds vs. clean builds? Why would it matter?
90% of programming is debugging. Well, something like that. A lot. A programmer trying to chase down a bug will tweak one or two source files and try that to see if it fixes the problem. This can go on all night. This is where the compiler gets the most use. That is why it matters.
 