
pdoherty

macrumors 65816
Dec 30, 2014
1,491
1,736
There are a lot of latecomers here seriously out of touch with reality.


YoY 100-200%?!? I'm so sorry that you fell out of your awesome universe and ended up in our pedestrian one.

Here in this universe, that's a ridiculous assertion. Aside from the M1, it has never happened, AFAIK. Apple's made real gains in every generation, even though the M2 was hammered by the delay in N3B and the M3 was, as we're now discovering, an interim effort.
He's not wrong though. CPUs used to double in speed from one iteration to the next routinely back in the 1980s and 1990s. Kind of hard for us to have gotten 10,000 times faster CPUs in just 30 years if there weren't much larger increases than the 10-20% we're seeing now.


Most notably, the performance of individual computer processors increased on the order of 10,000 times over the last 2 decades of the 20th century without substantial increases in cost or power consumption.
National Academies of Sciences, Engineering, and Medicine. 2011. The Future of Computing Performance: Game Over or Next Level?. Washington, DC: The National Academies Press. https://doi.org/10.17226/12980.
 

klasma

macrumors 604
Jun 8, 2017
7,440
20,733
Only with very basic applications. Most operating systems today are smart enough to break off code into separate tasks when possible (even able to determine which compute unit is more efficient to run the task). So even though an application may not have been designed to be multithreaded, it may actually make use of multiple cores.

Biggest example of that is the fact that all user interface interaction runs on its own thread, separate from the application code.
That’s incorrect. Single-threaded applications may switch from one core to another, but they are never ever broken up to run in parallel. What does happen is that processing stages are pipelined and thus sort-of run in parallel, and conditional branches are executed speculatively while the input to the condition isn’t known yet, but that all happens within a single core, and having more cores doesn’t speed it up.
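To make the distinction concrete, here's a minimal Swift sketch (my own illustration, not from any particular app): the serial loop below benefits from pipelining and speculative execution inside one core, but the OS will never spread it across cores; only the version that explicitly splits the work, here with GCD's `concurrentPerform`, can occupy multiple cores at once.

```swift
import Foundation

// Serial version: the scheduler may migrate this thread between cores,
// but at any instant the work runs on exactly one core, so extra cores don't help.
func sumSerial(_ values: [Double]) -> Double {
    var total = 0.0
    for v in values { total += v }
    return total
}

// Parallel version: the code itself splits the work into chunks;
// only then can multiple cores execute it simultaneously.
func sumParallel(_ values: [Double], chunks: Int = 8) -> Double {
    let chunkSize = (values.count + chunks - 1) / chunks
    let lock = NSLock()
    var total = 0.0
    DispatchQueue.concurrentPerform(iterations: chunks) { i in
        var partial = 0.0
        var j = i * chunkSize
        let end = min(j + chunkSize, values.count)
        while j < end { partial += values[j]; j += 1 }
        lock.lock(); total += partial; lock.unlock()   // serialize only the tiny merge step
    }
    return total
}
```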
 

sparksd

macrumors G3
Jun 7, 2015
9,989
34,243
Seattle WA
He's not wrong though. CPUs used to double in speed from one iteration to the next routinely back in the 1980s and 1990s. Kind of hard for us to have gotten 10,000 times faster CPUs in just 30 years if there weren't much larger increases than the 10-20% we're seeing now.


Yeah, compare the Altair 8800 I used in the mid-'70s to today's iPads (and that Altair, with no display, cost $3,500 in today's dollars).
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
984
And I agree with your general point. It's still fun to remember the big jumps you could sometimes get, like buying a Mac IIci in 1989 with a 25MHz '030 and then quadrupling its performance with a Quadra in 1991 and its 25MHz '040. As for Alpha, that was a floating-point monster, though I can't remember how it compared to the competition. If we're talking Unix workstations, PA-RISC wasn't half bad at that either. To come back on topic, I'm curious to see the cache configuration of the M4, and going forward, how long Apple can get away with a shared L2. IBM and Intel have been using per-core L2 for over 15 years.
Wow, I'd forgotten just how big an advantage the 040 had over the 030.

Nevertheless, it doesn't meet the requirement of 100% YoY. The 030 @ 16MHz was introduced in 1987, while the 040 @ 25MHz came out in '90. It would need to be 8x the performance to meet the doubling-every-year criterion, let alone 200%.

Alpha kicked the competition's ass, but only in FP. In integer loads it was in the lead but not nearly so much.

Snakes/PA-RISC was good but did not take a commanding lead.

In fact the first SPARC was a big, big step up over previous Suns, but those earlier machines were artificially held back in favor of the soon-to-come Sun-4. Had Sun kept up with the latest from Moto, the perf jump would have been a lot less.

In all, only the 040 comes close to hitting the 100% YoY figure.
 

pdoherty

macrumors 65816
Dec 30, 2014
1,491
1,736
Conversations like this just make me look back fondly at the single-CPU, single-core era and marvel at how much they got done with so little computing power. Incredible stuff. Our software today hasn't improved 1000X even though our processing power has. Some bugs in some apps are worse than they were back then.
Agreed. Not only do we not have the fun of those days of power increases, but now, somehow, after 25 years of development companies being fine with one-time software purchases, we have tons of software categories trying to shoehorn in subscriptions. Email programs, word processors, etc. Stuff that basically never needs to change in any real way, and you're paying for patches to keep it working in the face of OS updates, when really that's just work the developer needs to do to keep selling the app to new customers.
 
Last edited:

pdoherty

macrumors 65816
Dec 30, 2014
1,491
1,736
What does it matter? There's no cooling, so none of these scores mean much as they won't be sustained for long due to thermal constraints. Fine for short bursts of general use, but for anything else like gaming, when performance really does matter...
Divinity: Original Sin 2 is on the iPad and is a no-holds-barred port of the desktop version. I'm pretty sure that creates sustained load, which seems to refute your argument.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
Only with very basic applications. Most operating systems today are smart enough to break off code into separate tasks when possible (even able to determine which compute unit is more efficient to run the task). So even though an application may not have been designed to be multithreaded, it may actually make use of multiple cores.

Biggest example of that is the fact that all user interface interaction runs on its own thread, separate from the application code.
User interfaces often are explicitly designed to run on a separate thread, but that split isn't done by the operating system; it's done by the software developer who wrote the application. (Operating systems aren't able to split up applications that are truly single-threaded. This kind of threading has to be explicitly written into the application code.)

Applications where the user interface isn't split up from the program logic tend to freeze and become unresponsive on long operations. If you've ever run a program that froze for several seconds before suddenly coming back to life, there is a good chance that this is exactly what happened.

These days this isn't really a huge issue, though, as most modern applications, web browsers, games, and other such things are typically very multithreaded in nature. A lot of modern programming languages explicitly encourage development that lends itself to multithreaded environments, so it's less common than it used to be to see applications that will only utilize a single thread.
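For what it's worth, here's roughly what that explicit split looks like in Swift, as a hedged sketch using GCD; `loadAndParse()` and `updateUI(with:)` are placeholder names made up for illustration. If the developer doesn't write something like this, the heavy work runs on the main thread and the interface freezes until it finishes.

```swift
import Foundation

// Placeholder work and UI update, named purely for illustration.
func loadAndParse() -> [String] {
    // Imagine slow I/O and parsing here.
    return []
}

func updateUI(with items: [String]) {
    // Imagine refreshing views here; UI work belongs on the main thread.
}

func refreshButtonTapped() {
    // The developer has to opt in to this split; the OS won't do it automatically.
    DispatchQueue.global(qos: .userInitiated).async {
        let items = loadAndParse()        // heavy work off the main thread
        DispatchQueue.main.async {
            updateUI(with: items)         // hop back to the main thread for UI updates
        }
    }
}
```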
 

pdoherty

macrumors 65816
Dec 30, 2014
1,491
1,736
God bless the Celeron 300A... overclock that to 450MHz by moving a jumper. 50% MHz gains!
I did the exact same thing back in the day. I remember it drove a guy I knew crazy because I got that 50% overclock using air cooling while he had a water-cooling system on an AMD CPU and was getting like 20-25%.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
I did the exact same thing back in the day. I remember it drove a guy I knew crazy because I got that 50% overclock using air cooling while he had a water-cooling system on an AMD CPU and was getting like 20-25%.
I remember the day that I upgraded an old 450MHz Pentium III to a 600MHz one. I was shocked at how noticeable the speed difference was. (Yes, it was a 33% increase, but it felt more like a doubling in performance.)

That was back in the days when one core running at sub-GHz speeds could actually handle everyday computing. These days it'd probably take 20 minutes just to boot.
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
984
What does it matter? There's no cooling, so none of these scores mean much as they won't be sustained for long due to thermal constraints. Fine for short bursts of general use, but for anything else like gaming, when performance really does matter...
Gamers... so fixated on their own use cases. Yes, your analysis is correct for (some) gaming, but not for many other bursty tasks. Apple did apparently put some effort into lengthening the time they can support high load, by putting in copper and graphite spreaders. It would be reeeeally interesting to see the analysis they did that came to the conclusion that that was worth doing. Was it just because they were going thinner? Or are they targeting some intermediate use cases?

Only with very basic applications. Most operating systems today are smart enough to break off code into separate tasks when possible (even able to determine which compute unit is more efficient to run the task). So even though an application may not have been designed to be multithreaded, it may actually make use of multiple cores.

Biggest example of that is the fact that all user interface interaction runs on its own thread, separate from the application code.
Yeah, no. Not even close. OSes don't do that ("break off code"), in the general case. They do allocate between core types (P/E/whatever), but that's not relevant.

Some high-level languages and runtimes may try to do that (it's an intractable problem with no general solution). Perhaps what you were thinking of is that library calls can sometimes run in separate threads, as can OS calls, but this does not generally allow single-threaded code to make effective use of multiple cores. Mostly it'll get you <2x, max.

Edit: See #132 above for more detail on this.
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
984
He's not wrong though. CPUs used to double in speed from one iteration to the next routinely back in the 1980s and 1990s. Kind of hard for us to have gotten 10,000 times faster CPUs in just 30 years if there weren't much larger increases than the 10-20% we're seeing now.

He *is* wrong, as are you, as I posted above. The rare (*really* rare) times you actually got doublings, it didn't happen in less than a year.

Your numbers are off anyway. 30 years ago, processors were already somewhat superscalar, and ran up to 66MHz. I think we're closer to 1000x faster than 10000x. Though interestingly I can't immediately put my hands on an accurate figure. Anyone have one handy?

It's true 20% YoY wouldn't have got us where we are, but 30% YoY gets you over 2500x in 30 years. And it's also true that performance increases have been largely lackluster for the last decade compared to the previous decades, most of the time.
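For anyone who wants to check the compounding, a quick back-of-the-envelope (my arithmetic, not the original poster's):

```latex
1.2^{30} \approx 237, \qquad 1.3^{30} \approx 2620, \qquad 1.36^{30} \approx 10{,}000
```

So roughly 36% per year sustained for 30 years is what a 10,000x claim would require, while 30% lands you in the low thousands.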
 

pdoherty

macrumors 65816
Dec 30, 2014
1,491
1,736
Yes, but all this conversation has been done and dusted for a decade so it is better to stop repeating. An iPad workflow is an iPad workflow. A Mac workflow is a Mac workflow. Hell even on the macOS side the way people work on different Macs can differ greatly.
And, just like their point about having to adapt your workflow to iPadOS, one has to adapt to how macOS forces apps to work, too. They're just both different, and both impose constraints on how you can do things.
 
  • Like
Reactions: Boing123

stinksroundhere

macrumors regular
May 10, 2024
235
343
Agreed. Not only do we not have the fun of those days of power increases, but now, somehow, after 25 years of development companies being fine with one-time software purchases, we have tons of software categories trying to shoehorn in subscriptions. Email programs, word processors, etc. Stuff that basically never needs to change in any real way, and you're paying for patches to keep it working in the face of OS updates, when really that's just work the developer needs to do to keep selling the app to new customers.

Micro-payments could get worse. When a company maxes out its subscribers and can't find any more growth, its rich shareholders will demand some other kind of monetisation, such as forcing users to pay per hour or per minute.

Capitalism in this filthy state can only become greedier and more hostile unless enough people say enough is enough. Screaming on social media isn't enough. Organised protest is only effective when lots of money is withheld.
 

pdoherty

macrumors 65816
Dec 30, 2014
1,491
1,736
I remember the day that I upgraded an old 450mhz Pentium III to a 600mhz one. I was shocked at how noticeable the speed difference was. (Yes, it was a 33% increase, but it felt more like a doubling in performance.)

That was back in the days when one core running at sub-GHZ speeds could actually handle everyday computing. These days it'd probably take 20 minutes just to boot.
Yeah, a lot of the old overclocks felt so substantial because to get the CPU faster you increased the front-side bus clock speed, which meant you were increasing RAM speed as well. You had to have good components for it to work solidly, but when it did it was excellent. I remember what I used to do to confirm an overclock was going to remain stable: I'd run Prime95 in its repetitive torture-test mode until the temps maxed out and then leave it running for 5-10 minutes. If it could do that without generating errors in Prime95, with no glitches in the OS and (obviously) no OS crash, it could handle anything else I'd ever throw at it.
 

sparksd

macrumors G3
Jun 7, 2015
9,989
34,243
Seattle WA
Micro-payments could get worse. When a company maxes out its subscribers and can't find any more growth, its rich shareholders will demand some other kind of monetisation, such as forcing users to pay per hour or per minute.

Capitalism in this filthy state can only become greedier and more hostile unless enough people say enough is enough. Screaming on social media isn't enough. Organised protest is only effective when lots of money is withheld.

Easiest protest is simply to not buy.
 

macphoto861

macrumors 6502
May 20, 2021
496
444
I don't mean applying effects to thousands of images in batch. I mean editing as in making changes and adding effects per image. With a desktop OS you can simply import as many images as your system can handle and switch between images during editing, with real work getting done. Try that on an iPad and fail miserably.
It depends on the workflow... I load my RAW images into Lightroom Classic, and they automatically get synced to my iPad. I can then freely work on either device. It's an efficient system, and again, I edit thousands of images a month this way, not in a batch, but individually adjusting each image.

Now, if I were trying to do this kind of volume of images exclusively with an iPad, with no computer involved? Yeah, that would be a different story... importing, organizing, making multiple backups, editing, and exporting JPEGs would be agonizingly cumbersome. But in my workflow, rather than being a replacement for the Mac, the iPad acts as more of an extension of it, and it's quite efficient.
 
  • Like
Reactions: Basic75

TheRealAlex

macrumors 68030
Sep 2, 2015
2,982
2,248
The 9-core M4 versions don't show much IPC improvement.

Only the 10-core M4 is worth it, because it has 4 performance cores and 6 efficiency cores.

Don't buy the 9-core version.
 

Basic75

macrumors 68020
May 17, 2011
2,101
2,446
Europe
Wow, I'd forgotten just how big an advantage the 040 had over the 030.
The 040 was amazing and outperformed the 486 at the same frequency. A shame we never got a 100MHz version.
Alpha kicked the competition's ass, but only in FP. In integer loads it was in the lead but not nearly so much.
I remember the 21064 was quite bad in integer due to the lack of 8 and 16 bit instructions, even a fast 486 could give it a run for its money. The 21164 was a different beast that unleashed the true potential of the Alpha.
In all, only the 040 comes close to hitting the 100% YoY figure.
Yeah, I guess it usually took like 2 years for a doubling of single-thread performance. Good times with interesting developments and many competing architectures.
 

Basic75

macrumors 68020
May 17, 2011
2,101
2,446
Europe
I remember the day that I upgraded an old 450mhz Pentium III to a 600mhz one. I was shocked at how noticeable the speed difference was. (Yes, it was a 33% increase, but it felt more like a doubling in performance.)
If you jumped from a 450MHz Katmai to a 600MHz Coppermine, the faster on-chip L2 cache would have made a large impact, even though it was only half the size.
 

Basic75

macrumors 68020
May 17, 2011
2,101
2,446
Europe
So even though an application may not have been designed to be multithreaded, it may actually make use of multiple cores.
That is not at all how multithreading works. An application has to essentially be designed from the ground up to use it.
 

Confused-User

macrumors 6502a
Oct 14, 2014
852
984
Yeah, I guess it usually took like 2 years for a doubling of single-thread performance. Good times with interesting developments and many competing architectures.
Yes. I think that's when people started confusing Moore's law, Dennard scaling, and performance increases. That confusion survives to this day, as you can see on this very page...

That is not at all how multithreading works. An application has to essentially be designed from the ground up to use it.
Mostly, with the caveats I provided above.
 
  • Like
Reactions: Basic75 and uller6

nathansz

macrumors 68000
Jul 24, 2017
1,686
1,942
I don't think you understand what improvements Apple actually wanted from Intel; it has little to do with performance, which Intel in fact had at the time. What Apple wanted was performance per watt, i.e. more efficient chip designs

What Apple wanted was vertical supply chain integration

Everything else was/is a secondary consideration
 

trimblet

macrumors newbie
May 1, 2017
14
39
Moore’s law has nothing to do with performance. It’s an economic cost generalization that projects increases in transistor density.
And transistor density has nothing to do with performance? Obviously performance isn't the only purpose of denser chips: you can make smaller chips, etc. But, as David House noted, performance may actually grow faster than the raw transistor count alone would predict. From Wikipedia:

The doubling period is often misquoted as 18 months because of a separate prediction by Moore's colleague, Intel executive David House. In 1975, House noted that Moore's revised law of doubling transistor count every 2 years in turn implied that computer chip performance would roughly double every 18 months (with no increase in power consumption).
This was later formalized as Koomey's Law.
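As a sanity check on House's figure (my arithmetic, not from the post): a doubling every 18 months compounded over two decades gives

```latex
2^{20/1.5} = 2^{13.3} \approx 10{,}000
```

which is right in line with the "on the order of 10,000 times over the last 2 decades of the 20th century" figure from the National Academies report quoted earlier in the thread.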
 
Last edited:

mous94410

macrumors 6502a
Aug 12, 2015
630
472
Apart from AI features, why so much power? I hope that Apple will do something for iPadOS to be able to use this power. But we say that every year, so…
 