
Disappointed with Mac Pro 2023?


  • Total voters
    534

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
I don't disagree; it likely makes good business sense for a number of reasons, but Apple is likely not going to be making the fastest desktop/workstation computers on absolute performance.

Intel and AMD chips are more than sufficient if you want to go down that road, and likely less expensive for similar absolute performance. Apple Silicon can't do it all (cheap, fast, cool, upgradeable). But since power consumption and portability are important, it's a great innovation. Apple just will not have the fastest desktops/workstations in an absolute sense, and certainly not for the money.
Here is an awkward, but honest question: how many industries/workloads/etc still need the fastest desktop/workstations in an absolute sense?

I think part of the issue is that for real work, the amount of processing power required doesn't really change if the work doesn't change. If you're trying to crunch the same data/video/etc. as you were 10 years ago, that might once have required a serious Mac Pro, while today an M1 MacBook Air will do it just fine. (Now compare this with web browsing or web technologies, where you need 4x the RAM and CPU to do the same thing as 10 years ago.)

So, as time goes on, the number of workloads that actually require/benefit from the fastest desktops/workstations in an absolute sense should go down. Some new or more challenging workloads (e.g. HD/4K/8K video, or something that was just impossible with the older hardware) may come along, sure, which is what has kept the fastest desktops/workstations around.

But fundamentally, think of the demanding workloads of 30 years ago. Back then, people replaced two-year-old Mac IIs that had just cost $10K USD with Quadras with expensive accelerator cards to run Photoshop filters. And that made sense when those filters took several hours to run each time. I suspect you could run the same filters on, say, a 2006 Intel iMac in a few seconds. No one sells "Photoshop accelerator" cards anymore - the idea that you would buy a $2000-in-1993-money video card (or whatever the price of those Thunder II/IV/etc NuBus cards was - I'm having trouble looking it up) for Photoshop is laughable. I don't think people doing print media have been buying top-of-the-line Macs with expensive accelerator cards for at least a decade or a decade and a half.

I would note that I've been seeing this on the Windows front too. A few months ago I was looking at Dell's web site for multiple-CPU workstations and... couldn't find any. Seems like you can now fit so many cores on one socket that no one builds a 2-socket workstation anymore, which is unfortunate for the shrinking number of people whose needs could still benefit from two sockets.
 

Longplays

Suspended
May 30, 2023
1,308
1,158
Workstation chips and GPUs are actually more expensive. You can get a cheaper PC with gaming hardware and similar absolute performance, but gaming equipment doesn't have GPUs with high memory capacity. If you are looking for raw CPU/GPU performance and only work with simple workloads, a gaming PC is a good candidate.
Workstation parts are more expensive due to smaller economies of scale.

Gaming parts have a relatively larger market than workstation parts, but they have their limits, as consumers cannot spend as much as the companies that buy workstations.
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
Workstation parts are more expensive due to smaller economies of scale.

Gaming parts have a relatively larger market than workstation parts, but they have their limits, as consumers cannot spend as much as the companies that buy workstations.
And companies that buy workstations are not willing to spend infinite money if there are cheaper alternatives that accomplish the same objectives well enough. Ask Silicon Graphics, Sun, DEC, etc. how many companies are spending five figures on their workstations - oh wait, none of those companies are around anymore. And those that are still ostensibly around, e.g. IBM or HP, got rid of their traditional *NIX/RISC workstations long ago.

The economies of scale of x86/x64 and the cost savings of Windows NT and Linux just... ate the market... for workstations bought by willing/able-to-spend-for-performance companies.
 

Longplays

Suspended
May 30, 2023
1,308
1,158
The economies of scale of x86/x64 and the cost savings of Windows NT and Linux just... ate the market... for workstations bought by willing/able-to-spend-for-performance companies.
This has happened several times: Mainframes -> Minicomputers -> Microcomputers -> SoCs... and it's usually correlated with jumps in levels of integration.

It's counterintuitive unless you understand how levels of integration work.

There is an emotional component to our thinking. ;-)

When Apple released the M1 (5nm) in November 2020, they were a few process nodes (10nm & 7nm) ahead of Intel, which had been on 14nm from 2014 to 2020. The shrinking in area is quadratic.
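A quick back-of-the-envelope sketch of why a node jump compounds: a linear shrink reduces area quadratically. (Node names like "14nm" and "5nm" are marketing labels rather than literal feature sizes, so treating them as lengths here is purely an illustrative assumption.)

```python
# If linear feature size shrinks by a factor s, the area of the same
# circuit shrinks by roughly s**2, i.e. density grows quadratically.
# Treating node names as literal lengths is an assumption for illustration.
def area_ratio(old_nm: float, new_nm: float) -> float:
    s = new_nm / old_nm
    return s ** 2

ratio = area_ratio(14, 5)   # "14nm" -> "5nm"
print(round(ratio, 3))      # 0.128: the same circuit in ~1/8 the area
print(round(1 / ratio, 1))  # 7.8x the transistors in the same area
```

The point isn't the exact number, just that skipping nodes multiplies density, which is where the "a few nodes ahead" gap bites.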

What we are witnessing is the same thing that happened when the microprocessor took over the mainframe/supercomputers.

The perception was that the system that took a whole room and had lots of blinking lights had to be more powerful. However, what was happening was that the microprocessor guys were integrating the same functionality that took lots of separate boards in a mainframe down to a few chips.

There were some very specific use cases where the mainframe had the edge, but for the other 99% of applications, we ended up with systems on our desktops that were faster than a computer that took a whole room. Heck, you can now buy a GPU for ~$1k that is more powerful than the fastest supercomputer from 2000, which cost millions of dollars, took an entire floor of a datacenter, and used almost 1 megawatt.

The microprocessor vendors also had access to larger economies of scale, which meant they could spend more money on developing their designs/tech, so they were able to overtake the old large-system vendors, who had slower development cycles and smaller revenues.

The same thing is now happening with SoCs. They have higher levels of integration, so they can fit a whole PC into a single chip, which means things run faster, with less power, at less cost. And they are leveraging the mobile/embedded markets, which are larger and growing faster than the traditional PC/datacenter segments.

The SoC vendors are the ones with access to the larger economies of scale. So they are developing things faster.

How large? Consider the SoC vendors who make up 100% of all smartphones shipped:

Android (all price points)

- 2021: 1.124 billion units
- 2022: 0.979 billion units

Vs

iPhone ($429-1599)
- 2021: 235.8 million units
- 2022: 226.4 million units

As compared to all x86 vs Apple Silicon Personal Computers shipped

Windows (all price points)

- 2021: 322.2 million units
- 2022: 263.7 million units

Vs

Mac ($999 & up for laptops + $599 & up for desktops)

- 2021: 27.9 million units
- 2022: 28.6 million units

I'll add the SoC vendors who make up 100% of all tablets shipped:

Android/Windows (all price points)

- 2021: 110.5 million units
- 2022: 101 million units

vs

iPad ($449-2399)

- 2021: 57.8 million units
- 2022: 61.8 million units

For emphasis: Apple's total units shipped worldwide are roughly equal to AMD/Intel's total units shipped worldwide.

Below are the total units shipped of Macs, iPads & iPhones

- 2021: 321.5 million units
- 2022: 316.8 million units

vs

Windows x86 (all price points)

- 2021: 322.2 million units
- 2022: 263.7 million units

Apple devices outshipped all Intel/AMD PCs combined. Apple only caters to the top ~20% of any market it enters. Apple leveraged iPhone & iPad SoC R&D to create >90% of Apple Silicon.

The <10% of R&D covering whatever Mac-specific requirements remain is paid for by Mac revenue.

[chart: aapl-1q23-line.jpg]


Which is why you end up with a mobile chip trading blows with a whole PC.


So you will see mobile SoCs getting more and more powerful at a faster rate than desktop microprocessors. And once they pass the inflection point, the desktop processor starts to actually lag in performance and can't catch up.

[chart: perf-trajectory_575px.png]
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Workstation parts are more expensive due to smaller economies of scale.

Gaming parts have a relatively larger market than workstation parts, but they have their limits, as consumers cannot spend as much as the companies that buy workstations.

Workstation parts also often use more expensive technology. Look at Intel, for example. Their Xeon chips don't use E-cores, have more RAM controllers and a faster internal fabric, much more cache, and offer a dedicated matrix coprocessor. On the GPU front, workstation models offer more RAM, while gaming parts use highly overclocked RAM to get that extra performance on a budget. On that note, AMD was recently caught enabling unsafe optimizations in their CPUs if the system detected that popular games were running. It could crash the system, but games crash anyway, so nobody would complain 🤪
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
This has happened several times Mainframes -> Minicomputers -> Microcomputers -> SoCs... and it's usually correlated with jumps in levels of integration.

It's counterintuitive unless you understand how levels of integration work.
I agree entirely with your conclusion, though I am not sure I get there the same way.

It's funny, I remember thinking it was really weird that AMD was spinning off its fabs. In the PC era, at least, processor designing and fab ownership were always tied together. But in the smartphone era, that no longer appears to be the case, perhaps because the players in the smartphone sphere (e.g. Qualcomm) started off small and using third-party foundries and it has made no sense to dump the third party foundries and build your own fabs as volumes have gone way up.

Fundamentally, I would probably say that volume and economies of scale drive everything. Whatever product has the greatest volume will likely end up with the best (smallest) transistors, the biggest R&D budgets, etc., and over time that either entirely eats the older, lower-volume segments (e.g. *NIX-on-VAX being replaced by *NIX-on-x86), or the lower-volume segments adapt to share as much of the higher-volume segments' stuff as possible (e.g. IBM mainframes embracing PCI Express, Ethernet, or even CMOS chips).

And this is how we see today's companies leveraging the smartphone ecosystem, even if they're not using SoCs or ARM, e.g. IBM building mainframes out of 7nm zArchitecture chips made by Samsung or AMD creating the best product line they've ever had by being one generation behind at TSMC. And Apple has, obviously, gone a major step further by using actual 'smartphone' technologies in the current Macs.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
And companies that buy workstations are not willing to spend infinite money if there are cheaper alternatives that accomplish the same objectives good enough. Ask Silicon Graphics, Sun, DEC, etc how many companies are spending five figures on their workstations - oh wait, none of those companies are around anymore. And those who are still ostensibly around, e.g. IBM or HP, got rid of their traditional *NIX/RISC workstations long ago.

The economies of scale of x86/x64 and the cost savings of Windows NT and Linux just... ate the market... for workstations bought by willing/able-to-spend-for-performance companies.

But now we observe a somewhat reversed trend, as new problems require dedicated hardware. Look at the Nvidia Grace/Hopper.
 

Longplays

Suspended
May 30, 2023
1,308
1,158
I agree entirely with your conclusion, though I am not sure I get there the same way.

It's funny, I remember thinking it was really weird that AMD was spinning off its fabs. In the PC era, at least, processor designing and fab ownership were always tied together. But in the smartphone era, that no longer appears to be the case, perhaps because the players in the smartphone sphere (e.g. Qualcomm) started off small and using third-party foundries and it has made no sense to dump the third party foundries and build your own fabs as volumes have gone way up.

Fundamentally, I would probably say that volume and economies of scale drives everything. Whatever product has the greatest volume will likely end up with the best (smallest) transistors, the biggest R&D budgets, etc... and over time, that either entirely eats the older, lower volume segments (e.g. *NIX-on-VAX being replaced by *NIX-on-x86), or the lower volume segments adapt to share as much of the higher volume segments' stuff as possible (e.g. IBM mainframes embracing PCI Express, Ethernet, or even CMOS chips).

And this is how we see today's companies leveraging the smartphone ecosystem, even if they're not using SoCs or ARM, e.g. IBM building mainframes out of 7nm zArchitecture chips made by Samsung or AMD creating the best product line they've ever had by being one generation behind at TSMC. And Apple has, obviously, gone a major step further by using actual 'smartphone' technologies in the current Macs.
Hence the exasperation with so many Intel fans mocking the Ultra chips as oversized smartphone chips.

That tech is eating x86's laptop lunch in the $999-and-up space. Prices can drop from $999 to $704 if you know how to shop around. Laptops are 80% of all PCs shipped, while the rest are desktops. Desktop workstations are, what, less than 1%? More like 0.1%?

When the performance numbers come out, they start pointing out that a higher-end, more expensive Xeon or Threadripper part does better, but the price of a comparable system is nowhere near that of an Ultra-based one.

Then they bring up a use case that does not fit Apple's target market and whose userbase is dwindling.

AMD spinning off its fabs was meant to increase asset utilization by allowing competitors to use the same capacity when AMD isn't scheduled to use it. It also allows fabless semiconductor companies like AMD, and later Apple, to shop around for the best tech with matching capacity output.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
The <10% of R&D covering whatever Mac-specific requirements remain is paid for by Mac revenue.

aapl-1q23-pie.jpg


Which is why you end up with a mobile chip trading blows with a whole PC.


So you will see mobile SoCs getting more and more powerful at a faster rate than desktop microprocessors. And once they pass the inflection point, the desktop processor starts to actually lag in performance and can't catch up.

Can you please stop using misleading statistics? You've done that repeatedly this week. The first was when you tried to dismiss my large-business example by saying it was an "outlier" because such businesses represent only 1% of businesses in this country, while conveniently ignoring that they employ 38% of workers. And now you're representing relative revenue using quarterly results, even though it's known that relative revenue is highly seasonal. For instance, if we instead looked at Q4 2022, we'd get 13% for the Mac division (https://tidbits.com/2022/10/27/apple-weathers-stormy-seas-in-q4-2022/), which is twice the 6.6% figure from Q1 2023:

[attached chart]



At the very least, you should average over one year, which smooths out the seasonal variation. Here are the results for 2022 as a whole, which indicate that the Mac division generated 9.8% of Apple's revenue. That is half again as large as Q1 2023's 6.6% value (https://fourweekmba.com/apple-revenue-breakdown/).

Last time I checked, the Mac division by itself would rank about 100th on the Fortune 500 list.

[attached chart]

The <10% of R&D covering whatever Mac-specific requirements remain is paid for by Mac revenue.

Which is why you end up with a mobile chip trading blows with a whole PC.

So you will see mobile SoCs getting more and more powerful at a faster rate than desktop microprocessors. And once they pass the inflection point, the desktop processor starts to actually lag in performance and can't catch up.
By that argument, regular passenger cars should have long ago caught up with sports cars in performance, because passenger cars are a bigger segment of the market. The problem with your argument is that both desktop and mobile chips use the same technology, and thus desktops can benefit from all the mobile advances while having the advantage of being much less power-constrained. The question is whether Apple will take advantage of this by increasing the clock speeds of their desktops in the future. The AS chips are currently efficient enough that they could.
 
Last edited:

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I would note that I've been seeing this on the Windows front too. A few months ago I was looking at Dell's web site for multiple-CPU workstations and... couldn't find any. Seems like you can now fit so many cores on one socket that no one builds a 2-socket workstation anymore, which is unfortunate for the shrinking number of people whose needs could still benefit from two sockets.
That's the way I see it in the Windows (non-gaming) world as well. Those who once needed the fastest machines we could buy can now get by easily on consumer-grade hardware. 16GB is pretty much the minimum RAM buy, and where an i7 or i9 was absolutely needed, an i5 is now fast enough. SSDs made a huge difference over HDDs. No need for a discrete video card either. That's why we see less upgradability: the original machine is fast enough for its entire life. You can still easily get something that's spec'd up, but it's a decreasing market.
 

Longplays

Suspended
May 30, 2023
1,308
1,158
Dude, can you please stop using misleading statistics? This is the 2nd time you've done that this week. Quarterly results aren't representative, because they tend to vary widely. For instance, if we instead looked at Q4 2022, we'd get 13% for the Mac division (https://tidbits.com/2022/10/27/apple-weathers-stormy-seas-in-q4-2022/), which is twice as large as the 6.6% figure from Q1 2023:



At the very least, you should average over one year, which averages out the seasonal variation. Here are the results for 2022 as a whole, which indicate that the Mac division generated 9.8% of Apple's revenue. This is half again as large as Q1 2023's 6.6% value (https://fourweekmba.com/apple-revenue-breakdown/)).
The chart is supposed to show where the money is coming from that funds R&D and economies of scale.

The premise remains the same... smartphone money funds SoC R&D that is applied to all of Apple's goods that use the tech. iPhone chips' economies of scale are given equal preference to Mac chips at TSMC N5 and N3 fabs. To TSMC it's just another chip; they couldn't care less what end use it is for.

This is being replicated by Android SoC brands entering the ARM laptop market. Take Qualcomm's NUVIA, which is made up of ex-Apple Silicon engineers. This will materially hurt x86 if Windows 11 on ARM becomes successful.

AMD/Intel's economies of scale for laptops, desktops and workstations will suffer when consumers who don't care about upgradeability see ARM laptops as the better value.

The value could be in being cheaper, lighter, faster, cooler, and longer on battery life, like any M1 or M2 Mac laptop.

In about a decade x86 will become a niche, like mainframes. Product refresh cycles will likely run longer than what Intel/AMD/Nvidia are doing now.

Last time I checked, the Mac division by itself would rank about 100th on the Fortune 500 list.


By that argument, regular passenger cars should have long ago caught up with sports cars in performance, because the passengers cars are a bigger segment of the market. The problem with your argument is that both desktop and mobile chips use the same technology. And thus desktops can benefit from all the mobile advances, while having the advantage of being much less power-constrained.
Where would the iPhone rank?

Passenger vehicles and sports cars do not materially benefit from Moore's Law. IIRC there was a Bill Gates joke comparing PCs to cars in terms of performance and price.
The question is whether Apple will take advantage of this by increasing the clock speeds of their desktops in the future. The AS chips are currently efficient enough that they could.

Apple uses their larger/more complex cores to their advantage by running them at a slower clock rate while doing more work per clock cycle. This allows them to operate at the frequency/power sweet spot for their process. Note that power consumption increases significantly (much faster than linearly) with frequency.
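That superlinear claim can be sketched with the standard dynamic-power relation P ≈ C·V²·f; since voltage usually has to rise along with frequency near the top of the curve, power grows much faster than the clock does. The capacitance, voltage, and frequency values below are made-up illustrations, not measured figures for any real chip:

```python
# Dynamic switching power: P ≈ C * V^2 * f.
# Pushing frequency up usually also requires raising voltage,
# which is why power climbs much faster than linearly.
def dynamic_power(cap_farads: float, volts: float, freq_hz: float) -> float:
    return cap_farads * volts**2 * freq_hz

base = dynamic_power(1e-9, 0.9, 3.2e9)   # assumed "sweet spot" operating point
boost = dynamic_power(1e-9, 1.2, 4.0e9)  # +25% clock, but higher voltage too
print(round(boost / base, 2))  # 2.22: ~2.2x the power for 25% more frequency
```

Running a wide core slower, as described above, keeps V low and avoids exactly this blow-up.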

There is also a key difference in business models: Apple is a systems vendor, meaning they sell the finished product, not just the processors. So they can use several parts of the vertical process to subsidize others. In this case, Apple can afford to make very good SoCs because they don't sell those chips elsewhere, meaning they are not as pressured to make them "cheap" in terms of area, for example, since they're going to recoup the profit from elsewhere in the product.

In contrast, AMD and Intel sell their processors to OEMs, so they only profit from the processor, not the finished system. They have to prioritize cost by optimizing their designs for area first and then focusing on power. This is why both AMD and Intel use smaller cores, which allow for smaller dies but have to be clocked faster to compete on performance; unfortunately, that also increases power.

This is probably the key difference: Apple can afford the larger design that is more power-efficient for the same performance, whereas AMD/Intel have to aim for the smaller design that is less power-efficient for the same performance.
 
Last edited:

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
By that argument, regular passenger cars should have long ago caught up with sports cars in performance, because the passengers cars are a bigger segment of the market.
Arguably... they kind of did, at least on some metrics.

e.g. look at 0-60 times. A 1984 Corvette would do 0-60 in 7.9 seconds. A 1984 Ferrari Testarossa would do it in 5.2 seconds.

A 2023 Hyundai Sonata with the base 1.6L turbo engine will do 0-60 in 7.3 seconds. There are lots of non-sporty cars with 0-60 times in the mid-5 seconds today.

Meanwhile, today's sports cars are in the low-to-mid 3-second range.

Or look at another example: a 2003 Mercedes-Benz C32 AMG gets a 0-60 of 5.2 seconds. That was the super-sporty variant of the MB C-Class in 2003; they offered three slower engines in the North American market that year (although the 1.8 might have only launched in the sedan in 2004...). A 2023 Mercedes-Benz C300, with all-wheel drive that the C32 didn't have, does 0-60 in 5.3 seconds. That's the slowest, base engine they offer in the North American market. So 2023's base model matches the straight-line performance of 2003's super-sporty variant of the same car.

Now, there are lots of other factors that make a good sports car other than straight line performance, and I think in most of those, a purpose-built sports car from 30-40 years ago would do better than a boring family car, but...
 
  • Wow
Reactions: Gudi

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
That's the way I see it in the Windows (non gaming) world as well. Where once some needed the fastest machines we could buy can get by easily on consumer grade hardware. 16G is pretty much the minimum RAM buy and where an i7 or i9 was absolutely needed, an i5 is fast enough. SSD's made a huge difference over HD's. No need for a discrete video card either. That's why we see less upgradability, because the original machine is fast enough for its entire life. You can still easily get something that's spec'd up still but it's a decreasing market.
I agree with everything except the 'i7 or i9 was absolutely needed'. The whole i3/i5/i7 nomenclature was very cleverly designed to upsell people, when in reality the difference in performance was often... not that significant, especially if your workloads can't benefit from hyperthreading.

e.g. 4590 vs 4770: the 4770 adds 100MHz of base clock and hyperthreading.

It's more complicated on mobile with some generations (e.g. with Sandy Bridge there are both dual- and quad-core mobile i7s but only dual-core i5s), but on desktop, I would argue that the i5 has been good enough for pretty much the entire time they've had the i3/i5/i7 (and later i9) nomenclature.

But while Intel might not get your ordinary gamer to pay extra for 100MHz, the fact that it's an i5 makes them feel inadequate and hand over the extra $100 for the i7.
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
The premise remains the same... smartphone money funds SoC R&D that is applied to all of Apple's goods that uses the tech. iPhone chips' economies of scale is given equal preference to Mac chips at TSMC N5 and N3 fabs. To TSMC it's just another chip that they could care less what end use it would be for.
And smartphone money also funds the TSMC R&D that benefits AMD's Zen chips, the Samsung R&D that allows IBM to have their 7nm zArchitecture (and probably POWER?) chips, etc.

Many of the players whose chips have way, way, way less volume than PCs or smartphones have restructured their approach to semiconductors to leverage smartphone money. And then you have poor Intel, which dominated the world when 'PC money' enabled x86/x64 to dominate everything two decades ago, but which is now working with dramatically smaller volumes than the smartphone economy.
 
  • Love
Reactions: Longplays

Longplays

Suspended
May 30, 2023
1,308
1,158
And smartphone money also funds the TSMC R&D that benefits AMD's Zen chips, the Samsung R&D that allows IBM to have their 7nm zArchitecture (and probably POWER?) chips, etc.

Many of the players whose chips have way, way, way less volume than PCs or smartphones have restructured their approach to semiconductors to leverage smartphone money. And then you have poor Intel, which dominated the world when 'PC money' enabled x86/x64 to dominate everything 2 decades ago but which is now working with dramatically slower volumes than the smartphone economy.

Before 2030, AMD/Intel/Nvidia refresh cycles will lengthen to compensate for worsening economies of scale. That will be your sign that they are hurting from the ARM laptops coming in Q4 2023.

Android SoC vendors are not focusing on the desktop space yet because less than 20% of all PCs shipped worldwide annually are desktops.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
I agree with everything except the 'i7 or i9 was absolutely needed'. The whole i3/i5/i7 nomenclature was very cleverly designed to upsell people, when in reality the difference in performance was often... not that significant, especially if your workloads can't benefit from hyperthreading.
I can't really agree with that; the difference was quite apparent to me, but no matter, it's not worth arguing about. As for the i3, I never bought one; Celerons soured me on the lowest-end Intel machines.
But while Intel might not get your ordinary gamer to pay extra for 100MHz, the fact that it's an i5 makes them feel inadequate and hand over the extra $100 for the i7.
That's true, the description is worth more than the machine for some. Perhaps even for some of those who have to have an M2 Ultra over an M2 Max. :)
 

Longplays

Suspended
May 30, 2023
1,308
1,158
Top 5 Companies, Worldwide PC Workstation Shipments, Market Share, and Year-Over-Year Growth, 2022 (shipments in thousands of units)

| Company | 2022 Shipments | 2022 Market Share | 2021 Shipments | 2021 Market Share | 2022/2021 Growth |
| --- | --- | --- | --- | --- | --- |
| 1. Dell Technologies | 3,171.2 | 41.4% | 2,979.6 | 39.8% | +6.4% |
| 2. HP Inc. | 2,580.4 | 33.7% | 2,549.3 | 34.0% | +1.2% |
| 3. Lenovo | 1,860.0 | 24.3% | 1,920.9 | 25.6% | -3.2% |
| 4. ASUS | 24.5 | 0.3% | 19.7 | 0.3% | +24.3% |
| 5. NEC | 20.1 | 0.3% | 26.1 | 0.3% | -22.7% |
| Total | 7,656.2 | 100.0% | 7,495.6 | 100.0% | +2.1% |

Source: https://www.idc.com/getdoc.jsp?containerId=prUS50454823
 
  • Like
Reactions: falainber

Gudi

Suspended
May 3, 2013
4,590
3,267
Berlin, Berlin
The question is whether Apple will take advantage of this by increasing the clock speeds of their desktops in the future. The AS chips are currently efficient enough that they could.
There are two different clock speeds for P and E cores. And when you actually utilize all the P cores, they will get hot and spin up the fans. I don't think the M1 Ultra would have a copper heatsink, if it wasn't needed. The next process node shrink to 3nm may allow for a further clock speed increase.
 
  • Like
Reactions: Longplays

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
Arguably... they kind of did, at least on some metrics.

e.g. look at 0-60 times. A 1984 Corvette would do 0-60 in 7.9 seconds. A 1984 Ferrari Testarossa would be 5.2 seconds.

A 2023 Hyundai Sonata with the base 1.6L turbo engine will do 0-60 in 7.3 seconds. There are lots of non-sporty cars with 0-60 times in the mid-5 seconds today.

Meanwhile, today's sport cars would be in the low-mid 3 seconds.

Or look at another example - a 2003 Mercedes-Benz C32 AMG gets a 0-60 of 5.2 seconds. That was the supersporty variant of the MB C class in 2003; they offered three slower engines in the North American market (although the 1.8 might have only launched in the sedan in 2004...) that year. A 2023 Mercedes-Benz C300, with all-wheel drive that the C32 didn't have, does 0-60 in 5.3 seconds. That's the slowest, base engine they offer in the North American market. So 2023's base model matches the straight line performance of 2003's super-sporty variant of the same car.

Now, there are lots of other factors that make a good sports car other than straight line performance, and I think in most of those, a purpose-built sports car from 30-40 years ago would do better than a boring family car, but...
That data supports my view—that regardless of whether you look at decades ago or today, sports cars remain higher-performance than passenger cars. The one exception would be the 0-60 time for the Tesla Model S Plaid, but that's only because it uses entirely different technology, which wouldn't be applicable to comparing desktop and mobile processors.

What your data shows is that passenger cars of today are comparably fast to sports cars of previous years, but that's not relevant to my analogy. I was comparing contemporary desktop and mobile devices. Thus for my analogy, you'd need to compare contemporary family vs. sports cars.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
The chart is supposed to show where the money is coming from that funds R&D and economies of scale.
And the chart is misleading because it uses a quarter where Mac sales are unusually low relative to the rest of the company. Nowhere in your answer do I see an acknowledgement of that. When I get something wrong here on the site, I acknowledge it and thank the poster for the correction. Why don't you?
Passenger vehicles and sports cars do not materially benefit from Moore's Law. IIRC there was a Bill Gate joke about comparing PCs to cars in terms of performance and price.
When Gates was explaining the inapplicability of Moore's Law to cars, he was talking about year-over-year advances. That has nothing to do with my analogy, which is about the relative performance of the same tech in the same year used in different applications.

Apple uses their larger/more complex cores to their advantage, by running them at a slower clock rate. While allowing them to do more work per clock cycle. This allows them to operate on the frequency/power sweet spot for their process. One has to note that power consumption increases significantly (way higher than linear) the higher the frequency.
There are two different clock speeds for P and E cores. And when you actually utilize all the P cores, they will get hot and spin up the fans. I don't think the M1 Ultra would have a copper heatsink, if it wasn't needed. The next process node shrink to 3nm may allow for a further clock speed increase.
Apple's chips are *so* efficient that, even with a quadratic or cubic increase in power with clock speed, they could implement a "turbo" boost on at least a few cores, giving significantly higher single-threaded performance in their desktop chips while still keeping TDP relatively low. And that would be a big deal, since most apps today are still single-threaded. Hopefully they will do this with the M3.
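The "way higher than linear" claim above can be made concrete with the standard dynamic-power model. This is a hedged back-of-the-envelope sketch, not Apple's actual numbers: it assumes dynamic power P ≈ C·V²·f and that supply voltage must rise roughly linearly with frequency, which makes power grow roughly with the cube of clock speed.

```python
# Illustrative sketch of dynamic CPU power scaling (assumed model, not
# measured Apple data): P = C * V^2 * f, with voltage assumed to scale
# linearly with frequency, so power grows ~cubically with clock speed.

def relative_power(freq_ratio):
    """Relative dynamic power when clock (and voltage) scale by freq_ratio."""
    voltage_ratio = freq_ratio  # assumption: linear voltage/frequency scaling
    return voltage_ratio ** 2 * freq_ratio

# Under this model, a 20% clock boost costs ~73% more power:
print(f"1.2x clock -> {relative_power(1.2):.2f}x power")  # -> 1.73x power
```

This is why a "turbo" boost on only a few cores is plausible: tripling the power of two cores costs far less total wattage than tripling the power of all sixteen.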
 

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
Here is an awkward, but honest question: how many industries/workloads/etc still need the fastest desktop/workstations in an absolute sense?

I think part of the issue is that for real work, the amount of processing power required doesn't really change if the work doesn't change. If you're trying to crunch the same data/video/etc as you were 10 years ago, 10 years ago that might have required a serious Mac Pro, and today an M1 MacBook Air will do it just fine. (Now, compare this with web browsing or web technologies, where you require 4x the RAM and CPU to do the same thing as 10 years ago)

So, as time goes on, the amount of workloads that actually require/benefit from the fastest desktop/workstations in an absolute sense should go down. Some new or more challenging workloads (e.g. HD/4K/8K video, or something that was just impossible with the older hardware) may come along, sure, which is what has kept fastest desktop/workstations around.

But fundamentally, think of the demanding workloads of 30 years ago. 30 years ago people replaced two year old Mac IIs that had just cost $10K USD with Quadras with expensive accelerator cards to run Photoshop filters. And that made sense when those filters took several hours to run each time. I suspect you can run the same filters on, say, a 2006 Intel iMac in a few seconds. No one sells "Photoshop accelerator" cards anymore - the idea that you would buy a $2000-in-1993-money video card (or whatever the price of those Thunder II/IV/etc NuBus cards was - I'm having trouble looking it up.) for Photoshop is laughable. I don't think people doing print media have been buying top-of-the-line Macs with expensive accelerator cards for at least a decade or a decade and a half.

I would note that I've been seeing this on the Windows front too. A few months ago I was looking at Dell's web site for multiple-CPU workstations and... couldn't find any. Seems like you can now fit so many cores on one socket that no one builds a 2-socket workstation anymore, which is unfortunate for the shrinking number of people whose needs could still benefit from two sockets.
Your premise is fundamentally wrong. Ten years ago, and thirty years ago, people used computers to solve the problems that were realistic with the compute power of the day, not the problems they actually wanted to solve. What you are saying is that Mac Pros are good enough for the compute tasks of previous decades, and that anyone who wants to do better needs to use modern computers (i.e. PCs).
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON

That data supports my view—that regardless of whether you look at decades ago or today, sports cars remain higher-performance than passenger cars. The one exception would be the 0-60 time for the Tesla Model S Plaid, but that's only because it uses entirely different technology, which wouldn't be applicable to comparing desktop and mobile processors.

What your data shows is that passenger cars of today are comparably fast to sports cars of previous years, but that's not relevant to my analogy. I was comparing contemporary desktop and mobile devices. Thus for my analogy, you'd need to compare contemporary family vs. sports cars.
The other thing that would be interesting is to measure the gap between family cars vs sports cars.

When that 1984 Corvette did 0-60 in 7.9 seconds, what was a mainstream family car getting?

And what about in 2023?

I haven't done the research (and don't really want to), but I wonder whether the gap has remained consistent or if it has shrunk.
 

falainber

macrumors 68040
Mar 16, 2016
3,539
4,136
Wild West
Before 2030, the AMD/Intel/Nvidia refresh cycle will lengthen to cover the worsening economies of scale. That's your sign that they're hurting from the ARM laptops coming in Q4 2023.

Android SoCs are not focusing on the desktop space yet because less than 20% of all PCs shipped worldwide annually are desktops.
You are trying to prove one assumption with another assumption. That way, one can prove anything.
 