
mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
Since they are working on Edge, you would think they have a plan to selectively de-bloat Electron. I get the desire to use the platform, but if your app isn't using the 10,000 different technologies in Chromium, figure out a way to remove them and keep only the stuff you need. JavaScript, HTML rendering, and CSS are probably relatively small parts of the overall package. I'd be happy to be corrected on any of this if anyone has real information on where the Electron bloat comes from.
Most of the time, the people writing and shipping Electron apps aren't going to be able to slim down Chromium - they don't know how. They're web devs who know HTML, CSS, and Node, and would just be overwhelmed by a complex C++ codebase like Chromium.

The Electron project doesn't provide tools to guide web devs through it, because their focus is on "use HTML+CSS+Node any way you want".

The Chromium project probably doesn't make it easy to slim down Chromium, because their focus is on delivering a feature-complete browser, not a toolbox for making your own browser with exactly the feature list you want.

So while it might seem like a simple thing, in practice this is not something any of the parties involved are set up to do.

More importantly, though, why would they have a plan to de-bloat Electron? I don't think anyone other than grumpy old folks like me sees its bloat as a problem. And it gets even more ridiculous on Apple Silicon - I've noticed all the Intel apps left on my Mac are Electron, which is highly ironic since Electron apps should be the easiest to port to Apple Silicon.
You would think so, but when I was following progress on an Electron app that had lots of users anxious to get an AS native build, I found out that it's often not that simple. The app had been built and maintained on an older version of Electron. It took the dev quite a long time to port their own webapp codebase to work right on more modern Electron versions, and until they did, they couldn't ship an AS native app.
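(Side note: if I remember right, Electron 11 was the first release with darwin-arm64 builds, which is why apps stuck on older Electron couldn't ship native. For anyone who wants to see what a given Electron app is actually doing on their machine, here's a rough sketch - illustrative only, not from the app in question - of a main-process check using only Node built-ins and macOS's sysctl.proc_translated flag:)

```typescript
// Sketch: report whether this Electron/Node process is running arm64-native or under
// Rosetta 2. sysctl.proc_translated is 1 when translated, 0 when native, and missing
// on Intel Macs (the -i flag suppresses the error in that case).
import { execFileSync } from "node:child_process";

function runningUnderRosetta(): boolean {
  if (process.platform !== "darwin") return false;
  try {
    const out = execFileSync("sysctl", ["-in", "sysctl.proc_translated"], { encoding: "utf8" });
    return out.trim() === "1";
  } catch {
    return false;
  }
}

console.log(`arch reported to JS: ${process.arch}`);            // "arm64" or "x64"
console.log(`translated by Rosetta: ${runningUnderRosetta()}`);
```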
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
Most of the time, the people writing and shipping Electron apps aren't going to be able to slim down Chromium - they don't know how. They're web devs who know HTML, CSS, and Node, and would just be overwhelmed by a complex C++ codebase like Chromium.
Totally agree which is why I suggested that Microsoft may want to do it if they plan on using Electron for more and more projects. Clearly Microsoft has the developers that can work with a complicated code base like Chromium and Blink.
 

ImpostorOak

macrumors member
Dec 27, 2009
86
43
It's funny to think that Intel had >3 GHz Pentium 4s like 20 years ago. A lot of things have obviously improved with CPUs since then, but clock speeds haven't gone up all that much.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
It's funny to think that Intel had >3 GHz Pentium 4s like 20 years ago. A lot of things have obviously improved with CPUs since then, but clock speeds haven't gone up all that much.
That's because that's when Intel and other CPU makers abandoned the "clock speed wars". More specifically, it was around the cancellation of the planned 4 GHz* Pentium 4 (c. 2004) that Intel realized thermal limits prevented them from continuing to gain speed from higher clocks, and instead turned their focus to more cores rather than faster-clocked cores (though single-core performance did continue to improve due to higher instructions-per-clock and other architectural improvements).

The problem, of course, is that most programs continue to be single-threaded, and thus don't benefit from more cores (so long as you're not saturating the cores with other running apps). At the same time, because of significant increases in application overhead (e.g., in Excel, Word, and Acrobat Pro), a new version run on a new, fast processor can, ironically, show more lag than an older version of that app run on an older, slower processor.

*As you know, Intel and AMD have started to push that barrier again (e.g., the 6.0 GHz max turbo on the binned i9-13900KS), at the cost of high TDPs (particularly in Intel's case). But I don't know how much more they can do there.
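To put the single-threaded point in numbers, here's a quick Amdahl's-law sketch (the parallel fractions are made up purely for illustration):

```typescript
// Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n), where p is the fraction of the work
// that can run in parallel and n is the number of cores.
function amdahlSpeedup(p: number, n: number): number {
  return 1 / ((1 - p) + p / n);
}

for (const p of [0.0, 0.5, 0.95]) {
  console.log(
    `parallel fraction ${p}: 8 cores -> ${amdahlSpeedup(p, 8).toFixed(2)}x, ` +
      `16 cores -> ${amdahlSpeedup(p, 16).toFixed(2)}x`
  );
}
// p = 0.00: 1.00x either way - extra cores do nothing for a single-threaded app.
// p = 0.95: ~5.9x on 8 cores, ~9.1x on 16 - even mostly-parallel work falls well short
//           of scaling linearly with core count.
```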
 

monstermash

macrumors 6502a
Apr 21, 2020
974
1,059
They always say the new models have such and such number of cores. What about the CPU speed?
I know Apple Tech Support has no information on SSD speed. They probably also do not know about the CPU speed.
Is there a reliable source to obtain such information on M2 Max and M2 Pro on the MacBook Pro and Mini?
Probably because most people don't care.
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
That's because that's when Intel and other CPU makers abandoned the "clock speed wars". More specifically, it was around the cancellation of the planned 4 GHz* Pentium 4 (c. 2004) that Intel realized thermal limits prevented them from continuing to gain speed from higher clocks, and instead turned their focus to more cores rather than faster-clocked cores (though single-core performance did continue to improve due to higher instructions-per-clock and other architectural improvements).

The problem, of course, is that most programs continue to be single-threaded, and thus don't benefit from more cores (so long as you're not saturating the cores with other running apps). At the same time, because of significant increases in application overhead (e.g., in Excel, Word, and Acrobat Pro), a new version run on a new, fast processor can, ironically, show more lag than an older version of that app run on an older, slower processor.

*As you know, Intel and AMD have started to push that barrier again (e.g., the 6.0 GHz max turbo on the binned i9-13900KS), at the cost of high TDPs (particularly in Intel's case). But I don't know how much more they can do there.
Don't forget that Intel offered a Pentium D with two "HotBurst" cores.

To fill in the history lesson a bit more, in the late 1990s, AMD started to have a seriously competitive Athlon line. Back then, processors were not typically advertised with model numbers but rather with clock rates. Intel wanted to respond and out-MHz/GHz them. They famously tried to make a 1.13GHz Coppermine PIII that was effectively overclocked; it ended up being discontinued/recalled because it was shown to be unstable by the enthusiast reviewers (I forget if it was Tom's Hardware, AnandTech, etc, but one or two of those guys were responsible for exposing this embarrassing blunder). So their next architecture, the NetBurst architecture in the Pentium 4, was mandated by the marketing department to deliver high clock rates that would clobber the Athlons. They did that at the cost of lower performance per clock, so the first Willamette P4s at 1.4-1.6GHz were barely faster than 1GHz PIIIs.

Now, back in those days, Intel had an advantage that they have now lost, namely the best transistors. So... maybe a year after Willamette, they moved to smaller transistors with the Northwood core, and it was... pretty good. Fairly hot, but... not too too bad. Some 'desktop replacement' laptops even came with desktop P4 chips.

But AMD was not being defeated, so the marketing department said they needed more GHz. The engineers responded with the Prescott core which, again, was supposed to deliver higher GHz by lowering performance-per-GHz. Despite smaller transistors (90nm), heat output went way up, performance was not that much better, and the highest clock rate Prescott shipped was 3.8GHz, only 400MHz more than the fastest Northwood.

Now, two things were happening at this time:
1) Increased heat output was becoming a challenge, so Intel came out with the "BTX" case standard that moved things around and made it easier to output the heat from a P4.
2) Mobile was growing, and the desktop P4s stuck in laptops were clearly not a viable option. So Intel went back to the PIII core (P6 microarchitecture), updated it, shrunk it to a smaller process, etc, and launched the 'Pentium M'. Lower clock rates, higher performance per clock. And they had a couple generations of those.

Unfortunately, AMD remained undefeated, and, instead, successfully managed to move marketing away from clock speeds. The "popular" AMD CPU of that time became the Athlon X2 3800+ (and its predecessor the Athlon 64 3xxx+), which... was not clocked at 3.8GHz.

Intel marketing, though, wanted to crush AMD with GHz again, so they had the engineering department work on another iteration of the NetBurst microarchitecture with lower performance per clock and higher clock rates. That was the Tejas core, which ran substantially hotter than Northwood and Prescott... and was cancelled. That was the end of marketing driving architectural design.

As a short-term solution, Intel launched a dual-core version of the Prescott, called a Pentium D, then moved those chips to a 65nm process, but that was the end of the line for the "HotBurst" architecture. Around the same time, AMD launched popular dual-core chips, the Athlon X2. The first dual-core version of the Pentium M shipped as the short-lived, 32-bit-only Yonah "Core Duo".

Longer term, they decided to go to the Pentium M and scale that architecture back up to desktop chips. The result was Conroe, which shipped in summer 2006 under the "Core 2 Duo" name. Probably the greatest leap forward in x86 CPUs ever. AnandTech, for example, described it as "the most impressive piece of silicon the world has ever seen - and the fastest desktop processor we've ever tested." (https://www.anandtech.com/show/2045) The most popular model, the E6600, was clocked at 2.4GHz... and utterly annihilated the Pentium 4s in every benchmark despite being clocked 1.4GHz lower. (My recollection is that that architecture was only offered as a dual-, and later, quad-core, though I might be forgetting a few weird low-end chips sold as Celerons or something.)

And, rather ironically, the 2.4GHz E6600 Conroe accomplished what 5-6 years of high-GHz NetBursts had not: it shoved AMD to the side, a spot from which they would not recover until the launch of the Zen microarchitecture over a decade later. The quad-core version, the Q6600, would become the de facto enthusiast CPU standard of the late 2000s, replacing the Athlon X2 3800+.

(The Xeon version of the Conroe core was used in the first Mac Pros, I might add... and the mobile version landed in MacBooks and iMacs. And when Steve Jobs announced the Intel switch and talked about Intel's performance-per-watt roadmap, this is the chip he had in mind, not the Pentium 4 in the developer transition kit.)

Around that time, without hot P4s to cool anymore, Intel abandoned the BTX motherboard/case standard, which had only been used by a few machines from large OEMs like Dell, and went back to ATX which continues to this day.

Also interestingly - to this day, 17 years later, I would argue that the Conroe C2Ds are still the 'base' processor for Windowsland. Microsoft is killing them with Windows 11 and its steep hardware requirements, but I would believe that a Conroe with enough RAM can run Windows 10, productivity applications, etc perfectly adequately to this day. (Disclaimer - I sold my E6600 over a decade ago. I still have some 45nm C2D machines in the closet, but have not booted them up in 5+ years. So maybe web-technologies garbage like Electron has finally slain the mighty Conroe)

When they moved to the 45nm process a year or two later, they added various power management features that massively cut idle power consumption. (This is why I got rid of my E6600 - a 45nm C2D would idle at 30-40W less) And then they invented turbo boost, i.e. the opposite - something that, if the chip was cool enough, would allow it to consume more power and overclock itself to go faster for a short period of time.
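Rough illustration of the turbo idea - this is a toy version of the PL1/PL2/Tau power-limit scheme Intel uses today, with made-up numbers; real hardware tracks a moving average rather than a simple energy bucket:

```typescript
// Toy boost governor: run at the burst power limit (PL2) while an energy budget lasts,
// then fall back to the sustained limit (PL1). All numbers are illustrative.
const PL1 = 65;   // sustained power limit, watts
const PL2 = 120;  // short-term boost limit, watts
const TAU = 28;   // roughly how many seconds of full-tilt boost the budget allows

let budget = (PL2 - PL1) * TAU; // joules of "extra" energy allowed above PL1

function allowedPower(demandWatts: number, dtSeconds: number): number {
  const power = Math.min(demandWatts, budget > 0 ? PL2 : PL1);
  // Spending above PL1 drains the budget; running below PL1 refills it (up to the cap).
  budget = Math.min((PL2 - PL1) * TAU, budget - (power - PL1) * dtSeconds);
  return power;
}

// A sustained heavy load gets roughly TAU seconds at 120 W, then settles at 65 W.
for (let t = 0; t < 60; t += 10) {
  console.log(`t=${t}s -> ${allowedPower(200, 10)} W`);
}
```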

Anyways, during all of this time, Intel always had the best transistors. The best transistors and a meh architecture created passable products like the Pentium 4s. The best transistors and a great architecture created legends like Conroe, Sandy Bridge, etc. that pushed both performance and performance-per-watt substantially forward and left AMD without a response for years. (Many Sandy Bridges are still in active use over a decade later...) And the best transistors kept the x86 juggernaut going/growing in the face of otherwise-technically-superior ISAs from others (e.g. PowerPC).

But then two things happened:
1) Intel had major, major league trouble getting their 10nm process running. Instead of a new process every two years (which powered the so-called tick-tock model) as we had seen since, well, forever, they ended up stuck at 14nm for a long time. 14nm desktop chips came out in 2015 (Skylake); 10nm ("Intel 7") desktop chips came out in late 2021 (Alder Lake), so six years instead of two.
2) The PC industry stopped being the driving force of innovation in semiconductors, replaced by smartphones. And so, with the money being in smartphones, the foundries that produce smartphone chips (Samsung, TSMC) became the leaders.
AMD, wisely, saw #2 coming, got rid of their fabs, and rearranged their business to have TSMC make their chips.

The bottom line is that the best transistors are not from Intel anymore - they are from TSMC. And that is how you end up with today's world:
1) Intel's stagnation at 14nm (and increase in clock rates/core counts/power consumption to mask that stagnation, see #3) was undoubtedly part of the factors that drove Apple to scale up their TSMC-made ARM chips for Mac use - my guess is that if Intel had continued innovating at the rate of the 2006-2014 era, Macs would still be Intel,
2) TSMC transistors have powered AMD's resurgence, as AMD is now offering the x86 chips with the best transistors (which led AMD to be the first to offer Windows enthusiasts a worthwhile upgrade to their ~2011-2015-era chips), and
3) Intel's only short-term response has been to jack up the clock rates, particularly relying on turbo boost. And as the clock rates and boost rates have gone up, power consumption has gone back up, with modern CPUs, especially the enthusiast K versions, hitting TDPs substantially higher than the peaks of the NetBurst era.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
The bottom line is that the best transistors are not from Intel anymore - they are from TSMC. And that is how you end up with today's world:
1) Intel's stagnation at 14nm (and increase in clock rates/core counts/power consumption to mask that stagnation, see #3) was undoubtedly part of the factors that drove Apple to scale up their TSMC-made ARM chips for Mac use - my guess is that if Intel had continued innovating at the rate of the 2006-2014 era, Macs would still be Intel,
2) TSMC transistors have powered AMD's resurgence, as AMD is now offering the x86 chips with the best transistors (which led AMD to be the first to offer Windows enthusiasts a worthwhile upgrade to their ~2011-2015-era chips), and
3) Intel's only short-term response has been to jack up the clock rates, particularly relying on turbo boost. And as the clock rates and boost rates have gone up, power consumption has gone back up, with modern CPUs, especially the enthusiast K versions, hitting TDPs substantially higher than the peaks of the NetBurst era.
It's interesting to note that a key reason TSMC (and Samsung) have recently been able to produce better chips than Intel is that the former purchased several EUV photolithography machines from ASML (the only company that makes them; https://en.wikipedia.org/wiki/ASML_Holding), while Intel has been slow to acquire them. This is ironic, as Intel has been involved with ASML since 1997, and purchased a 15% stake in the company in 2012.

These machines expose wafers with extreme ultraviolet (EUV) light, whose 13.5 nm wavelength is short enough to pattern the finest features on leading-edge nodes, and they are essential to TSMC's & Samsung's production of chips with small feature sizes. [ASML is a Dutch multinational; final assembly happens at its Veldhoven headquarters, with major subsystems (such as the light source from its San Diego-based Cymer unit and optics from Zeiss) coming from facilities worldwide.]
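The usual back-of-the-envelope for why the wavelength matters is the Rayleigh criterion, CD ≈ k1 × λ / NA (smallest printable half-pitch). A rough sketch with commonly cited ballpark values, not exact tool specs:

```typescript
// Rayleigh criterion: minimum half-pitch ≈ k1 * lambda / NA.
// k1 ≈ 0.3 is a practical single-exposure limit; NA values are typical for each tool class.
function minHalfPitchNm(k1: number, lambdaNm: number, na: number): number {
  return (k1 * lambdaNm) / na;
}

// ArF immersion DUV (193 nm light, NA ≈ 1.35) vs. EUV (13.5 nm light, NA ≈ 0.33).
console.log(`DUV immersion: ~${minHalfPitchNm(0.3, 193, 1.35).toFixed(0)} nm`);  // ~43 nm
console.log(`EUV:           ~${minHalfPitchNm(0.3, 13.5, 0.33).toFixed(0)} nm`); // ~12 nm
// Hence leading-edge nodes need either EUV or heavy multi-patterning on DUV for their
// densest layers.
```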

Intel, by contrast, has been slow in its purchase of these machines. Here's a summary of the installed EUV base as of April 2022 by Scotten Jones from https://semiwiki.com/semiconductor-services/ic-knowledge/311036-intel-and-the-euv-shortage/ . You can see the clear contrast between Intel, Samsung and TSMC:

[The bullet points are a direct quote from Scotten Jones.]
  • Intel currently has 3 development fabs phases that are EUV capable and 1 EUV capable production fab although only the development fab has EUV tools installed. Intel is building 8 more EUV capable production fabs.
  • Micron Technology has announced they are pulling in EUV from the one delta node to one gamma. Micron’s Fab 16-A3 in Taiwan is under construction to support EUV.
  • Nanya has talked about implementing EUV.
  • SK Hynix is in production of one alpha DRAM using EUV for approximately 5 layers and have placed a large EUV tool order with ASML.
  • Samsung is using EUV for 7nm and 5nm logic and ramping up 3nm. Samsung also has 1z DRAM in production with 1 EUV layer and 1 alpha ramping up with 5 EUV layers. Fabs in Hwaseong and Pyeongtaek have EUV tools with significant expansion in Pyeongtaek underway and the planned Austin logic fab will be EUV.
  • TSMC has fab 15 phases 5, 6, and 7 running 7nm EUV processes. Fab 18 phase 1, 2, and 3, are running 5nm with EUV. 5nm capacity ended 2021 at 120k wpm and has been projected to reach 240k wpm by 2024. Fab 21 in Arizona will add an additional 20k wpm of 5nm capacity. 3nm is ramping in Fab 18 phases 4, 5, and 6 and is projected to be a bigger node than 5nm. Fab 20 phases 1, 2, 3, and 4, are in the planning stages for 2nm and another 2nm site is being discussed.
However, even though Intel has been a slow starter, it's now ramping up its own purchases of advanced ASML EUV machines.

Thus, while there is of course much more to small processes than owning an EUV machine, given Intel's general expertise they may catch up to TSMC within a few years.

I don't know why Intel was not more aggressive in acquiring these early on.

BTW, here's a pic of their Twinscan NXE:3400B during final assembly:

[attached image]

 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
To fill in the history lesson a bit more, in the late 1990s, AMD started to have a seriously competitive Athlon line. Back then, processors were not typically advertised with model numbers but rather with clock rates. Intel wanted to respond and out-MHz/GHz them. They famously tried to make a 1.13GHz Coppermine PIII that was effectively overclocked; it ended up being discontinued/recalled because it was shown to be unstable by the enthusiast reviewers (I forget if it was Tom's Hardware, AnandTech, etc, but one or two of those guys were responsible for exposing this embarrassing blunder). So their next architecture, the NetBurst architecture in the Pentium 4, was mandated by the marketing department to deliver high clock rates that would clobber the Athlons. They did that at the cost of lower performance per clock, so the first Willamette P4s at 1.4-1.6GHz were barely faster than 1GHz PIIIs.
Ehhh.

People outside the chip industry often seem to think each major new design is a reaction to things which happened in the same calendar year. Nope. These projects take years, often three or more. Towards the end they're pretty inflexible, too. You can tweak some things in the last 6 to 12 months, but there's no scope for major new ideas. Since P4 was a brand new microarchitecture with a lot of innovative stuff (even if those innovations were in service of a flawed goal, which was to raise clock speed a lot), it's pretty certain it was already a multi-year-old project even before Coppermine was speed bumped to 1.13GHz. (Speed bumps of mature designs are much shorter-term projects.)

Now, back in those days, Intel had an advantage that they have now lost, namely the best transistors. So... maybe a year after Willamette, they moved to smaller transistors with the Northwood core, and it was... pretty good. Fairly hot, but... not too too bad.
Intel didn't have the best transistors at the time, or if they did it wasn't much to talk about. Willamette was a 180nm chip, and Intel's 180nm was fairly generic IIRC. No exotic materials yet, no huge advantage in transistor performance. They weren't even the first to get to 180nm.

Back then Intel's main lead on the industry was high volume with high yield across many fabs and fab sites. Better transistors came along later.

I could go on but I don't want to spend hours trying to find things to nitpick. Just saying, you're repeating a lot of made-up PC enthusiast narratives about the history of all this, especially when it comes to how these companies decide to do things.

This is an ongoing problem. People right here on these boards invent all kinds of crazy theories about how various M2 features are explained by events X, Y, and Z where X/Y/Z happened less than a year before M2 released. Same principle applies, chip designs just take way longer than the public seems to think they do.
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
People outside the chip industry often seem to think each major new design is a reaction to things which happened in the same calendar year. Nope. These projects take years, often three or more. Towards the end they're pretty inflexible, too. You can tweak some things in the last 6 to 12 months, but there's no scope for major new ideas. Since P4 was a brand new microarchitecture with a lot of innovative stuff (even if those innovations were in service of a flawed goal, which was to raise clock speed a lot), it's pretty certain it was already a multi-year-old project even before Coppermine was speed bumped to 1.13GHz. (Speed bumps of mature designs are much shorter-term projects.)
Oh, sure, the decision to go with high clock rates happened before the little 1.13GHz Coppermine disaster... but... why would Intel have embraced this path of idiocy if it wasn't for the success of AMD's Athlon?

I suppose the calendar doesn't really agree with that, in the sense that the first Pentium 4s only shipped a little over a year after the Athlon launch, which sounds... way too short... to design a new architecture. Although... that assumes they get their information from public sources only. Presumably they would have known that AMD was cooking something big and threatening long before AnandTech & co got their review samples.

But, I guess the question is - if the high-GHz NetBurst idiocy wasn't about responding to the AMD Athlon and outGHzing it (which certainly was the view in PC enthusiast circles), why did they ever go down that path? And why did they intend on doubling down on it, even to the point of re-designing PC case standards, until they didn't?
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
I could go on but I don't want to spend hours trying to find things to nitpick. Just saying, you're repeating a lot of made-up PC enthusiast narratives about the history of all this, especially when it comes to how these companies decide to do things.

This is an ongoing problem. People right here on these boards invent all kinds of crazy theories about how various M2 features are explained by events X, Y, and Z where X/Y/Z happened less than a year before M2 released. Same principle applies, chip designs just take way longer than the public seems to think they do.
The other thing is, I'm not sure that that's what my little narrative was saying. It's actually fairly clear when someone was caught by surprise and when they were not - the typical response, at least in x86 CPUs, to being caught by surprise is to increase clock speeds (and therefore heat) on an existing design and/or cut prices. Or add cores of an existing design. And start sending every media outlet an updated 'roadmap' that shows that not-so-far away there's a product that will put you unquestionably back on top... assuming the other guy/gal hasn't improved their product by then.

I would actually argue that that's what we've been seeing out of Intel the last few years. To pick a silly example, Apple has 8 performance cores and 4 efficiency cores on M2 Max and had 8 performance cores and 2 efficiency cores in M1 Max. Intel's current Raptor Lake CPUs have 8 performance cores and 8-16 efficiency cores; indeed, when you open your wallet more, they give you more efficiency cores... on a product that's designed for gamers and enthusiasts. Why, precisely, would a gamer buying an i9 instead of i7 want the same number of performance cores and double the efficiency cores?

Apple's ratios of performance/efficiency cores make sense to me; AMD not doing a big.little architecture makes sense. Intel's ratios do not, unless this was improvised on a shorter-than-normal time frame. 16 efficiency cores on a gaming enthusiast CPU?!?

And similarly, Intel's recent desktop processors have been absolutely through the roof on heat and power consumption. And, so far, each generation seems to be getting worse. That suggests to me that, sure, the design was probably wrapped up a few years ago, but they upped the clock speeds (and perhaps cut the prices) at the last minute when they saw the numbers coming out of AMD's chips.

Intel's real response to the Zen-on-TSMC phenomenon, certainly, is not shipping yet. Sorry Pat Gelsinger...
 

Wokis

macrumors 6502a
Jul 3, 2012
931
1,276
I would actually argue that that's what we've been seeing out of Intel the last few years. To pick a silly example, Apple has 8 performance cores and 4 efficiency cores on M2 Max and had 8 performance cores and 2 efficiency cores in M1 Max. Intel's current Raptor Lake CPUs have 8 performance cores and 8-16 efficiency cores; indeed, when you open your wallet more, they give you more efficiency cores... on a product that's designed for gamers and enthusiasts. Why, precisely, would a gamer buying an i9 instead of i7 want the same number of performance cores and double the efficiency cores?

Apple's ratios of performance/efficiency cores make sense to me; AMD not doing a big.little architecture makes sense. Intel's ratios do not, unless this was improvised on a shorter-than-normal time frame. 16 efficiency cores on a gaming enthusiast CPU?!?

I'd agree that for a gamer it's a way of wasting money but I think that community is also pretty aware of that fact. They wouldn't find use for a 9th performance core either.

You can fit about 3 efficiency cores on the same die space as one performance core, so for scalable parallel work it's decent use of compute per mm². I can definitely see enthusiasts and people with multi-core workloads reaping benefits from that design. HP among others do sell i9-equipped workstations.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
You can fit about 3 efficiency cores on the same die space as one performance core, so for scalable parallel work it's decent use of compute per mm². I can definitely see enthusiasts and people with multi-core workloads reaping benefits from that design. HP among others do sell i9-equipped workstations.
If the ratio of their computational power (perf:eff) is like Apple's (~5:1), then using 1/3 the die space to get 1/5 the computational power wouldn't be a good deal. Plus parallel workloads rarely scale linearly, so you'd lose even more (i.e., 10 cores at 1/5 performance won't get as much done as 2 cores at full performance, because there's typically a performance loss when distributing parallel workloads across multiple cores).

Don't get me wrong—efficiency cores are a great idea. But I suspect that they afford neither die space savings, nor performance benefits for highly parallel workloads, vs. a purely performance core design.
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
If the ratio of their computational power (perf:eff) is like Apple's (~5:1), then using 1/3 the die space to get 1/5 the computational power wouldn't be a good deal.
It's more like 50% of the performance for 1/3 of the die area. While Apple's cores are about performance vs. efficiency, Intel's P-cores have been optimized for single-threaded workloads and E-cores for multi-threaded workloads.
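To make the arithmetic explicit, here's the same throughput-per-area comparison under both sets of assumptions above (perfect multi-core scaling assumed, which flatters both options):

```typescript
// Multi-thread throughput per unit of die area, in units relative to one P-core
// (speed 1.0, area 1.0). Perfect scaling assumed, so this ignores the real-world
// parallelization losses mentioned above.
function throughputPerArea(relativeSpeed: number, relativeArea: number): number {
  return relativeSpeed / relativeArea;
}

console.log(throughputPerArea(1, 1).toFixed(2));         // 1.00 - baseline P-core
console.log(throughputPerArea(1 / 5, 1 / 3).toFixed(2)); // 0.60 - E-core at ~1/5 speed, ~1/3 area
console.log(throughputPerArea(1 / 2, 1 / 3).toFixed(2)); // 1.50 - E-core at ~1/2 speed, ~1/3 area
// So the disagreement comes down to which per-core performance ratio you assume.
```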
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
I suppose the calendar doesn't really agree with that, in the sense that the first Pentium 4s only shipped a little over a year after the Athlon launch, which sounds... way too short... to design a new architecture. Although... that assumes they get their information from public sources only. Presumably they would have known that AMD was cooking something big and threatening long before AnandTech & co got their review samples.
Yes, the players here likely get some info before the public does. How much and how accurate? Who knows.

But, I guess the question is - if the high-GHz NetBurst idiocy wasn't about responding to the AMD Athlon and outGHzing it (which certainly was the view in PC enthusiast circles), why did they ever go down that path? And why did they intend on doubling down on it, even to the point of re-designing PC case standards, until they didn't?
You might as well ask why Intel management spent incredible sums developing Itanium while abandoning 64-bit x86 to be defined by AMD. That was a far worse decision than committing the x86 processor line to NetBurst for several generations, IMO.

The best eras of Intel history happened when the C-suite understood engineers and their managers well enough to form lasting relationships with good ones, trust their advice, and so on. But a lot of the time, Intel has been run by execs susceptible to people who are better at playing politics than doing good work. Intel's toxic corporate culture creates and nurtures that kind of climber. Never experienced it myself, but I've worked with ex-Intel engineers and they always have bad stories to tell about how awful Intel middle management is.

Another factor is that you're using hindsight to declare it 'pure idiocy'. Clock speed isn't exclusively a marketing ploy; in the abstract it's a perfectly valid way of increasing performance. NetBurst isn't the only extreme "speed demon" microarchitecture to ever ship for revenue, and some of the other examples I can think of were designed for and marketed to much more technical customers than the general public.

IMO, some of it was just timing. The 180nm node was about when scaling stopped fully meeting expectations set by past node shrinks. Even the early and relatively mild problem (excessive transistor leakage current) hit NetBurst harder than its peers, because NetBurst inherently had a lot more transistors than other core designs. Later on, the cancellation of Tejas and replacement with Conroe coincided with the end of Dennard scaling, and I don't think these were unrelated. While Dennard was in effect, you could count on each new node solving both area and power problems. After, it became undeniable that the old ways weren't going to work any more. Designers increasingly had to treat performance as a power problem, because the only way forward was to design more efficient architectures. The next node wasn't going to save you any more. These factors probably were not fully anticipated or understood back when the NetBurst project started up.
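For reference, the textbook Dennard-scaling arithmetic behind that last point (idealized scaling rules, not measurements from any particular node):

```typescript
// Classic Dennard scaling: shrink linear dimensions and supply voltage by a factor k > 1.
// Capacitance per transistor ~1/k, voltage ~1/k, frequency ~k, so switching power
// P ~ C * V^2 * f scales as 1/k^2 - exactly offsetting the k^2 increase in transistor
// density and keeping power per mm^2 flat. Once voltage stopped scaling (leakage), that
// offset disappeared, and the next node no longer solved the power problem for free.
const k = 1.4; // roughly one full node shrink

const powerPerTransistor = (1 / k) * (1 / k) ** 2 * k; // C * V^2 * f  ->  1/k^2
const transistorsPerArea = k ** 2;

console.log(`power per transistor: x${powerPerTransistor.toFixed(2)}`);                 // ~0.51
console.log(`power density: x${(powerPerTransistor * transistorsPerArea).toFixed(2)}`); // 1.00
```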
 

VivienM

macrumors 6502
Jun 11, 2022
496
341
Toronto, ON
You might as well ask why Intel management spent incredible sums developing Itanium while abandoning 64-bit x86 to be defined by AMD. That was a far worse decision than committing the x86 processor line to NetBurst for several generations, IMO.
Probably at least partly because they thought they could be Apple.

In Apple land, big transitions follow a clear pattern. New thing is announced, and ships with an emulator or other compatibility layer that is Good Enough to allow old software to work on the new thing. Old thing is discontinued, and eventually, everyone has moved on to the new thing. (It helps that because Apple controls the hardware, third party developers can be confident that the old thing will die and effort spent porting to the new thing will pay off) That's how the PPC transition was, that's how the OS X transition went, that's how the Intel transition went, that's how the ARM transition has been going.

In DOS/Windows/x86 land, it's a very different process. Someone launches something that is fully compatible with the previous way of doing things and does the previous way faster, but also has additional abilities. People buy it because it's faster running the exact same software that they've already been using. Over time, as the installed base of additional abilities increases, Microsoft starts shipping operating systems that use the additional abilities. And, over time, those additional abilities become the new standard.

You can see this in lots of places, e.g.
- most 286s/386s and a good chunk of 486s probably went to the e-waste pile without ever running any protected mode OS
- many early AMD64 CPUs went to the e-waste pile without running a 64-bit OS - certainly, almost all the pre-Core 2 NetBursts, if not most of the Core 2s. Consumer PC manufacturers started switching to x64 versions of Windows in... 2008-10ish, corporate environments were still running 32-bit Windows 7 in the early-mid 2010s. And Microsoft only discontinued 32-bit Windows with 11.
- serial ATA came with a legacy interface that mimicked parallel ATA so you could run SATA drives on older operating systems; a similar thing was done for USB keyboards/mice
- whenever they've introduced new interfaces/busses (USB, SATA, VLB, PCI, AGP, PCI-E, etc), the old interface/bus was also available on most motherboards for at least a few years and generations
- when they moved to UEFI boot, there was a compatibility module that... indeed, needed to be on to boot the then-current version of Windows, I think
(meanwhile, when Microsoft tried to embrace touch screens with Windows 8, the large PC OEMs basically mutinied and sank the whole vision)
etc.

And this makes sense because, frankly, everybody in the PC industry has different incentives and different timing. Dell/Lenovo/HP want to sell computers that people will be able to run their existing software/peripherals with. Microsoft wants to sell new OS licences. Intel/AMD/etc want to sell new chips. And I do not think it is possible for Intel to launch a new incompatible thing on day X, Microsoft to have Windows ready for it on day X, and Dell/Lenovo/HP to ship systems based on the new thing on day X and remove the older systems from the market. And if the old and new are on the market at the same time, and the old runs the workloads that people actually need better than the new, then the old are what will actually fly off the shelves. And because HP/Dell/Lenovo understand this, they'll tell Intel to take a hike with their new incompatible thing and keep ordering the old.

I think Intel thought they could have a 'big reset' - get rid of the legacy cruft going back to the IBM PC, start on a new architecture with new everything, new operating systems that would emulate the old stuff Apple-style, etc. And it flopped spectacularly... (and also took down a bunch of big iron players along with it, though that is a separate story from its flop in the PC/desktop sphere)

Now, the interesting question is, what would have happened had AMD not come along and introduced a 'PC-style' 64-bit solution, i.e. processors that had a secret unused 64-bit mode but also ran 32-bit XP better than existing processors? Would Intel have dragged everybody kicking and screaming to IA-64? Would Microsoft have enabled >4GB RAM support through PAE on consumer versions of Windows and people would have hackishly kept 32-bit x86 going, potentially to this day? Would Intel have seen the light and invented their own 64-bit x86 extension?
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
I would actually argue that that's what we've been seeing out of Intel the last few years. To pick a silly example, Apple has 8 performance cores and 4 efficiency cores on M2 Max and had 8 performance cores and 2 efficiency cores in M1 Max. Intel's current Raptor Lake CPUs have 8 performance cores and 8-16 efficiency cores; indeed, when you open your wallet more, they give you more efficiency cores... on a product that's designed for gamers and enthusiasts. Why, precisely, would a gamer buying an i9 instead of i7 want the same number of performance cores and double the efficiency cores?
Intel's efficiency cores are best seen as a response to competitive pressure from AMD, not Apple.

Over many generations, Intel has taken what are now called its performance cores into a somewhat extreme corner of the possible design space. Intel has optimized them so thoroughly for single-thread performance at any price that they've become bad for many-core throughput. They're too big and too hot to pack a die with a lot of them, especially in consumer chips. Power comes down if you reduce clocks a ton (exactly what Intel does in big Xeon parts), but you're still left with a huge die. Big Xeons sell for $5000 or more, so it's fine there, but it's not fine in the consumer desktop space (at least with the margins Intel likes to target).

AMD has developed its Zen high performance cores along a different path. Intel usually beats Zen cores on single-threaded performance, but Zen cores are smaller and use much less power than Intel's. That plus AMD's multi-die approach allowed AMD to leave Intel in the dust on highly threaded workloads.

Intel's old management didn't really respond to this crisis effectively. Alder Lake (and now Raptor Lake) are quick responses. The real fix would be a new core design, but that takes a lot of time to get out the door (see our previous conversation). What we got instead is the latest iteration of Intel's traditional single-thread monster core paired with an "efficiency" core borrowed from the Atom line. It's efficient only in relative terms; it's really a mid-sized core with performance on par with Skylake cores, Intel's state of the art desktop core not so long ago.

So what you're getting is really 8 performance cores plus 8-16 multithreaded throughput cores.

Should a gamer want lots of these throughput cores? I would argue no, they don't seem likely to do much for games. But it's not hard to market things gamers don't need to gamers. When gamers evaluate CPU performance on its own, they often turn to Cinebench, and Cinebench loves throughput cores. When gamers evaluate CPU performance in games, the answer's often going to be the same because so many games like the high per-thread performance offered by the big P cores.
 