
cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
cmaier, I really appreciate you sharing your from-the-trenches experience. Thanks.
I would like to ask you something that is really difficult to evaluate from my armchair perspective - I have often heard that the sheer volume of the x86 ISA, the "accumulated cruft", would make designing new x86 cores require more work/time/expense/debugging than designing, say, a pure 64-bit ARMv8 core.
It sounds plausible, but - by how much? Enough that it significantly affects the decision-to-product cycle time, or can it be compensated for by hiring more people? Does it have any specific consequences you’d like to mention (apart from the more formal consequences of dependencies you’ve already discussed)?
In terms of time and effort for designing, I would say that one of the biggest x86 hurdles is what we call “verification.” There is an entire team of people responsible for making sure that the design works properly with a wide range of x86 software, by running thousands and thousands of instruction traces through the design and making sure the results are right. Entire banks of machines run around-the-clock making sure that a huge library of traces built up over many years, designed to stress the most tricky corner cases, work properly. As far as I know, the only two companies ever to successfully accomplish this are AMD and Intel. Even back when there were only about 20 chip designers, there were probably at least a half dozen verification engineers. Whereas, on at least one RISC chip I worked on, there were only 2 verification engineers, and we didn’t need to use the entire set of engineering desktop machines to run traces in the background around-the-clock. Also, since it was a startup, it obviously didn’t take them years to develop the set of traces.
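To make that verification flow concrete, here is a minimal sketch of what a single trace run boils down to: execute the same instruction trace on a golden ISA reference model and on a simulation of the design, then diff the architectural state. The struct layout and function names below are illustrative assumptions, not AMD's or Intel's actual tooling.

```c
/* Hedged sketch of trace-based verification: compare the architectural state
 * produced by a golden reference model with the state produced by the design
 * under test. In a real farm, the two states would come from an ISA simulator
 * and an RTL simulation; here they are just zero-filled dummies. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    unsigned long long gpr[16];   /* general-purpose registers */
    unsigned long long rip;       /* program counter */
    unsigned long long rflags;    /* condition flags */
} arch_state_t;

/* Report the first divergence between reference model and design. */
static bool compare_states(const char *trace_name,
                           const arch_state_t *ref, const arch_state_t *dut)
{
    if (memcmp(ref, dut, sizeof *ref) != 0) {
        fprintf(stderr, "mismatch after trace %s\n", trace_name);
        return false;
    }
    return true;
}

int main(void)
{
    arch_state_t ref = {{0}}, dut = {{0}};
    return compare_states("dummy-trace", &ref, &dut) ? 0 : 1;
}
```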

x86 is particularly tricky because, at least for 32-bit code, you can do things like programmatically modify the instruction stream. Lots of weird things to test.
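A tame cousin of that behavior fits in a few lines of C: build or patch x86 machine code in a buffer at runtime and jump into it. This is only a sketch assuming a typical x86-64 Linux system that still permits writable-and-executable mappings (hardened kernels and macOS may refuse them); it shows why the instruction stream can't be treated as static, nothing more.

```c
/* Runtime-generated/patched x86 code: the bytes encode "mov eax, imm32; ret",
 * and the immediate is patched just before the code is executed. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    unsigned char code[] = { 0xB8, 0x00, 0x00, 0x00, 0x00,   /* mov eax, imm32 */
                             0xC3 };                          /* ret            */
    int value = 42;
    memcpy(&code[1], &value, sizeof value);   /* patch the immediate at runtime */

    void *buf = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);           /* copy the freshly written bytes */

    int (*fn)(void) = (int (*)(void))buf;     /* jump into the generated code   */
    printf("%d\n", fn());                     /* prints 42 */

    munmap(buf, sizeof code);
    return 0;
}
```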

(for this discussion I am leaving out the issue of whether there are fundamental technical constraints that mean you can never have an x86 chip as good as the best possible RISC chip, and focusing just on the design effort)

From the point of view of design, the design of some blocks is about the same complexity - an ALU is an ALU. Some ALUs have to deal with things like square root and others don’t, but that’s the case both for x86 and non-x86. Other blocks, like the instruction decoder, are much more complex in x86, but that can be compensated for by having more designers. At the chip level, x86 will impose tougher constraints between blocks - lots of extra control signals, tags, etc. that have to be sent around the chip and make it from place to place in time. This can cause timing issues that result in a slower chip. But every chip has its own quirks that can do the same thing. Based on personal experience, I feel like it is harder to solve this on x86, but your mileage may vary. (My first experience with x86 was trying to speed up an existing design by squashing some of those paths. It took me 6 months, but I got it to the point where our next chip could be 20% faster. At the time I was cursing x86 a lot, because when I had to do the same thing on a PowerPC chip it was a heck of a lot easier. But some of that was likely just the design style of the blocks I inherited.)
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
You are comparing a quad-core CPU that tops out at 15 watts with an 8-core CPU that has a long-term sustained limit of 35W and usually operates at a power level of 50-60 watts... so unless the Ryzen is beating the M1 by a factor of 3 across the board, I am not sure what your point is.

It’s tedious having to repeat this point. I wish Apple would introduce the high end MBPs/iMac already so we can put this to bed.
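The "factor of 3" follows directly from the power figures quoted above; a back-of-the-envelope version (the 45W value is just the midpoint I'm assuming for the 35-60W range, and no benchmark scores are involved):

```c
/* To merely match the M1's performance-per-watt, a chip drawing ~3x the power
 * needs ~3x the raw score. Power numbers are the ones quoted in the post. */
#include <stdio.h>

int main(void)
{
    double m1_watts    = 15.0;            /* M1's quoted package limit          */
    double ryzen_watts = 45.0;            /* assumed midpoint of 35-60W range   */
    double required_perf_ratio = ryzen_watts / m1_watts;
    printf("Ryzen must beat M1 by %.1fx just to break even on perf/watt\n",
           required_perf_ratio);          /* prints 3.0x */
    return 0;
}
```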
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
I honestly wish Apple had gone with AMD, to ensure maximum software compatibility (so there would also be no reason to kill 32-bit, as AMD supports it). I'm confident that some of my pro equipment will never work again on a Mac, as the guys behind the product look to only support Windows now after Apple killed 32-bit.
Unlikely tho. AMD CPUs may win by brute force, i.e. throwing unlimited power at the problem, but they are far from the M1’s power efficiency. Even if AMD could theoretically produce a CPU as efficient as the M1, the GPUs are still power hogs.

I saw a recent YouTube video by MaxTech comparing an Acer (5900HS) against the M1 MBP. On battery power, the Acer is behind the MBP in the GB5 benchmark in both ST and MT. When plugged in, the Acer is still behind in ST and only slightly ahead in MT. That’s an 8C/16T 64W CPU against a 4+4 15W CPU.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
AMD is the one that is going to beat M1, because they can easily switch to 5nm too once TSMC has enough capacity to produce those chips. And they have much greater software compatibility.

5nm is not some sort of magic bullet... and AMD is not going to reduce the power consumption of their chips by 70% just by moving to 5nm.

The fact that you could buy an AMD "gaming PC" that was faster than the most expensive 32-core Mac Pro (Intel) made it quite clear that AMD had been the go-to solution for a long time already.

You can also buy an Intel gaming PC that is faster than the most expensive Mac Pro... different hardware, different design criteria. Workstation CPUs are expensive, after all; this is no different for AMD workstation chips.

Regarding the "go-to" solution... performance-wise, AMD only managed to catch up with Intel last year, as their previous designs had lackluster single-core performance. AMD had an edge in lower-end mobile chips in terms of multi-core performance, since they are slightly more energy efficient and were able to produce cheap 8-core designs where Intel could not. At the same time, they had problems with volume, resulting in only a few laptops shipping AMD hardware.
 

9927036

Cancelled
Nov 12, 2020
472
460
After years of shortcuts on security for gains in performance? Outright lies about how things work from a security perspective?

They've hopefully sown a fate that sees them shattered and broken.
There will be security issues in M1 too. It's a matter of time. After all, there already are un-patchable security problems in the T2 chip.

 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
There will be security issues in M1 too. It's a matter of time. After all, there already are un-patchable security problems in the T2 chip.


Designed by the Nuvia guy everyone is so excited by, I think. (Based on names on patent applications, so that might not be true).
 

BeefCake 15

macrumors 68020
May 15, 2015
2,050
3,123
I actually hope that one day the OS and software are not bound by CPU architecture any more.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
For me I think it’s because Qualcomm is already the prime alternative manufacturer of ARM chips and (seems) to have an exclusive arrangement with MS to produce WoA devices. So it’s more likely to make an immediate impact on the PC market (if their claims hold up) next year. Also, while Qualcomm is a big CPU designer, their chips are all based on standard ARM cores. Now they’ll be designing custom cores. That’s a big shift. Finally, my understanding, perhaps wrong, was that the worst that could happen in the lawsuit was financial jeopardy for GW3 rather than the company or its products getting owned by Apple.

More troubling long term for Qualcomm is that, supposedly, the designers joined Nuvia to build server chips. If Qualcomm doesn’t do that, how long will they stay?

I agree that if the Nvidia acquisition goes through they’ll be a major player as well - and maybe bigger long term. But it’s not 100% clear it will go through, for the same reason that Apple purportedly turned down SoftBank: will the regulators allow it? Apple seemed pretty sure that UK regulators wouldn’t allow them to buy ARM - not clear if those regulators will let Nvidia. Beyond the potential upsides of the acquisition, there are concerns from other ARM licensees about what their relationship will be after Nvidia’s acquisition of ARM. Regulators might listen to that.

Of course, even without the acquisition going through, Nvidia might become a big player. Server chips for now ... (well, and Tegra). They’d still have to convince MS to partner with them on a design if they go consumer, unless MS opens up WoA.

@cmaier
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Of course, even without the acquisition going through, Nvidia might become a big player. Server chips for now ... (well, and Tegra). They’d still have to convince MS to partner with them on a design if they go consumer, unless MS opens up WoA.

@cmaier

Microsoft isn’t dumb - writing’s on the wall. They are going to open up WoA. They may do it in a different way than they did for x86, limiting it to non-retail, but they want windows to remain relevant so Dell, Asus, and every other OEM will be free to license WoA.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
You mean like Javascript, Java, Python apps? We have this already.

Ah, that’s what he meant.

I remember when I first interviewed at Sun, back in 1997, I think. I was primarily interviewing with the UltraSparc team (a job I took, and then quit a few months later - another story), but they asked me to meet with the guy in charge of the UltraJava team (I may have the name wrong). The idea was a chip that essentially directly executed java bytecode (or a thin layer around it). I heard him out, and said, essentially “sorry, not interested.” He wouldn’t take no for an answer, and kept prodding me to explain why. I eventually had to tell him that I thought the entire concept was a terrible idea, that tons of performance would be wasted in any such attempt, and that java was going to be a dead end anyway. Wonder whatever happened with that project.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,229
I actually hope that one day the OS and software are not bound by CPU architecture any more.
I would certainly like that as well.

The Java approach has a lot of pros but also downsides.

Edit: the other way to achieve this, Transmeta, was interesting but failed.

Microsoft isn’t dumb - writing’s on the wall. They are going to open up WoA. They may do it in a different way than they did for x86, limiting it to non-retail, but they want windows to remain relevant so Dell, Asus, and every other OEM will be free to license WoA.

I *think* it’s open to any OEM already, just only on Snapdragon chips. Like any OEM can request it, but it’s a process. Part of the issue is less standardization among ARM chips for things like the boot process and other features, so MS would have to be more specific about compatibility if consumers were allowed to just go buy it and install it.
 

macduke

macrumors G5
Jun 27, 2007
13,475
20,539
The thing is I don't think Apple is holding anything back right now because Intel isn't hot on their heels. They're full steam ahead. The biggest problem is we're about to slam into the end of Moore's Law for silicon.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
The Java approach has a lot of pros but also downsides.



I *think* it’s open to any OEM already, just only on Snapdragon chips. Like any OEM can request it, but it’s a process. Part of the issue is less standardization among ARM chips for things like the boot process and other features, so MS would have to be more specific about compatibility if consumers were allowed to just go buy it and install it.

Is it? Does anyone sell machines with it?
 

Icelus

macrumors 6502
Nov 3, 2018
422
578
Ah, that’s what he meant.

I remember when I first interviewed at Sun, back in 1997, I think. I was primarily interviewing with the UltraSparc team (a job I took, and then quit a few months later - another story), but they asked me to meet with the guy in charge of the UltraJava team (I may have the name wrong). The idea was a chip that essentially directly executed java bytecode (or a thin layer around it). I heard him out, and said, essentially “sorry, not interested.” He wouldn’t take no for an answer, and kept prodding me to explain why. I eventually had to tell him that I thought the entire concept was a terrible idea, that tons of performance would be wasted in any such attempt, and that java was going to be a dead end anyway. Wonder whatever happened with that project.
This sounds a lot like ARM's Jazelle extension IIRC.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
This sounds a lot like ARM's Jazelle extension IIRC.
Functionally, yes, though from an implementation standpoint I believe Jazelle does it by translating byte codes into ops in a manner similar to microcoding, whereas I surmise (I don’t have any actual knowledge) that Sun was not contemplating translating to SPARC, but was instead natively running byte code as its instruction set. I never worked with those guys and they didn’t tell me anything at the interview, so I’m just guessing.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
You mean like Javascript, Java, Python apps? We have this already.
No, that's not what I mean, though if everything ran with decent performance on whatever OS/CPU you're running, it's certainly closer.

What I want is something like how the IBM i (old name AS/400) runs. The OS and the user executables sit on top of a layer called the MI. The MI is what talks to the hardware. So if you switch the CPU for something else, you just change the MI; everything else stays the same. It's kind of hard to describe.

The AS/400 went from a proprietary CISC CPU, I think it was 48-bit to begin with, to RISC (with several versions in between over the years), and the applications can be basically unchanged from when they were first compiled decades ago. (The applications have to go through a one-time translation, but it always works.)

But of course, that's not how PCs run; it would be nice if they did. I suppose you could do the same thing with a large hypervisor that totally isolated the hardware from the guest OSs, and only the hypervisor would need to be written for the base hardware.
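To make the MI idea concrete, here's a toy sketch of the general concept (not IBM's actual MI/TIMI format; every name and opcode here is made up): programs ship as abstract ops, and a one-time translation step turns them into whatever the current machine runs, so switching CPUs only means redoing that translation, not recompiling the programs.

```c
/* Toy "machine interface": abstract ops translated once into native handlers. */
#include <stdio.h>

typedef enum { MI_PUSH, MI_ADD, MI_PRINT, MI_HALT } mi_op;
typedef struct { mi_op op; long imm; } mi_insn;

/* "Native" handlers for this particular machine; porting to a new CPU means
 * reimplementing only these, never the shipped programs. */
static long stack[64]; static int sp;
static void do_push(long v) { stack[sp++] = v; }
static void do_add(void)    { long b = stack[--sp]; stack[sp - 1] += b; }
static void do_print(void)  { printf("%ld\n", stack[sp - 1]); }

typedef struct { void (*fn0)(void); void (*fn1)(long); long imm; } native_insn;

/* One-time translation from the abstract form to the native form. */
static int translate(const mi_insn *in, native_insn *out)
{
    int n = 0;
    for (; in->op != MI_HALT; in++, n++) {
        out[n] = (native_insn){0};
        switch (in->op) {
        case MI_PUSH:  out[n].fn1 = do_push; out[n].imm = in->imm; break;
        case MI_ADD:   out[n].fn0 = do_add;   break;
        case MI_PRINT: out[n].fn0 = do_print; break;
        default: break;
        }
    }
    return n;
}

int main(void)
{
    const mi_insn program[] = { {MI_PUSH, 2}, {MI_PUSH, 3}, {MI_ADD, 0},
                                {MI_PRINT, 0}, {MI_HALT, 0} };
    native_insn native[16];
    int n = translate(program, native);       /* done once, at "install" time */
    for (int i = 0; i < n; i++) {             /* afterwards, only native runs */
        if (native[i].fn1) native[i].fn1(native[i].imm);
        else               native[i].fn0();
    }
    return 0;                                 /* prints 5 */
}
```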
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
No, that's not what I mean, though if everything ran with decent performance on whatever OS/CPU you're running, it's certainly closer.

What I want is something like how the IBM i (old name AS/400) runs. The OS and the user executables sit on top of a layer called the MI. The MI is what talks to the hardware. So if you switch the CPU for something else, you just change the MI; everything else stays the same. It's kind of hard to describe.

The AS/400 went from a proprietary CISC CPU, I think it was 48-bit to begin with, to RISC (with several versions in between over the years), and the applications can be basically unchanged from when they were first compiled decades ago. (The applications have to go through a one-time translation, but it always works.)

But of course, that's not how PCs run; it would be nice if they did. I suppose you could do the same thing with a large hypervisor that totally isolated the hardware from the guest OSs, and only the hypervisor would need to be written for the base hardware.

Transmeta would have allowed something like that. There is always a performance penalty to pay for it. Rosetta is also a similar concept. The main difference is that the MI was sort of a “pure virtual machine” to compile against, whereas modern takes on this involve translating from one target to another. However, Java is probably the closest living equivalent. Java assumes an imaginary virtual machine and is independent of the underlying hardware. Sun tried to base a “JavaOS” on such a concept, but it was not very popular.
 