
bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Transmeta would have allowed something like that.
Interesting!
There is always a performance penalty to pay for that.
I definitely see that problem with it, but it's a total trade-off: compatibility vs. performance. I'd take compatibility over performance for 99% of the tasks I see.
Rosetta is also a similar concept.
That's pretty much what I thought, and it works pretty well.
However, Java is probably the closest living equivalent. Java assumes an imaginary virtual machine, and is independent of the underlying hardware. Sun tried to base a "JavaOS" on such a concept, but it was not very popular.
I actually use Java quite a lot; it's IBM's go-to language for running on different OSes. I just wish it were supported a bit better. For a long time it was too slow to be tolerable, but on modern hardware that's not as much of a problem.

I don't think I ever heard of JavaOS. It sounds really good, just not for its time. Now, maybe it could work.

Structure information
- Platform independent
- Supports 32-bit up to 128-bit operating systems, depending on the platform used
- Microkernel
- Needs few resources: 256 KB of RAM and 512 KB of ROM; for Internet applications, 4 MB of RAM and 3 MB of ROM
- Small and efficient
- Works with a host system or standalone
- HotJava installable as a window system
 

Yebubbleman

macrumors 603
May 20, 2010
6,024
2,617
Los Angeles, CA
Don't get me wrong, I love my M1 laptop; it's my most valued thing I own. But I love it when these CPU makers fight; it will only help us consumers get a better product.
It would be nice if Intel got out of its slump. I don't believe AMD has enough resources to take Intel's crown on that large a scale. But as far as the Intel to Apple Silicon transition is concerned, x86 has its limitations, and Intel getting out of its slump likely won't be enough to overcome them. ARM (and Apple Silicon by extension) has a much more fruitful roadmap ahead. Intel company politics, and the chaos and mismanagement therein, are only partially to blame for that.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
It would be nice if Intel got out of its slump. I don't believe AMD has enough resources to take Intel's crown on that large a scale. But as far as the Intel to Apple Silicon transition is concerned, x86 has its limitations, and Intel getting out of its slump likely won't be enough to overcome them. ARM (and Apple Silicon by extension) has a much more fruitful roadmap ahead. Intel company politics, and the chaos and mismanagement therein, are only partially to blame for that.

Not to mention that now that we’ve ditched x86 compatibility, Apple is free to go other ways in the future if somebody comes up with a great idea for an instruction set. It can all be done pretty transparently using Rosetta to smooth the transitions, and ISVs will be used to recompiling for new architectures.
 

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
And Microsoft themselves. None of them are big sellers.

Aye, I was referring to the fact that outside of MS themselves there are a few WoA machines being made. Poor hardware though, and ... only recently getting x64 emulation despite being on the market for a while. So hopefully future versions will be more palatable to the market.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
I actually hope that one day the OS and software are not bound by CPU architecture any more.
We already tried that with Java. Write once, run anywhere hasn’t worked out great for PCs. It’s worked somewhat for servers. Servers can alternatively use Node.js, which is also cross-platform using JavaScript. Again, not so much on PCs.

Edit: Someone up-thread pointed out Python too. Same thing, except Python is used for a lot of scientific calculations and machine learning, but not so much for user GUI apps.
 
Last edited:

LinkRS

macrumors 6502
Oct 16, 2014
402
331
Texas, USA
In all fairness, Intel did try to move past x86. Hell, IA64 is just now getting EOL'd. Still, there were quite a few missteps, including no viable backward compatibility.

Well, the word "viable" with regard to compatibility should be qualified. I studied the IA64 architecture in depth for a project at university, as it was being positioned to replace x86 at some point. The original Itanium CPU actually had hardware on board to run IA32 code (see here: https://www.anandtech.com/show/171/5); the problem was that at the time of release its performance was slower than existing x86 processors, leaving Intel in a conundrum. When running IA64 native code, the Itanium systems worked remarkably well; the problem was that most businesses ran x86 software, and running it on expensive Itanium systems produced mediocre results. The IA64 architecture put most of the heavy lifting needed for its performance (which was based around parallel processing; EPIC stands for Explicitly Parallel Instruction Computing) on the compiler. If the compiler chosen for the project did not do a good job of creating the parallel bundles of instructions, Itanium systems would not perform optimally. It was a perfect storm for failure, as most businesses wanted to run legacy x86 code, compilers did not always produce optimal code, and Itanium systems were sold exclusively by HP. It was in this environment that AMD came along, and well, the rest is history :).
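(Not IA64 assembly, just a minimal C-level sketch of the property an EPIC compiler had to find. The first group of statements is independent, so a bundling compiler could issue them together; the second group is a dependency chain, so the explicitly parallel bundles stay underfilled. The variables are made up for illustration.)

#include <stdio.h>

int main(void) {
    int x = 3, y = 5, z = 9;

    /* independent: an EPIC compiler could pack these into one
       explicitly parallel bundle and issue them together */
    int a = x * 2;
    int b = y + 7;
    int c = z - 1;

    /* dependency chain: each result feeds the next, so there is
       nothing to bundle and the machine effectively runs serially */
    int d = a + b;
    int e = d * c;
    int f = e - x;

    printf("%d %d %d %d %d %d\n", a, b, c, d, e, f);
    return 0;
}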

BTW, by the second iteration of Itanium, Intel ditched the hardware IA32 layer and wrote a proper software emulation layer that ran x86 code much better than the original Itanium did. I *think* they made this software available to the original Itanium systems, but it was too little, too late. Just think: if Intel had gotten together with Microsoft and gone the Rosetta/Rosetta 2 route, perhaps we would have been running IA64 systems instead of x86-64 for the past twenty years?

Rich S.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Well, the word "viable" with regard to compatibility should be qualified. I studied the IA64 architecture in depth for a project at university, as it was being positioned to replace x86 at some point. The original Itanium CPU actually had hardware on board to run IA32 code (see here: https://www.anandtech.com/show/171/5); the problem was that at the time of release its performance was slower than existing x86 processors, leaving Intel in a conundrum. When running IA64 native code, the Itanium systems worked remarkably well; the problem was that most businesses ran x86 software, and running it on expensive Itanium systems produced mediocre results. The IA64 architecture put most of the heavy lifting needed for its performance (which was based around parallel processing; EPIC stands for Explicitly Parallel Instruction Computing) on the compiler. If the compiler chosen for the project did not do a good job of creating the parallel bundles of instructions, Itanium systems would not perform optimally. It was a perfect storm for failure, as most businesses wanted to run legacy x86 code, compilers did not always produce optimal code, and Itanium systems were sold exclusively by HP. It was in this environment that AMD came along, and well, the rest is history :).

BTW, by the second iteration of Itanium, Intel ditched the hardware IA32 layer and wrote a proper software emulation layer that ran x86 code much better than the original Itanium did. I *think* they made this software available to the original Itanium systems, but it was too little, too late. Just think: if Intel had gotten together with Microsoft and gone the Rosetta/Rosetta 2 route, perhaps we would have been running IA64 systems instead of x86-64 for the past twenty years?

Rich S.

IA64 performance was not that great, the architecture was pretty convoluted, and our Opteron ran IA32 better than any IA64 chip did, whether by software or hardware. So I don’t think it was going anywhere. What it did do was give us an opportunity, because MS must have thought to itself, “we have to now support a new architecture one way or another. We might as well support AMD64.”
 

Berc

macrumors newbie
Apr 12, 2021
6
7
Arizona
Intel is a terrible company and I'm happy to see them struggling now. Just read about some of their strong-arm tactics against AMD and other chip makers in their heyday. I also know several people who worked for them, and overall it's one of the worst places to work.
Unless you have worked for them, that's a harsh statement. I did, and it was a great place to work, although they are well known to help you find the door when it's time. As for the litigation with competitors, there are always two sides to the story. BTW: StrongARM was an advanced, ARM-based microcontroller that was years ahead of its time.
 

jdb8167

macrumors 601
Nov 17, 2008
4,859
4,599
Unless you have worked for them, that's a harsh statement. I did, and it was a great place to work, although they are well known to help you find the door when it's time. As for the litigation with competitors, there are always two sides to the story. BTW: StrongARM was an advanced, ARM-based microcontroller that was years ahead of its time.
Which they got from DEC in a buyout and never really supported; the technology was subsequently dumped when they sold off their ARM XScale products to Marvell. Not really an Intel product.
 

Not Sure ☠️

macrumors newbie
Apr 23, 2020
23
80
The Ragged Western Edge
There will be security issues in M1 too. It's a matter of time. After all, there are already unpatchable security problems in the T2 chip.

I don't know anyone who said there wouldn't be?

That said, with the direction ARMv9 is going, and where the M1 is at today, I doubt we're going to see issues as severe as those that have been haunting Intel for the past few years.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Unless you have worked for them, that's a harsh statement. I did, and it was a great place to work, although they are well known to help you find the door when it's time. As for the litigation with competitors, there are always two sides to the story. BTW: StrongARM was an advanced, ARM-based microcontroller that was years ahead of its time.
You don’t need to work there to know they are a horrible company. They used illegal tactics against AMD. They yelled at each other in front of interviewees in HQ hallways. They made you pee in a cup to interview there. They squandered StrongARM, which they got from DEC. Terrible company.
 

9927036

Cancelled
Nov 12, 2020
472
460
I don't know anyone who said there wouldn't be?

That said, with the direction ARMv9 is going, and where the M1 is at today, I doubt we're going to see issues as severe as those that have been haunting Intel for the past few years.
You pointed out all the Intel HW security problems. I pointed out there are also many Apple HW security problems. For what reason do you think there will be less severe problems? The SoCs are only getting more complex, and as we have seen, integrating everything has serious consequences, e.g. the T2 chip controls the keyboard. Big mistake.
 

andrew8404

macrumors regular
Jun 16, 2009
196
71
Loma Linda, Ca
Unless you have worked for them, that's a harsh statement. I did, and it was a great place to work, although they are well known to help you find the door when it's time. As for the litigation with competitors, there are always two sides to the story. BTW: StrongARM was an advanced, ARM-based microcontroller that was years ahead of its time.

I know several people who worked for them, and they all had horror stories. All said it paid great, but they were treated like garbage.
 

Gerdi

macrumors 6502
Apr 25, 2020
449
301
VLA can be parallelized as well. You might want to look at SIMD algorithms for things like UTF-8 validation or JSON parsing. They exist and work very well. The basic principle is that you have routines that detect bit patterns (ones that encode how long a sequence is) — these routines work in parallel and their output can be combined in order to quickly and efficiently detect sequence boundaries.

Let me guess: you have never worked on CPU architectures? In any case, the decoding properties depend on code properties, in particular whether it has certain prefix properties. In addition, an algorithm having O(n) depth does not mean that there are no local parallelization opportunities; we are talking about asymptotic complexities after all.*
To make a long story short, wake me up when you find an O(1) algorithm for x64 variable-length instruction decoding like you have on ARM, or better, do not tell me but rather write a patent about it.

*SIMD is a good example. Not all SIMD operations are really O(1) with respect to vector length. In particular, reduction operations (which include the dot product) are inherently O(log n). So the observation that SIMD operations can be used does not imply that the underlying algorithm has O(1) complexity.
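(A minimal sketch of that footnote's point, in plain C with a scalar loop standing in for the vector hardware: even if every combining step were a single SIMD instruction, reducing N lanes to one value still takes about log2(N) steps.)

#include <stdio.h>

#define N 8   /* pretend vector width */

int main(void) {
    float lane[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int width = N, steps = 0;

    while (width > 1) {                    /* one "vector add" per pass */
        width /= 2;
        for (int i = 0; i < width; i++)
            lane[i] += lane[i + width];    /* pairwise combine */
        steps++;
    }
    printf("sum %.0f after %d steps\n", lane[0], steps);  /* 36 after 3 steps */
    return 0;
}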
 
Last edited:

Gerdi

macrumors 6502
Apr 25, 2020
449
301
This makes perfect sense. And this line of reasoning transcends ISA - it applies equally to x86, ARM and anything else. There is a reason that even high-performance ARM designs like X1 max out at 5 decoders (which is roughly equivalent to what’s found in x86 land).

Yup, the reason is that 5 decoders are sufficient to feed the backend. Adding more decoders would not have helped with performance, because you would be backend-limited most of the time.

But Apple has somehow managed to crack the problem. Maybe it’s their absolutely humongous reorder buffers (they can keep hundreds of loads and stores in flight), or maybe their branch predictors are that much better, but somehow they can have a 50% wider backend than anyone else and still keep it well utilized.

I don't see that Apple has cracked something here. They just made a very wide design; there is no fundamental problem Apple has solved. As I pointed out earlier, it is, for instance, trivial for AArch64 architectures to increase the number of decoders, unlike in x64 land.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Let me guess: you have never worked on CPU architectures? In any case, the decoding properties depend on code properties, in particular whether it has certain prefix properties. In addition, an algorithm having O(n) depth does not mean that there are no local parallelization opportunities; we are talking about asymptotic complexities after all.*
To make a long story short, wake me up when you find an O(1) algorithm for x64 variable-length instruction decoding like you have on ARM, or better, do not tell me but rather write a patent about it.

*SIMD is a good example. Not all SIMD operations are really O(1) with respect to vector length. In particular, reduction operations (which include the dot product) are inherently O(log n). So the observation that SIMD operations can be used does not imply that the underlying algorithm has O(1) complexity.

When you refer to x64 do you mean the 64-bit instruction set? I think that can be decoded in O(constant), no? I seem to remember we set up the prefix bits in a way to allow us to do that, but I haven’t looked at it in many years. (In other words, as long as you leave off the 32-bit/16-bit/8-bit instructions, and are only talking about decoding the 64-bit instructions).
 

Gerdi

macrumors 6502
Apr 25, 2020
449
301
You don’t need to work there to know they are a horrible company. They used illegal tactics against AMD. They yelled at each other in front of interviewees in HQ hallways. They made you pee in a cup to interview there. They squandered StrongARM, which they got from DEC. Terrible company.

I did work for Intel for 8 years. I believe the things about peeing in cups and yelling at interviews are more anecdotal. I do agree on the matter of the tactics against AMD and the handling of StrongARM (and DEC in general), but that's more the view from outside Intel. From inside Intel, it's a valid strategy to buy the competition if you cannot beat them.
 
  • Like
Reactions: ipponrg and Berc

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
I did work for Intel for 8 years. I believe the things about peeing in cups and yelling at interviews are more anecdotal. I do agree on the matter of the tactics against AMD and the handling of StrongARM (and DEC in general), but that's more the view from outside Intel. From inside Intel, it's a valid strategy to buy the competition if you cannot beat them.

Buying competition just to kill it is problematic. Intel would have been smart to put those DEC guys in charge - super bright bunch of folks. (That’s more or less what we did at AMD - let them run the show.)
 

Not Sure ☠️

macrumors newbie
Apr 23, 2020
23
80
The Ragged Western Edge
You pointed out all the Intel HW security problems. I pointed out there are also many Apple HW security problems. For what reason do you think there will be less severe problems? The SoCs are only getting more complex, and as we have seen, integrating everything has serious consequences, e.g. the T2 chip controls the keyboard. Big mistake.
T2's flaw requires PHYSICAL access to compromise.

Meltdown could be exploited by a bad webpage loaded up in a virtual machine away from the host OS.

It's absolutely possible Apple could drop the ball just as hard at some point, but I think you're delusional if you think the T2 issue is in the same realm as Meltdown, let alone the dozens disclosed since then.
 

Gerdi

macrumors 6502
Apr 25, 2020
449
301
When you refer to x64 do you mean the 64-bit instruction set? I think that can be decoded in O(constant), no? I seem to remember we set up the prefix bits in a way to allow us to do that, but I haven’t looked at it in many years. (In other words, as long as you leave off the 32-bit/16-bit/8-bit instructions, and are only talking about decoding the 64-bit instructions).

Yes, typically I refer to the 64-bit instruction set when referring to x64. I fully believe that a single instruction can be decoded within a constant upper bound, O(1) or O(constant). The question I am raising is about the complexity of decoding n instructions (as n goes towards infinity).
 

Gerdi

macrumors 6502
Apr 25, 2020
449
301
Buying competition just to kill it is problematic. Intel would have been smart to put those DEC guys in charge - super bright bunch of folks. (That’s more or less what we did at AMD - let them run the show.)

Agreed. I would have loved to see further iterations of Alpha, and also StrongARM. Intel at least developed XScale out of StrongARM and then sold it off again.
And putting people who have just joined the company in charge was always an issue at Intel. A senior fellow within Intel is essentially undisputed...and behaves like it...which is more problematic, in particular if these guys are not the brightest candles. This does not apply to the majority of senior fellows, I might add.
 
Last edited:
  • Like
Reactions: Berc

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Yes, typically I refer to the 64-bit instruction set when referring to x64. I fully believe that a single instruction can be decoded within a constant upper bound, O(1) or O(constant). The question I am raising is about the complexity of decoding n instructions (as n goes towards infinity).

Ah, I’d have to think about that for a bit. I feel like it would be O(n), because you find the first instruction’s length by looking at one prefix, then use that to locate the next prefix (etc.). But I may be forgetting some wrinkle.
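(A toy C sketch of why it ends up O(n): with a variable-length encoding, each instruction's start is only known after the previous length has been decoded, so boundary-finding is a serial chain, whereas with AArch64's fixed 4-byte encoding the i-th boundary is simply 4*i. The insn_length() helper is hypothetical, not a real x86 decoder.)

#include <stddef.h>
#include <stdio.h>

/* hypothetical stand-in for "look at the prefix/opcode bytes and work
   out how long this instruction is" (1..15 bytes on real x86) */
static size_t insn_length(const unsigned char *p) {
    return (size_t)*p;
}

int main(void) {
    unsigned char code[] = {3, 0, 0, 2, 0, 5, 0, 0, 0, 0, 1};
    size_t n = sizeof code;

    /* variable length: each boundary depends on the previous one,
       so finding k boundaries is an inherently serial O(k) chain */
    for (size_t off = 0; off < n; off += insn_length(&code[off]))
        printf("variable-length insn starts at %zu\n", off);

    /* fixed 4-byte length: boundary i is just 4*i, so any number of
       decoders can start in parallel with no chain at all */
    for (size_t i = 0; i * 4 < n; i++)
        printf("fixed-length insn starts at %zu\n", i * 4);
    return 0;
}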
 