
casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
32-bit Windows 10 can still run 16-bit code.
Huh, I could've sworn they got rid of that even for 32-bit Windows, but I Googled it and it looks like you can indeed still enable it as an optional "legacy" feature on 32-bit Windows.
Guess the 80386 from 1985 hasn't quite been out long enough that we can expect everything to be updated for 32-bit yet :p Let alone x86-64, with its spec from 1999 and first release in 2003.
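(If anyone wants to try it: I believe the optional feature is literally called NTVDM, and if I remember right you can enable it with something like dism /online /enable-feature /featurename:NTVDM /all from an elevated prompt on a 32-bit install - but don't quote me on the exact switches.)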
 

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
Huh, I could've sworn they got rid of that even for 32-bit Windows, but I Googled it and it looks like you can indeed still enable it as an optional "legacy" feature on 32-bit Windows.
Guess the 80386 from 1985 hasn't quite been out long enough that we can expect everything to be updated for 32-bit yet :p Let alone x86-64, with its spec from 1999 and first release in 2003.
That is why Windows is the way it is. Too much legacy. It is fascinating to see that market shift a bit with 11. Of course I'm more focused on the AS changeover with Apple.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Guess the 80386 from 1985 hasn't quite been out long enough that we can expect everything to be updated for 32-bit yet :p Let alone x86-64, with its spec from 1999 and first release in 2003.
Yep! You have no clue just how much OLD software we have to run...
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Itanium didn’t use software emulation to run x86. It had an actual x86 core on there. It was crap.

And why is everyone pretending that if you sell chips that do not have real mode that means you can’t sell any chips that do? Intel is a big company with lots of chips. It can sell chips that cater to the ever-shrinking market that needs to run software from the 1990’s, as well as chips that burn a third less power for the same performance, or a third more performance for the same power, and do so by jettisoning compatibility with real mode and some of the other seldom-used modes.
Which brings me to question the wisdom of keeping the legacy baggage around. Obviously people have made the case that there exist programs that rely on old functions and standards which cannot (for whatever reason) be replaced without great cost.

There have been x86 add-in cards before, why not offload legacy functions to specific cards themselves?

Though I have no idea of the feasibility of such a thing.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
There have been x86 add-in cards before, why not offload legacy functions to specific cards themselves?
It's definitely feasible, and I hope someone does it eventually. I ran DOS/Windows on my Amiga way back when. It would make running an M(whatever) so much more palatable.
 

thedocbwarren

macrumors 6502
Nov 10, 2017
430
378
San Francisco, CA
Well, that settles it! Leisure Suit Larry, and Space Quest, here we come!

BL.
And you know you can run these native-ish with a compiled version of ScummVM on M1. So where I'm going with this is, why not do the same for legacy? Kind of like using DOSBox or Wine for running legacy apps vs keeping old junk around.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
There have been x86 add-in cards before, why not offload legacy functions to specific cards themselves?

It's definitely feasible, and I hope someone does it eventually. I ran DOS/Windows on my Amiga way back when. It would make running an M(whatever) so much more palatable.
Is this not a matter of licensing? While there have been a few x86 chips that weren't Intel or AMD throughout the years, it generally seems pretty difficult to get a license to make one. - I find it remarkable Rosetta is even legal honestly. Either Apple paid a ****-ton for that or it's a similar trick to what was used with the Z80, where it was legal to be binary compatible but not to use the same human-readable assembly language.
As for Intel or AMD making such a card; I mean Intel kinda has made x86 add-in cards like Xeon Phi (though I don't think that could function as a full x86 chip - Did it even have an MMU?) or the Compute Module they use in the NUCs - that's kind of a daughter board setup rather than a CPU in a logic board tray. But I don't think it's likely they'll want to make a market for x86 PCIe cards as co-processors handling legacy code if they can keep their CPUs as the main system chip instead.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
Is this not a matter of licensing? While there have been a few x86 chips that weren't Intel or AMD throughout the years, it generally seems pretty difficult to get a license to make one. - I find it remarkable Rosetta is even legal honestly. Either Apple paid a ****-ton for that or it's a similar trick to what was used with the Z80, where it was legal to be binary compatible but not to use the same human-readable assembly language.
As for Intel or AMD making such a card; I mean Intel kinda has made x86 add-in cards like Xeon Phi (though I don't think that could function as a full x86 chip - Did it even have an MMU?) or the Compute Module they use in the NUCs - that's kind of a daughter board setup rather than a CPU in a logic board tray. But I don't think it's likely they'll want to make a market for x86 PCIe cards as co-processors handling legacy code if they can keep their CPUs as the main system chip instead.

What most of the non-AMD x86 cloners did (Rise, Transmeta (I think), etc.) was use IBM as a fab - they had a fab license, so if you used them you were licensed. There were a couple other licenses out there. Can't remember what the deal was with Cyrix/VIA/National. I remember some had licenses to stuff up to a certain point (e.g. no 32-bit or whatever).

The need for licenses for emulators is questionable, especially in light of Oracle v. Google. Opcode encodings are a lot like SDKs, from a copyright perspective, I would think. Patents are another matter, but it's tough to get patents on encodings.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Is this not a matter of licensing? While there have been a few x86 chips that weren't Intel or AMD throughout the years, it generally seems pretty difficult to get a license to make one. - I find it remarkable Rosetta is even legal honestly.

I find it remarkable that the act of parsing and interpreting binary code can be illegal… that would make the very process of developing software impossible. So far Intel hasn't moved against anyone who implemented ISA translators, so I think it's just empty threats.
 
  • Like
Reactions: bobcomer

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,867
The need for licenses for emulators is questionable, especially in light of Oracle v. Google. Opcode encodings are a lot like SDKs, from a copyright perspective, I would think. Patents are another matter, but it's tough to get patents on encodings.
Intel made a little bit of noise about going after x86 emulators two or three years ago (iirc). I don't think much came of it because, as you say, established precedent makes it unlikely they could win a real court case against any opponent with similar resources, and those are the ones they'd want to stop.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
I find it remarkable that the act of parsing and interpreting binary code can be illegal… that would make the very process of developing software impossible. So far Intel hasn't moved against anyone who implemented ISA translators, so I think it's just empty threats.
Yeah, fair. There may also be a difference from the old Z80 situation, in that Intel's complaint there was about hardware doing the opcode decoding - but at the time, at least, it was legal for the Z80 to exist as long as its assembly language was different, even though it effectively executed 8080 code (though with extensions).
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
The need for licenses for emulators is questionable, especially in light of Oracle v. Google. Opcode encodings are a lot like SDKs, from a copyright perspective, I would think. Patents are another matter, but it's tough to get patents on encodings.
That's true, yeah.
What most of the non-AMD x86 cloners did (Rise, Transmeta (I think), etc.) was use IBM as a fab - they had a fab license, so if you used them you were licensed. There were a couple other licenses out there. Can't remember what the deal was with Cyrix/VIA/National. I remember some had licenses to stuff up to a certain point (e.g. no 32-bit or whatever).
Hm. Not sure I follow the logic there. Pretty sure you couldn't go to TSMC today and say "You make x86_64 for AMD, make me one".
But yeah, there've definitely been cases of only being allowed to go to 32-bit - but that was because of the cross-licensing situation with AMD owning the 64-bit portion and Intel owning the prior stuff, I think. Is VIA still around making x86 chips? I think their deal only covered 32-bit because it was exclusively with Intel.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
That's true, yeah.

Hm. Not sure I follow the logic there. Pretty sure you couldn't go to TSMC today and say "You make x86_64 for AMD, make me one".
But yeah, there've definitely been cases of only being allowed to go to 32-bit - but that was because of the cross-licensing situation with AMD owning the 64-bit portion and Intel owning the prior stuff, I think. Is VIA still around making x86 chips? I think their deal only covered 32-bit because it was exclusively with Intel.
IBM had a specific "foundry license." Remember that IBM had a lot of clout in the early days with Intel, and only agreed to use 8088s in the PC subject to having sufficient "second sources." I interviewed with a lot of startups in the x86 world, and they were all banking on that license. (I also worked for a startup that was doing an x86 but never released it - or, rather, it released it without the x86 decoder being operational.)

Anyway…

VIA still has a license and makes some sort of x86 chips, I believe. Zhaoxin, too (using whatever VIA's license is, I guess). And they are both 64-bit. Pretty sure AMD freely licenses the 64-bit stuff to anyone.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
IBM had a specific "foundry license." Remember that IBM had a lot of clout in the early days with Intel, and only agreed to use 8088s in the PC subject to having sufficient "second sources." I interviewed with a lot of startups in the x86 world, and they were all banking on that license. (I also worked for a startup that was doing an x86 but never released it - or, rather, it released it without the x86 decoder being operational.)
Ah, I see. Makes sense, cheers.
VIA still has a license and makes some sort of x86 chips, I believe. Zhaoxin, too (using whatever VIA's license is, I guess). And they are both 64-bit. Pretty sure AMD freely licenses the 64-bit stuff to anyone.
Hm. I knew there was some Chinese company too that made 64-bit x86. I thought they made some deal with AMD to essentially make their own chips at least partially based on earlier Zen architectures - something like "we'll give you our core designs from three full generations ago" - but I may be mistaken.
I would've thought AMD would keep their license more private though, since it's essentially their leverage for getting Intel's parts in the cross-license. But then again they have always championed open standards a bit more than the others.
 

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
Is this not a matter of licensing? While there have been a few x86 chips that weren't Intel or AMD throughout the years, it generally seems pretty difficult to get a license to make one. - I find it remarkable Rosetta is even legal honestly. Either Apple paid a ****-ton for that or it's a similar trick to what was used with the Z80, where it was legal to be binary compatible but not to use the same human-readable assembly language.
As for Intel or AMD making such a card; I mean Intel kinda has made x86 add-in cards like Xeon Phi (though I don't think that could function as a full x86 chip - Did it even have an MMU?) or the Compute Module they use in the NUCs - that's kind of a daughter board setup rather than a CPU in a logic board tray. But I don't think it's likely they'll want to make a market for x86 PCIe cards as co-processors handling legacy code if they can keep their CPUs as the main system chip instead.

The idea would be to have a physical motherboard and x86 CPU on a card, with some kind of interface to route that through to the Mac. Just like we had a long time ago with the Amiga Bridgeboard, and I think there was a Mac version too.
 

kaioshade

macrumors regular
Nov 24, 2010
176
108
My biggest gripe with the Apple Silicon architecture right now is the lack of native iOS apps on macOS. Apple made a big deal how iOS apps could/would run on macOS easily. But very few iOS apps actually work on macOS. And the moment users were able to make iOS apps work via sideloading, Apple killed it. Why advertise a feature only to handicap it as they have?

Otherwise I am pretty happy with Apple Silicon / M1.
 

Kung gu

Suspended
Oct 20, 2018
1,379
2,434
My biggest gripe with the Apple Silicon architecture right now is the lack of native iOS apps on macOS. Apple made a big deal how iOS apps could/would run on macOS easily. But very few iOS apps actually work on macOS. And the moment users were able to make iOS apps work via sideloading, Apple killed it. Why advertise a feature only to handicap it as they have?
iOS apps on the Mac are up to the developers. It's opt-in.
 

Michael Scrip

macrumors 604
Mar 4, 2011
7,975
12,674
NC
My biggest gripe with the Apple Silicon architecture right now is the lack of native iOS apps on macOS. Apple made a big deal how iOS apps could/would run on macOS easily. But very few iOS apps actually work on macOS. And the moment users were able to make iOS apps work via sideloading, Apple killed it. Why advertise a feature only to handicap it as they have?

Otherwise I am pretty happy with Apple Silicon / M1.

Apple said iOS apps can work on macOS. But it totally depends on the developer.

I think simple iOS apps could work on a Mac with minimal changes.

But since the Mac doesn't have a touchscreen... the developer might have to modify the input methods for use on a Mac.

And while a MacBook has a webcam... it doesn't have depth-sensing Face ID cameras. So anything with AR is out. Snapchat wouldn't be very fun on a MacBook.

And what about any app that uses GPS like Uber and Lyft?

Or the rotation sensor? Magnetometer?

So yeah... the platforms for macOS and iOS are compatible software-wise.

But there are hardware features in iPhones that simply do not exist in a Macintosh. That's one big reason why developers aren't racing to make their iPhone apps work on a Mac.

Another big reason, like I said, is the whole touchscreen thing. Most iPhone apps aren't built for mouse and keyboard.
 
  • Like
Reactions: Ruftzooi

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
My biggest gripe with the Apple Silicon architecture right now is the lack of native iOS apps on macOS. Apple made a big deal how iOS apps could/would run on macOS easily. But very few iOS apps actually work on macOS. And the moment users were able to make iOS apps work via sideloading, Apple killed it. Why advertise a feature only to handicap it as they have?

Otherwise I am pretty happy with Apple Silicon / M1.

It's not an issue with the Apple Silicon architecture. How many iOS apps could you run on Intel Macs?

Disabling sideloading (which was never a feature - it was a workaround that they never intended to permit) was necessary due to copyright. If I submit an app to the App Store, I still own the copyright to it, and if I don't want it to run on Macs (because, for example, it would eat into sales of my Mac app, or I don't want the support burden of dealing with customers who expect it to run on a Mac, where I haven't tested it), Apple has no right to permit installation onto Macs.
 
  • Like
Reactions: thedocbwarren

pdoherty

macrumors 65816
Dec 30, 2014
1,491
1,736
ARM has structural advantages over x86 and Apple just showed the world what those are. In response, Intel and AMD will have to eventually go to ARM or RISC. And they will have to go through the same, painful transition that Apple is going through right now.
RISC isn’t new or something Apple came up with - been around a very long time. I’m not sure why Intel has to go that way - they’ve done fine, despite the CISC vs RISC debate having been over for decades.


Intel's i9-12900 is faster than Apple's M1 in single-core and multi-core Geekbench 5. The i9-12900 should start shipping later this year or early next year. The i9-12900 needs 250 Watts to beat the M1 running at 20 Watts though. Intel wins!
I don't care about that on a desktop. For a portable like a tablet or laptop, sure, but on a desktop I want all the speed I can get, whether it sips power or, as in your example, uses two 100-watt lightbulbs' worth.
 
  • Like
Reactions: bobcomer

Kung gu

Suspended
Oct 20, 2018
1,379
2,434
RISC isn’t new or something Apple came up with - been around a very long time. I’m not sure why Intel has to go that way - they’ve done fine, despite the CISC vs RISC debate having been over for decades.
Actually, Apple did co-develop ARM back in the day, though.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
RISC isn’t new or something Apple came up with - been around a very long time. I’m not sure why Intel has to go that way - they’ve done fine, despite the CISC vs RISC debate having been over for decades.

No, they don't need RISC. But at the same time, there seems to be something fundamentally wrong with x86 implementations. I doubt that the CPU engineers at Intel or AMD are stupid, and yet they need 3-4x as much power to do the same amount of work as Apple's Firestorm cores. Even Intel's upcoming Alder Lake (which features significant improvements to key structures) doesn't fare better here. In fact, Intel seems so sure that they won't be able to improve efficiency any time soon that they are betting on efficiency cores to increase throughput.

So either Apple has some sort of super awesome secret sauce that allows them to make extremely energy-efficient CPUs, or there is something in the core x86 design that makes it extremely difficult to make energy-efficient CPUs. It's probably a bit of both. And while the professional opinion is split on this matter, I do believe that ISA does matter — at least slightly. Decoding the mess that's x86 instruction opcodes is much more complex and will require more power and a larger investment of silicon. Scheduling x86 instructions is more complicated as well. Basically, a substantial portion of the CPU transistors has to fight the ISA. It is very much possible that a move to a simpler, better designed ISA would allow Intel and AMD to design better processors.
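To make the decoding point a bit more concrete, here's a toy sketch in C (a made-up two-length ISA, not real x86, so treat it purely as an illustration) of why finding instruction boundaries is inherently serial with a variable-length encoding:

Code:
#include <stdio.h>
#include <stddef.h>

/* Toy ISA, NOT real x86: the top bit of the first byte says whether
 * the instruction is 1 byte or 3 bytes long. */
static size_t insn_length(unsigned char first_byte) {
    return (first_byte & 0x80) ? 3 : 1;
}

int main(void) {
    unsigned char code[] = {0x01, 0x90, 0x22, 0x33, 0x02, 0xA0, 0x44, 0x55};

    /* You only learn where instruction N+1 starts after decoding the
     * length of instruction N, so this scan is inherently serial.
     * A fixed-width ISA (AArch64: always 4 bytes) lets a wide decoder
     * start at offsets 0, 4, 8, ... with no such dependency. */
    for (size_t pc = 0; pc < sizeof code; pc += insn_length(code[pc]))
        printf("instruction starts at offset %zu\n", pc);
    return 0;
}

Real x86 is far worse than this toy: instructions run anywhere from 1 to 15 bytes, and the length depends on prefixes, the opcode and the addressing bytes, which is part of why wide x86 decoders are so expensive compared to ARM ones.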

I don't care about that on a desktop. For a portable like a tablet or laptop, sure, but on a desktop I want all the speed I can get, whether it sips power or, as in your example, uses two 100-watt lightbulbs' worth.

Oh, but you should. Better energy efficiency means more work done with the same power usage, and more performance in a more compact chassis. We haven't seen this yet, since the M1 is specifically a low-power product (which is easy to forget, since it holds its own against APUs that consume four times as much power), but higher-power systems are coming. And that's where it will start hurting. When a 16" MBP outperforms desktop workstations, that's where things will get weird for x86.
 

Sander

macrumors 6502a
Apr 24, 2008
521
67
It’s been a long time since I’ve touched C, but I remember pointers being some black magic voodoo.
I never quite got this. What's so "magic" about pointers? The name suggests "it's not the thing, it just points to the thing". Maybe it would have helped if they had been referred to as "addresses" rather than "pointers", because that's what they really are. I believe the "black magic" was introduced by the teaching approach that "the actual hardware is irrelevant" and code is viewed as some abstract mathematical recipe (an approach I oppose).

I always explain it like this: a computer's memory can be compared to a (huge) number of boxes all neatly lined up, each of which can store an integer between 0 and 255. Each of those boxes has a unique number, which we'll call its address. The computer can store and retrieve the contents of each box (the number between 0 and 255). To the computer, they're just numbers, and it depends on context what those numbers mean. If a box contains the number 65, then it could mean just that - the number 65 - or it could mean the letter A, or a pixel with an approximately 25% grey value, or the CPU instruction "LD B, C" (on a Z80), or part of something bigger like the word "Apple" or some floating point number (which takes 4 or 8 of those boxes combined) or whatever.

Since this "meaning" is important, the programming language tries to keep track of it: "These couple of boxes together are actually one thing, namely the word Apple, or this floating point value, or an image, or a database, or an audio file, or the memory address of some other box," etc.

The "drawback" of languages which expose these pointers to the programmer is that this can "break the abstraction". For some things it's fine if you are able to say "increase the value of the number in this box by 1" - in the context of a grey pixel, it just became a tiny bit lighter; if it was the letter A then it will become the letter B. But if it were part of the executable code, and you tried to "add one to the instruction LD H, L", and computers didn't hate being anthropomorphized so much, they would probably say "stop it, you're making me uncomfortable."
 
  • Like
Reactions: leman