That wasn't really my point with the emulation argument. I wasn't really talking about running emulators on Apple Silicon Macs necessarily, simply making the argument that software archival has value but I don't think it is so threatened by deprecations as a result of emulation and virtualisation down the road.

The amount of software that doesn't run is all the convincing I need. Sure, you can emulate a console from the 1980s on the M1, but let's look at other emulators, shall we? Can you run PCem, which emulates a Win9x-era x86 PC, on Mac? You cannot. Can you run Xenia to emulate the Xbox 360 on Mac? You cannot. Can you run PCSX2 to emulate PS2 games? You cannot (without great difficulty). Can you run RPCS3 to emulate PS3 games on Mac? You cannot. Can you run Spine, the up-and-coming PS4 emulator, on Mac? You cannot. Can you run Cemu to emulate Wii U games on Mac? You cannot. Can you run Yuzu to emulate Switch games on Mac? You cannot.
None of this software is available for the Mac. Windows and Linux only. Two operating systems which are also no longer natively available for the Mac. The only thing you get with Mac anymore is exclusive Apple software, exactly the way Apple wants it to be (don't you want to buy and use all of these perfect magical Apple products? Don't you love our harmonious ecosystem? Apple makes everything for you. Of course you want it, join us, join us now). I'll take that loss and gain immeasurable freedom and access to the rest of the software library available to the planet.
Though I will say you need to add another "(without great difficulty)", since you can in fact compile RPCS3 for macOS. It's not as straightforward as a Linux build, but a lot of the work has been put in to make it so: the makefiles have dedicated macOS entries to link against the proper frameworks and all.
I don't know most of the emulators you mention, but I'll tell you this: if they're unavailable, that's not because of the M1 or some locking-down of anything. They're not available on an Intel Mac either.
I think you're perfectly correct in this assessment, though my take on the phrase is that its precise value is contextual, and in a casual conversation I'd accept a threshold as low as a 10% chance or thereabouts. But as that StackExchange page you reference also says, "It's completely arbitrary."

I'm pretty sure "statistically impossible" isn't a thing. I'm guessing you got that 10^-50 number from the same StackExchange article I found searching for the phrase, but I can't find any authoritative reference for it.
Something can be statistically possible while being practically impossible: it's statistically possible to flip heads 1000 times in a row, but you aren't going to do it in your lifetime. Likewise, if you take a Gaussian distribution far enough out on the tails, it still has a non-zero probability, but you've long since reached the physical limits of the system in question.
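To put numbers on it, here's a quick back-of-the-envelope sketch (the 10^-50 cutoff is just the arbitrary StackExchange figure mentioned above, not an accepted standard):

```python
from fractions import Fraction

# Probability of flipping a fair coin 1000 times and getting heads every time.
p_all_heads = Fraction(1, 2) ** 1000

# The (completely arbitrary) 10^-50 "statistically impossible" cutoff.
cutoff = Fraction(1, 10 ** 50)

print(float(p_all_heads))    # ~9.33e-302: non-zero, so "statistically possible"
print(p_all_heads < cutoff)  # True: hundreds of orders of magnitude below the cutoff
```

So the event is strictly possible in the statistical sense, while being so far below any practical threshold that it will simply never happen, which is exactly the distinction the phrase blurs.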
It's possible for a result to be statistically insignificant, which doesn't say anything about the likelihood of the event; it merely means you don't have enough data to determine things one way or the other.
I think someone used a bogus phrase looking to add credibility to a bogus argument, and you and @TiggrToo are trying to map a formal definition onto language that doesn't have one. Perhaps they meant "impossible" and thought adding the "statistically" modifier sounded cool, or maybe they meant "unlikely".
Furthermore, Apple has stripped 32-bit hardware from their chips, I believe starting with the A7. This allows them to focus more of their silicon budget on a faster, better 64-bit CPU. If all major systems could drop 32-bit and adjust the boot process to fit, x86 chips could drop not just 32-bit hardware but also the 16-bit hardware that is in every single x86-64 chip right now. When an x86 system boots, it boots in "real mode", i.e. 16-bit mode. (At least for Multiboot, GRUB will put the chip into protected mode, i.e. 32-bit mode, before handing it over to the kernel.) The kernel then needs to put the chip into long mode before it's ready to properly use it in 64-bit and thus address all the system's memory in a flat memory model. Before all of this, all sorts of segmentation setup has to be done that no modern OS really uses, because we utilise paging instead, but it needs to be done for legacy reasons and for hardware support. And then of course there's the A20 gate: a hack that stuck with us because of flawed software that IBM wanted to remain compatible with, and now all PCs need extra hardware and all PC operating systems need code to deal with this frustrating mess, just because of a mistake made several decades ago.

This is incorrect. Supporting 32-bit applications means supporting an additional hardware operating mode (sometimes with different rules!), supporting and testing two sets of libraries, and also supporting a tricky set of edge cases when transitioning across the 32-bit/64-bit boundary. There is also no reasonable way to transition to high-performance ARM while keeping 32-bit support (because 32-bit ARM is crap).
A20 line - Wikipedia (en.wikipedia.org)
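The A20 wraparound mentioned above is easy to model in a few lines. This is a toy sketch of real-mode address arithmetic, not real hardware access: with the A20 gate disabled, bit 20 of the physical address is forced to zero, so the few addresses just past 1 MiB wrap back to the bottom of memory, replicating 8086 behaviour.

```python
def real_mode_address(segment: int, offset: int, a20_enabled: bool) -> int:
    """Physical address for a real-mode segment:offset pair (toy model).

    Real-mode addresses are computed as segment * 16 + offset, which can
    reach slightly past 1 MiB (up to 0x10FFEF). With the A20 gate disabled,
    bit 20 is masked off, so those addresses wrap around to low memory.
    """
    physical = (segment << 4) + offset
    if not a20_enabled:
        physical &= ~(1 << 20)  # the A20 gate forces address bit 20 to zero
    return physical

# FFFF:0010 addresses the first byte past 1 MiB when A20 is on...
print(hex(real_mode_address(0xFFFF, 0x0010, a20_enabled=True)))   # 0x100000
# ...but wraps to address 0 when A20 is gated off, as on the original 8086.
print(hex(real_mode_address(0xFFFF, 0x0010, a20_enabled=False)))  # 0x0
```

Software that relied on this wraparound is exactly why IBM added the gate, and why boot code still has to enable A20 before it can use memory above 1 MiB linearly.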
Huh, never even really thought of that as an option back then. Could've been an interesting alternate timeline to see.

OSX shouldn't have had 32-bit x86 libraries period, IMO. The ONLY Intel Macs that ever existed with 32-bit processors were the very first, using the Core Duo. Literally a 6-month period in 2006 led to 12 years of maintaining a pointless architecture in the OS and Xcode. Every other computer sold past that point supported x86_64.
That is purely on Apple. It was a regression for sure. Arguments about the "consumer friendliness" of keeping 32-bit support are quaint, but it shouldn't have been possible in the first place. They had the opportunity to cut the knot in the transition, didn't, and created this situation.