
Would you buy an ARM iMac?


Sure, but that's a market with zero customers. When macOS users express concern about losing virtualization, they're talking about their need/desire to run VMware, Parallels, and Docker for Mac (among other tools). Nobody really gives two blinks about ARM virtualization on ARM.

Actually, I was curious about ARM-based virtualization. But I guess there really is no market for it.

When it comes to x86 on ARM, wouldn't that be "emulation" instead of "virtualization?"
 
I'd definitely jump off the wagon; count me out. As others have already posted, I too can't see any benefit to switching over to ARM on desktop/laptop computers unless someone wants to end up with a 27-inch iPad.
 
Right now I would rather see an ARM co-processor for 'natively' testing iOS apps on the Mac, or for offloading tasks that suit ARM, like batch-processing hundreds of images.
 
I dunno... ARM Alpine Linux in all its glory.

Or Ubuntu,
Or Fedora,
Or Debian,

etc.

It's Linux, folks - it's 99% written in C with processor-independence in mind, the community is positively allergic to binary 'blobs' that can't be re-compiled, and it's been running on ARM, PPC, MIPS, and IA64 for years (along with all the well-known open source projects - Apache, Nginx, Node.js, Java, Mongo, Python, PostgreSQL... and a lot of minor ones, too).

A lot of the modern 'container' technologies aren't based on CPU-level virtualisation but use 'chroot jails' which are processor independent (e.g. Docker, LXD etc. - both available on ARM Linux). The reason that Docker on Mac fires up a virtual machine is not because it needs x86, but because it only works with a Linux kernel. Sure - if you're pulling binary packages from repositories that don't have ARM builds you may have a problem, but the ARM repositories are pretty well populated - and you can always build from a tarball.

There's a ton of interest in ARM-based servers right now in a world where the power consumption of data centres is a big deal, and modern "cloud" technologies have vastly reduced the dependence on IIS, SQL Server and other Windows-centric tech that was propping up the x86. If you're working in Javascript/Node/Python or even C/C++ you'll barely notice the difference.
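As a small illustration of that processor-independence, here's a minimal Python sketch (standard library only, nothing specific to any one machine): the same script runs unmodified on x86, ARM, or anything else Python supports, and only the values it reports change, never the logic.

```python
import platform
import struct

# The same script runs unmodified on x86_64, arm64/aarch64, PPC, etc.
# Only the reported values differ between hosts; the program logic does not.
arch = platform.machine()          # e.g. 'x86_64' on Intel, 'aarch64' on ARM Linux
bits = 8 * struct.calcsize("P")    # pointer width in bits: 32 or 64

print(f"running on {arch} ({bits}-bit)")
```

This is exactly why a Node/Python/C workload ports so cleanly: the architecture only surfaces when something queries it.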

(Heck, the ARM has had full-blooded Unix since 1990 - I was running a web server on an Acorn R260 under RISC iX back in 1994. It took me an hour or so to get HTTPD to compile, and that was because of 'Unix flavour problems' with the locale, nothing to do with the processor architecture. Processor dependency is soooo Windows 95...)

Don't get me wrong: right now, some people need to run x86 Windows or x86 Linux - but Apple isn't going to take your x86 away overnight. Over a 5 year timescale, though, processor architecture is going to become less and less significant. Unless there's an unexpected number of people out there still writing lovingly hand-crafted assembler...

Remember - the main reason that Windows is so bad is that it's been hamstrung by the need to run ancient software...
 
It's Linux, folks - it's 99% written in C with processor-independence in mind, the community is positively allergic to binary 'blobs' that can't be re-compiled, and it's been running on ARM, PPC, MIPS, and IA64 for years (along with all the well-known open source projects - Apache, Nginx, Node.js, Java, Mongo, Python, PostgreSQL... and a lot of minor ones, too).

Yes, you are correct. If your virtualization need is limited to Linux and your workflow involves locally compiling the tools and utilities you need, an ARM VM will probably be suitable for what you're doing. If you're doing anything other than Linux, and if you're reliant on third-party package repositories, your experience will land somewhere between "awkward" and "intractable."

Also, minor nit: none of those things are "Linux" at all. You don't need Linux to run Apache, Nginx, Node.js, Java, Mongo, Python, or PostgreSQL. :)

A lot of the modern 'container' technologies aren't based on CPU-level virtualisation but use 'chroot jails' which are processor independent (e.g. Docker, LXD etc. - both available on ARM Linux). The reason that Docker on Mac fires up a virtual machine is not because it needs x86, but because it only works with a Linux kernel.

This is only telling a small slice of the story. It is correct that ARM Docker works technically and many put it to good use for vertical stack, in-house deployments. However, you're ignoring the scale and scope of the greater Docker ecosystem where people rely on published Dockerfiles and pulling images from the Docker Hub registry for software deployment and layered construction of custom containers. All of this is foundationally built upon and requires x86 containers. Every single time someone types `docker pull` or stuffs a `FROM` line in a Dockerfile they're writing, they're doing something that is fundamentally architecture-dependent.
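To make that concrete, here is a minimal, hypothetical Dockerfile of the kind published all over Docker Hub (the image and paths are illustrative, not from any real project). Nothing in it names an architecture, yet the `FROM` line resolves to architecture-specific image layers on the registry:

```dockerfile
# Hypothetical example - no architecture appears anywhere, but the base
# image pulled from Docker Hub is built for a specific CPU. If the base
# image has no ARM variant, this build simply fails on an ARM host.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y nginx
COPY ./site /var/www/html
```

To be fair, official images like `ubuntu` now publish multi-arch manifests, so Docker can pick the right variant automatically - but the long tail of third-party images on the Hub is overwhelmingly x86-only, which is the point being made here.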

Docker Desktop for Windows and for Mac leans on an underlying virtual machine in order to participate in this ecosystem. A purely ARM Docker for Mac would provide just a sliver of the utility and interoperability that people rely on with the current solutions.



Sure - if you're pulling binary packages from repositories that don't have ARM builds you may have a problem

Like every single image on hub.docker.com.


Don't get me wrong. I've got a pile of Raspberry Pi boxes doing all manner of things. I'm quite familiar with ARM on Linux, its capabilities, and its limitations. The situation is not nearly as compliant as you describe. You're also ignoring a large part of why people use virtualization in macOS and the ways they expect to be able to operate with modern container/cloud technologies.
 
The situation is not nearly as compliant as you describe. You're also ignoring a large part of why people use virtualization in macOS and the ways they expect to be able to operate with modern container/cloud technologies

...but, as I said, this isn't about getting up on the day of WWDC 2020 and throwing out all of your x86 boxes (the thread title is a bit silly in that respect) - unless Apple do something very stupid, it's a 4-5 year clock that hasn't even started ticking yet.

That's if you're still going to want to run VMs locally for cloud/web development and not (d'uh!) in the cloud. Already, if I wanted to start a web development project it might make more sense to just go online and spin up a new server instance sitting somewhere in the cloud (which could be a rack in your basement...) that is an exact clone of the target environment. Sure, I need broadband to interact with it - but if the Linux images are already in the cloud, the git/docker/npm/whatever repositories are in the cloud, the dataset is in the cloud, etc., then it's the server that needs the bandwidth - and it's sitting in a data center somewhere with a nice fat pipe to the internet.

...there are some interesting developments in Visual Studio Code at the moment with 'remote development' that lets you run the IDE on your desktop but seamlessly edit/compile/run/debug on a Linux box, WSL or a docker container (yes, it's x86 right now, but they've just announced an ARM Linux version of the remote 'server' for the Pi).

...then you have to throw in the possible benefits of ARM-based Macs for the not-insignificant iOS, Android and embedded development community. It seems likely to me that - while it's been essential in the past - the virtualisation option will rapidly become just for legacy Windows x86 software (...with more recent .net/C#/CLR-based software, which is already semi-processor-independent, rapidly popping up on ARM Windows).

So the question is, should the need to run legacy Windows software hold the Mac back in the long term?
 
Remember - the main reason that Windows is so bad is that it's been hamstrung by the need to run ancient software...

I use Macs and Windows PCs every day and I respectfully disagree that Windows is ‘so bad’. Although I prefer the look and feel of macOS, Windows has been quite decent since Windows 7 came out in 2009, in my view. It is perfectly usable day to day. Both Mac and Windows have their advantages and disadvantages.
 
I use Macs and Windows PCs every day and I respectfully disagree that Windows is ‘so bad’.

Maybe "so bad" wasn't the right term - but then most of the people in this conversation are jumping through hoops to run Windows under virtualization so that they can use MacOS as their daily driver. Plus, I was thinking slightly longer-term.

A lot of the criticisms of Windows are connected with the extent to which Microsoft are forced to maintain compatibility for older applications and hardware. Windows XP would have been pretty solid, but it was plagued with security issues, and a large part of that was because legacy issues meant that everybody ran it in admin mode so they could use software designed for 16-bit Windows - plus all those open ports and services that would presumably break something if they were disabled.

Think about it: 16-bit "DOS" mode support in Windows became optional in Windows 8 in 2012 and isn't even dead in Windows 10 (the 32-bit version, at least). Tech-wise, that's a bit like Apple dropping Apple IIGS emulation (I don't even remember that being a thing) - comparing it to Classic support (effectively dead with the Intel switch in 2005) would be generous because even the 68k+Classic Mac OS was technically more advanced than a CP/M clone running in 16-bit mode.

Windows is still stuck with the archaic concept of drive letters (another CP/M artefact, which wasn't a brilliant idea even when the IBM PC came out) plus those silly backslashes and lukewarm support for symbolic links - even though alternatives exist, you can't avoid them because MS doesn't dare deprecate them. Apple would have killed them with fire by now (when was the last time you used a colon-separated pathname on MacOS?)

Big annoyance with Windows 10? Well, data-slurping, ads and forced updates aside, try the whole 'two different control panels, modern and classic, with lots of duplicated features but some only available via one or the other' thing. They're stuck between two UI paradigms and can't make a clean break with the old style.

Then, hardware-wise, there's us Mac people complaining about USB-C ports when the Windows world hasn't yet killed off the flippin' PS/2 port (....but I neeedsss it for my preciousss rackmount KVM switch...)

The Mac is where it is today because Apple have been willing and able to radically change the platform on multiple occasions (Apple II to Mac, 68k to PPC, Classic to OS X, PPC to Intel, now 32 bit to 64 bit...) and get the transition substantially done and dusted within a few years. The PC world has maybe done it once (DOS-based to NT-based and that took ten years from the first release of NT 3.1 to actually launching XP as a mainstream OS).

Meanwhile, there's a distinct lack of "thinking different" in this thread... but I can understand that people just don't trust today's Apple not to prematurely kill Intel Macs, or not to use it as an excuse to padlock the gate to the walled garden.

I guess it also depends on whether you think Macs only "turned the corner" thanks to the Intel transition. I don't remember it that way - the iMac and the iPod 'halo' effect had put the Mac back on the map; if they hadn't, the Intel switch wouldn't have been the front-page news that it was. Personally, although Intel virtualisation has been useful, it was the switch to Unix that did it for me, but I doubt that's a mainstream driver, either.
 
While I understand Apple prices have gotten seriously expensive, I see no need to take out an ARM to buy a Mac other than the new Mac Pro. Adjustable Rate Mortgages are just crazy talk. A standard loan will do.
 
Maybe "so bad" wasn't the right term - but then most of the people in this conversation are jumping through hoops to run Windows under virtualization so that they can use MacOS as their daily driver. Plus, I was thinking slightly longer-term.

I'm not sure what hoops you have to jump through to get Windows up and running in a VM, but for me it is easy and painless. The host is a convenience, not a necessity. You can get decent results in Windows, Linux, or macOS.


Then, hardware-wise, there's us Mac people complaining about USB-C ports when the Windows world hasn't yet killed off the flippin' PS/2 port (....but I neeedsss it for my preciousss rackmount KVM switch...)

The flexibility of Windows 10 hardware and software is a strength, not a burden. I would say the same thing about Linux. I'm not sure why you are blaming Windows for your ancient hardware configuration choices.


The Mac is where it is today because Apple have been willing and able to radically change the platform on multiple occasions (Apple II to Mac, 68k to PPC, Classic to OS X, PPC to Intel, now 32 bit to 64 bit...) and get the transition substantially done and dusted within a few years. The PC world has maybe done it once (DOS-based to NT-based and that took ten years from the first release of NT 3.1 to actually launching XP as a mainstream OS).

The Mac is where it is because of the iPod and the NeXT boondoggle. Without the iPod, Apple would most likely be a topic old-timers talked about. Without NeXT and the swap to x86, I'm not sure Apple would still be producing computers.

Windows didn't have to swap architectures because it is the operating system that made x86 what it is, thanks to its flexibility. I loved building my own 8MHz screamers back in the day. Apple was playing catch-up and made the technology changes required to correct old decisions.

I would say the evolution of Windows, Linux, and macOS has gone hand in hand as technology has progressed.
 
There's a ton of interest in ARM-based servers right now in a world where the power consumption of data centres is a big deal, and modern "cloud" technologies have vastly reduced the dependence on IIS, SQL Server and other Windows-centric tech that was propping up the x86.

(Just to be clear, this is to elaborate on the above post, not to disagree or argue it.)

Microsoft has SQL Server on Linux and I believe they have it on ARM as well (I'm not sure if it has been released yet - the Google search, as usual, turned up a lot of unrelated junk). With it being on Linux, ARM should not be a hard step for them, just as they already have Windows Server on ARM. I love database servers and would love to see SQL Server on ARM. And Oracle, MariaDB, etc. etc. etc. I've never been an Intel fan; I think they are lazy in what they produce because, until lately, there has been no need to push the boundaries of what they can build. They need to change that perspective because various ARM chipmakers (including Apple) are breathing hot and heavy down the inept Intel's neck. Just look at what can be done with the new Raspberry Pi 4 when you kick the RAM up to 4GB.

I love Linux, love ARM as well. That said, I have no desire to see Macs based on ARM at this point. If I have to abandon my software stack with a switch to ARM, I am not staying with Apple any more than I stayed with Windows when I saw the hideous Windows 8. Funny thing is, I really like Windows 10, but it came too late; I had moved on to Macs long before.
 
It was asked what problems ARM would solve. Here's two:

1: Lower TDP = I can use it in the same room where I have my U87 set up without fear of the fan kicking into high speed and ruining a vocal/acoustic-guitar track. As it is, I have to keep the iMac in a control room and use an iPad as a transport/control surface. (I've tried fan-control software without success.)

2: It would free the Mac from Intel's lackluster release cycle, inherited security risks, production bottlenecks, and other legacies.

I hear you on 1. I would love to have a self-contained fanless recording solution for a vocal booth. I'm experimenting with a gen 1 iPad Pro and an iOS-compatible Audient interface.

Regarding 2, designing their own chips comes with pluses and minuses. Yes, they don't have to worry about legacy compatibility as much, and they can decide what an acceptable TDP is for different machines. Without manufacturer overhead they could save money per part by designing everything in-house (with licensing fees to ARM).

At the same time you do run up against the same laws of physics - if you want workstation-class performance it'll have to be a bigger, more complex chip with more cores, which means initial yields are usually lower. Could they get an ARM CPU as powerful as, say, a 12/16/22-core Xeon E5? Maybe, but it might still require a fan. Not unachievable, though, and it would be a PR win if they could partner with TSMC to open plants in the USA.

What I'd worry about is, knowing Apple, what if they decided to split from mainstream ARM architecture? It might make porting more difficult, restrict the toolchains to Apple-only, etc.
 
Without NeXT and the swap to x86,

Newsflash - the switch to NeXT/OS X started 5-6 years before the switch to x86. If the transition had taken as long as MS's DOS-to-NT switch, Apple would still have been reliant on Classic emulation in 2005/6 and wouldn't have been able to switch to x86. The NeXTStep OS started on the 68k processor, and the only reason it could 'save' Apple was that - as with most Unix-a-likes - it was mostly written in processor-independent C, making it relatively easy to support PPC (OS X), ARM (iOS) and x86.

Windows didn't have to swap architectures because it is the operating system that made x86 because of its flexibility.

Windows couldn't swap architecture because it was hamstrung by the need to run ancient 16-bit x86 code, which doesn't have proper 32-bit addressing so even code written in C ends up processor-specific (near and far pointers anybody?). DOS/Windows pretty much invented the idea of even high-level code being processor specific. Windows NT itself does/did support MIPS, Alpha, PPC and ARM but MS dropped most of those (both Intel and MS have a monopoly built on legacy x86 code and turkeys don't vote for Christmas). The only reason that x86 exists today is that IBM chose it for the original PC and used their dominant position to create a near monopoly based on a proprietary, processor-specific (and a kludgy 16-bit one at that) platform that has held the PC industry back for years.

The modern x86 is a decent processor if, like Apple, you can run it in full 32 or 64 bit mode and ignore the legacy stuff, but it's still carrying around a mass of extra circuitry just to translate the archaic x86 instruction set into its internal RISC code.

I love database servers and would love to see SQL Server on ARM. And Oracle, MariaDB, etc. etc. etc.

Well, I don't know about SQL Server or Oracle (when I google them all I get is this big red fiery eye and a sense of impending dread) but, otherwise, blow $50 on a Raspberry Pi kit and just fire them up. Not saying that a Pi is an enterprise-ready solution just yet, but that's not really the point - these packages have pretty much all been "ported" to ARM already and shouldn't be rocket science to get running on a hypothetical future ARM Mac. At least for the major open source packages it seems to be more a case of the software waiting for the hardware - an ARM-based Mac could be just the thing to get that ball rolling.
 
I would purchase an ARM Mac in a throttled heartbeat if the laptop were sold for $99.99 like the Pinebook.

Apple cannot compete with Pine64 on price for several reasons. First, Apple has a lot of expenditures to cover that the Pine folks don’t. Though originally built on free/libre open-source BSD Unix, Apple’s various operating systems are all developed and maintained in-house. Pine’s machines rely, unless I’m very much mistaken, on software written, developed, and maintained almost entirely outside their organization, so all the attendant costs - including the overhead for HR, insurance, retirement, etc. - are borne by the developer community, not Pine.

Second, R&D. Apple has spent heaven-only-knows how much money designing and testing the chassis, ergonomics, etc., of their systems. Everyone copying their designs has had the bulk of that work done FOR them - in many cases, BY APPLE (whom I am generally loath to defend, but in this case my higher allegiance must be to the truth as I see it, my feelings about Apple as a company notwithstanding). If you put the Apple MacBook Air (2012 through 2017 models) side-by-side with the original PineBook, it becomes, I think, fairly obvious where Pine got their design cues. It isn’t an exact carbon copy, but... one award the PineBook was never going to win is “most original,” that’s for damned sure.

Third, does Pine have the same return policy as Apple? Same shipping? Same deep pockets that I’m sure take an army of lawyers to defend, which itself costs, I’m sure, a fortune...

There are many costs Apple has to bear that Pine doesn’t, and a lot of those costs get baked right into the price of everything Apple sells. Pine doesn’t have to shoulder just about any of that.

I’m not saying one’s better than the other, each has advantages and disadvantages. BUT what I * AM * SAYING IS that that is a big part of the reason why Apple isn’t going to compete with Pine, price-wise. They can’t, really. The flip-side of this is that Pine cannot compete with Apple in several areas. But for some people, that doesn’t matter, which is why there’s a market for both.

It’s kind of like Mercedes and Chevrolet. Technically they’re competitors, but if you want a job-site pickup truck in the US with four-wheel drive, a lot of ground clearance, and the ability to schlep around a couple tons of crap, you’re probably looking for a Chevy, not a Mercedes. If, on the other hand, you want a luxurious, comfortable, fancy 4-door sedan with which to impress rich friends or wealthy clients, you might consider a Mercedes. Does Chevrolet have something like that? Sure, but even the nicest Chevy does not have this one feature - BEING a Mercedes - which, by contrast, every single Mercedes DOES have.

Nothing Pine makes could ever compete with anything Apple makes in the “BEING an Apple computer” department. That is where part of the money you pay for a Mercedes OR an Apple computer goes. Also... SOMEONE has to pay for that unimaginably expensive new “campus” Apple just built themselves, on some of the most expensive property on Earth, and for their executives and majority shareholders to be able to afford the occasional private island purchase, or spaceflight, etc.
 
Newsflash - the switch to NeXT/OS X started 5-6 years before the switch to x86. If the transition had taken as long as MS's DOS-to-NT switch, Apple would still have been reliant on Classic emulation in 2005/6 and wouldn't have been able to switch to x86. The NeXTStep OS started on the 68k processor, and the only reason it could 'save' Apple was that - as with most Unix-a-likes - it was mostly written in processor-independent C, making it relatively easy to support PPC (OS X), ARM (iOS) and x86.

That obfuscates what I said but doesn't change it. By the time Jobs was brought back in to Apple, NeXTStep had already been ported to x86.


Windows couldn't swap architecture because it was hamstrung by the need to run ancient 16-bit x86 code, which doesn't have proper 32-bit addressing so even code written in C ends up processor-specific (near and far pointers anybody?). DOS/Windows pretty much invented the idea of even high-level code being processor specific.

That just isn't true. In the past, Windows has supported x86, x86-64, MIPS, Alpha, ARMv7, ARM64, Itanium, and PowerPC. They may have supported more, but that was off the top of my head. I'm not sure what their currently supported architectures are, but they would at least include the x86 and ARM instruction sets.

I like Windows flexibility and I also like its backwards compatibility and see it as a benefit.
 
Well, I don't know about SQL Server or Oracle (when I google them all I get is this big red fiery eye and a sense of impending dread) but, otherwise, blow $50 on a Raspberry Pi kit
Did that years ago. Funny thing, though... I like responsiveness from my database server.
 
Windows’ great backwards compatibility for software (and supporting things like ancient PS/2 ports or whatever) has no bearing on the stability/usability of the OS for millions of average users buying brand new laptops and other hardware who don’t need old software. It simply gives those people who do need backwards compatibility the choice to run and do what they want on their computers.

I also like how Windows 10 supports quite old hardware specs and runs fine. I have an ancient Core 2 Duo PC that still works perfectly on Windows 10 and is great for web browsing, videos etc. with full security updates. While my old 2009 24 inch iMac of the same vintage hasn’t received a security update since El Capitan.

I understand the need to ‘move on’ eventually with technology but Windows has been perfectly fine, usable and performs well in my use even with all the ‘legacy cruft’ it has inside of it. I doubt most people would notice any day to day benefit if Microsoft were to take Apple’s approach and continuously strip out legacy features and backwards compatibility.

In summary, I feel there is no massive benefit for removing backwards compatibility to streamline Windows when the OS works fine for most people and many view backwards compatibility as a great feature. Regardless I like being able to use Windows and Mac with their different approaches rather than having every OS follow one example.

Once again both OS’s have their advantages and disadvantages and choice is good.
 
Did that years ago. Funny thing, though... I like responsiveness from my database server.

Well, get one of these then: https://www.gigabyte.com/uk/ARM-Server - or it looks like you can now spin up a virtual ARM server on Amazon cloud: https://aws.amazon.com/ec2/instance-types/a1/

Failing that, the Raspberry Pi 4 at least has 4GB RAM and "proper" USB3 and Gigabit Ethernet ports (big problem with previous Pis was that most of the I/O was funnelled through a single, and somewhat buggy, USB 2 port) - but as I already said, still not exactly enterprise-ready!

Point is, the software is mostly there - it's mainly that it's early days for the hardware, and there isn't currently much between server-class stuff and hobbyist boards like the Pi and its various imitators. A (well-thought-out) ARM Mac could fill that gap.

Not that you need an ARM desktop to develop (the majority of) web/cloud apps for ARM servers, any more than you need an x86 desktop to develop for x86 servers...
 
That obfuscates what I said but doesn't change it. By the time Jobs was brought back in to Apple, NeXTStep had already been ported to x86.

So what? NeXTStep would have been useless to Apple if they hadn't been willing and able to transition from Mac OS 9 to a completely new OS - plus OS X started out on PPC (which, AFAIK, wasn't supported by NeXTStep). If NeXTStep hadn't been available they'd probably have gone with BeOS (which already ran on Macs and would have been more of an off-the-shelf replacement than NeXTStep - but with hindsight, NeXTStep's advantage was probably that it came with Steve Jobs attached).

That just isn't true. In the past, Windows has supported x86, x86-64, MIPS, Alpha, ARMv7, ARM64, Itanium, and PowerPC.

I already noted that Windows NT has supported other architectures. Emphasis on (a) NT and (b) has (past tense - except ARM). Those were all modern 32 or 64-bit operating systems. The real drag-anchor on Windows has been ancient 16-bit code and Win3.1/9x/DOS compatibility - particularly in in-house 'corporate' applications. That's why Windows NT failed to take off on anything other than x86 (...or maybe Intel pressured MS to kill it, or maybe MS worried that if they weaned their customers off legacy code they might start looking at MacOS or Linux... but that's conspiracy theory territory).

I like Windows flexibility and I also like its backwards compatibility and see it as a benefit.

Fine. However, some of Apple and MacOS's benefits come from Apple having had the freedom/courage to burn its bridges on at least 5 occasions (6502 -> 68k, 68k->PPC, MacOS -> OS X, PPC->x86, x86-32 -> x86-64 - and I guess you could add iOS in the sense that Apple didn't have to try and make it look like OS X in the way that MS did with Windows CE/Mobile). If Apple were still worrying about backward compatibility with Apple II or Classic Mac, the Mac wouldn't be what it is today... although at least Mac was always a proper 32-bit architecture - unlike 8086...80286 - so they've never had the equivalent of 16-bit DOS/Windows code to cope with.
 
So what? NeXTStep would have been useless to Apple if they hadn't been willing and able to transition from Mac OS 9 to a completely new OS - plus OS X started out on PPC (which, AFAIK, wasn't supported by NeXTStep). If NeXTStep hadn't been available they'd probably have gone with BeOS (which already ran on Macs and would have been more of an off-the-shelf replacement than NeXTStep - but with hindsight, NeXTStep's advantage was probably that it came with Steve Jobs attached).

Agreed. So what? NeXT was a failure until it was incorporated into Apple. It had nothing to do with Apple being willing to make that move. Jobs forced the transition because their old OS and architecture were failing and NeXT was his baby. The NeXT evolution into OS X was a boon to Apple and probably saved the Mac altogether.


I already noted that Windows NT has supported other architectures. Emphasis on (a) NT and (b) has (past tense - except ARM). Those were all modern 32 or 64-bit operating systems. The real drag-anchor on Windows has been ancient 16-bit code and Win3.1/9x/DOS compatibility - particularly in in-house 'corporate' applications. That's why Windows NT failed to take off on anything other than x86 (...or maybe Intel pressured MS to kill it, or maybe MS worried that if they weaned their customers off legacy code they might start looking at MacOS or Linux... but that's conspiracy theory territory).

That is my point exactly. Backwards compatibility has nothing to do with supporting other architectures and hasn't had a negative impact on Windows' success. We have already agreed that NT (what everything is based on now) can, does, and has supported other architectures.

Backwards compatibility had an impact on why those other architectures didn't succeed, but x86 drove them into the ground on price, performance, and accessibility. Backwards compatibility is a benefit, not a hindrance, as Windows' x86 success has shown. I'm not sure why you view it as a millstone around Windows' neck when its success is apparent.

Windows 10 is pretty decent and so is macOS. I use them both without any problem.


Fine. However, some of Apple and MacOS's benefits come from Apple having had the freedom/courage to burn its bridges on at least 5 occasions (6502 -> 68k, 68k->PPC, MacOS -> OS X, PPC->x86, x86-32 -> x86-64 - and I guess you could add iOS in the sense that Apple didn't have to try and make it look like OS X in the way that MS did with Windows CE/Mobile). If Apple were still worrying about backward compatibility with Apple II or Classic Mac, the Mac wouldn't be what it is today... although at least Mac was always a proper 32-bit architecture - unlike 8086...80286 - so they've never had the equivalent of 16-bit DOS/Windows code to cope with.

That was a requirement to attempt to compete. Apple had to adjust because it was on the verge of failing and had to find a way to survive. The Mac OS helped, but the iPod pulled them out of the fire.

Who cares what the word size was with the 8086/8088 (40 years old now!), they changed computing as we know it.
 
Who cares what the word size was with the 8086/8088 (40 years old now!), they changed computing as we know it.

Microsoft do, since they're still maintaining parts of Windows that run 40-year-old code. Intel/AMD do, because their CPUs still have to include circuitry to deal with 16-bit instructions. ARM do, because it's one of the reasons their cores can get by with far fewer transistors than x86 ones.

If you're thinking of the IBM PC, then the processor to "change computing as we know it" was always going to be whichever one IBM chose... and they weren't choosing on technical merit. The IBM PC was a "me too" CP/M-86 machine, just like several others on the market at the same time, rushed out by IBM when they suddenly noticed that personal computers were about to eat their lunch - which was probably why they went with something that was sorta software-compatible with CP/M. Of course, the PC was pretty much the end of that sort of compatibility: whereas CP/M applications had always been written to be 'patchable' with things like the control codes or video RAM addresses needed to work across diverse CP/M systems, once PC-DOS came along everything was just hard-coded for the proprietary IBM PC hardware and firmware. And no, folks, whatever the revisionist history may say, the IBM PC was no more "open" than any of its contemporary personal computers - only IBM were allowed to make fully IBM PC-compatible machines until someone 'clean-roomed' the BIOS and ran the gauntlet of IBM's lawyers to establish its legality (which wouldn't have happened under modern software patent law).

Sorry, but the IBM PC/x86/Wintel monster's main contribution to the personal computer industry has been to suppress a whole series of technically superior platforms (very nearly including Mac) and slow down the adoption of cross-platform standards.
 
Sorry, but the IBM PC/x86/Wintel monster's main contribution to the personal computer industry has been to suppress a whole series of technically superior platforms (very nearly including Mac) and slow down the adoption of cross-platform standards.

It was obvious back in the mid-to-late '80s that DOS/Windows/x86 was going to win.

Apple and other environments promoted more expensive, closed ecosystems. They wanted to control their environment and charge more for it. Did some superior technologies fall by the wayside? I'm sure they did as happens all the time in the tech industry. A product needs to be marketed and priced to compete successfully regardless of its technical merits. Many of those weren't.

IBM was very much the same. Fortunately, Microsoft owned the rights to MS-DOS (derived from Seattle Computer Products' 86-DOS, a CP/M workalike) and licensed it to IBM, which rebranded it PC-DOS. PC-DOS wasn't open, but the generic PC with MS-DOS was.

If the x86 and MS-DOS hadn't saturated the market, I seriously doubt we would have any cross-platform standards. It would be closed system after closed system.

It wasn't the x86 that almost did the Mac in... it was Apple/Jobs and the closed ecosystem, coupled with a large price tag and a lack of direction when Jobs wasn't there. It wasn't worth the extra cash for most people.
 
Windows’ great backwards compatibility for software (and supporting things like ancient PS/2 ports or whatever) has no bearing on the stability/usability of the OS for millions of average users buying brand new laptops and other hardware who don’t need old software. It simply gives those people who do need backwards compatibility the choice to run and do what they want on their computers.

I also like how Windows 10 supports quite old hardware specs and runs fine. I have an ancient Core 2 Duo PC that still works perfectly on Windows 10 and is great for web browsing, videos etc. with full security updates. While my old 2009 24 inch iMac of the same vintage hasn’t received a security update since El Capitan.

I understand the need to ‘move on’ eventually with technology but Windows has been perfectly fine, usable and performs well in my use even with all the ‘legacy cruft’ it has inside of it. I doubt most people would notice any day to day benefit if Microsoft were to take Apple’s approach and continuously strip out legacy features and backwards compatibility.

In summary, I feel there is no massive benefit for removing backwards compatibility to streamline Windows when the OS works fine for most people and many view backwards compatibility as a great feature. Regardless I like being able to use Windows and Mac with their different approaches rather than having every OS follow one example.

Once again both OS’s have their advantages and disadvantages and choice is good.

Good points. My concern isn't the millions of "average users buying brand new laptops" (who probably should be using Chromebooks anyway), it's the enterprises and government entities that spend billions on technology. Those organizations REQUIRE backward compatibility.

That being said, there are rumors that Microsoft is working on a "Core OS" that is free from backward compatibility requirements.
 
I'm impressed with the performance of my iPad Pro. I don't do as much Windows based stuff on my device as I used to. I would be ok with ARM processors if it meant that iOS and Mac could share the pool of games coming out and simply convert between touch and mouse/keyboard control based on the tablet/desktop mode.
 