We have seen DRM methods so sensitive that simply changing the graphics card is enough to set them off. Let's face it: there is a lot of variety within the x86 architecture, so I seriously doubt a truly "homogenous, singular, unified" market is anything but a niche.
If I may chip in:
Hypervisors and operating systems are a layer of abstraction intended - among other things - to make life easier for developers. They can use known application programming interfaces which within reason make their code run with the same results on server hardware produced by Dell or Lenovo, and whether clients connect over a Mellanox or an Intel network interface, and whether data is stored on mechanical drives or solid state ones, behind storage adapters from various suppliers.
Behind the scenes, people like me are still cross-referencing hardware compatibility lists and software version interoperability lists every time hardware is purchased or software is upgraded, to ensure rock-solid performance and supported configurations. But a developer like @Nugget can be pretty damn sure that if he writes software for Linux or Windows on an x86-64 platform, it will behave the same in my data center as in his data center, or with Microsoft or Amazon in Azure or AWS respectively, give or take some odd performance differences that may or may not be easily explained.

What Apple has done over the last few years is to herd developers toward libraries and habits that make their code significantly easier to port across Apple's platforms specifically. This has not been the case on the other platforms that many of us are completely dependent on for our daily income, and there has been no incentive for most developers to think about low-level differences between x86 and ARM, because they've been developing for one OR the other. THIS is why @Nugget keeps repeating that the tools HE needs (along with most other backend developers - which, again, are the absolute majority of commercial developers) show few signs of suddenly becoming more friendly to cross-architecture development. And once again: unless there's a sudden crisis that forces a change to this reality, this fact won't change in a hurry in the corporate space. For many of these people, the Mac will return to being something they enjoy using in their spare time rather than a nice tool for work.

Me? I need a number of common shell tools and git version control to do most of my work, along with some way of remotely accessing our servers. I'm not worried about the Mac's ability to keep doing what I require of it. But no: If I was in the middle of programming software that had been aimed at x86 servers and developed with a locally hosted toolchain, I would not be confident in my ability to convince management that the Apple Silicon Mac was still the best tool for the job.
 
When talking about LLVM on Darwin, yes, I did mention that.

I was talking about the Swift compiler in this quoted passage though. That project is run by Apple, and my point is that Swift is envisioned as another front end to LLVM, not *just* a language for Apple platforms. Although Apple’s priority is in the language itself and Apple platforms, along with Linux as the “other platform core engineers must not break”.

Oh right, sorry, I was confused. I was thinking of Clang as being historically a C++ front end to LLVM (in addition to the other stuff it does). It seems like adoption would help Apple overall as long as it doesn't break anything on Apple's end. It's not like they would be running their test suite for all operating systems; Apple's internal builds are probably only configured for the platforms they actually care about.
 
The vast, vast, vast majority of software developers are writing software which is not intended for end users to run. When it comes to lines-of-code written, or the number of employed software developers it's not even remotely close. Internal, back-end, and what you call "niche" development is in fact the overwhelming majority.

Swift and Livecode are perfectly fine platforms if you want to take an application you've written and produce it for multiple platforms among a user base that has different kinds of devices or operating systems. That's the opposite of what my job is (and the majority of software developers working today). I have a homogenous, singular, unified "market" which is our internal production infrastructure and a developer environment which is chosen to be compatible with the production infrastructure. Any discussion of cross-platform development is the reverse of what you are imagining it to be. We would need to introduce cross-platform development not in the service of a varied deployment target, but rather to support variety among developer environments. There's a strong economic and business case to simply avoid that risk and expense by mandating x86 for developer machines. This means that Apple Silicon Macs will not be suitable for our developers because without performant x64 emulation there's no reasonable way to incorporate them into the development workflow we use. Any efforts to do so would incur risks and costs which far outweigh whatever marginal benefit we'd receive from those aspects of Arm which are technically superior to x64.

When it comes to cloud and in house production systems, x64 enjoys a near universal monopoly. Arm cloud resources are available, but they're like a fart in a tornado when it comes to mindshare and marketshare. That's not going to change any time soon because there's just not any pressure to do so.

Nobody is ignoring the tools you have discovered in your hurried armchair research. They're just not relevant to the argument you're trying to keep alive. You seem to have a strong emotional connection to this otherwise anodyne discussion about CPU architectures. I'd hope that most of us do not share this confounding influence. I don't have any loyalty or fondness for any CPU instruction set. I just have a job to do and a desire to choose the tools which place the fewest barriers in between me and success. Arm is not the right tool today.

Be curious, not judgmental.

You tell others not to be judgmental after judging them in every single post - comments like "your hurried armchair research" are examples of that. Furthermore, your "internal production infrastructure and developer environment which is chosen to be compatible with the production infrastructure" is a unique case FOR YOUR ORGANIZATION, and therefore is not reflective of the market as a whole. As for ARM cloud market share, you might want to double-check your numbers - AWS is increasingly reliant on ARM-based hardware, as are OneDrive, Dropbox, and other cloud services. Keep in mind that the Graviton2 processors mentioned in the Amazon report are designed in-house by Amazon's cloud computing team.



From the latter article:

Microsoft has made no secret that it wants to have 50 percent of its server capacity on Arm processors, and has recently started deploying Marvell’s “Vulcan” ThunderX2 processors in its “Olympus” rack servers internally.

Last I checked, that's nowhere close to a "near-universal monopoly," but if you need more proof:


Furthermore, more companies are developing ARM-based datacenter CPUs, including Marvell, Ampere (founded by a former Intel engineer), Amazon, and Google. With this increased competition in the datacenter from both AMD and ARM-based processors, that dominance will begin to wane.


In conclusion, your edge case is not reflective of the industry as a whole, nor of the major players in it. Your arguments are specific to your company's particular needs and requirements, and should not be extrapolated to provide a picture of the market as a whole (or the future of ARM and/or Apple), especially when the market is moving in the opposite direction of your position.
 
In conclusion, your edge case is not reflective of the industry as a whole, nor of the major players in it. Your arguments are specific to your company's particular needs and requirements, and should not be extrapolated to provide a picture of the market as a whole (or the future of ARM and/or Apple), especially when the market is moving in the opposite direction of your position.
As I said before, sure, they (various tools to build cross-platform code) are kind of useless for internally made or highly niche software. But my view is: with the programs now available, is doing such things in house, or using such highly niche software, really that good an idea anymore?

Perhaps using highly niche software is still a good idea (though if off-the-shelf stuff can do enough, that might be a better alternative), but as you state, it is in no way a sign of where the industry as a whole is, or even where it is going.
 
You tell others not to be judgmental after judging them in every single post - comments like "your hurried armchair research" are examples of that. Furthermore, your "internal production infrastructure and developer environment which is chosen to be compatible with the production infrastructure" is a unique case FOR YOUR ORGANIZATION, and therefore is not reflective of the market as a whole. As for ARM cloud market share, you might want to double-check your numbers - AWS is increasingly reliant on ARM-based hardware, as are OneDrive, Dropbox, and other cloud services. Keep in mind that the Graviton2 processors mentioned in the Amazon report are designed in-house by Amazon's cloud computing team.
Right. Besides, using a niche case to extrapolate what the industry as a whole is doing, or where it is going, is a majorly bad idea. There is a high risk of tunnel vision, where changes outside the niche eventually affect the niche, and one is either left on a road to nowhere or has to spend more money than would have been needed if the trend had been spotted sooner.

In conclusion, your edge case is not reflective of the industry as a whole, nor of the major players in it. Your arguments are specific to your company's particular needs and requirements, and should not be extrapolated to provide a picture of the market as a whole (or the future of ARM and/or Apple), especially when the market is moving in the opposite direction of your position.
Exactly. In fact, one of the products of that "armchair research" showed that the trend is toward smartphones, which are ARM-based. I couldn't find the particular article I originally used, but I found one that says much the same thing: IDC: 87% Of Connected Devices Sales By 2017 Will Be Tablets And Smartphones
 
You tell others not to be judgmental after judging them in every single post - comments like "your hurried armchair research" are examples of that.

I stand by my characterization of that single user's posts to this topic and I believe that a dispassionate reader will agree with my assessment.


Furthermore, your "internal production infrastructure and developer environment which is chosen to be compatible with the production infrastructure" is a unique case FOR YOUR ORGANIZATION, and therefore is not reflective of the market as a whole. As for ARM cloud marketshare, you might want to doublesheck your numbers

None of your links speak to market share at all. They are all aspirational and focus on capabilities. Of course, the future may find that Arm infrastructure is adopted on a wide scale basis in the cloud and perhaps even later in self-hosted data center infrastructure. I don't doubt or deny that Microsoft "wants to have 50 percent of its server capacity on Arm processors." That makes sense.

My comments are and have always been on the state of the industry as it exists right now. All the major cloud providers offer Arm solutions (which they absolutely should). Dell, HP, SuperMicro, et al. are experimenting with Arm bare metal solutions.

Arm may very well be the future, and there's an undeniable technical argument for that outcome.

But let's be real. x86 server and cloud infrastructure is hardly a niche platform. It's the overwhelming majority platform by a very large margin. You act like my employer is the only company on the planet still using Xeon servers.

Your "A Huge Week for Arm in the Datacenter" article is announcing the availability of a new Arm server from "Bamboo Systems" and news about a future Arm CPU from Ampere which will start sampling in Q4 of this year. How does that speak to market share? That sounds like a nascent, emerging market to me. Arm market share will be noteworthy when a press release from "Bamboo Systems" doesn't make the news.

I don't recall ever saying that Arm won't be the future. With some luck it may be, and we may all be better off for it. But right now, today, that's not the case. It's not even a foregone conclusion.

What's been discussed here -- and to the point of this conversation -- is whether Apple has enough pull in the marketplace to move the needle on Arm in the data center. An x86-64 production infrastructure makes it risky and expensive to integrate an Arm development environment. Are there enough developers willing to hitch their fate to macOS and put pressure on companies to migrate production to Arm? Or is the inertia behind x86-64 in the data center strong enough to squash the appeal of macOS for developer machines? There are a lot of influencing factors and I have no idea how it will play out in the long term.

For my company, we'll almost certainly just drop support for macOS and developers will be forced to migrate to Windows or Linux when Intel Mac hardware is no longer reliably available. Unless, of course, there's some unknown magic in the works that allows for x86 virtualization without much of a performance and usability penalty.

In conclusion, your edge case is not reflective of the industry as a whole, nor of the major players in it.

It's hilarious to describe x86-64 server infrastructure as an "edge case." Like "niche," it seems to be terminology designed more to inflame than to inform.
 
Right. Besides, using a niche case to extrapolate what the industry as a whole is doing, or where it is going, is a majorly bad idea. There is a high risk of tunnel vision, where changes outside the niche eventually affect the niche, and one is either left on a road to nowhere or has to spend more money than would have been needed if the trend had been spotted sooner.

I've made no claims about where the industry is going. I'll adapt to where it is going when it gets there. Any developer workstation I buy today will not still be in operation whenever some hypothetical glorious Arm infrastructure arrives (if it ever does). Certainly having some of the developers in my company on their current x86-64 machines and needing to support other developers on Arm machines would be a horrible choice for my company to make, and this is hardly a novel circumstance. There's no way to incrementally make that migration without incurring great risk and expense for very little upside.

I'm still curious how you think Rosetta 2 is at all relevant to this discussion, if you can find the time to clarify your thoughts from earlier.

Exactly. In fact, one of the products of that "armchair research" showed that the trend is toward smartphones, which are ARM-based. I couldn't find the particular article I originally used, but I found one that says much the same thing: IDC: 87% Of Connected Devices Sales By 2017 Will Be Tablets And Smartphones

I don't expect that we will be replacing our production infrastructure or developer workstations with tablets and smartphones in the next three years either.
 
You tell others not to be judgmental after judging them in every single post - comments like "your hurried armchair research" are examples of that. Furthermore, your "internal production infrastructure and developer environment which is chosen to be compatible with the production infrastructure" is a unique case FOR YOUR ORGANIZATION, and therefore is not reflective of the market as a whole. As for ARM cloud market share, you might want to double-check your numbers - AWS is increasingly reliant on ARM-based hardware, as are OneDrive, Dropbox, and other cloud services. Keep in mind that the Graviton2 processors mentioned in the Amazon report are designed in-house by Amazon's cloud computing team.




Thanks for the links. Do we have any reliable figures for the percentage of data center workloads / percentage of total CPU cores that are running on ARM vs x86 or other architectures?

I have read that ARM servers are only about 0.5% of global numbers (with 98.9% being x86) - but that this is set to grow to 2.5-3.5% by 2024 (IDC figures?)

If Apple Silicon is successful and adopted by developers, then this might encourage the use of ARM in the data centre, for reasons outlined by Linus Torvalds.
 
None of your links speak to market share at all.

I have one that does: Arm's market share and targets across key technology markets in the 2018 and 2028 fiscal years. About the only thing that blows goats (i.e., is baaaaad :) ) is data center/cloud. The predictions for 2028 are likely overoptimistic, but at least the 2018 figures are from actual data.

I would love to know why, in 2011, IHS thought that 23% of the PC market would be ARM by 2015, given there wasn't anything out at the time to justify it (basing that on an OS that hadn't even seen the light of day was, IMHO, reckless).
 
What's been discussed here -- and to the point of this conversation -- is whether Apple has enough pull in the marketplace to move the needle on Arm in the data center. An x86-64 production infrastructure makes it risky and expensive to integrate an Arm development environment. Are there enough developers willing to hitch their fate to macOS and put pressure on companies to migrate production to Arm? Or is the inertia behind x86-64 in the data center strong enough to squash the appeal of macOS for developer machines? There are a lot of influencing factors and I have no idea how it will play out in the long term.

This is the question that interests me as well.

I'm also curious about just how much dependency there really is between the architectures of the development and deployment platforms.

Linus Torvalds notably commented that the reason x86 is so prevalent in the data center is that it's what developers are using "at home," and that as a consequence ARM will have little penetration. Maybe Apple will change that, but few developers want to make their lives more complicated by adding additional hoops to jump through, or by using immature tools.

I would be interested in compiling a list of application types and how these would be supported on an Apple Silicon Mac.
e.g.

1) macOS native apps: presumably no issues if programming to Apple APIs and using Xcode to target Apple Silicon or fat binaries.

What languages will be supported in ASi versions of Xcode? The same as currently, i.e. Swift, Objective-C, C/C++, Java, Python, Ruby? What about other languages and tools? I think Pascal, Basic, and Forth may be available, but they're unlikely to be used for modern apps. According to https://developer.arm.com/solutions/infrastructure/developer-resources/languages-and-libraries, NodeJS and GoLang are supported.

However, I would expect the choice of languages and development frameworks to be *far* more limited than on x86.

What frameworks / IDEs are supported? I heard that Electron-based apps have some issues on the DTK. What other frameworks are available?

2) Full-stack web-apps

Will ARM64 browsers perform the same way as the x86 versions? Will client libraries like ReactJS, VueJS and AngularJS still work?

What back-end frameworks are supported and known to work? NodeJS - sounds like it, Python Flask/Django?, Ruby-on-Rails? GoLang? Spring? Laravel / PHP?

Which Java app-servers will work? e.g. Tomcat. What about enterprise servers like WebLogic, JBoss, WebSphere?


3) Mobile apps - iOS should be fine with Xcode. What about Android? Will developer toolkits be available?

4) Compatibility between ASi and x86 versions

This is what Linus Torvalds implied was a big issue. Will apps/scripts prepared with the same frameworks using ARM libraries really work differently when deployed to an x86 target?

There's already been a discussion about the complexities with moving Docker containers across architectures. Is the problem likely to be more widespread?

What would stop a web-app, say written in ReactJS + NodeJS, written and tested on an ASi Mac, from running exactly the same way when deployed to an x86 server? Differences in libraries? Differences in threading models, memory management, data type implementation (e.g. big-endian/little-endian, word length)?

All in all, there are a lot of unknowns - that would make me hesitant to jump in feet-first.
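Some of the low-level questions above (architecture name, endianness, word length) can actually be answered directly from the standard library. As a minimal Python sketch, a developer could run the following on both the development machine and the deployment target and compare the answers:

```python
import platform
import struct
import sys

# Architecture name differs between the two machines:
# 'arm64' on an Apple Silicon Mac, 'x86_64' on a typical Linux server.
print(platform.machine())

# Both arm64 and x86-64 are little-endian, so byte order is one
# cross-architecture worry that mostly evaporates between those targets.
print(sys.byteorder)

# Pointer width in bits: 64 on 64-bit builds of either architecture.
print(struct.calcsize("P") * 8)
```

Differences in libraries and threading behaviour are harder to probe this mechanically, which is presumably why the Docker multi-arch discussion keeps coming up.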
 
I have one that does: Arm's market share and targets across key technology markets in the 2018 and 2028 fiscal years. About the only thing that blows goats (i.e., is baaaaad :) ) is data center/cloud. The predictions for 2028 are likely overoptimistic, but at least the 2018 figures are from actual data.

I would love to know why, in 2011, IHS thought that 23% of the PC market would be ARM by 2015, given there wasn't anything out at the time to justify it (basing that on an OS that hadn't even seen the light of day was, IMHO, reckless).

The Statista link doesn't appear to work without an account... can you summarize the details?

Data center / cloud is a huge market though - for Intel, the Data Center Group has 75% of the revenue of the entire Client Computing Group - about 36% of their entire $19.8B Q2 2020 revenue.

I imagine the IHS figure of 23% of the PC market being ARM in 2015 assumes that tablets and phones are "PCs", and you could argue that they are indeed personal computing devices.

The fundamental question in my mind is "can ARM-based PCs successfully compete with x86 as fully-fledged professional productivity machines, or will they be relegated to consumer use with limited software choices provided by the hardware vendors?"

To paraphrase Steve Ballmer, this will depend on "Developers, Developers, Developers"... if no one either writes native software for ARM PCs (made by Apple, Microsoft, etc.) or can develop cross-platform software on them, then they will have a limited audience. Could you live with only Apple or Microsoft software and a handful of popular apps like the Chrome browser?
 
The Statista link doesn't appear to work without an account....can you summarize the details?

Ah, crap. I thought that was only needed for the details, as I didn't have one when I found it. Basically, all the categories except two were in the 90+% range. The data center / cloud one was a pathetic 4%, with a projected ~20%.

Could you live with only Apple or Microsoft software and a handful of popular apps like Chrome browser?
Windows is Microsoft software :p

Seriously, "The driving force behind Microsoft’s push for the Arm-based PC is to turn the PC from a declining market to an expanding market and therefore grow its PC software and services revenue." (Exiting x86: Apple & Microsoft Embrace Arm-based PC) Take a good look at the charts in that article and note the criteria:
  1. The design and production strategies of x86 vendors do not change. They continue to both design and manufacture the x86 processors.
  2. Apple & Microsoft do not aggressively invest in migration tools to Arm-based processors. Wikibon believes this assumption is unlikely.
In other words, this is the best-case situation for the x86 platform. When the best case still results in a decline, it's time to stop rearranging the deck chairs and get off the Titanic. :p

"Previous Wikibon research shows that Arm is about 25% of the cost and 25% of the power requirements of x86 processors, for about the same performance."

Note I am not even highlighting the power consumption in this. With reasonable translators (NOT emulators) the changeover should be reasonable.

"The primary reasons for this transition are lower costs, a reduction in power requirements, and a common platform enabling applications to run on smartphones, tablets, and PCs." As Global market revenue share of leading tablet application processor (AP) vendors between 2014 and 2020 shows, even if we assume the "other" category is also x86/x64, the best revenue share we can pull for x86 is 39%; the remaining 61% is ARM.
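Taking the quoted Wikibon figures at face value (an assumption, not independently verified data), the implied ratios are simple arithmetic:

```python
# Normalized to x86 = 1.0 on every axis; the 0.25 figures are the
# quoted Wikibon estimates (Arm at ~25% of the cost and ~25% of the
# power of x86, for roughly the same performance).
x86_cost, x86_power, x86_perf = 1.00, 1.00, 1.00
arm_cost, arm_power, arm_perf = 0.25, 0.25, 1.00

perf_per_dollar = (arm_perf / arm_cost) / (x86_perf / x86_cost)
perf_per_watt = (arm_perf / arm_power) / (x86_perf / x86_power)

print(perf_per_dollar)  # 4.0, i.e. a 4x performance-per-dollar advantage
print(perf_per_watt)    # 4.0, i.e. a 4x performance-per-watt advantage
```

Whether those estimates survive contact with real workloads is exactly what the rest of this thread is arguing about.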
 
Note I am not even highlighting the power consumption in this. With reasonable translators (NOT emulators) the changeover should be reasonable.
Except in the datacenter no sane operator will accept emulation OR translation: They will want native software with predictable performance and predictable patterns when things start falling apart.
Basically the same shift is needed for consumer devices too: Linux, macOS, and to a limited degree Windows all boot natively on Arm hardware. That part of the problem is solved. Next, the frameworks people use need to be ported or officially sunset by their developers, so people have to move on to more modern ones that are portable or already ported. Apple has enough power over their dev community that this has already happened. Microsoft is trying but is being hampered by the whales who've invested millions or billions into Win32 or the old .NET Framework and don't see much point in moving with the times when everything just keeps working, and working predictably.
Once the tools across platforms are at parity and produce “similar enough” results, there could be a move of developers in at least less risk averse environments towards other platforms, possibly helped by TCO motivators like energy efficiency or scalability per unit of rack space.
 
Except in the datacenter no sane operator will accept emulation OR translation:

Isn't emulation/translation exactly what the IBM System i aka AS/400 was based upon? The systems were originally built with a CISC processor which was replaced by POWER processors. Apparently applications did not need to be recompiled - the system merely had to generate new cached machine code.
 
The guy in the video appears to have only a vague and imprecise understanding of how virtualization works

Yup, that much was clear to me just reading the comments about it.

I've been working with virtualisation for my day job since 2001 (first workstation, then vSphere, HyperV, KVM, etc.).
 
Running Windows on a Mac is a necessity.
Today I visited a doctor in a hospital.
He had a MacBook Pro. Besides health issues, we talked a little about Macs.
He told me that he loves his Mac, but he is forced to use Windows through Parallels, because a lot of medical applications run only on Windows. Fortunately, Parallels allows him to have them all on his MacBook Pro.

This is just an ordinary example of professionals who really need to run Windows on their Macs.

So Apple and Microsoft have to find a way to do it on the new Macs.

All that shows is your doctor likes spending more time screwing around with computers in the course of his job than just using the right tools for the job.

I'm as much of a Mac guy as anyone, but if you have mission-critical Windows applications required for your job, you run a Windows box.
 
Isn't emulation/translation exactly what the IBM System i aka AS/400 was based upon? The systems were originally built with a CISC processor which was replaced by POWER processors. Apparently applications did not need to be recompiled - the system merely had to generate new cached machine code.
Rosetta 2 is a really interesting piece of translation software: "Rosetta 2 can convert an application right at installation time, effectively creating an ARM-optimized version of the app before you’ve opened it. (It can also translate on the fly for apps that can’t be translated ahead of time, such as browser, Java, and Javascript processes, or if it encounters other new code that wasn’t translated at install time.)"

So for the software that can be translated, there isn't "new cached machine code". Of course, such machine conversion is not as fast as human-optimized code, so the program takes a performance hit.
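Incidentally, a process on macOS can ask whether it is currently running translated: Apple exposes the `sysctl.proc_translated` key for exactly this. The small helper below is an illustrative sketch (the function name is mine); it returns None wherever the check does not apply:

```python
import platform
import subprocess

def translated_under_rosetta():
    """Return True if this process runs under Rosetta 2 translation,
    False if native, or None when the check is not applicable
    (non-macOS host, or an Intel Mac where the sysctl key is absent)."""
    if platform.system() != "Darwin":
        return None
    try:
        out = subprocess.run(
            ["sysctl", "-n", "sysctl.proc_translated"],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        return out == "1"
    except (subprocess.CalledProcessError, FileNotFoundError):
        return None

print(translated_under_rosetta())
```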
 
All that shows is your doctor likes spending more time screwing around with computers in the course of his job than just using the right tools for the job.

I'm as much of a Mac guy as anyone, but if you have mission-critical Windows applications required for your job, you run a Windows box.
Perhaps he didn't want to mess around purging all the bloatware/adware that ships with the average PC. Sure, you can get models that don't have that crap (the type a hospital uses), but they cost.

If you were the average user, who only knows that bloatware/adware collecting information is the default on the average computer, would you really want private, confidential information anywhere near that? Yes, we know better, but remember we are talking about the average user.
 
Isn't emulation/translation exactly what the IBM System i aka AS/400 was based upon? The systems were originally built with a CISC processor which was replaced by POWER processors. Apparently applications did not need to be recompiled - the system merely had to generate new cached machine code.
Touché. :)

OK, there are grey areas: If you squint hard enough this is similar to what something like Java or Erlang do too: They present the programmer with a set of APIs that are common across platforms, and do translation to the platform you're running behind the scenes. Optimally, this is what you want to see as a developer: Somebody else puts their neck on the line and says "we guarantee that our solution will work for all supported use cases" with some definition of what "supported" actually means.

In Apple's case, Rosetta 2 is provided as a concession to users with no guarantee of its general applicability or long-term availability, and already out-of-the-box there are stated exceptions to where it does work (can't be used for running VMs, for example). With that in mind I would not want to be commercially dependent on anything that doesn't already have a roadmap for the porting process to Apple Silicon.

But as I alluded to in my earlier post: What's mostly lacking from a corporate developer perspective is framework support for the Arm platforms: If that is in place and guaranteed to not cause strange issues when you move your code from one platform to the other, many or most developers using that framework can potentially use either platform for development and initial testing. Once that has been battle tested, cross-platform development may be viable, depending on how conservative the company culture is.
 
Perhaps he didn't want to mess around purging all the bloatware/adware that ships with the average PC. Sure, you can get models that don't have that crap (the type a hospital uses), but they cost.

If you were the average user, who only knows that bloatware/adware collecting information is the default on the average computer, would you really want private, confidential information anywhere near that? Yes, we know better, but remember we are talking about the average user.
You're speaking of a commercial user who likely didn't want to use their hospital-issued HP or Dell and rolled their own, accepting some inconvenience in exchange for a pleasant native environment. Bloatware would not be the issue in such a case.
The average user doesn't know of anything but bloatware and crapware and Start menus acting as ad platforms for games and crap. The average user would never even consider spending a crapton of money on a Mac that underperforms for the price and can't be serviced by the guy 'round the corner and can't be upgraded when the kids want to play the next generation of games and doesn't have a start menu and where you must press cmd rather than ctrl to copy-and-paste or print a document onto the carcass of a dead tree.
 
Rosetta 2 is a really interesting piece of translation software.

Rosetta 2 is a translation layer that works only for 64-bit Intel macOS software running on 64-bit Apple Silicon macOS machines. For that very narrow task it appears to do a great job. It has no role in any broader cross-architecture compatibility.

Moreover, the way that Rosetta 2 is able to perform so well relies on this narrow scope. One of the techniques Apple have employed is to allow linking Intel binaries against Arm libraries where there are common libraries between both architectures. This greatly reduces the amount of code that actually needs to be emulated. It's not a technique that can be generalized further, and it only works because Apple have tight and complete control over both sides of the equation. (Now we know why Apple were so aggressive about dropping support for 32-bit macOS binaries, for example.)
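A toy illustration of that linking trick, as conceptual Python (the instruction names and "translation" here are entirely made up, not Apple's actual implementation): only the application's own code goes through the emulated slow path, while calls into common system libraries jump straight to native code, keeping the emulated surface small.

```python
# Hypothetical mini "translator": the app's foreign instructions are
# interpreted, but calls into shared system libraries run natively.

# Native (host-architecture) library, common to both "sides".
NATIVE_LIBS = {
    "strlen": lambda s: len(s),
    "toupper": lambda s: s.upper(),
}

def run_translated(program, arg):
    """Interpret a list of fake 'foreign' instructions.

    ('call', name) -> dispatch directly to the native library;
                      no emulation needed for that code at all.
    ('op', fn)     -> app-specific code that must be emulated.
    """
    value = arg
    emulated_ops = 0
    for kind, payload in program:
        if kind == "call":
            value = NATIVE_LIBS[payload](value)  # native fast path
        else:
            value = payload(value)               # emulated slow path
            emulated_ops += 1
    return value, emulated_ops

app = [
    ("op", lambda s: s + "!"),  # only this step is "emulated"
    ("call", "toupper"),        # runs as native library code
    ("call", "strlen"),         # runs as native library code
]
print(run_translated(app, "hello"))  # -> (6, 1)
```

Two thirds of this "program" never touches the emulator, which is the whole point of sharing libraries across architectures.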

For users who need Windows or Linux cross-architecture compatibility, Rosetta 2 is not a relevant or interesting tool. Rosetta 2 will not help you unless the piece of software you want to run is a relatively modern Intel 64-bit macOS application and the machine you want to run it on is an Apple Silicon Mac.

This is why I have been so puzzled that you keep bringing it up in this discussion. I don't understand how it relates to the topic at hand.
 
The Xcode cross-compilation (x86 and ARM) is where I think the discussion should be focused. Rosetta 2 effectively does the same thing as the original Rosetta that shipped during the PPC-Intel transition - with one significant difference. Rosetta translated code only at runtime, whereas Rosetta 2 translates the code upon installation. In theory, that should allow for better app performance, since you're no longer translating code on the fly. With that being said, it is still a work in progress (and probably will continue to be until the first AS-based Macs are released), so there are still bugs to be squashed. The ability to build for x86 and ARM in one file (i.e., a 'fat binary') actually automates a lot of the conversion, as your code is compiled for both platforms by Xcode itself (if you enable that option). There will always be specific function calls that do not have cross-platform equivalents, but the developer documentation that comes with the DTK goes into detail on what the differences are and how to address them.
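As a rough mental model of the fat-binary idea (a Python sketch only - the real format is Apple's Mach-O universal binary, whose headers are far more involved than this): the file carries one code slice per architecture, and the loader picks whichever slice matches the machine it's running on.

```python
# Toy "universal binary": one payload per architecture, tagged by name.
# Real universal binaries use Mach-O fat headers; this only shows the
# select-the-matching-slice idea.
universal = {
    "x86_64": b"\x90\x90\xc3",     # stand-in for Intel machine code
    "arm64": b"\xc0\x03\x5f\xd6",  # stand-in for Arm machine code
}

def pick_slice(binary, machine):
    """Return the slice for this machine, or None (-> Rosetta territory)."""
    # Normalize a couple of common spellings of the architecture name.
    aliases = {"amd64": "x86_64", "aarch64": "arm64"}
    machine = aliases.get(machine.lower(), machine.lower())
    return binary.get(machine)

print(pick_slice(universal, "arm64") is not None)  # -> True
print(pick_slice(universal, "ppc") is None)        # -> True
```

One file, no user-visible choice to make - which is why the "build for both" checkbox can hide so much of the transition from end users.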

Going back to cross-platform (not cross-architecture) compatibility with Linux and Windows, we already know that Microsoft has unified their Office code base across all major OSes (Windows, macOS, iOS, and Android). Furthermore, since Microsoft already has an ARM-based version of Office specifically for machines such as the Galaxy Book S, Lenovo C630, and Surface Pro X, the bulk of that conversion work is already complete outside of OS-specific function calls. Adobe also had AS-ready versions of Photoshop and other Creative Suite apps ready as of WWDC, although I do not know if they have a unified code base like Microsoft. Here's where it gets interesting: this week is the Microsoft Ignite conference (all virtual for 2020), and I've already seen one session discussing ARM development. So not only is Microsoft actively developing for ARM and x86 themselves, but they are also starting to share information with developers on how to do the same. This may be more of an indicator on Microsoft's part regarding an eventual (and gradual) distancing from x86, or it could be part of their efforts to extend market share by going dual-platform with future versions of Windows.

With that being said, what Microsoft is doing in regards to ARM/x86 development is different from Apple's approach, where Xcode handles much of the heavy lifting in terms of compiling for each platform. Apple also has a large developer base on the iOS/iPadOS side of the equation, many of whom would love to extend their apps onto the Mac because it expands their potential audience. We know that not all iOS apps are candidates for conversion to macOS, simply because they require features and hardware (accelerometer, gyroscope, etc.) that are not present in a macOS computer.

Regarding Apple dropping 32-bit support in Catalina, I agree that it was a necessary prerequisite for the ARM transition. I also believe that the reason they did it last year was to give developers time to update their applications and acclimate to the new environment before announcing the Apple Silicon transition. If Apple had thrown ARM on top of 32-bit deprecation, a lot of developers would have thrown a fit because of how much was dumped on them in a relatively short timeframe. There are also some details regarding this transition that are under strict NDA, which is a shame because it would clear up some of the misconceptions and FUD surrounding this project.
 
The Xcode cross-compilation (x86 and ARM) is where I think the discussion should be focused. Rosetta 2 effectively does the same thing as the original Rosetta that shipped during the PPC-Intel transition - with one significant difference. Rosetta translated code only at runtime, whereas Rosetta 2 translates the code upon installation.
Actually Rosetta 2 has JIT as well: "Rosetta 2 can convert an application right at installation time, effectively creating an ARM-optimized version of the app before you’ve opened it. (It can also translate on the fly for apps that can’t be translated ahead of time, such as browser, Java, and Javascript processes, or if it encounters other new code that wasn’t translated at install time.)"

Contrast this with what Microsoft has done with the current 32-bit x86 to ARM stuff:

"An emulator module (xtajit.dll) employs a form of just-in-time (JIT) translation to convert x86 code to ARM (shown above) within a loop, as the x86 process is executing. On each pass, a chunk of x86 code is translated to ARM, and the translation is executed.

All of this, as you might have guessed, can make the experience of running x86 programs a comparatively slow experience. However, a cache of already-translated code (located in C:\Windows\XtaCache) eliminates much of the overhead. A compiler (xtac.exe) and background caching service (XtaCache) handle full binary translation and caching. Hybrid binaries (located in C:\Windows\SyChpe32) containing x86-to-ARM stubs also help to reduce overhead." - Teardown: Windows 10 on ARM - x86 Emulation
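The translate-then-cache loop described in that teardown can be sketched like this (conceptual Python only - a dict stands in for the on-disk XtaCache, and the "translation" is a placeholder string): each chunk of foreign code is translated once, and every later pass reuses the cached result, which is how the caching service recovers most of the JIT overhead.

```python
# Conceptual JIT-translation loop with a cache: the first time a chunk
# of "x86" code is seen it is translated (the slow path); afterwards
# the cached "ARM" translation is reused.
translation_cache = {}
translations_performed = 0

def translate(chunk):
    """Pretend binary translation: expensive, so done at most once."""
    global translations_performed
    if chunk not in translation_cache:
        translations_performed += 1              # slow path
        translation_cache[chunk] = f"arm({chunk})"
    return translation_cache[chunk]              # fast path

def run(program):
    """Translate and 'execute' each chunk in a loop, JIT-style."""
    return [translate(chunk) for chunk in program]

# The loop re-executes the same chunks; only 2 translations happen.
trace = run(["mov", "add", "mov", "add", "mov"])
print(trace[0])                # -> arm(mov)
print(translations_performed)  # -> 2
```

Rosetta 2's install-time translation effectively moves most of that slow path to before the program ever runs, which is the "one significant difference" discussed above.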

As I said, rumor is there's a joint effort by Apple and Microsoft to deal with the x86-code-to-ARM issue: Microsoft has the 32-bit stuff Apple doesn't care about (but the PC community does), and Apple has a robust 64-bit method (which Microsoft says won't be a thing until 2021).
 