They do state that Rosetta 2 supports JIT. I assume it works by intercepting executable pages. Once your x86 JIT has written new code, they can flag the page as dirty and send it to the ARM transpiler. Since most JITs are per-function AOT compilers, it should work well. The only problem might be with tracing-based JITs, which generate code based on data patterns... but even then it'll just be a bit of added latency. If the performance of the chip is good enough, it might run as fast as native or faster.
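Very roughly, and purely as a sketch of the dirty-page idea rather than anything Apple has documented, the translator could write-protect guest code pages it has already translated and catch the fault when the x86 JIT rewrites one of them. All the names and bookkeeping below are made up:

```c
/*
 * Sketch only (not Apple's actual mechanism): write-protect guest code pages
 * that have already been translated, so a later write by the x86 JIT faults.
 * The fault handler marks the page dirty and restores write access; the
 * translator would then re-transpile the page before it is next executed.
 */
#include <signal.h>
#include <stdbool.h>
#include <stdint.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_TRACKED 1024

static void  *tracked_page[MAX_TRACKED]; /* guest pages we have translated */
static bool   page_dirty[MAX_TRACKED];   /* rewritten since last translation */
static size_t tracked_count;

static void *page_base(void *addr) {
    uintptr_t mask = ~(uintptr_t)(getpagesize() - 1);
    return (void *)((uintptr_t)addr & mask);
}

/* Call after translating a page: make further writes to it fault. */
static void protect_translated_page(void *page) {
    tracked_page[tracked_count] = page;
    page_dirty[tracked_count]   = false;
    tracked_count++;
    mprotect(page, getpagesize(), PROT_READ | PROT_EXEC);
}

/* Fault handler: if the guest JIT wrote to a translated page, mark it dirty
 * and hand write access back so the JIT can carry on. */
static void on_fault(int sig, siginfo_t *info, void *ctx) {
    (void)sig; (void)ctx;
    void *page = page_base(info->si_addr);
    for (size_t i = 0; i < tracked_count; i++) {
        if (tracked_page[i] == page) {
            page_dirty[i] = true; /* re-translate before next execution */
            mprotect(page, getpagesize(),
                     PROT_READ | PROT_WRITE | PROT_EXEC);
            return;
        }
    }
    _exit(1); /* a fault we don't own */
}

static void install_fault_handler(void) {
    struct sigaction sa = {0};
    sa.sa_sigaction = on_fault;
    sa.sa_flags     = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGBUS, &sa, NULL); /* macOS tends to report protection faults as SIGBUS */
}
```

Calling mprotect from a signal handler isn't strictly async-signal-safe, but this write-protect-and-fault trick is the classic way binary translators and emulators detect self-modifying or freshly JITted code.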
P.S. By the way, I feel mightily, childishly pleased with myself, since I predicted they would be transpiling x86 to ARM the moment this discussion started ;) Looks like I was 100% spot on!

You don't know what the base addresses of the code are so how can you start compiling?
 

I don't really see the problem. JIT-compiled code has to contain correct pointers anyway. So if you encounter a jump into JIT-compiled code, you can assume that all the addresses in it are correct. You can just translate accordingly. The rest is solvable with PIC and relocation tables.
 

I don't see how you can assume that the code that you jump into is JIT-compiled. If you are creating it on the fly in VM and executing, how will your JIT engine know that it's code and not data?

What do you do if the addresses are a computed offset?

Or if the addresses and the machine code can change?
 
I don't see how you can assume that the code that you jump into is JIT-compiled. If you are creating it on the fly in VM and executing, how will your JIT engine know that it's code and not data?

Regular data pages are not executable; code pages need an additional flag. In practice the JIT compiler allocates a page with executable status, writes stuff into it and then jumps in. The OS can detect if a code page is modified and mark it as dirty; when the application then tries to execute a jump to an address in a dirty page, the OS can invoke the transpiler.
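For concreteness, this is roughly what "allocate a page with executable status, write stuff into it and then jump in" looks like on a Unix-like system; the bytes are x86-64 for mov eax, 42; ret. (On Apple silicon you'd additionally need MAP_JIT and pthread_jit_write_protect_np, which this sketch leaves out.)

```c
// Minimal JIT-style example: map a writable+executable page, emit x86-64
// machine code for "mov eax, 42; ret", then call it as a function.
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    // mov eax, 42 ; ret
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;

    memcpy(page, code, sizeof code);          // "write stuff into it"

    int (*fn)(void) = (int (*)(void))page;    // "then jump in"
    printf("%d\n", fn());                     // prints 42
    return 0;
}
```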

What do you do if the addresses are a computed offset?

You mean like in jump tables (especially ones that use relative addresses)? Yeah, I'd be curious to see how Apple handles those. But I think it's again something that is solvable. For example, you could insert a trap when you encounter a jump to a computed offset. This trap would scan an address list to match the x86 address to the appropriate transpiled ARM address (the results could be cached in some sort of splay tree or similar data structure). The performance hit would obviously be significant, but it should at least work correctly.
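Something like this, with a plain sorted table and binary search standing in for the splay tree, and with translate_block() as a made-up stand-in for the transpiler:

```c
/*
 * Sketch of a trap for computed (indirect) x86 jumps: keep a table mapping
 * guest x86 addresses to translated ARM entry points and look the target up
 * at run time. A sorted array + binary search stands in for the splay tree;
 * translate_block() is a hypothetical hook into the transpiler.
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    uint64_t x86_addr;  /* address the guest code computed */
    void    *arm_code;  /* entry point of the translated block */
} map_entry;

static map_entry table[4096];  /* kept sorted by x86_addr */
static size_t    table_len;

static int cmp(const void *key, const void *elem) {
    uint64_t k = *(const uint64_t *)key;
    uint64_t e = ((const map_entry *)elem)->x86_addr;
    return (k > e) - (k < e);
}

/* Stand-in for the real transpiler: it would translate the block starting at
 * x86_target, insert it into the table and return the ARM entry point. */
static void *translate_block(uint64_t x86_target) {
    (void)x86_target;
    return NULL;
}

/* Called from the trap when an indirect jump's target isn't known yet. */
static void *resolve_indirect_jump(uint64_t x86_target) {
    map_entry *hit = bsearch(&x86_target, table, table_len,
                             sizeof table[0], cmp);
    return hit ? hit->arm_code : translate_block(x86_target);
}

int main(void) {
    static int block_a, block_b;  /* stand-ins for translated ARM code */
    table[0] = (map_entry){ 0x1000, &block_a };
    table[1] = (map_entry){ 0x2000, &block_b };
    table_len = 2;
    printf("hit: %p\n", resolve_indirect_jump(0x2000));  /* &block_b */
    return 0;
}
```

If I remember right, QEMU solves the same problem with a cache of translated blocks keyed on the guest program counter, so the idea isn't exotic; the splay tree would just make the hot lookups cheaper.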

Or if the addresses and the machine code can change?

Under which circumstances would this happen? If this occurs because you have changed the JITted code, the dirty page approach I described above should solve it.
 

I generated machine code in the 1980s and again about fifteen years ago, so it's been a long time. My recollection is that I just malloc'd a memory segment, wrote out machine code and jumped to an address. I don't recall whether or not I had to set an execute bit or otherwise indicate that. I know that there is hardware protection on memory segments, so maybe I had to.

The stuff in the 1980s was on ancient hardware and you could modify code segments that you reserved. I recall porting from VMS to Alpha OSX and then Alpha NT and Intel NT and we had some tens of thousands of lines of BLISS code that did codegen. Back in the 1970s, we had program overlays so code changed a lot during a run. We had to code in the program overlays ourselves - no VM.

I think that there are ways to deal with problems but you don't get native execution. I used FX!32 back in the 1990s and that was the best translation software that I have seen. It did JIT and saved the results and you could schedule translation overnight. This was about 1994 I think. So it's no surprise that they can do this in 2020. I was also invited to work on the JIT for Firefox back in 2008 but was busy with family health issues.

I actually think that most software will be fine. If you need another architecture or operating system, just get another machine and VNC into it. That's our new work model. I'm able to do quite well over gigabit LAN. I suspect that with 10 Gb LAN I wouldn't be able to tell it was on another machine at all.
 
Since Apple's A-series processors will be exclusive to Apple, Boot Camp support depends on Apple sharing info with Microsoft for Windows to work on future Macs.
 
So are you guys saying that it is impossible for Parallels to run on this new ARM system? Just like that, a whole company is out of business except for the previous years. Can Apple help Parallels run Windows if they want to, or is it impossible due to the architecture?
 

I don't think that it's impossible. But it would be very, very challenging.
 
How would it be impossible when it was demoed?


 
It's not that Parallels or VMware can't run on ARM; it's that the VIRTUAL OS depends on x86 hardware, which is no longer there...
 
So is there a possibility that there will be no way of running Parallels on the new OS and Apple silicon?
 

No, they showed Linux running in the built-in virtualisation tools. No news yet on whether Windows is possible; I guess we need the dev kits to ship and then we'll start to learn more.
 
I was getting very close to pulling the trigger and ordering a 10th-gen 13-inch Pro this week. Now I have no clue. Wondering if I should just get a Magic Keyboard for my iPad Pro to tide me over until an ARM MacBook releases?
Hahahaha, my iPad keyboard gets here Thursday.
I do a huge amount on the iPad now, basically everything but Capture One and PS, which I can't do there.
We have a few Mac Pros and a high-end PC for raw capture, so I am going to stretch those out and cancel my laptop purchase. I had an i9 / 32 GB / new GPU / 2 TB build specced and decided it can wait now.
 
On porting and running other operating systems:

There are two kinds of changes that we're discussing.

1) Changing processor architecture
2) Changing operating system

If you're changing the architecture but not the operating system, you can port your software (not difficult but done by developers), recompile, relink and package. You can also use a machine-code translator (JIT and/or Translation) to run software built for another architecture but using the same operating system. This would be like running mac/x86 programs on mac/ARM.

You can run another operating system on the same architecture. This is done via virtualization: the guest code executes directly on the same CPU, but inside a containerized, different operating system. An example of this would be running Windows x86 on mac/x86.

What some are asking is combining these two things, that is, changing the architecture AND changing the operating system. You would need a rebuild/port/translation of the whole operating system to the new architecture, which would then run in a virtual machine on the target. Could this be done? It could, but I think that you'd need the operating system vendor to do the port, and I don't see Microsoft doing that for Apple. Could you translate it on the fly? Theoretically, yes. But I suspect that there will be code in the operating system that doesn't translate.
 
Generally ARM is more power efficient. So you could either build an MBA with the same performance and perhaps double the battery life, or, within the same thermal envelope (think MBP), provide more performance.

In the longer term Apple can integrate more dedicated silicon with the CPU, as they do today with SSD controllers. This could result in very good performance for specialized use cases.

I don't find my new 12.9" iPad so efficient; my battery drains fast and charging takes way too long.
 
I have a feeling that the rumoured premium micro-LED 14" device is the ARM MacBook... I've also noticed the micro-LED rumours seem to get mixed up with the iPad Pro. Maybe we're getting a combo device that is an iPad and a MacBook with Apple Pencil support, running ARM with a micro-LED screen that flips back like the HP 360.
 
Exactly, I run Ubuntu on my Jetson Nano comfortably, and that's only an A57 at 1.4 GHz. Granted, it has an NVIDIA GPU to do the heavy lifting.

Arm will be fine for macOS and will be well worth it for mobile. Apple's silicon is very good and it will be a good transition. Yes, we lose the general platform, but that's nothing new for Macs and we can manage. Open-source development sits on a lot of abstractions these days, which will actually help make Arm development stronger.

Linux runs on a lot of different processor architectures.
 
I’m sure Apple will want to get all Macs onto the new silicon as quickly as possible. Apple said the Intel transition would take up to 2 years, and they did it in half the time. Granted, the 2nd generation 64-bit Core 2 Duo MacBooks lasted longer than the 1st Generation 32-bit Core Duo MacBooks, but this time around Apple is in control of the roadmap.
Issue is TSMC.

Even with Huawei gone, Apple needs an absurd amount of capacity from TSMC just by themselves. Apple is going to try to put everything on 5nm. 2020 capacity will be completely eaten by the iPhone and whichever macOS machines Apple makes flagship. Going into 2021, Apple will still need iPhone SoCs, but they will also want new SoCs for the iPad, the Watch, and a whole host of notebooks and desktops.

At the same time, AMD is looking longingly at the entire PC market. All of it could be theirs, but Apple is literally paying TSMC to keep AMD off their supply lines until 2022.

Your point about the Core Duo / Core 2 is right on. That was a weird time for Intel: they made the Pentium M dual-core and then made it obsolete with a new architecture basically the same year. And Core 2 was a champion. But Apple is in the driver's seat, and from all indications they are making sure that they are transitioning to the "Core 2" of the A series. 5nm will likely be the biggest manufacturing boost we see for years, and I'm confident Apple will debut with support for LPDDR5 - a year ahead of their x86 rivals.
 
The fab that TSMC is planning to build in Arizona will have capacity for about 20 million chips per year, which should be enough to satisfy Mac demand, though it won't be up and running until 2024.
 