
dapa0s

macrumors 6502a
Original poster
Jan 2, 2019
523
1,032
The power is there, but AMD and even possibly Intel will catch up in time. I fear the future market fragmentation, with developers having to develop specifically for Apple Silicon ARM and just not having the time to do so.

Not to mention that games for Apple Silicon are just not a thing, and gaming is a huge part of the PC market, and realistically will probably never be a thing, since Apple and gaming just don't work together.

Even the new 10nm Intel CPUs will be much better than before, and AMD is already doing great in raw power.

The idea of Apple controlling both software and hardware is great, something they've been trying to do for decades, but the big question is how the support from the developers will be.

I look forward to the power, but I'm just not so sure about the future.

I am a complete noob and have no idea what I'm talking about in this area, but I'm just wondering what other people here think.
 

jazz1

Contributor
Aug 19, 2002
4,677
19,813
Mid-West USA
The power is there, but AMD and even possibly Intel will catch up in time. I fear the future market fragmentation, with developers having to develop specifically for Apple Silicon ARM and just not having the time to do so.

Not to mention that games for Apple Silicon are just not a thing, and gaming is a huge part of the PC market, and realistically will probably never be a thing, since Apple and gaming just don't work together.

Even the new 10nm Intel CPUs will be much better than before, and AMD is already doing great in raw power.

The idea of Apple controlling both software and hardware is great, something they've been trying to do for decades, but the big question is how the support from the developers will be.

I look forward to the power, but I'm just not so sure about the future.

I am a complete noob and have no idea what I'm talking about in this area, but I'm just wondering what other people here think.
OP, are you thinking that the others will come on so strong that their silicon will kick Apple Silicon to the curb? Even worse, that Apple will abandon Apple Silicon and return to Intel?

I hope the Apple brand/iOS/OS is stronger than that as long as they keep Apple Silicon competitive. Of course this is coming from the guy who has been purchasing Apple products since the //+. Owned a Newton, owned an XL Mac, and many other Apple products ;)
 
  • Like
Reactions: SamRyouji

dapa0s

macrumors 6502a
Original poster
Jan 2, 2019
523
1,032
Erm, no. I've been buying Apple products for decades as well lol.

The others (amd, intel, etc.) WILL come on strong, the added competition from Apple will just increase innovation, and I don't think that Apple will go back to intel.

I think that the biggest problem will be getting developers on board for Apple. Apple will have to take a huge chunk of the market for developers to bother developing specifically for Apple Silicon, and that's something we just can't predict at this moment.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
ARM has structural advantages over x86 and Apple just showed the world what those are. In response, Intel and AMD will have to eventually go to ARM or RISC. And they will have to go through the same, painful transition that Apple is going through right now.

Intel's i9-12900 is faster than Apple's M1 in single-core and multi-core Geekbench 5. The i9-12900 should start shipping later this year or early next year. The i9-12900 needs 250 watts to beat the M1 running at 20 watts though. Intel wins!

AMD recently won a big contract with Cloudflare for edge servers. Intel's performance was fine but their CPUs used hundreds of watts more than AMD's. I don't think that Apple will care to compete in many of the places where Intel and AMD compete like servers. But there will be other companies like nVidia making ARM server chips which should have the big performance per watt advantages over x86, just like Apple.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Erm, no. I've been buying Apple products for decades as well lol.

The others (amd, intel, etc.) WILL come on strong, the added competition from Apple will just increase innovation, and I don't think that Apple will go back to intel.

I think that the biggest problem will be getting developers on board for Apple. Apple will have to take a huge chunk of the market for developers to bother developing specifically for Apple Silicon, and that's something we just can't predict at this moment.

Microsoft knows that they have to move to ARM and that's going to kickoff a big move to porting.
 

dapa0s

macrumors 6502a
Original poster
Jan 2, 2019
523
1,032
Interesting. So you think that the whole market will shift towards ARM, because it is inherently superior? Which, in turn, would force developers to focus on developing for ARM, and it won't matter if they develop for Apple ARM or AMD Arm, since it will be the same thing, like in the case of AMD and Intel today?
 

Michael Scrip

macrumors 604
Mar 4, 2011
7,975
12,673
NC
The power is there, but AMD and even possibly Intel will catch up in time. I fear the future market fragmentation, with developers having to develop specifically for Apple Silicon ARM and just not having the time to do so.

Not to mention that games for Apple Silicon are just not a thing, and gaming is a huge part of the PC market, and realistically will probably never be a thing, since Apple and gaming just don't work together.

Even the new 10nm Intel CPUs will be much better than before, and AMD is already doing great in raw power.

The idea of Apple controlling both software and hardware is great, something they've been trying to do for decades, but the big question is how the support from the developers will be.

I look forward to the power, but I'm just not so sure about the future.

I am a complete noob and have no idea what I'm talking about in this area, but I'm just wondering what other people here think.

Couple of things...

Intel and AMD might be "catching up" but Apple will also continue advancing Apple Silicon.

It's not like Apple built the M1 and said "we're done" :)

As for developers... they already have to decide whether to support Macs or not.

If they do choose to support Macs... they will support whatever chips are inside those Macs. It was Intel for the last 15 years... and now nearly every new Macintosh will have Apple Silicon going forward (everything except the Mac Pro, for now)

Every decent Mac developer should support Apple Silicon. It would be foolish not to.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
Hello :) - As my signature says, I'm a computer science student, currently doing my master's. Hope I can help alleviate some concerns.
The power is there, but AMD and even possibly Intel will catch up in time. I fear the future market fragmentation, with developers having to develop specifically for Apple Silicon ARM and just not having the time to do so.
Sure. Intel and AMD will improve, but Apple will improve too. The M1 isn't the end for Apple, just like the 11th gen isn't the final generation of chips from Intel. That's not to say that AMD and/or Intel won't ever make a chip rivalling Apple's contemporary offering - anything can happen, and AMD was far behind before they revealed the Zen architecture - but competition in the market will just force everyone to make better products.
As for developer effort, for 95% of situations the extra effort to develop an Apple Silicon version of your software over an x86 version is literally 0. You check a little checkbox and the compiler will make an AARCH64 version. That's it. You may need to add a line or two to a build script if you're not using Xcode.
Most code is not written to be architecture dependent. The greater effort is, and has always been, porting from Windows APIs to macOS APIs, not the CPU architecture. The switch from PowerPC to Intel was actually a bit harder in this respect (and makes new ports easier) because PowerPC is big endian, which means it reads/writes memory starting from the "big end", whereas both x86 and ARM are little endian. There may be some testing involved, which may make some developers initially hesitant, because they may already have bugs in their code that just don't reveal themselves on the current platform, but well-written higher-level code will just work regardless of what CPU is in the machine once it's recompiled.
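To make the endianness point concrete, here's a tiny illustration (my own toy example, not from any real codebase): code that pokes at the raw bytes of a wider value - parsing file formats, network packets and so on - is the kind of code that noticed the PowerPC-to-Intel switch, whereas x86 and ARM agree on byte order, so it keeps working after a recompile:
```c
/* Same 32-bit value viewed byte-by-byte. On little-endian CPUs (x86, and ARM
 * as Apple runs it) the low byte comes first in memory; on big-endian PowerPC
 * the high byte came first. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint32_t value = 0x11223344;
    uint8_t bytes[4];
    memcpy(bytes, &value, sizeof value);

    /* Little endian prints 44 33 22 11; big endian would print 11 22 33 44. */
    printf("%02x %02x %02x %02x\n", bytes[0], bytes[1], bytes[2], bytes[3]);
    return 0;
}
```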
That said, some developers are sitting on code that is machine dependent, like virtualisation software relying on VT-x/VT-d, or other operating systems for that matter. Or code that has been written for optimised performance using AVX instructions and has no fallback for non-AVX devices (which means the software also won't run on older x86 chips); they'll have to rewrite that code with a pure standard-instruction approach or use NEON intrinsics instead. But this is a tiny amount of software. All software on the Mac has already been developed specifically for the Mac, or using something like Electron - nothing will really change for either approach.
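If you already have a hand-vectorised hot loop, the port is more work but still fairly mechanical. A minimal sketch of what that looks like (add_arrays() is a made-up helper just for illustration; the AVX path stands in for the existing code, and the NEON path plus the scalar tail are what such a developer would add):
```c
#include <stddef.h>

#if defined(__AVX__)
#include <immintrin.h>
#elif defined(__ARM_NEON)
#include <arm_neon.h>
#endif

void add_arrays(const float *a, const float *b, float *out, size_t n) {
    size_t i = 0;
#if defined(__AVX__)
    for (; i + 8 <= n; i += 8) {               /* 8 floats per 256-bit register */
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
    }
#elif defined(__ARM_NEON)
    for (; i + 4 <= n; i += 4) {               /* 4 floats per 128-bit register */
        float32x4_t va = vld1q_f32(a + i);
        float32x4_t vb = vld1q_f32(b + i);
        vst1q_f32(out + i, vaddq_f32(va, vb));
    }
#endif
    for (; i < n; i++)                          /* scalar fallback for the tail */
        out[i] = a[i] + b[i];
}
```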
Not to mention that games for Apple Silicon are just not a thing, and gaming is a huge part of the PC market, and realistically will probably never be a thing, since Apple and gaming just don't work together.
This isn't really different from Intel Macs, other than two factors.
1) Bootcamp isn't an option
2) Even a lowly MacBook Air is now actually capable of playing games where it wasn't on Intel.
I am very optimistic about the future of games on the Mac - not all games will be available. Most probably won't be. But the ones that are available will run a hell of a lot better than they ever have in the past, and we're already seeing that. - And Bootcamp eventually coming to Apple Silicon Macs is not entirely impossible, and until then a lot can be played with CrossOver and virtualisation anyway. Remarkably much of it runs on the M1, considering the overhead.

For more of my thoughts on this I recommend the MacGameCast podcast where me and some other folks have discussed the future of Mac gaming at length. Both our first episode and our episode with Andrew Tsai go into Apple Silicon specifically.
It's available on Apple Podcast, Spotify, YouTube - basically everywhere. Here's an Apple Podcast link
Even the new 10nm Intel CPUs will be much better than before, and AMD is already doing great in raw power.
In raw power, yes, but the M1 is doing it at 1/8th the wattage. Apple has a lot of headroom to grow.
Intel's Alder Lake is promising and I have high hopes for that. AMD's new cache system likewise is promising and I have high hopes for it. Ideally all three players will do well and push each other to greater and greater products.

But I think Apple Silicon is good and will continue to be incredibly impressive. I have no worries as far as the future goes in all but one area. I can envision what we'll see for all Macs except one. The Mac Pro. I'm leaning on thinking it'll use a form of chiplet solution to leverage the economy of scale of M1X chips - or M2X at that time perhaps - Into M2Z, M2WX, etc. (my naming ideas).
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Interesting. So you think that the whole market will shift towards ARM, because it is inherently superior? Which, in turn, would force developers to focus on developing for ARM, and it won't matter if they develop for Apple ARM or AMD Arm, since it will be the same thing, like in the case of AMD and Intel today?

They will wind up in the same mode as Apple - you have your developer tools that can target either platform. So your tools create kits for both platforms until the old one eventually gets desupported. Why do you think Microsoft tried WARM a long time ago?

A huge problem for x86 is variable-length instructions. ARM instructions are all the same length, so you can look at the next instruction, the one after that, and the one after that at the same time, because you know where they start. With variable-length instructions, you don't know where the next instruction, the one after that, and the ten after that start. I think that Intel and AMD decode at every possible start address and then toss out the wrong guesses, and this is very expensive. And this is one of the reasons why Apple Silicon has ridiculously good performance per watt. There is no way to fix this on x86, so both Intel and AMD are going to higher-wattage chips to achieve more performance.
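Here's a toy sketch of the difference (purely illustrative, nothing like real decoder hardware, and instr_len() is made up for the example): with fixed 4-byte instructions every decode lane can compute its start address on its own, while with variable-length instructions each lane has to wait for the previous instruction to be measured.
```c
#include <stddef.h>
#include <stdint.h>

/* Toy model, not real decoding: pretend the first byte of an instruction
 * encodes its length, somewhere in the 1..15 byte range real x86 allows. */
static size_t instr_len(const uint8_t *pc) {
    return 1 + (pc[0] % 15);
}

/* Fixed-length ISA (ARM-style, 4-byte instructions): every decode lane knows
 * its start address immediately, so all eight lanes could work in parallel. */
void find_starts_fixed(const uint8_t *pc, const uint8_t *starts[8]) {
    for (int lane = 0; lane < 8; lane++)
        starts[lane] = pc + 4 * lane;   /* independent of every other lane */
}

/* Variable-length ISA (x86-style): lane k cannot know where it starts until
 * lanes 0..k-1 have been measured, so this walk is inherently serial. Real
 * decoders guess at many byte offsets at once and throw away the wrong
 * guesses, which is where the extra power goes. */
void find_starts_variable(const uint8_t *pc, const uint8_t *starts[8]) {
    for (int lane = 0; lane < 8; lane++) {
        starts[lane] = pc;
        pc += instr_len(pc);            /* must finish before the next lane */
    }
}
```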

The M1 runs at 3.2 GHz so that Apple can build thin and light devices. Intel and AMD are competing at 5 GHz. I do not know whether Apple could run at 5 GHz, but if they could, they'd really set back Intel and AMD in the race for the best single-core performance. Again, I think that Apple values efficiency over absolute performance.
 
  • Like
Reactions: z3an and dapa0s

LeeW

macrumors 601
Feb 5, 2017
4,342
9,446
Over here
I have no doubt Intel and AMD will catch and possibly even pass Apple silicon in the future, performance-wise. Will it really bother Apple? Not likely. I expect their future chips will be more than sufficient for their devices.

That is where the real point is, Intel and AMD are building for the wider market, Apple is building for Apple and really doesn't have to compete with Intel or AMD in that regard, they are not chasing nor need the work that Intel/AMD do.

The M1 has already demonstrated quite effectively just how much better Apple Silicon is for Apple devices, and that is only going to improve. The M1X should bring a heck of a lot more to the table in terms of performance and overall capability.

Whilst gaming is not really a 'thing' for the Mac, what we have seen over the years, through general improvements in performance, is the likes of Steam creating a marketplace for gaming on the Mac, and more developers will follow. Not that you will ever see gaming opportunities on the Mac equivalent to those on the PC, but probably enough to satisfy most. Everyone else knows they need to have a PC as well. Or use something like Shadow PC.
 
  • Like
Reactions: dapa0s

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
Interesting. So you think that the whole market will shift towards ARM, because it is inherently superior? Which, in turn, would force developers to focus on developing for ARM, and it won't matter if they develop for Apple ARM or AMD Arm, since it will be the same thing, like in the case of AMD and Intel today?

A lot of the PC market is at least investigating ARM chips that much is true.
Calling it inherently superior or "structurally better" is a bit misleading though I think and it doesn't fully give credit to the chip engineers who worked on Apple's cores. It's a bit reductionist to say they're better because they're ARM. x86 and ARM's ISA are just instruction sets in principle. You wouldn't consider a Pentium 4 and a core i9 11900K the same even though they're both x86 chips.
ARM has, however, been more "courageous" about throwing out legacy stuff. For example, a lot of the ARMv9 designs ARM has shown off no longer have any 32-bit hardware support at all. Apple started this trend, removing physical 32-bit support in their chips (I believe with the A7), but ARM is officially following now. x86 still mandates a "real" mode, which is 16-bit operation.
If you ask me, the AMD parts of x86-64 are way more sensible than the Intel parts but if not for the really old legacy parts it's honestly a solid ISA.

As mentioned in my last post, developers these days don't develop for a specific CPU architecture (generally speaking). Code will in 95% of cases be portable among different CPU architectures, assuming the rest of the software stack is there (OS, libraries, frameworks)
 
  • Like
Reactions: dapa0s

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
A lot of the PC market is at least investigating ARM chips that much is true.
Calling it inherently superior or "structurally better" is a bit misleading though I think and it doesn't fully give credit to the chip engineers who worked on Apple's cores. It's a bit reductionist to say they're better because they're ARM. x86 and ARM's ISA are just instruction sets in principle. You wouldn't consider a Pentium 4 and a core i9 11900K the same even though they're both x86 chips.
ARM has, however, been more "courageous" about throwing out legacy stuff. For example, a lot of the ARMv9 designs ARM has shown off no longer have any 32-bit hardware support at all. Apple started this trend, removing physical 32-bit support in their chips (I believe with the A7), but ARM is officially following now. x86 still mandates a "real" mode, which is 16-bit operation.
If you ask me, the AMD parts of x86-64 are way more sensible than the Intel parts but if not for the really old legacy parts it's honestly a solid ISA.

As mentioned in my last post, developers these days don't develop for a specific CPU architecture (generally speaking). Code will in 95% of cases be portable among different CPU architectures, assuming the rest of the software stack is there (OS, libraries, frameworks)

The inherent superiority is in the fixed-length instructions. This allows instructions to be decoded in parallel so that they are ready for sorting and execution in the decoder buffer. This video has been around since 2020 and does a nice job explaining it for the non-engineer.

Edit: I think it would be more accurate to say that the ARM ISA makes it a lot easier to implement parallel decoding efficiently compared to the x86 ISA.

 
Last edited:

bobcomer

macrumors 601
May 18, 2015
4,949
3,699
You can't ignore a three to one performance per watt efficiency advantage.
Sure we can, running the software we need is far more important than that. In fact, electricity for computers is a VERY minimal part of business, so much so that nobody cares about it. It comes under the lights budget.

Big server farms, yes, that makes a difference, desktop PC's and local servers, not so much.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
A lot of the PC market is at least investigating ARM chips that much is true.
Calling it inherently superior or "structurally better" is a bit misleading though I think and it doesn't fully give credit to the chip engineers who worked on Apple's cores. It's a bit reductionist to say they're better because they're ARM. x86 and ARM's ISA are just instruction sets in principle. You wouldn't consider a Pentium 4 and a core i9 11900K the same even though they're both x86 chips.
ARM has, however, been more "courageous" about throwing out legacy stuff. For example, a lot of the ARMv9 designs ARM has shown off no longer have any 32-bit hardware support at all. Apple started this trend, removing physical 32-bit support in their chips (I believe with the A7), but ARM is officially following now. x86 still mandates a "real" mode, which is 16-bit operation.
If you ask me, the AMD parts of x86-64 are way more sensible than the Intel parts but if not for the really old legacy parts it's honestly a solid ISA.

As mentioned in my last post, developers these days don't develop for a specific CPU architecture (generally speaking). Code will in 95% of cases be portable among different CPU architectures, assuming the rest of the software stack is there (OS, libraries, frameworks)

As a former x86-64 CPU designer (and, in fact, one of the first 15 such designers), I’ve often said intel and AMD should just drop legacy support. We designed AMD64 to be much easier to decode than legacy x86, and dropping support for 32 and 16 bit stuff would likely simplify the decoders a ton, allow much wider issue, etc. Still wouldn’t be as clean as a RISC architecture, of course.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
The inherent superiority is in the fixed-length instructions. This allows instructions to be decoded in parallel so that they are ready for sorting and execution in the decoder buffer. This video has been around since 2020 and does a nice job explaining it for the non-engineer.

Fixed length is an advantage, yes, but parallel fetch and decode is still possible on x86; it just requires a bit of extra work and of course comes at a perf/watt cost. But fixed-function hardware exists for the purpose of increasing efficiency relative to a software implementation.
But x86's more complex instructions thus also give each specific chip more control over how it optimally executes any given instruction. Now that difference is somewhat minimised by reorder buffers, since ARM chips can look several instructions ahead as well.

I like the fixed length of instructions of ARM. I think register naming is more sane on ARM (though AMD at least made the x64 registers' naming sane - And to Intel's credit their initial 8086 logic for naming the registers what they did made sense at the time). I think if you're hand-writing assembly it's pretty nice to be able to reference memory directly without needing to do a ld first, but most of us aren't really doing that anymore.

My point really was just that we need to attribute most of the credit to the chip engineers and not some inherent superiority of the ISA. While the variable instruction length is a hurdle the designers at AMD and Intel need to work around that Apple's engineers didn't have to fight with, a lot of things go into chip design, and I really want that to be acknowledged so we don't get a narrative of "M1 is better because it's ARM". There are plenty of ARM chips that couldn't ever compete with Intel's and AMD's newer offerings. Similarly, there are new x86 chips that can't hold a candle to some of the ARM chips out there, like the M1 or, in larger-scale infrastructure, Amazon's Graviton processors.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Sure we can, running the software we need is far more important than that. In fact, electricity for computers is a VERY minimal part of business, so much so that nobody cares about it. It comes under the lights budget.

Big server farms, yes, that makes a difference, desktop PC's and local servers, not so much.

How about laptops?

My workplace got rid of desktops and provided employees with laptops. 2015 MacBook Pros in fact. They ran hot and noisy and there were lots of complaints, particularly running Zoom. What I see in the corporate world is more and more work moving to the cloud and employees running on laptops for mobility and the ability to work anywhere.

Why did Microsoft start putting so much effort into WARM in the past couple of years?

Why does Google want to make their own chips for their computers?

Why does nVidia want to buy Arm Holdings?

Why is Intel doing a deal for RISC-V?

Why did AMD announce that they're working on ARM chips?
 
  • Like
Reactions: osx86 and JMacHack

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
Fixed length is an advantage, yes, but parallel fetch and decode is still possible on x86; it just requires a bit of extra work and of course comes at a perf/watt cost. But fixed-function hardware exists for the purpose of increasing efficiency relative to a software implementation.
But x86's more complex instructions thus also give each specific chip more control over how it optimally executes any given instruction. Now that difference is somewhat minimised by reorder buffers, since ARM chips can look several instructions ahead as well.

I like the fixed length of instructions of ARM. I think register naming is more sane on ARM (though AMD at least made the x64 registers' naming sane - And to Intel's credit their initial 8086 logic for naming the registers what they did made sense at the time). I think if you're hand-writing assembly it's pretty nice to be able to reference memory directly without needing to do a ld first, but most of us aren't really doing that anymore.

My point really was just that we need to attribute most of the credit to the chip engineers and not some inherent superiority of the ISA. While the variable instruction length is a hurdle the designers at AMD and Intel need to work around that Apple's engineers didn't have to fight with, a lot of things go into chip design, and I really want that to be acknowledged so we don't get a narrative of "M1 is better because it's ARM". There are plenty of ARM chips that couldn't ever compete with Intel's and AMD's newer offerings. Similarly, there are new x86 chips that can't hold a candle to some of the ARM chips out there, like the M1 or, in larger-scale infrastructure, Amazon's Graviton processors.

Watch the whole video. The M1 has twice the decoders and three times the decoder buffer size. AMD was asked why they only had four decoders and the response was that there weren't benefits to doing more. The video also explains how the four decoders work with variable-length instructions. It also talks about the custom silicon for specialized functions in the M1. But this isn't an inherent advantage of the ISA. Apple can do anything that they want to and run with low-level interfaces.

I don't think that AMD and Intel can overcome the variable instruction length problem which is why they are eventually going to have to go the same route.

Everyone has seen the M1 block diagram with custom silicon and Intel is copying some of that in newer chips so kudos there. But that isn't unique to M1.
 
  • Like
Reactions: Velli

grandM

macrumors 68000
Oct 14, 2013
1,520
302
Hello :) - As my signature says, I'm a computer science student, currently doing my master's. Hope I can help alleviate some concerns.

Sure. Intel and AMD will improve, but Apple will improve too. The M1 isn't the end for Apple, just like the 11th gen isn't the final generation of chips from Intel. That's not to say that AMD and/or Intel won't ever make a chip rivalling Apple's contemporary offering - anything can happen, and AMD was far behind before they revealed the Zen architecture - but competition in the market will just force everyone to make better products.
As for developer effort, for 95% of situations the extra effort to develop an Apple Silicon version of your software over an x86 version is literally 0. You check a little checkbox and the compiler will make an AARCH64 version. That's it. You may need to add a line or two to a build script if you're not using Xcode.
Most code is not written to be architecture dependent. The greater effort is, and has always been, porting from Windows APIs to macOS APIs, not the CPU architecture. The switch from PowerPC to Intel was actually a bit harder in this respect (and makes new ports easier) because PowerPC is big endian, which means it reads/writes memory starting from the "big end", whereas both x86 and ARM are little endian. There may be some testing involved, which may make some developers initially hesitant, because they may already have bugs in their code that just don't reveal themselves on the current platform, but well-written higher-level code will just work regardless of what CPU is in the machine once it's recompiled.
That said, some developers are sitting on code that is machine dependent, like virtualisation software relying on VT-x/VT-d, or other operating systems for that matter. Or code that has been written for optimised performance using AVX instructions and has no fallback for non-AVX devices (which means the software also won't run on older x86 chips); they'll have to rewrite that code with a pure standard-instruction approach or use NEON intrinsics instead. But this is a tiny amount of software. All software on the Mac has already been developed specifically for the Mac, or using something like Electron - nothing will really change for either approach.

This isn't really different from Intel Macs, other than two factors.
1) Bootcamp isn't an option
2) Even a lowly MacBook Air is now actually capable of playing games where it wasn't on Intel.
I am very optimistic about the future of games on the Mac - not all games will be available. Most probably won't be. But the ones that are available will run a hell of a lot better than they ever have in the past, and we're already seeing that. - And Bootcamp eventually coming to Apple Silicon Macs is not entirely impossible, and until then a lot can be played with CrossOver and virtualisation anyway. Remarkably much of it runs on the M1, considering the overhead.

For more of my thoughts on this I recommend the MacGameCast podcast where me and some other folks have discussed the future of Mac gaming at length. Both our first episode and our episode with Andrew Tsai go into Apple Silicon specifically.
It's available on Apple Podcast, Spotify, YouTube - basically everywhere. Here's an Apple Podcast link

In raw power, yes, but the M1 is doing it at 1/8th the wattage. Apple has a lot of headroom to grow.
Intel's Alder Lake is promising and I have high hopes for that. AMD's new cache system likewise is promising and I have high hopes for it. Ideally all three players will do well and push each other to greater and greater products.

But I think Apple Silicon is good and will continue to be incredibly impressive. I have no worries as far as the future goes in all but one area. I can envision what we'll see for all Macs except one. The Mac Pro. I'm leaning on thinking it'll use a form of chiplet solution to leverage the economy of scale of M1X chips - or M2X at that time perhaps - Into M2Z, M2WX, etc. (my naming ideas).
Apple can and does deliver superior performance per watt. Actually, Apple does even better with native apps. This Electron **** is companies pushing inferior products onto tremendously good hardware, thus bringing the product down. The proliferation of Electron apps makes Windows look competitive, which it really isn't at this moment.
 

jz0309

Contributor
Sep 25, 2018
11,392
30,074
SoCal
Sure we can, running the software we need is far more important than that. In fact, electricity for computers is a VERY minimal part of business, so much so that nobody cares about it. It comes under the lights budget.

Big server farms, yes, that makes a difference, desktop PC's and local servers, not so much.
No you can't, most businesses have moved to laptops ... and most average office workers need MS Office and a browser, that's it
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
As a former x86-64 CPU designer (and, in fact, one of the first 15 such designers), I’ve often said intel and AMD should just drop legacy support. We designed AMD64 to be much easier to decode than legacy x86, and dropping support for 32 and 16 bit stuff would likely simplify the decoders a ton, allow much wider issue, etc. Still wouldn’t be as clean as a RISC architecture, of course.

Thanks for not naming the extra registers RKX or whatever dumb naming convention we would've wound up with if we'd continued down the track laid out by the original x86 register names, haha. R15, now that I can work with :p
Though what's the idea behind having both caller-save and callee-save registers in basically all the common ABIs? I mean, that may be more a question for the compiler folks, but why not just make all registers caller-save or something?

But yeah, I wrote an OS for my bachelor project and I damn near tore my hair out over some of the legacy stuff, like the A20 line, and having to set up all this segmentation stuff while the chip is in real mode, even though once it's put into long mode it's irrelevant anyway because we just use paging.
 
  • Like
Reactions: jdb8167

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,770
Horsens, Denmark
Apple can and does deliver superior performance per watt. Actually Apple becomes even better on native apps. This electron **** is companies pushing inferior products onto tremendous good hardware, as thus bringing the product down. The dissipation of electron apps makes Windows look competitive which it really isn't at this moment.
I wasn't advocating for Electron apps. But even native code written in C, C++, Swift, Rust, Zig, you name it, will just need to be recompiled and it'll be ready to go on Apple Silicon. Not much if any work needed at all in most cases. I just acknowledged a lot of programs are already written using web tech and that's inherently portable
 

grandM

macrumors 68000
Oct 14, 2013
1,520
302
I wasn't advocating for Electron apps. But even native code written in C, C++, Swift, Rust, Zig, you name it, will just need to be recompiled and it'll be ready to go on Apple Silicon. Not much if any work needed at all in most cases. I just acknowledged a lot of programs are already written using web tech and that's inherently portable
True that web tech is superb for portability, but the best and most hardware-efficient software is coded natively. Plus, these Electron devs dragging in dependency after dependency constitute a huge security hazard.
 

pshufd

macrumors G4
Oct 24, 2013
10,151
14,574
New Hampshire
I wasn't advocating for Electron apps. But even native code written in C, C++, Swift, Rust, Zig, you name it, will just need to be recompiled and it'll be ready to go on Apple Silicon. Not much if any work needed at all in most cases. I just acknowledged a lot of programs are already written using web tech and that's inherently portable

A lot of software will be ported this way but I expect that there's a lot of custom silicon that developers could take advantage of to get much larger performance gains. Intel sells a bunch of software libraries with accelerated performance for common math and other operations. Apple has likely implemented a lot of things in silicon and they've probably provided APIs to use those functions.
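For example (just a sketch; Accelerate is the obvious candidate on the Mac side), Apple's Accelerate framework exposes vDSP routines that run on whatever vector hardware the chip provides, so the same call can get faster as the silicon improves:
```c
/* Build on macOS with: cc example.c -framework Accelerate
 * vDSP_vaddD() adds two double-precision arrays element-wise using the
 * platform's optimised implementation. */
#include <Accelerate/Accelerate.h>
#include <stdio.h>

int main(void) {
    double a[4] = {1.0, 2.0, 3.0, 4.0};
    double b[4] = {10.0, 20.0, 30.0, 40.0};
    double c[4];

    /* c[i] = a[i] + b[i], stride 1 for all arrays, 4 elements */
    vDSP_vaddD(a, 1, b, 1, c, 1, 4);

    for (int i = 0; i < 4; i++)
        printf("%.1f ", c[i]);   /* 11.0 22.0 33.0 44.0 */
    printf("\n");
    return 0;
}
```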
 