Interesting move.

Report: Apple to Move a Part of its Embedded Cores to RISC-V, Stepping Away from Arm ISA | TechPowerUp
www.techpowerup.com

> Interesting move.

Assuming it's real. Which I would not.

> The main processors moving to RISC-V seems less likely short-term, though. I understand it as meaning that it's simply not competitive enough for bleeding-edge performance.

What is the main issue for not being competitive? Compilers?
> I don't see anything interesting here.

Although the SemiAnalysis report explains how more companies (e.g. Google) are starting to use the SiFive X280, TechPowerUp decided to write about Apple because SemiAnalysis wrote:

> SemiAnalysis can confirm that these cores are actively being converted to RISC-V in future generations of hardware.
> If things don't change soon, I see a near future where China uses RISC-V based computers and Western countries use ARM.

If RISC-V at least offered some features that made it fundamentally better for personal computing, that would be an incentive. But it does not. The main value of RISC-V is (a) that it’s open source and (b) that it’s easily extensible and customizable. High-performance personal computing doesn’t care about the first part, and it’s actively harmed by the second one. Imagine the mess in software development if different CPUs had principally different features.
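To make that last sentence concrete, here is a minimal C sketch (my own illustration, not from the thread) of how optional and vendor-specific extensions surface in portable code. __riscv_vector is the macro that toolchains following the RISC-V C API define when the standard V extension is enabled; __riscv_xvendor_dsp is a made-up stand-in for a vendor extension macro.

```c
/* Illustration only: every optional or vendor-specific ISA extension
 * becomes another compile-time (or runtime-dispatch) branch that has to
 * be written, tested, and shipped. */
#include <stddef.h>

void scale(float *dst, const float *src, float k, size_t n) {
#if defined(__riscv_xvendor_dsp)   /* hypothetical vendor extension */
    /* Vendor intrinsics would go here. */
    for (size_t i = 0; i < n; i++) dst[i] = src[i] * k;
#elif defined(__riscv_vector)      /* standard V extension */
    /* RVV intrinsics would go here (omitted for brevity). */
    for (size_t i = 0; i < n; i++) dst[i] = src[i] * k;
#else                              /* base-ISA fallback */
    for (size_t i = 0; i < n; i++) dst[i] = src[i] * k;
#endif
}
```

Every extra extension multiplies those paths, which is exactly the "mess" being described.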
> Assuming it's real. Which I would not.

Well, Apple used ARM SoCs in Intel Macs for a while, so it wouldn't surprise me if they started using RISC-V now they are off Intel.
> What is the main issue for not being competitive? Compilers?

That might be part of it.
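For what it's worth, the toolchain part is easy to poke at yourself: the same portable C builds for both ISAs, and comparing the generated assembly is one way to judge backend maturity. A minimal sketch; the cross-compiler names and flags in the comments are the common GNU defaults and are assumptions about your setup, not anything from the report.

```c
/* The same portable C source can target either ISA; what differs is how
 * mature each compiler backend is (instruction selection, scheduling,
 * autovectorization). Typical cross-compile invocations (may vary):
 *   riscv64-linux-gnu-gcc -O2 -march=rv64gc  -S saxpy.c
 *   aarch64-linux-gnu-gcc -O2 -march=armv8-a -S saxpy.c
 * Diffing the two .s files gives a feel for the "compilers" part of the
 * competitiveness question. */
void saxpy(float *y, const float *x, float a, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```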
> Well, Apple used ARM SoCs in Intel Macs for a while, so it wouldn't surprise me if they started using RISC-V now they are off Intel.

They're now just 2 years into using Arm as the primary ISA in Macs. They have a very large amount invested in Arm. AArch64 is fundamentally a better ISA than RISC-V. Apple engineers appear to have helped write the AArch64 specifications. Apple has been on the cutting edge of developing Arm features like PAC (pointer authentication) for some time.

So literally the only thing attractive about RISC-V to Apple is the absence of licensing fees, fees which are being paid to a fairly close partner. Balanced against "recovering" those fees are the costs of switching everything to a new ISA not long after they made the transition to Arm - for all that Apple's made it look smooth and easy, they spent a lot behind the scenes accomplishing that, and they'd have to do it again for RISC-V.

So, IMO, there's no rational reason to believe Apple is adopting RISC-V as a user-visible ISA in the near or even medium future. The only place there's scope for it is embedded microcontrollers which run only Apple-provided firmware (meaning nobody outside Apple has to deal with the transition). While that's possible, I personally believe it's unlikely to happen much. Arm doesn't charge much per core, especially for microcontrollers, and keeping the ISA uniform across all the µCs in a SoC is valuable even when you don't expect code sharing between µCs and the application processors. (Same memory consistency model, same data formats, etc.)
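That closing point (same memory consistency model) is easy to underappreciate. Here is a minimal C11 sketch, my own and not from the post, of why it matters: the source is portable, but the instructions the compiler must emit for the release/acquire pair differ between AArch64, which has load-acquire/store-release instructions, and RISC-V's RVWMO, which uses fences or aq/rl-annotated atomics, so a firmware team mixing ISAs debugs two different sets of ordering details at the instruction level.

```c
/* Portable C11 release/acquire handshake. Semantics are identical on every
 * ISA; the emitted instructions are not (e.g. LDAR/STLR on AArch64 versus
 * fences or aq/rl-annotated atomics under RISC-V's RVWMO). Build with:
 *   cc -std=c11 -O2 handoff.c -pthread */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static int payload;            /* plain data being handed off */
static atomic_int ready = 0;   /* publication flag            */

static void *producer(void *arg) {
    (void)arg;
    payload = 42;                                            /* write data */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* publish    */
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;                                  /* spin until published */
    printf("payload = %d\n", payload);     /* guaranteed to see 42 */
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```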
> The transition to Apple Silicon started over a decade ago with the A7.

For the record, the A4 was the first Apple-designed SoC and the A6 featured the first Apple-designed CPU.
> AArch64 is fundamentally a better ISA than RISC-V.

Why do you think AArch64 is better than RISC-V?
> Why do you think RISC-V is not worse?

Honestly, I don't know which one is better, because I haven't read anything that explains which one is better.
> There is a good list with potential criticisms of RISC-V written by an expert: https://gist.github.com/erincandescent/8a10eeeea1918ee4f9d9982f7618ef68 I agree with most of these.

I'll read it more closely later, but at a quick glance that post only shows that RISC-V is not perfect, not that it is worse than ARM, because it doesn't compare RISC-V to ARM.
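One concrete item from that gist, for anyone skimming: the RISC-V base ISA has no scaled-index addressing mode, and that is one place where a direct side-by-side with AArch64 is possible. A small sketch of my own; the assembly in the comments is the textbook lowering, and real compiler output may differ.

```c
/* Indexed array access, a spot where the two ISAs compare directly.
 * For the load of a[i] (8-byte elements), the textbook lowerings are:
 *
 *   AArch64 (scaled-index addressing, one instruction):
 *       ldr  x0, [x1, x2, lsl #3]
 *
 *   RV64 base ISA (no scaled-index mode, three instructions):
 *       slli t0, a2, 3
 *       add  t0, a1, t0
 *       ld   a0, 0(t0)
 *
 * Real output varies (loops get strength-reduced, and the Zba extension
 * adds sh3add on RISC-V), so check with -S before drawing conclusions. */
long index_load(const long *a, long i) {
    return a[i];
}
```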
> As I wrote above, RISC-V excels where you either need ultracompact simple cores, or if you need custom functionality. General purpose high performance computing however is not one of those things.

But isn’t custom functionality exactly what Apple want and what they’ve been developing and putting into their chips?

While many other hardware and SoC manufacturers keep throwing more and more "general purpose" cores and RAM into their designs, Apple seems to invest in and emphasise high-performance specialised engines (neural engine, media engine - following encryption engines, which were a thing even in desktop computers before Apple got serious about their own designs) quite a lot.

Mobile computing designs have become less and less about "high performance general purpose" computing, and more about specialised engines optimised to provide specialised, custom functionality with high power efficiency.

Apple M1 foreshadows RISC-V
The M1 is the beginning of a paradigm shift, which will benefit RISC-V microprocessors, but not the way you think.
erik-engheim.medium.com

> But isn’t custom functionality exactly what Apple want and what they’ve been developing and putting into their chips?

There is nothing in RISC-V which makes it uniquely able to support a cut-down version of the ISA, or support customizations. If Apple thinks it would be valuable to define a subset of AArch64 tailored for embedded microcontrollers, they almost certainly have enough freedom to do it even without Arm Holdings' input, and in reality they should have enough sway with ARMH to just collaborate with them and make it official.
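Since "customizable" is doing a lot of work in this exchange, here is the mechanism being argued about, as a hedged sketch: RISC-V reserves major opcodes (custom-0 through custom-3) for vendor instructions, and the GNU assembler's .insn directive can encode them without modifying the toolchain. The accelerator instruction below is invented for illustration; only the encoding mechanism is real, and executing it on a core that doesn't implement it would trap.

```c
/* Hypothetical vendor-custom RISC-V instruction, encoded via the GNU
 * assembler's .insn directive (R-type form: opcode, funct3, funct7,
 * rd, rs1, rs2). custom-0 is major opcode 0x0b; the funct3/funct7
 * values here are made up. Running this on a core without the
 * instruction raises an illegal-instruction trap.
 * Build (assumed cross toolchain): riscv64-linux-gnu-gcc -O2 -c mac.c */
static inline long accel_mul(long a, long b) {
    long out;
    asm volatile(".insn r 0x0b, 0, 1, %0, %1, %2"
                 : "=r"(out)
                 : "r"(a), "r"(b));
    return out;
}

long accel_mac(long acc, long a, long b) {
    return acc + accel_mul(a, b);   /* accumulate around the custom op */
}
```

Whether that openness is an advantage or a fragmentation hazard is exactly the disagreement in this thread.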
> For the record, the A4 was the first Apple-designed SoC and the A6 featured the first Apple-designed CPU.

As you have already pointed out, the A4 was not a custom Apple design and the A6 was an ARM32 SoC. The A7 was the first ARM64 SoC from Apple and therefore probably the first SoC from Apple that they could have ported macOS to (having abandoned 32-bit Intel and PowerPC with the Snow Leopard release).
> As you have already pointed out, the A4 was not a custom Apple design and the A6 was an ARM32 SoC. The A7 was the first ARM64 SoC from Apple and therefore probably the first SoC from Apple that they could have ported macOS to (having abandoned 32-bit Intel and PowerPC with the Snow Leopard release).

A4 absolutely was a custom Apple design. Before A4, Apple was using Samsung-designed SoCs. Because Samsung designed them, Samsung could (and did) use them in their own Android phones. From A4 onwards, Apple designed their own chips, which appeared exclusively in Apple products, and Samsung's role was downgraded to a semiconductor fab partner. (And eventually downgraded to just a flash/DRAM memory supplier once Apple transitioned to TSMC.)
> A4 absolutely was a custom Apple design. Before A4, Apple was using Samsung-designed SoCs. Because Samsung designed them, Samsung could (and did) use them in their own Android phones. From A4 onwards, Apple designed their own chips, which appeared exclusively in Apple products, and Samsung's role was downgraded to a semiconductor fab partner. (And eventually downgraded to just a flash/DRAM memory supplier once Apple transitioned to TSMC.)

Yes, I am saying the A4 wasn't really custom, because Apple used an off-the-shelf ARM CPU design, but yes, it was the starting point for Apple-designed silicon. Of course they could have ported 32-bit macOS to ARM32 (it ran on 32-bit CPUs for years), but why would they bother? They aren't Microsoft.
I think you're saying A4 wasn't "custom" based on the fact that A4 used lots of blocks designed by outsiders, such as the ARM Cortex-A8 CPU and PowerVR SGX535 GPU. But even in the 2020s, Apple Silicon continues to use blocks designed by outsiders - think USB and PCIe. Yes, over time, they've brought design in-house for the big important differentiators such as CPUs and GPUs, but the A4 is definitely the starting point of Apple designing Apple Silicon.
Also, Apple absolutely could have ported macOS to 32-bit Apple Silicon. iOS was nothing more (or less) than a slimmed-down version of Mac OS X with a rethought UI software stack. Would they have wanted to port the full desktop environment to 32-bit AS? No, for a variety of reasons, but technologically the distinction you're trying to make doesn't hold up to scrutiny.
(p.s. Apple didn't abandon 32-bit Intel until Lion aka 10.7.)