I think this requires an architecture change. I don't think Apple wants to do another transition, because they are still busy with the Intel-to-ARM one.
Hopefully ARMv9, finally.
ARMv9 is ARMv8 with a few extra instructions. It's really much less of a deal than people make of it.
The real question is whether Apple believes in SVE or not. It's unclear that SVE is a good idea.
The concept of "length independent vector processing" is a good idea, but SVE's specific design is not the only way to do this; and there's plenty of evidence that Apple has a very different sort of idea in mind.
If they are going to ship it in the next few years, they probably want good answers as to how it interacts with AMX (and with the way AMX is growing to become something like AVX-512 for Apple).
If they are willing to make the AMX instruction set visible (in a way it is not right now, and probably after redesigning it based on what they have learned in the past few years, so that generic code compiles to it rather than just function calls through Accelerate), perhaps that's the optimal solution.
What I meant was the move from 32-bit with the A6 to 64-bit with the A7. I consider that an architecture change, because Apple eventually deprecated the 32-bit support. I believe ARMv8 to ARMv9 will be a similar thing, and Apple is already in the middle of x86 to ARM.
Apple's licensing terms for ARM cover the ISA only. They are not beholden to the existing ARM architecture like Qualcomm or other ARM customers. Apple can (and has) changed the SoC architecture of its A-series (and now M-series) SoCs for years at this point. The other difference with Apple's licensing terms is that they can add their own instructions on top of the ISA, some of which have been backported into the ARM instruction set itself.
There seems to be a total misunderstanding of v9 floating around out there. Last I looked into it, it's basically v8 with some additional security features that Apple had already implemented long ago (and that got folded into the v9 spec). Little to nothing that would relate to performance gains. Am I way off base here?
Yes, moving from aarch32 to aarch64 was an architectural change. ARMv9 does not include any changes of this nature at all. I am confused about what you are basing your beliefs on. Did you read the ARM architecture documentation?
Oh, I was just assuming.
That was 1.5TB of third-party RAM. 1.5TB of Apple RAM would prob be like 50 grand lol
Corrected, thanks.
The max RAM is sadly short of the 2019 Mac Pro's 1.5TB. At the rate Apple's doing this it may take a decade or two to reach that amount. All for the sake of economies of scale.
Out of curiosity, I did the math. Given that Apple's upgrade price to go from 8GB to 16GB is $200 ($25 per additional GB), $25 * 1536GB would be around $39,000 (assuming they use the same prices).
I heard "10% performance improvement" - That doesn't seem like a huge bump over A16...
They said 10% single-core, I think.
More interesting is the claim of wider decode and execution; we expected the improved branch prediction, I've already written up the various elements of that.
Big focus on GPU upgrades. But yeah, only 10% single-core improvements on a new process node isn't great...
Yes. Incredibly disappointing. Shocking, even. I mean, there's a small chance they've reduced clocks and the IPC gain is better but I'm not counting on it.
Wait, did I miss something? I don't remember that. I'll go back and rewatch...
Maybe on the A chip they took all the performance boost in the form of IPC, and kept frequency flat or even reduced it to save energy? Certainly it seems like for phones people want an extra 10% of battery life more than a 10% faster P-core?
You're reaching, just like I did. I hope so, but I'm not counting on it.
nVidia (at "comparable" hardware, very handwaving) gets about 6x the ray tracing performance of an M1. So a 4x boost is not bad, but also perhaps not the maximum possible.
The news on the NPUs is good - potentially VERY significant for some people. For GPUs, who can say? The quoted performance for ray tracing is weird (4x software?!? what does that even mean?), and if all the general performance boost is from the extra core, then I guess that means they haven't been working on improving the rest of the GPU all that much. Which, maybe, is fair, RT is big. We'll have to see. AV1 decode is good, lack of encode sucks but isn't surprising. Could possibly still appear in the Mx, though I'm not counting on it.