And because most of the miraculous speed advantages of Apple Silicon at this stage are driven by software optimisation rather than hardware advances, the moment it has to go back to a layer that needs physical memory and raw power, it all goes out of the window.
If you need a lot of physical memory, sure, that's not what these chips are designed for. As for raw power, however: in any general-purpose workload the M1 performs exceedingly well. It's not software, it's hardware. Apple simply has better branch predictors, more execution units, lower latencies, and larger out-of-order execution windows.
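Branch predictor quality, by the way, is not something you have to take on faith; it shows up in trivial user code. Here's a minimal Swift sketch of my own (an illustration, not a rigorous benchmark): the same loop runs over the same data twice, and the only thing that changes is whether the data-dependent branch is predictable. The gap between the two timings is pure branch-predictor behaviour, nothing to do with the OS.

```swift
import Foundation
import Dispatch

// Summing only the elements above a threshold forces a data-dependent
// branch; how much that branch costs is decided by the CPU's branch
// predictor, not by software. Compile with `swiftc -O`; note that the
// optimiser may replace the branch with a conditional select, which
// would flatten the difference (itself an illustration of the point).
var values = (0..<5_000_000).map { _ in Int.random(in: 0..<256) }

func conditionalSum(_ data: [Int]) -> Int {
    var total = 0
    for v in data where v >= 128 { total += v }
    return total
}

func measure(_ label: String, _ work: () -> Int) {
    let start = DispatchTime.now().uptimeNanoseconds
    let result = work()
    let ms = Double(DispatchTime.now().uptimeNanoseconds - start) / 1e6
    print("\(label): \(ms) ms (sum = \(result))")
}

measure("unpredictable branch (unsorted)") { conditionalSum(values) }
values.sort() // sorted input makes the branch almost perfectly predictable
measure("predictable branch (sorted)") { conditionalSum(values) }
```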
And obviously, the software has to adapt to the new hardware technologies of the ARM chips if you want to exploit the chip's capabilities efficiently. That's no different from any other ARM or x86 CPU. Need vector processing? Then you have to design your algorithms with the platform's technologies in mind. This is not unique to the M1; you face the same dilemma with technologies like AVX-512 on the x86 platform or with things like mesh shaders on the GPU.
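To make that concrete, here's a small Swift sketch of my own showing what "designing with platform technologies in mind" looks like on the Mac. The scalar loop is correct everywhere but leaves the vector units idle unless the compiler happens to auto-vectorise it; the Accelerate/vDSP call lets the framework dispatch to whatever SIMD hardware the machine actually has (NEON on Apple Silicon, AVX on Intel Macs).

```swift
import Accelerate

let n = 1_000_000
let a = [Float](repeating: 1.5, count: n)
let b = [Float](repeating: 2.5, count: n)

// Portable scalar loop: works on any CPU, but only uses the vector
// units if the compiler auto-vectorises it.
var scalarResult = [Float](repeating: 0, count: n)
for i in 0..<n { scalarResult[i] = a[i] * b[i] }

// Platform-aware version: one vDSP call, vectorised by the framework
// for whichever SIMD ISA the machine actually has.
let vectorResult = vDSP.multiply(a, b)
assert(scalarResult == vectorResult)
```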
That of course doesn't mean the idea is bad, or the architecture is bad; it's just very limited in this generation, because it's still just a 'ported' iPad chip that's been severely crippled. In its current form it can't give you more memory or more IO, half of TB3 is removed, USB4 is not really that, etc., etc.
Well, this crippled chip still performs as well as chips that use much more power, so I am not sure what you are trying to say here. Its capabilities are adequate for the products it is designed for. The RAM limitation is a pragmatic choice to reduce costs and improve manufacturing volume. It has two Thunderbolt controllers, just like any Intel MacBook Pro out there, with the main difference that Apple's controllers are properly isolated and less vulnerable. It's more than enough for a MacBook Air, not so much for a 16" MacBook Pro, which is why Apple does not use M1 chips in the prosumer machines.
I think Apple's been preparing for this transition for at least two or three macOS cycles. I think part of the stunning success of this random iPad chip with bottlenecks everywhere may be the result of a long process of artificially slowing down older Intel machines and violently killing off genuine dGPU acceleration, just so those 'pro' M1 perks in very specific apps look really good.
This is where it gets a bit odd. Intel CPUs perform just as well on Apple machines running Apple's OS as the same CPUs do in machines of other brands. What is this artificial slowing down you are talking about?
I don't think it's a coincidence that Big Sur runs like **** on Intels. With enough time, a persistent bunch of curious anoraks on internet forums might even figure out some really odd things happening under the bonnet, like 'legacy' CPUs running in a state of constant thermal throttling while doing basic things on Big Sur.
I have two Intel machines at home that run Big Sur. One of them is my main work machine. It doesn't thermally throttle and overall performs just as you'd expect an i9 with those specs to perform. Nor have I seen any mention of widespread issues with Intel CPUs and Big Sur. So again, I'm not sure what you are talking about.
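For anyone who wants to check rather than speculate: macOS publishes its own thermal-pressure signal, and a few lines of Swift will show whether a machine really spends its time throttled while doing "basic things". This is just a rough sketch of mine; `pmset -g thermlog` in a terminal reports similar information.

```swift
import Foundation

// Poll the system-wide thermal pressure level every few seconds.
// A machine that genuinely throttles while idling or doing basic
// things would sit at .serious or .critical here, not .nominal.
func describe(_ state: ProcessInfo.ThermalState) -> String {
    switch state {
    case .nominal:  return "nominal"
    case .fair:     return "fair"
    case .serious:  return "serious"
    case .critical: return "critical"
    @unknown default: return "unknown"
    }
}

while true {
    print(Date(), describe(ProcessInfo.processInfo.thermalState))
    Thread.sleep(forTimeInterval: 5)
}
```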
I don't think it's a coincidence that third-party drivers and CUDA have been 'murdered' in shady circumstances.
I definitely agree with you on this one, even if I would word it differently. It is very clear that Apple removed Nvidia from their platform because they want users to use Metal. Currently, Apple's GPU technology outperforms AMD or Nvidia by a factor of 2-3 at the same power usage, but their tech is sufficiently different from mainstream GPUs. The chances of devs successfully adopting Apple's superior GPU technology increase if a parasitic technology like CUDA is not available. Sure, it makes things more annoying short-term, but it's a win for Mac users long-term.
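And for what it's worth, the barrier to entry on Apple's side is not enormous. Below is a rough, self-contained Metal compute sketch of my own (error handling stripped, the `doubler` kernel and buffer names are invented for illustration); the equivalent CUDA kernel would look almost identical, which is exactly why the two ecosystems end up competing for the same developer effort.

```swift
import Metal

// Kernel source compiled at runtime; "doubler" is an illustrative name.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void doubler(device float *data [[buffer(0)]],
                    uint id [[thread_position_in_grid]]) {
    data[id] = data[id] * 2.0;
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "doubler")!)

var input = (0..<1024).map(Float.init)
let buffer = device.makeBuffer(bytes: &input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: [])!

let queue = device.makeCommandQueue()!
let commands = queue.makeCommandBuffer()!
let encoder = commands.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commands.commit()
commands.waitUntilCompleted()

// Read the results straight back out of the shared buffer.
let out = buffer.contents().bindMemory(to: Float.self, capacity: input.count)
print(out[0], out[1023]) // 0.0 and 2046.0
```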
That's why I don't like discussing this stuff, because everyone immediately sums it up as "nice conspiracy". It isn't, really. You can run the tests yourself or browse the artificial benchmarks, Geekbench etc., and very quickly discover that once it's benched on macOS outside of Apple's control, be it Hackintoshes or similar, the M1 doesn't actually do that much better than three-year-old i5s. Which is still OK and fine for what the M1 is.
Here is a random i5-10600K (a six-core Intel desktop CPU with a 125 W TDP) Hackintosh I found on Geekbench:
Benchmark results for an iMacPro1,1 with an Intel Core i5-10600K processor (browser.geekbench.com).
Doesn't look that good compared to the M1... or let's have a look at a three-year-old mobile i5 (Kaby Lake i5-7440HQ):
https://browser.geekbench.com/v5/cpu/search?utf8=✓&q=7440HQ+macOS
But wait, why not take something newer? Like the Tiger Lake? Here is a Tiger Lake (i7-1165G7) Hackintosh:
https://browser.geekbench.com/v5/cpu/search?utf8=✓&q=1165G7+macOS
So yeah, I am still a bit confused what you are trying to say here.
The M1 IS a very good chip, for what it was designed for. It's just... what we are doing, what internet reviewers are doing, is painting a bit of an... unrealistic legend around a very basic ARM chip. Let's stop with the stupid "beats a $10k Pro machine" headlines. It doesn't. Not unless all you do on it is run Geekbench all day. It's a cool chip. But just an iPad chip. And it's not like people browsed the net on a 2018 i5 and thought: "oh my god, this is so slow, I just wish Apple would switch to some exotic architecture and bolloxed it up a bit".
It's faster at building software than my 8-core Intel i9 that uses 4-5 times more power, but sure... besides, I am not sure why you would call the most ubiquitous CPU architecture on the planet "exotic".
If we are also going to lose native pro software titles, this will pretty much be the SGI scenario. It will be the same "so what if it looks pretty and runs benchmarks faster, WTF do you actually use it for" we've seen way too many times before.
Depends on your usage domain, I suppose. Of course, if you rely on certain proprietary software and its developer is not interested in serving the new Apple platform, then you are out of luck. For the rest of us, who use widely available tools and need a very fast portable machine, Apple Silicon is literally a game changer. Already the M1 runs my data processing pipelines 20% faster than a much larger Intel machine, and upcoming Apple chips with more cores and higher power budgets will likely make that 100%. It's a big deal for me because it means I can get much more done, much more quickly.
Now, I have a question for @cmaier, since he's experienced enough to remember giants like Cray and SGI and worked for Sun Microsystems. In the entire history of personal computers, workstations and desktops, do we have a good example of a computing giant that dived as deep into proprietary CPU tech as Apple has this time, including software that requires platform-specific coding across their entire range, and did not end up with a Chapter 11 black eye in the long run? In your opinion, do you think this time around (because Apple has been down that road once before, and almost collapsed in the process before Jobs switched them to Intel at the last minute) it is going to be any different for Apple and their silicon?
Yes, Microsoft. They did quite well.