I think we’re all aware that OS X is a very nice wrapper around a Unix kernel (Darwin/XNU, a BSD-derived hybrid kernel, not Linux). What difference does it make if a server has some easy-to-use interfaces? These days there are plenty of them that run on any brand of Linux.

Well, we are in a different situation than before. ARM is rising, especially in the server space. Ampere has built an 80-core ARM chip, and others have too. But macOS itself obviously isn’t built for server duty, so maybe Apple needs to improve both the CPU and the software?
Having all that overhead on a server doesn’t make much sense.
What we are talking about here is not instruction sets, compilation, or anything else; those are just abstractions. Today and in the future, efficiency is what matters: if you can produce more flops per watt, you win. Media streaming and blockchain consume significant resources and contribute significantly to climate change. Apple’s current ARM-based chips are very efficient, and like most modular designs they can be scaled effectively. Apple can and will build fast, wide memory access; it’s been done before. It’s not magic, it’s engineering.
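To make the flops-per-watt point concrete, here is a toy calculation. Every number below is an illustrative assumption, not a measured spec of any real chip:

```python
# Toy comparison of throughput per watt. All figures are made up for illustration.
def flops_per_watt(gflops: float, watts: float) -> float:
    """Efficiency metric: sustained GFLOPS divided by package power."""
    return gflops / watts

desktop_chip = flops_per_watt(gflops=1000.0, watts=125.0)  # hypothetical x86 desktop part
mobile_soc   = flops_per_watt(gflops=600.0, watts=20.0)    # hypothetical ARM SoC

print(desktop_chip)  # 8.0 GFLOPS/W
print(mobile_soc)    # 30.0 GFLOPS/W: slower in absolute terms, far more efficient
```

On these assumed numbers the ARM part loses on raw throughput but wins by almost 4x on efficiency, which is the metric the argument above says actually matters at scale.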
There are no showstoppers here. All that’s going on is easy recompiles (whether it’s JIT or precompiled is comically irrelevant); there is no debate here at all. You build an abstraction layer somewhere, and to the users, and likely the developers, things look very similar if not the same. The cost of the abstraction layer in terms of compute will be smaller than the gains from a smaller TDP.
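The abstraction-layer idea can be sketched in a few lines. This is a hypothetical dispatch shape (the library names are invented, not any real Apple API): callers ask for a backend and never branch on the ISA themselves.

```python
# Sketch of an ISA abstraction layer. Library names are hypothetical.
import platform

def normalize_arch(machine: str) -> str:
    """Map the many spellings of a machine name onto a canonical arch string."""
    m = machine.lower()
    if m in ("arm64", "aarch64"):
        return "arm64"
    if m in ("x86_64", "amd64"):
        return "x86_64"
    return m

def backend_library(arch: str) -> str:
    # A real loader would dlopen the right shared library; returning the
    # name is enough to show that callers see one uniform interface.
    return f"libaccel_{arch}.dylib"

# To the caller, every platform looks the same:
print(backend_library(normalize_arch(platform.machine())))
```

Everything above this layer compiles and runs unchanged on either architecture; only the leaf library differs, which is why the recompile argument is so uninteresting.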
This is not a like for like comparison and therefore irrelevant.
I think an Apple ARM CPU has potential. The question is: to what degree? I've read a lot of pro-Apple-ARM posts on this site, and if the performance claims are to be believed, then Intel needs to liquidate or find some other pursuit.
So let me be perfectly clear: Intel has floundered as of late. That out of the way, does their floundering mean an Apple ARM implementation will be significantly better? Or have longevity?
I've been through this before with the PPC. According to the Mac advocates, PPC was the best thing to happen to the processor market. It was new, exciting, and it was RISC. But then something happened...it ended up being a dead end, which led to Apple's adoption of x86.
Will ARM follow suit? Only time will tell. But the transition from PPC to x86 had one significant benefit that the move from x64 to ARM gives up: the ability to natively run x64 software, which meant the ability to natively run Windows and its huge software library. Will Mac purists (i.e., those only using macOS-specific software) care? Unlikely. But I think Apple's move to x86 had a huge benefit in being able to natively run Windows and its associated software. It was low risk: if a user couldn't find a native macOS application, they could always fall back to Windows with very little penalty.
This is the only thing that matters. All the rest is just engineering and a bit of code that seems to be available already. Your concerns are not even real.
There is really very little difference between some number of A12Xs and whatever you seem to think a “desktop” processor is. You want to address 1.5 TB of RAM? Fine: widen the bus and add some cooling. It just doesn’t matter; these are really just semantics. If someone took the developer kit mini and stuck it in a tower, I am not sure you’d notice a difference.
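“Widen the bus” is straightforward arithmetic: peak bandwidth scales linearly with bus width at a given transfer rate. The numbers below are assumptions for illustration, not any shipping part’s spec:

```python
# Back-of-envelope memory bandwidth. Figures are illustrative assumptions.
def peak_bandwidth_gbs(bus_bits: int, mega_transfers_per_s: int) -> float:
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_bits / 8) * mega_transfers_per_s / 1000

laptop  = peak_bandwidth_gbs(bus_bits=128, mega_transfers_per_s=4266)  # ~68 GB/s
widened = peak_bandwidth_gbs(bus_bits=512, mega_transfers_per_s=4266)  # 4x width, 4x bandwidth
print(laptop, widened)
```

Quadrupling the bus width quadruples peak bandwidth with the same memory parts, which is the whole point: scaling up is engineering, not a new architecture.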
“But I run Maya!” It’s just code and faster/wider buses. Untwist your undergarments and figure out a way to think more abstractly; that’s how hard problems get solved. Eventually, and probably sooner than you think, you’ll be able to buy a laptop that smokes many “desktops” except for how much RAM it can physically hold.