> The best of the best have been working on this for years now, and we are about to find out what they are capable of.
Too much Kool-Aid.
When you have the facts on your side - pound on the facts.
When you don't have the facts on your side - pound on the table.
My side:
> When you have the facts on your side - pound on the facts.
> When you don't have the facts on your side - pound on the table.
Too much Kool-Aid.
> Anyone here can see which facts are more relevant.
We can?
It depends on the workflow and constraints. (For example, there's no value to me in having a 6 watt CPU over a 100 watt CPU in my desktop.)
And it's completely unknown whether an ARM CPU scaled up to 32 cores and Xeon-class performance will be anywhere close to 6 watts.
Put down the Kool-Aid.
> ARM v9 will be something different from any design we have seen to this day in terms of performance.
> And yes, this is me writing this - a person who was blatantly sceptical about the viability of ARM CPUs.
> Any next-gen CPU architecture from Apple will be based on ARM v9, and it is pretty much a game changer for this arch's viability.
> The only thing is software. So the resident dinosaurs will have to do one of two things: either they rewrite their software for ARM's architecture, which will bring more efficient execution (YAY!), or they will get an aneurysm because their software will not work on ARM (YAY!).
> I changed my mind about ARM's future. Apple did something huge for this arch. They will effectively provide a development platform for ARM, which will bring tons of software to every ecosystem, not only Apple's.
> ARM devices will not be constrained by software anymore.
> Nobody should resist what Apple is doing. Or at least everybody should discuss the ARM ecosystem with, for example, Jon Masters.
Any links to support those projections?
Given a 150-250 W TDP, at least 48 cores should be doable with A14-class cores, though the question is whether Apple will care enough to do that.
What we are seeing here is the end of Moore's Law in the 2010s, so the performance of desktop computers from the early part of the last decade is still reasonably good compared to the latest models. This wasn't so much the case in the 2000s.
The same thing can be seen in data centres, where there are racks full of Sandy Bridge Xeon-powered machines, because they are still good enough for the job.
You are somewhat fortunate with your Mac Pro 5,1, as I think you can run a recent version of macOS if you have a Metal-capable GPU. Others can't upgrade beyond macOS High Sierra, despite RAM upgrades, SSD upgrades and having a decent enough Intel CPU.
Staying with an Intel CPU isn't going to protect our machines from obsolescence.
What's your napkin math on this? Where are you getting numbers for Apple's target TDP, and how are you deriving "at least 48" cores from that?
I'm sure it goes without saying that you can't just divide a target desktop TDP by an existing mobile SoC's TDP and arrive at some linear performance multiplier. For example, AMD can't just scale their mobile 4800U (8c/15W) to the TR3990X's TDP (280W) and shove 150 cores into a desktop package (280W / 15W * 8c = ~150c).
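A quick sketch of that napkin math shows why the linear division breaks down; the uncore and per-core wattage figures below are illustrative assumptions, not measured numbers:

```python
# Naive "divide desktop TDP by mobile TDP" scaling, using the figures
# from the post above, versus a budget that accounts for the uncore.

MOBILE_CORES = 8        # AMD 4800U
MOBILE_TDP_W = 15
DESKTOP_TDP_W = 280     # Threadripper 3990X

# Naive linear scaling: assume per-core power stays constant.
naive_cores = DESKTOP_TDP_W / MOBILE_TDP_W * MOBILE_CORES
print(f"naive core count: {naive_cores:.0f}")        # ~149

# In reality, per-core power rises with clocks/voltage, and the uncore
# (I/O die, memory controllers, interconnect) eats part of the budget.
# Both figures below are made-up illustrations, not measured values.
UNCORE_W = 60
PER_CORE_W = 3.5
realistic_cores = (DESKTOP_TDP_W - UNCORE_W) / PER_CORE_W
print(f"with uncore and desktop clocks: {realistic_cores:.0f}")  # ~63
```

Notably, that second (still made-up) estimate lands near the 3990X's actual 64 cores, which is the point: the budget goes to clocks and uncore, not to a straight multiplication of mobile cores.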
The problem with those huge ARM chips is that you just get a massive number of slow cores. That's great for serving up web pages in the cloud, but on a desktop, good single-threaded performance is generally more useful. Massively parallel workstation tasks (e.g. machine learning) are best done with GPUs anyway.
> I just can't imagine an ARM CPU replacing a Xeon with any adequacy.
> I'd bet the Mini, MacBook (maybe a new MacBook Air), and the lowest-end iMac get Apple CPUs, while higher-end iMacs, maybe a Mini Pro, the MacBook Pro and the Mac Pro keep Intel.
I can see Apple Silicon making Intel look weak and impotent within just a few years. Performance will come from the tight integration of all their technologies, not just raw CPU alone.
> We can?
> It depends on the workflow and constraints. (For example, there's no value to me in having a 6 watt CPU over a 100 watt CPU in my desktop.)
> And it's completely unknown whether an ARM CPU scaled up to 32 cores and Xeon-class performance will be anywhere close to 6 watts.
> Put down the Kool-Aid.
It'd be nice if you quoted the arguments you were supposed to reply to, because I never suggested that a 32-core ARM CPU should draw 6 W.
> Apple's tablet ARM cores are actually already significantly faster than Intel's desktop cores.
I don't know if they are faster. But they are competitive.
> Plus, Apple's hypothetical Mac Pro ARM processor will be based on 5nm, which is about 1.8x denser and about 30% more power efficient than 7nm.
> AMD just about managed to put 64 cores on 7nm. Do you think it would be technically impossible to put at least 48 cores on the 5nm process?
Ahem. The 5 nm TSMC process is EUV. Which means: bye bye, monolithic large dies. Welcome chiplets. Which is brilliant if you think about it.
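For rough scale, here is what the quoted 1.8x density claim implies for the area side of the 48-core question; the 74 mm² chiplet area is a ballpark figure for an 8-core Zen 2 CCD, and the shrink is treated as a logic-only best case:

```python
# Area-side sanity check for "48 cores on 5nm", assuming the quoted
# 1.8x density gain. The 74 mm^2 chiplet area is a ballpark figure.
import math

DENSITY_GAIN = 1.8          # claimed N5 vs N7 logic density
CCD_AREA_7NM_MM2 = 74       # approx. 8-core Zen 2 chiplet on N7
CORES_PER_CCD = 8

# Best case: the whole chiplet shrinks with logic density
# (in practice, SRAM and analog blocks shrink less).
ccd_area_5nm = CCD_AREA_7NM_MM2 / DENSITY_GAIN
chiplets = math.ceil(48 / CORES_PER_CCD)
total = chiplets * ccd_area_5nm
print(f"8-core chiplet at 5nm: ~{ccd_area_5nm:.0f} mm^2")   # ~41 mm^2
print(f"48 cores: {chiplets} chiplets, ~{total:.0f} mm^2")  # ~247 mm^2
```

Roughly 247 mm² of compute silicon is modest by workstation standards, so die area is not the obstacle; power delivery, interconnect, and whether Apple cares to build it are.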
> Apple's tablet ARM cores are actually already significantly faster than Intel's desktop cores.
Considering that the dual-core MacBook Air is achieving ~1000 points single-threaded and ~2000 points multithreaded, I would not go that far.
And do not bring up the topic of emulation.
If the scores from GB5 are anything to go by, macOS itself hampers a lot of the ARM chips' performance. Or... iOS is extremely well coded and extracts every last bit of performance out of those CPUs. Apple may actually still have a pretty steep hill to climb to overtake x86. Because it's not Intel they have to beat - it's AMD. And that will be way harder than good old Intel.
> Considering that the dual-core MacBook Air is achieving ~1000 points single-threaded and ~2000 points multithreaded, I would not go that far.
> And do not bring up the topic of emulation.
Apple demoed Shadow of the Tomb Raider in their presentation, running on this very silicon at 1080p with at least 30 FPS. And you know what this means?
The Renoir-based Vega 6 averages 14 FPS in the very same game at 1080p, medium settings. So Rosetta 2 is pretty darn efficient at translating the code.
> Possibly for the cost reasons. But I still expect many 5nm designs will remain monolithic until the price becomes too prohibitive.
The majority of designs will be monolithic. But for EUV processes in general, the reticle limit is much, much smaller than for "standard" processes.
The reticle limit for N7 from TSMC is 830 mm². The reticle limit for 5 nm EUV may be 500 mm², or something like that.
This is the very reason why Hopper, from Nvidia, is an MCM GPU.
Prepare for house fires with 1 kW GPU devices.
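To make the reticle point concrete, a tiny sketch using the two limits quoted above (the 750 mm² die is hypothetical):

```python
# How the reticle limit forces a split into chiplets, using the two
# limits quoted above. The 750 mm^2 die size is hypothetical.
import math

RETICLE_LIMITS = {"N7": 830, "5nm EUV": 500}   # mm^2, as quoted above
die_mm2 = 750                                  # hypothetical big die

for process, limit in RETICLE_LIMITS.items():
    dies = math.ceil(die_mm2 / limit)
    print(f"{process}: {die_mm2} mm^2 needs {dies} die(s)")
# N7: 1 die. 5nm EUV: 2 dies -> the design has to go MCM.
```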
> Apple demoed Shadow of the Tomb Raider in their presentation, running on this very silicon at 1080p with at least 30 FPS. And you know what this means?
> The Renoir-based Vega 6 averages 14 FPS in the very same game at 1080p, medium settings. So Rosetta 2 is pretty darn efficient at translating the code.
...and the Vega 11, which may be AMD's best iGPU (AFAIK), averages 19 fps.
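Taking the three frame rates at face value, the implied ratios look like this, with the caveat that settings and test scenes may not match, and "at least 30 FPS" is an eyeballed floor rather than a measured average:

```python
# Relative iGPU performance implied by the frame rates cited above.
# The Apple figure includes Rosetta 2 translation overhead.

apple_fps = 30    # Shadow of the Tomb Raider demo, 1080p
vega6_fps = 14    # Renoir Vega 6, 1080p medium
vega11_fps = 19   # Vega 11 average

print(f"vs Vega 6:  {apple_fps / vega6_fps:.1f}x")   # ~2.1x
print(f"vs Vega 11: {apple_fps / vega11_fps:.1f}x")  # ~1.6x
```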