In return let’s have the haters apologize if Apple silicon is indeed a vast improvement over the current Intel stuff.
Okee dokey
In return let’s have the haters apologize if Apple silicon is indeed a vast improvement over the current Intel stuff.
I get that Apple is having problems with Intel getting new processors out in a timely fashion, and that x86 chips are one big reason battery life is stuck at around 6-10 hours on any brand of mid-range or better laptop. But I haven’t heard WHY or WHAT has changed in ARM processors that makes them attractive now to (computational) power users.
Again, you say Apple chips will be 'faster', based on Xbench and Geekbench. You also add that the performance gain will be 'INSANE'. Based on what metric do you get INSANE?
So I take issue with Geekbench and Xbench, yet you reference them again. And what 'INSANE' leap are you looking at? From what I can tell, some Apple chips may be a tad faster in single-core and somewhat slower in multi-core (using those benchmarks I take issue with anyway).
The move to a 3nm process will help Apple's chips a little, but it won't be 'INSANE'. And then I ask you: how long can TSMC keep shrinking something that's already down to 3nm? I guess we'll find out.
And no, Turbo Boost does not sustain when all cores are being pushed for more than a few minutes, and it certainly won't sustain in a thin Apple product that's already challenged to dissipate heat. At least, that's my opinion.
Like I've said, I'll be real interested to see what performance ends up being when all the cores are pushed pretty hard ... given what I do, that's the only situation where all that extra supposed 'power' will matter. If it can't deliver that, then all your 2-minute benchmarking tools are completely useless to me.
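If you want to see what I mean, here's a minimal sketch (in Python, with placeholder numbers of my own choosing) of the kind of sustained all-core load I care about. The point is that it runs long enough to expose thermal throttling, which a 2-minute benchmark never will:

```python
# Minimal sketch of a sustained all-core load test (all numbers are placeholders).
# If iterations/sec is much lower over 20 minutes than over 2 minutes,
# the chip is throttling and short benchmarks are overstating its performance.
import multiprocessing as mp
import time

def burn(duration_s, results):
    """Spin on integer math for duration_s seconds, then report iterations done."""
    end = time.monotonic() + duration_s
    x, iterations = 0, 0
    while time.monotonic() < end:
        x = (x * 6364136223846793005 + 1442695040888963407) % (2**64)
        iterations += 1
    results.put(iterations)

if __name__ == "__main__":
    duration_s = 20 * 60  # 20 minutes: long enough to reach steady-state thermals
    results = mp.Queue()
    workers = [mp.Process(target=burn, args=(duration_s, results))
               for _ in range(mp.cpu_count())]  # one worker per logical core
    start = time.monotonic()
    for w in workers:
        w.start()
    total = sum(results.get() for _ in workers)  # blocks until every worker reports
    for w in workers:
        w.join()
    print(f"{total / (time.monotonic() - start):,.0f} iterations/sec across all cores")
```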
They’re not the same code base, so your argument is silly. The only fact here is that we saw a pre-canned demo of these builds. Nothing more. That is a fact.
Our most-popular Office 365 apps—Excel, PowerPoint, and Word—are designed for the modern workplace, with cutting-edge features like real-time co-authoring, AutoSave, and more. With our newest version of Office for Mac, version 16.9.0, we've extended these capabilities to Apple users; in fact, this release marks the first time in 20 years that Office shares the same codebase across Windows, Mac, iOS, and Android for core functionalities.
Oh, video rendering will probably be amazing, considering that Apple has specialized chips for video rendering even on their phones (why else do you think the iPhone can get that 4K 60 FPS recording reliably?).
The problem is that video rendering is only a small fraction of the whole market. Unless Apple Silicon performance is truly extraordinary (and not "just" 20% to 30% faster), I don't see how the average user will switch from x86. And I really don't think it will be, or else Apple would have been gloating about it from the start.
...at a reasonable price...
Sorry for quoting out of context. But I'm not going to count on this. It's Apple. New Apple Silicon Macs will most likely keep the same price point, or raise it even higher.
The iPad Air just got a $100 price boost after all.
Not really. A vision of what the future 'should' be in some people's eyes doesn't often make sense in the real world.
I'm sure the people who watched Metropolis in the 1930s thought it absolutely made sense that eventually people would commute around town in propeller planes. Just like how you feel a future without ports makes perfect sense.
I'm not saying you're necessarily wrong, I'm just saying you need to question whether you're following what makes true sense in the real world versus how the current trajectory of things seems to be guiding your train of thought.
Make sense?
Sorry for quoting out of context. But I'm not going to count on this. It's Apple. New Apple Silicon Macs will most likely keep the same price point, or raise it even higher.
The iPad Air just got a $100 price boost after all.
I don't think that Apple's strategy is necessarily to make existing Intel Mac (or Windows) users immediately switch to Apple Silicon with amazing performance boosts of 200-300%. They just want to attract people who were looking to upgrade their computer anyway within the next year. Considering Intel's year-on-year improvements have been quite unimpressive for the most part (10-15%?), if Apple offered a 30% improvement with much better battery life, wrapped in a nice design, at a reasonable price, it would attract a lot of people.
Personally I'm not expecting fireworks with Apple Silicon. At a guess, up to 30% faster than the equivalent Intel model Mac it replaces, with 40-50% better battery life. Some applications (e.g. video) may show more significant improvements due to the use of specialized Apple Silicon features, and these are the ones that Apple will highlight.
Sure enough, 30% right now seems a big deal. However, Intel and AMD refresh their lines at a much faster rate, so that "advantage" quickly vanishes.
Especially considering Apple is threatening to surpass them, I don't think they'll just stand still...
Personally I'm not expecting fireworks with Apple Silicon. At a guess, up to 30% faster than the equivalent Intel model Mac it replaces, with 40-50% better battery life.
Apple has a new CPU/GPU every year. Intel just took almost six years to release a new CPU, and they are barely able to beat last year’s Apple architecture while consuming four times more power... do you really think they will be able to come out with a completely new, superior design overnight?
Apple did... then it got wise.
Not really, but no one expects a good GPU from Intel anyway. You can just pair it up with Nvidia and/or AMD. As long as the processors are faster, it doesn't matter.
I am talking about the processor (CPU). Check out the Anandtech review of Intel's newest Tiger Lake.
Here you can see a Tiger Lake in the 28W configuration barely outperforming the A13 in last year's iPhones. Anandtech's Andrei Frumusanu later confirmed that the Intel CPU was drawing around 20 watts to achieve this result; the A13 needs 5 watts. What's more, if you restrict the Tiger Lake to the 15W profile (as most ultrabooks would), its performance is the same as the A13's. Again, we are comparing a laptop CPU to a mobile phone CPU here!
Intel's upcoming H-series Tiger Lake might be able to push peak performance by an additional 10-15% (I find it unlikely that they will be able to clock it higher than 5.3 GHz), but the CPU will end up drawing over 30 watts in that scenario. Apple's A14 is already supposed to be around 15-20% faster than the A13, and that is still within the limitations of a mobile phone. What happens if Apple decides to allow their CPU cores to go to 15 watts per core instead of 5? They should be able to get at least 10-20% more peak performance out of it. I think you are being overly optimistic in assuming that Intel will be able to easily match that. They would need to start completely from scratch to match Apple's performance per watt.
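To make the performance-per-watt gap concrete, here's the arithmetic on the figures above (the benchmark score is a made-up placeholder, since both chips land at roughly the same peak score in that comparison and only the power draw differs):

```python
# Perf-per-watt from the wattages cited above; the score itself is a placeholder
# since Tiger Lake (28W config) and the A13 reach roughly the same peak result.
score = 100.0            # identical-ish peak benchmark score (placeholder units)
tiger_lake_watts = 20.0  # measured draw cited above for the 28W configuration
a13_watts = 5.0          # A13 draw cited above

print(f"Tiger Lake: {score / tiger_lake_watts:.0f} points/W")
print(f"A13:        {score / a13_watts:.0f} points/W "
      f"({tiger_lake_watts / a13_watts:.0f}x the efficiency)")
```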
By the way, since you mention the GPU... this is the area where Intel has made very significant progress. The Tiger Lake GPU is actually faster than the iPad Pro's.
So that 5 watts is for the GPU as well. Sure, the Tiger Lake GPU looks better... until you remember it is using at least 3 times the power to get that performance.
More power = more heat + less battery life.
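And the battery-life half of that equation is simple division. Assuming a typical ~58 Wh battery for a 13-inch-class laptop (my assumption, not a spec from this thread):

```python
# Rough battery-life math under sustained load; 58 Wh is an assumed
# 13-inch-class battery, and the wattages echo the 5 W vs 20 W comparison above.
battery_wh = 58.0
for label, watts in [("~5 W package (A13-class)", 5.0),
                     ("~20 W package (Tiger Lake-class)", 20.0)]:
    print(f"{label}: {battery_wh / watts:.1f} hours")
```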
If you're offering "just" a 30% improvement in exchange for dropping the ability to run x86 applications (including Windows via Boot Camp), this could even scare off new users. Sure enough, 30% right now seems a big deal. However, Intel and AMD refresh their lines at a much faster rate, so that "advantage" quickly vanishes.
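To put rough numbers on how quickly a one-time lead erodes, here's a toy compounding model (both yearly rates are pure assumptions for illustration, not forecasts):

```python
# Toy model: Apple starts 30% ahead; assume the rival then improves 15%/yr
# while Apple improves 10%/yr. Watch the head start shrink year by year.
apple, rival = 1.30, 1.00
for year in range(1, 6):
    apple *= 1.10
    rival *= 1.15
    print(f"year {year}: Apple lead = {(apple / rival - 1) * 100:+.0f}%")
```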
Intel & AMD have been stagnant for a decade and are trying to bridge the gap by throwing more cores at the problem instead of working on increasing performance/efficiency.
The guy right above you talks about how a 16-core Apple Silicon GPU would be right there with the AMD 5500. That would be a lot of cores that Apple is 'throwing' at the problem, right?
The guy right above you talks about how a 16-core Apple Silicon GPU would be right there with the AMD 5500. That would be a lot of cores that Apple is 'throwing' at the problem, right?
You don’t know much about how computers work, do you? Maybe it would be a good opportunity to learn some things instead of making jokes about a topic you clearly don’t understand.
Radeon 5500M Pro is a 24-core design, by the way.
As a power user, I would prefer Apple Silicon Macs to use a bit less power but deliver substantially higher performance.
With the iMacs and MacBooks this goes hand in glove. One of the problems right now is heat throttling: Apple designed for chips that Intel said would be ready... and then weren't. As a result, Apple had to go with chips that had higher heat budgets than they designed for.
Well, the guy accused Intel of 'throwing more cores at the problem' with their CPUs, as if it's a cop-out by Intel. So, directly to my point: why isn't it considered a cop-out when AMD uses 24 cores for a GPU?
Graphics computations can often be parallelised more successfully and usefully than CPU tasks, which makes multi-core performance more important on a GPU, while a CPU generally benefits more from each core being more powerful past a certain point. Previously that point was around quad-core, more recently 6-8 cores as software has evolved to take advantage of more cores, though again it's diminishing returns, as there's only so much parallelisation that can usually be written into software.
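That's essentially Amdahl's law: if a fraction p of the work can be parallelised, the speedup on n cores is 1 / ((1 - p) + p / n). A quick sketch (the p values are illustrative guesses, not measurements) of why near-fully-parallel graphics work keeps scaling at 24 cores while typical CPU software flattens out:

```python
# Amdahl's law: speedup(n) = 1 / ((1 - p) + p/n), where p is the fraction
# of the work that parallelises. The p values below are illustrative only.
def speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

for label, p in [("graphics-style workload", 0.99),
                 ("typical CPU software", 0.70)]:
    print(label, [f"{speedup(p, n):.1f}x" for n in (4, 8, 16, 24)])
```

At p = 0.70 the jump from 8 to 24 cores buys you relatively little, which is why piling cores onto a CPU can look like a cop-out in a way that piling cores onto a GPU doesn't.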