Perhaps, instead of condescendingly criticising strangers on an internet forum, might I suggest that you use your obvious gifts to find better things to focus on?
> Perhaps, instead of condescendingly criticising strangers on an internet forum, might I suggest that you use your obvious gifts to find better things to focus on?

Agreed on focusing on something else for myself. Thanks for the reminder.
> From the perspective of an old-school Mac user who now has an M1 MBA, as well as someone whose daily driver is Linux, I can tell you folks the trend is towards ARM-based CPUs, no matter how much noise Intel and AMD keep making about x86-64. Not that it's a bad thing: competition is a good thing. However, there's a LOT of effort right now getting Linux fully ported to ARM, and even (especially?) to Apple's M-series. Heck, Microsoft is working towards an ARM release of Windows.

MS already has an ARM release of Windows; that isn't the problem. The lack of native applications is the problem. They don't have the pull that Apple does to get companies to make ARM-native apps, and their ARM translation layer is slower than Apple's.
> I agree with others in saying the M chips are not magical, and often it makes sense to go with an Intel/Nvidia GPU setup instead.

Everyone is going to have things that are more important to them. You have to weigh the trade-offs: do you want a smoother experience when working with the file, which in turn saves time, or is the time saved in encoding and rendering worth more than the time saved by working fluidly with little to no lag?

However, your quote here is the key differentiator for M chips vs. the traditional Intel/Nvidia setup. For editing in particular, having a fast workflow is arguably more important than having a faster render time (at least in my case). When I'm rendering a video I'm not touching the editor anymore; I'm doing something else, so a few more minutes of render time is not a big deal for me. For others every minute counts, and if we're talking hours of render time vs. a few minutes I can see why one is obviously better than the other.

Additionally, at least for M-powered laptops, the energy efficiency of the chips is more than just a nerd talking point, because it actually translates into longer sessions away from the wall -- this has been pretty big for me and has genuinely changed my workflow vs. older Intel-powered MacBooks.

Apple's performance gains over Intel/Nvidia are only noticeable under certain software conditions because Apple isn't winning on some objective "pure performance" metric; rather, they win in specific workflows they can optimize for with things Intel/Nvidia simply can't do right now (system-on-a-chip advantages such as the Neural Engine, ProRes decoders/encoders, and faster CPU/GPU communication over a shared memory pool). Once Intel/Nvidia find a path to their next-generation chip platforms (RISC-V for Intel?), the gap with M chips will close pretty fast, mostly because the other guys will then have the same architectural benefits. At least that's what I think; I could be wrong.
I actually still think, in the notebook sector, Intel 13th Gen won't even be able to beat the M1 series of chips, as even 12th Gen is FAR behind. Let me explain: (Source)
Apple upended the chip industry with the M1 but AMD and Intel came back swinging and it seems like Apple now needs to pull another rabbit out of the hat with the M3.
> Apple's challenge is making the M series chips into powerful server-grade hardware like the Intel Xeon.

Not really. Apple's challenge, from here to whenever they stop making Macs, is to make sure each successive Mac is faster than the prior generation. Server-market sales wouldn't even be worth the effort put into creating such chips. Better to work on the core consumer market, providing solutions the competition can't touch. (Their business practices won't allow them to.)
> I actually still think that, in the notebook sector, Intel 13th Gen won't even be able to beat the M1 series of chips, as even 12th Gen is FAR behind. Let me explain:
>
> Apple Silicon was never about the best benchmark numbers. The engineering is simply that good; the high raw CPU performance is basically just a side effect.
>
> It's the efficiency. And I do NOT mean "benchmark points per TDP watt". A Ryzen 6800H capped at 15 W won't lose against a 15 W-capped M1 in Cinebench runs.
>
> When performing real-world tasks, Apple Silicon's power consumption stays extremely low. On my M1 I can run Windows 11 in Parallels, and at idle it still consumes less than any Intel chip doing nothing.
>
> When "calculating" a Zoom conference, the M1's power consumption increases by a smaller percentage than an x86 chip's, especially since Intel chips boost into nirvana for every little mouse movement.
>
> That's why Apple Silicon MacBooks can do the same work, the same tasks, with almost no power consumption, almost no heat generation, and basically no fan noise at all.
>
> To perform a task, Apple Silicon seems to consume just as much power as it needs to do the job, while Intel boosts to maximum for doing any little thing.
>
> In my opinion, THIS is the standard that Intel and AMD need to hit, not yet another benchmark-number e-penis crown earned while the notebook CPU consumes 130 watts and the battery is empty after 4 hours of loud web surfing.

Completely agree. Having used the M1 and the M1 Pro for the last couple of years, it is insane how little power they draw when running VMs and Teams calls (which keep improving with every build), and the same goes for external displays: no 20-27 W on the GPU alone when connected to dual 4K displays, as with my 16" 2019.
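Idle and load power claims like these are easy to spot-check yourself. On macOS, Apple's bundled `powermetrics` tool reports package power; on Linux, the kernel exposes cumulative Intel RAPL energy counters under sysfs. A minimal sketch, assuming a typical Intel laptop where the sysfs path below exists (counter wraparound handling is omitted for brevity):

```python
import time

# Package-level cumulative energy counter exposed by the Linux intel-rapl
# driver, in microjoules. The path is an assumption: it holds on most Intel
# machines; AMD systems may expose a different powercap layout.
RAPL_FILE = "/sys/class/powercap/intel-rapl:0/energy_uj"

def average_watts(start_uj: int, end_uj: int, seconds: float) -> float:
    """Convert two cumulative energy readings (microjoules) into average watts.

    Ignores counter wraparound for simplicity.
    """
    return (end_uj - start_uj) / 1_000_000 / seconds

def sample_package_power(seconds: float = 5.0) -> float:
    """Sample average CPU package power over `seconds`."""
    with open(RAPL_FILE) as f:
        start = int(f.read())
    time.sleep(seconds)
    with open(RAPL_FILE) as f:
        end = int(f.read())
    return average_watts(start, end, seconds)

if __name__ == "__main__":
    print(f"average package power: {sample_package_power():.1f} W")
```

Running this once while idle and once during a video call is enough to see the boost behaviour described above.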
> Someone please find the i3 13th gen laptop results and post 'em.

Why do so many people focus on Intel? The Ryzen 7 6800U is the closest PC CPU to the M2. Although it uses a worse node and is smaller, it outperforms the M2 in some tasks.
> I have a smoking hot girlfriend, make six figures, live in lower Manhattan, and love what I do.

You mean like focusing on people who focus on processor performance scores?
I suggest you find some better things to focus on than numbers of a processor performance score.
> I don't disagree that 6800U is the closest CPU to M1, simply because AMD did some very good work on their power efficiency. But that's about it, at least for now.

Apple has a node advantage. A fairer comparison would be M2 versus AMD's upcoming CPU for ultralaptops, as both would use a similar TSMC node.
Ubuntu can run on ARM since 2012. Similar with Linux on ARM in the server space.
> Apple has a node advantage. A fairer comparison would be M2 versus AMD's upcoming CPU for ultralaptops, as both would use a similar TSMC node.

Maybe even better to compare it to AMD stuff in 2024, lol… You compare what is AVAILABLE on the market NOW. Node advantage? Blame AMD for that.
Ubuntu can run on ARM since 2012.
Canonical Success at Computex | Ubuntu (ubuntu.com): "We've been extremely busy at Computex, with over 1,000 people visiting the Ubuntu booth, and over 25 media interviews about Ubuntu for Android, Ubuntu Cloud and Ubuntu TV. One of the highlights so far was ARM's Ian Ferguson, director of server systems, and our very own Mark Shuttleworth..."
> 6800U scores up to 1400-1500 on GB5 using around 10 W in single core. M1 scores 1700 using 5 W. Are you suggesting that going from 6nm to 5nm AMD will manage to improve their power efficiency by over a factor of two?

Even if they could, I suspect AMD would trade some of it for clock speed. As it is, I don't think they could gain that much from shrinking the process node, at least not at the clocks they run at.
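Taking the figures in that quote at face value (they are forum estimates, not official measurements), the "factor of two" follows from comparing points per watt rather than raw points:

```python
def points_per_watt(gb5_score: float, watts: float) -> float:
    """Geekbench 5 single-core score per watt of package power."""
    return gb5_score / watts

# Figures as quoted in the thread (estimates, not official measurements):
ryzen_6800u = points_per_watt(1500, 10)  # upper end of the quoted 1400-1500 range
apple_m1 = points_per_watt(1700, 5)

ratio = apple_m1 / ryzen_6800u
print(f"M1 perf-per-watt advantage: {ratio:.2f}x")  # about 2.27x
```

Even granting the 6800U its best-case score, the quoted efficiency lead is roughly 2.3x, which is why a one-node shrink alone would be unlikely to close it.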
> Are you suggesting that going from 6nm to 5nm AMD will manage to improve their power efficiency by over a factor of two?

No. Only that the comparison between M2 and Phoenix would be fairer and would show clearly the advantage of Apple ARM over x86 without the node advantage.
> 6800U scores up to 1400-1500 on GB5 using around 10 W in single core. M1 scores 1700 using 5 W.

Out of curiosity, where did you get the consumption values from?
> Maybe even better to compare it to AMD stuff in 2024, lol… You compare what is AVAILABLE on the market NOW. Node advantage? Blame AMD for that.

We only need to wait six months. Then we will be able to compare the best ultralaptop CPUs from AMD and Apple.
> It would be enough to just post the benchmarks. I think the "Apple is in trouble" text in the titles of threads like this is just to motivate people to click and read. I find it hard to believe the OP thinks this is a reasonable comparison…

Posts like this are, it seems to me, cries for attention more than anything else.
> It is pushing those ARM ecosystems to evolve faster. But they were making steady progress before Apple showed up, and would be increasing the pace even if Apple had not. (It is incrementally faster with Apple there.)

Google still has not made a Google Chrome ARM app for Windows, but it has for macOS. Apple's move to ARM is something that Qualcomm's CEO said had a major effect on the industry taking ARM seriously.
> That is three years before Apple shipped their product.

I could tell that the first WoA devices were utter trash compared to their x86 counterparts. Apple did a way better launch with the M1 than Microsoft did with its 2017 ARM devices. Being first does not mean anything; it's who does it best.
> Windows 11 on Snapdragon 8cx Gen 3 (third iteration) is perhaps in the "working towards release" stage.

That SoC is not good when compared to Intel 12th gen, AMD's Zen 3+, and the M2. Qualcomm chips won't be good until the Nuvia-derived chips come out.
> It would be interesting to see if Apple can effectively change the rules of the PC game by getting consumers to care about features beyond simply performance and benchmarks. I believe Linus recently covered a small desktop PC from HP, and the conclusion was that it throttled way too quickly to be of any real use given what you paid for it.
>
> I think Apple has a unique value proposition with the Mac Studio: it offers great sustained performance while being small enough to tuck under your monitor, runs quiet, and sips relatively little power to boot.
>
> Intel can only compete on raw performance, and even there the gains aren't as significant as clickbait titles would have you believe.

According to this, it's already happening: