
LinkRS

macrumors 6502
Oct 16, 2014
402
331
Texas, USA
My point about IPC is that whatever "Intel 7" is, it is not a true 7nm process like TSMC's.

I have said the unification of RAM, storage, and GPU makes these chips behave very differently from Intel's or AMD's.

You can't directly compare TSMC and Intel (or Samsung) "nanometer" processes. All three companies use different metrics, which makes comparisons meaningless. See here: https://en.wikipedia.org/wiki/7_nm_process

I know this is a PC article, but it does a good job explaining the differences and why the numbers are not comparable:

Intel changed how they refer to their process to help combat the confusion, as TSMC's method of measurement gives the impression of an inherent superiority. This all harkens back to what has been coined "Moore's Law" (https://en.wikipedia.org/wiki/Moore's_law). It is often misstated as CPU performance doubling every 18 months, when it actually says that transistor density doubles roughly every two years. The thinking is that more transistors in a given space (higher density) make for a faster processor. With everything else being equal (IPC, clock rate, power consumption), this would typically be true. However, as we all know, Apple (and even AMD) have greatly increased IPC over a relatively short period of time, eclipsing Intel's. This means that things are no longer equal, so having a denser process does not automatically give Intel the upper hand anymore.
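Just to put numbers on the difference between the two readings, here's a throwaway Swift sketch (purely illustrative figures, not a claim about any real chip):

```swift
import Foundation

// Compare the misstated version ("performance doubles every 18 months")
// with the actual statement ("transistor density doubles roughly every 2 years").
let years = 6.0
let perfDoublings = years / 1.5        // 18-month cadence -> 4 doublings
let densityDoublings = years / 2.0     // 2-year cadence   -> 3 doublings

print("18-month doubling over 6 years:", pow(2.0, perfDoublings), "x")    // 16x
print("2-year doubling over 6 years:  ", pow(2.0, densityDoublings), "x") // 8x
```

Over just six years the two phrasings diverge by a factor of two, which is part of why the "18 months" version causes so much confusion.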

:)

Rich S.
 

Technerd108

macrumors 68040
Original poster
Oct 24, 2021
3,051
4,302
You can't directly compare TSMC and Intel (or Samsung) "nanometer" processes. All three companies use different metrics, which makes comparisons meaningless. See here: https://en.wikipedia.org/wiki/7_nm_process

I know this is a PC article, but it does a good job explaining the differences and why the numbers are not comparable:

Intel changed how they refer to their process to help combat the confusion, as TSMC's method of measurement gives the impression of an inherent superiority. This all harkens back to what has been coined "Moore's Law" (https://en.wikipedia.org/wiki/Moore's_law). It is often misstated as CPU performance doubling every 18 months, when it actually says that transistor density doubles roughly every two years. The thinking is that more transistors in a given space (higher density) make for a faster processor. With everything else being equal (IPC, clock rate, power consumption), this would typically be true. However, as we all know, Apple (and even AMD) have greatly increased IPC over a relatively short period of time, eclipsing Intel's. This means that things are no longer equal, so having a denser process does not automatically give Intel the upper hand anymore.

:)

Rich S.
This is true on more than just process.

As far as real-world behaviour goes, I can speak from personal use of AMD 4000- and 5000-series mobile processors against Intel Ice Lake, Tiger Lake, Comet Lake, and a billion lakes before. Intel runs hotter, throttles, and has poor battery life compared to AMD's "7nm" chips. There is a qualitative difference between AMD's process and Intel's, although Intel claims to have reached parity or better.

Obviously I have not used an Alder Lake CPU, so maybe they have made better progress, but their 10nm Enhanced SuperFin process, now renamed Intel 7, doesn't include any serious process changes. They are still not at 7nm, or if they are, they are doing something dramatically wrong that AMD is not, and that doesn't make sense to me.

And while process density varies all over the place between the various fabs, it does make a huge difference. I understand your point, but it would only hold if all other things were equal.

Intel actually has a lot of room for improvement in their process and IPC, which should benefit them greatly once they get there. Arguing that a better process and MORE transistors don't translate into better performance just because the fabs use different naming standards doesn't make sense.

Obviously Moore's law is not holding anymore, density or not.
 

mi7chy

macrumors G4
Oct 24, 2014
10,591
11,279
M1 does support VP9 hardware decode, but not AV1 (at least as far as anybody knows, and it isn't in VideoToolbox either).

VP9 is light on resources relative to AV1, since a 15-year-old Xeon 5160 can software-decode 4K VP9. On M1 I see 13% CPU utilization for 4K VP9 and around 70% for 8K, so that's software decoding. A Ryzen 4650U can do 8K VP9 hardware decoding with 1 to 2% CPU utilization, so that's hardware decoding.
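For anyone who wants to check what macOS itself advertises rather than eyeballing CPU%, here's a minimal Swift sketch using VideoToolbox's VTIsHardwareDecodeSupported (macOS 11+). It only reports whether a hardware decoder is registered, not whether a given browser actually uses it, and AV1 is left out since, as noted above, VideoToolbox has no entry for it:

```swift
import CoreMedia
import VideoToolbox

// Ask VideoToolbox whether a hardware decoder is registered for each codec.
// A browser can still choose its own software path regardless of this answer.
let codecs: [(String, CMVideoCodecType)] = [
    ("H.264", kCMVideoCodecType_H264),
    ("HEVC",  kCMVideoCodecType_HEVC),
    ("VP9",   kCMVideoCodecType_VP9),   // constant requires a recent macOS SDK
]

for (name, codec) in codecs {
    let hw = VTIsHardwareDecodeSupported(codec)
    print("\(name): hardware decode \(hw ? "available" : "not available")")
}
```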
 

827538

Cancelled
Jul 3, 2013
2,322
2,833
I have the 32-core Max 16" and it's pretty much the same here. Not only does it not blast the fans, it barely gets off "idle" temps for 99% of tasks. The Apple Silicon chips aren't just more efficient, they have a totally different power/performance curve. It's very non-linear, with the CPU performing a LOT of work at or near the baseline power draw.

As an example, I do ML engineering, so I've played around running ML/Deep Learning workloads on here (all on CPU). Training neural nets is very heavy work for a CPU, and I can run a lot of jobs where the CPU is showing pretty much max usage and the temps will literally only rise 2-3 degrees. It's insane. And looking at the package power in asitop shows only a marginal increase in wattage.

Bought this machine on launch day and I've only had the fans turn on once... playing Borderlands 3 through Rosetta, which makes sense as that is stressing all those GPU cores + the CPU pretty much to the maximum. Unless you are maxing GPU + CPU in unison the fans don't turn on, and the CPU temp will almost never get above the high 50s.

What's even crazier is that after using this machine for normal productivity tasks for a day, my peak package power was... 12W, with an average of 1.12W. There is simply no other chip out there that can do that.
I agree, it's quite something and shows how ARM64 is the future, perhaps alongside RISC-V. The issue is just how much software is still x86-optimized. I would love to see some of those Unreal Engine 5 demos written natively for Apple silicon. Whilst I know Macs were never really designed for gaming, these machines feel like they have immense potential given the huge memory bandwidth of the M1 Max and the overall exceptional performance.

I've been very interested in the ML potential of the NPU, but I'm not an ML engineer (I'm in EEE). Have you managed to use it much, in the same way a video editor can efficiently encode/decode video using hardware acceleration?

I discovered pretty recently that my iPhone 13 Pro can identify plants I take pictures of with frightening accuracy. I was quite taken aback. I feel like ML is one of those slow, creeping advancements that you might not take much notice of at first. I might try some of Apple's ML APIs once I learn a bit more Swift.
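If you want a first taste without training anything, a minimal sketch using Apple's Vision framework (VNClassifyImageRequest, which Core ML may run on the Neural Engine where available) looks roughly like this; the image path is just a hypothetical placeholder:

```swift
import Foundation
import Vision

// Classify an image with Apple's built-in Vision classifier.
// Vision/Core ML decide whether the work runs on CPU, GPU, or the Neural Engine.
func classify(imageAt url: URL) throws {
    let handler = VNImageRequestHandler(url: url)
    let request = VNClassifyImageRequest()
    try handler.perform([request])
    let observations = (request.results as? [VNClassificationObservation]) ?? []
    for observation in observations.prefix(5) {
        print(observation.identifier, observation.confidence)
    }
}

// Hypothetical path, purely for illustration.
try? classify(imageAt: URL(fileURLWithPath: "/tmp/plant.jpg"))
```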
 

Technerd108

macrumors 68040
Original poster
Oct 24, 2021
3,051
4,302
Yes. And it’s entirely possible for two chips to sit at the same temperature while one is generating significantly more heat. This is why the fans spin up more on Intel Macs than on ARM Macs, and why Intel CPUs under load generally require either a larger cooling system or a harder-working one (or both). The temperature debate is kind of silly because it ignores this fact. The temperature alone isn’t really that important, provided it’s kept under control by the cooling system.
You are playing a semantics game.

Yes, both chips run at the same temperature, but one throttles significantly. Why? Cooling is part of it, but so is the efficiency and architecture of the chip itself. If the chip heats up like the sun as soon as it hits peak frequency, it will constantly ramp up and down to stay within its temperature limit because of all the waste heat it generates. This means that although both chips essentially run at the same temperature, only one can sustain peak frequencies under load over time without ramping up and down and blasting the fans to the max.

So sure, with a liquid cooling solution and a desktop, then maybe, but there is still a possibility that the chip will only be able to maintain peak frequency in bursts because it simply can't stay at its operating temperature under sustained load, period!
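Incidentally, if anyone wants to watch when macOS itself starts reporting thermal pressure on their machine, here's a minimal Swift sketch using ProcessInfo.thermalState; it's just an observation tool, not proof of what the firmware is doing with clocks:

```swift
import Foundation

// Poll macOS's own thermal-pressure signal every few seconds.
// .serious / .critical is roughly when the OS starts reining performance in.
_ = Timer.scheduledTimer(withTimeInterval: 5, repeats: true) { _ in
    switch ProcessInfo.processInfo.thermalState {
    case .nominal:    print("nominal - no thermal pressure")
    case .fair:       print("fair - mild thermal pressure")
    case .serious:    print("serious - performance may be reduced")
    case .critical:   print("critical - heavy throttling territory")
    @unknown default: print("unknown thermal state")
    }
}
RunLoop.main.run()
```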
 

827538

Cancelled
Jul 3, 2013
2,322
2,833
My point about IPC is that whatever "Intel 7" is, it is not a true 7nm process like TSMC's.

I have said the unification of RAM, storage, and GPU makes these chips behave very differently from Intel's or AMD's.
Intel 7 is just a rebranding of their 10nm Enhanced SuperFin process, which arguably is similar in transistor density to TSMC's N7. So I would indeed say it is a true 7nm-class node - the "7" is just a naming convention. It's actually a bit denser than TSMC's N7.

The problem is it's just not anywhere near as efficient, even if it has good transistor density. The yields are also not great.

The other issue Intel has is that its next node (Intel 4), which is comparable to TSMC's N5, is nowhere near ready, yet TSMC's N5 is already very mature, with N5P chips (like the M1 Pro/Max and A15) shipping and N4 around the corner. Further, TSMC will be bringing out the next node, N3, later this year.

So by the end of 2022 you will have Intel struggling to compete with TSMC's N7 process whilst TSMC is already producing chips two nodes ahead.

Then you see Intel negotiating with TSMC to produce Intel chips, and suddenly things make a lot more sense. I would never count Intel out, but they are not the chip-making juggernaut they once were. TSMC is about three times the size of Intel and is pouring staggering sums of money into R&D as well as new fabs to stay in front. I think the best Intel can hope for is to stay a generation behind TSMC.

The wild card at the moment is Samsung. If their 3nm GAA process can deliver it could be a major upset.
 
  • Like
Reactions: Technerd108

827538

Cancelled
Jul 3, 2013
2,322
2,833
As power consumption becomes more of an issue over time (climate change, inflation, etc.), performance per watt matters more and more in the desktop and laptop space, where Intel is a dominant player. Performance per watt is going to become the primary driver in the server space for certain SaaS companies in the long term.

The fact is that Apple was the company that pushed Intel with regard to performance per watt, and even Apple lost influence towards the end of their relationship. Intel hasn’t advanced their process technology in any meaningful way since the move to Broadwell (14nm). No matter how Intel and their apologists try to slice it, 10nm has been an epic disaster, and the primary way Intel has improved performance has been to up clock speeds and power consumption. If this doesn’t illustrate how bad off x86 is and how close it is to the end of its lifetime, I don’t know what will.

Intel boosters can argue all day long about absolute performance versus Apple’s current approach with the M1/Pro/Max, but unless you’re in the very highest percentile of time-critical work, performance per watt and power consumption are a big deal at the desktop level as well.
I think relating climate change and CPU power consumption is a bit of a stretch.

If you have any experience in data centers, you know PPW is THE metric and always has been. Apple didn't lose influence; Intel couldn't deliver. They got complacent while AMD was unable to compete. If you have been following lithography and semiconductors for as long as I have, you knew this day was always coming. Apple didn't get into chip design with the aim of only producing chips for phones, and they didn't invest massively in TSMC just to produce phone chips either. Intel's repeated failings undoubtedly accelerated this timeline, but it was inevitable.

Every major cloud computing provider is already investing heavily in custom ARM64 chip development (AWS with Graviton, for example), as PPW is absolutely critical and x86 will never be able to compete with ARM64 while carrying 50 years of baggage.

Whilst I'm firmly a believer in ARM64, don't count out x86; it's here to stay for a good long time. We are just witnessing the steady shift to ARM64 and RISC-V, as efficiency matters in capitalism. AMD has been performing exceptionally well with Zen, and Zen 4 will likely be another huge success. Intel's failures aren't solely due to manufacturing issues with 10nm; a lot has to do with complacency, hubris, and failing to see major shifts in the industry.

I'd make a strong argument that Intel's latest chips are still far less powerful than the M1 family, as they cannot maintain that performance for any length of time in a laptop. They require excessive amounts of power and cooling to post peak benchmarks and then throttle in anything less than an ideal setup. x86 efficiency cores are one of the most stupid ideas I've ever seen and one I think Intel will walk back, as 1) they are not particularly efficient, 2) they take up a large amount of die space that could be used for performance cores that do the work, and 3) x86 OSes and software were never designed for efficiency cores, so taking advantage of them is extremely difficult. Apple controls the whole stack and its OS has been scheduling onto efficiency cores for a long time. Outside of a few x86 programs, nothing takes advantage of those e-cores.
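For what it's worth, on the Apple side the "taking advantage" mostly happens through quality-of-service hints that the scheduler maps onto P- vs E-cores, so ordinary apps get it almost for free. A minimal Swift sketch of that idea:

```swift
import Foundation

// On Apple platforms an app never picks a core directly; it tags work with a
// QoS class and the scheduler decides where it runs. Background/utility work
// tends to land on efficiency cores, user-interactive work on performance cores.
DispatchQueue.global(qos: .background).async {
    // e.g. indexing, syncing, housekeeping
    print("low-QoS work, likely on an E-core")
}

DispatchQueue.global(qos: .userInteractive).async {
    // e.g. work tied to UI responsiveness
    print("high-QoS work, likely on a P-core")
}

// Keep the process alive long enough for the async blocks to run.
Thread.sleep(forTimeInterval: 1)
```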
 

Stratus Fear

macrumors 6502a
Jan 21, 2008
696
433
Atlanta, GA
VP9 is light on resources relative to AV1, since a 15-year-old Xeon 5160 can software-decode 4K VP9. On M1 I see 13% CPU utilization for 4K VP9 and around 70% for 8K, so that's software decoding. A Ryzen 4650U can do 8K VP9 hardware decoding with 1 to 2% CPU utilization, so that's hardware decoding.
You can disable hardware decoding in Chrome and watch CPU usage/power consumption more than double on VP9 content on M1. You could also compare usage during playback of VP9 content in Safari vs. VLC (which doesn’t support HW-accelerated VP9 on Macs yet). You could also peruse Google; other people have verified this already and it is easily searchable. Chrome even reports acceleration available for VP9 in chrome://gpu.

Edit: The reason for the massive jump in CPU usage from 4K to 8K is that the hardware decoder (at least according to what Chrome reports) doesn't support resolutions larger than 4096x4096:
[Screenshot of Chrome's video acceleration info showing VP9 decode supported only up to 4096x4096]


In this example on my M1 Max, Chrome's rendering process jumps from just under 20% of a single core at 4K to between 600 and 650% in Activity Monitor when switched to 8K. You're talking about a 30x jump in CPU usage for a 4x increase in pixel count. You wouldn't see that pattern if there were no hardware-accelerated VP9 decode at all.
 
Last edited:

Stratus Fear

macrumors 6502a
Jan 21, 2008
696
433
Atlanta, GA
You are playing a semantics game.

Yes, both chips run at the same temperature, but one throttles significantly. Why? Cooling is part of it, but so is the efficiency and architecture of the chip itself. If the chip heats up like the sun as soon as it hits peak frequency, it will constantly ramp up and down to stay within its temperature limit because of all the waste heat it generates. This means that although both chips essentially run at the same temperature, only one can sustain peak frequencies under load over time without ramping up and down and blasting the fans to the max.

So sure, with a liquid cooling solution and a desktop, then maybe, but there is still a possibility that the chip will only be able to maintain peak frequency in bursts because it simply can't stay at its operating temperature under sustained load, period!
I wasn’t disagreeing with you, quite the opposite actually!
 

lepidotós

macrumors 6502a
Aug 29, 2021
677
750
Marinette, Arizona
I believe Dave Garr and the Licensees put it best: "The heat from Pentium warms a mid-size town!".​
I’m running YouTube @ 4K/60 on a 7th gen Pentium. That’s not a very demanding task for any modern processor thanks to the hardware acceleration built into web browsers.
I'm able to watch YouTube on my PowerPC 7447a just fine, and there's even a YouTube decoder for 68k Amigas. It's really not the most demanding thing you can do on a computer.
Until they remove the ME and patch all those giant holes and promise to become a competent company, I have no reason to ever move away from PowerPC. Sometime this year I hope to build myself a POWER9 minitower with the Blackbird motherboard, but for now my Macs will do.​
 
Last edited:

progx

macrumors 6502a
Oct 3, 2003
831
968
Pennsylvania
When Qualcomm delivers a PC processor similar to the M-series, then Intel may get into some serious trouble.

Which is looking to be sooner rather than later. AMD is the bird on the wire currently, but Qualcomm will have to be the starting point for Windows 11 on Arm. Samsung is gearing up an ARM SoC with AMD graphics to take on the M1. HP has already released Qualcomm ARM laptops with Windows 11.
 
  • Like
Reactions: lepidotós

fourthtunz

macrumors 68000
Jul 23, 2002
1,735
1,210
Maine
No not really. Just tired of all the Intel comparisons and pissing contests by trolls.

Just thought I would have some fun and see what new arguments they will make about why Intel is superior to everything and now we are all stuck with crappy AS!🤪😉😎
And is this processor even in a laptop yet? Kind of crazy comparing a laptop that's been out for a couple of months to a processor that's not even in a computer yet.
 
  • Like
Reactions: Technerd108

DHagan4755

macrumors 68020
Jul 18, 2002
2,252
6,125
Massachusetts
Knowing what we know now, what are the chances the new 14"/16" MacBook Pro design was actually engineered with Alder Lake CPUs in mind, or just in case Apple silicon didn't work out as planned?
 

pshufd

macrumors G4
Oct 24, 2013
10,133
14,563
New Hampshire
And is this processor even in a laptop yet? Kind of crazy comparing a laptop that's been out for a couple of months to a processor that's not even in a computer yet.

There were some benchmarks from an actual laptop that I saw in an article a few days ago. I'm pretty sure that you can't buy it yet.
 
  • Like
Reactions: fourthtunz

Shirasaki

macrumors P6
May 16, 2015
16,249
11,745
Cheering theoretical numbers against real products? Guys, this is becoming more and more pathetic as time goes on. I believe that if the product is genuinely impressive, which Intel can totally deliver, people will cheer promptly. It's one thing to be sad watching a company just die down without fighting back despite having all the required resources; it is another when people defend every single move of said company without thinking twice. Are you defenders being paid or something?

If Intel actually releases this product and the performance numbers are as promising as claimed, good. I doubt anyone would hate an x86 PC that is light and has long battery life with good performance, or a chip that is performant and doesn't draw the power equivalent of a freaking microwave.

Rant over.
 

Technerd108

macrumors 68040
Original poster
Oct 24, 2021
3,051
4,302
Intel 7 is just a rebranding of their 10nm Enhanced SuperFin process, which arguably is similar in transistor density to TSMC's N7. So I would indeed say it is a true 7nm-class node - the "7" is just a naming convention. It's actually a bit denser than TSMC's N7.

The problem is it's just not anywhere near as efficient, even if it has good transistor density. The yields are also not great.

The other issue Intel has is that its next node (Intel 4), which is comparable to TSMC's N5, is nowhere near ready, yet TSMC's N5 is already very mature, with N5P chips (like the M1 Pro/Max and A15) shipping and N4 around the corner. Further, TSMC will be bringing out the next node, N3, later this year.

So by the end of 2022 you will have Intel struggling to compete with TSMC's N7 process whilst TSMC is already producing chips two nodes ahead.

Then you see Intel negotiating with TSMC to produce Intel chips, and suddenly things make a lot more sense. I would never count Intel out, but they are not the chip-making juggernaut they once were. TSMC is about three times the size of Intel and is pouring staggering sums of money into R&D as well as new fabs to stay in front. I think the best Intel can hope for is to stay a generation behind TSMC.

The wild card at the moment is Samsung. If their 3nm GAA process can deliver it could be a major upset.
Learned something! Thank you. This thread was not serious, but I do like it when a light-hearted thread can actually tease out real information about process nodes that not everyone (myself included) knows.
I agree with you on all points!

I am actually a fan of Intel, as they are one of the only fabs in the US. I do hate their business practices, especially over the past few years, but ultimately I want them to succeed.

I like AMD, but over the years they have burned me with bad GPU software and drivers, from the Turion 64 days, when their chips were fast but sucked power and heated way up (sound familiar?), to the Ryzen 3000 series, which had horrible GPU drivers. The 4000 and 5000 series seem better, but they are still not good. Intel also just has a lot better software support, especially for Linux. So I would really like to see a 5nm-class or better Intel CPU (since you are saying they are at 7nm, even though Intel said 10nm on Tiger Lake and I don't see a big density or IPC difference, but I honestly don't know for sure), as I think a lot of the thermal and wattage issues might then start to get significantly better.
 
  • Like
Reactions: 827538

lepidotós

macrumors 6502a
Aug 29, 2021
677
750
Marinette, Arizona
I doubt anyone would hate an x86 PC that is light and has long battery life with good performance, or a chip that is performant and doesn't draw the power equivalent of a freaking microwave.
That's where you're wrong; I categorically hate all x86 computers. It could come with a free money printer, have a negative power draw, singlehandedly reverse climate change, and make me a sandwich, and I still wouldn't buy it.
 

Shirasaki

macrumors P6
May 16, 2015
16,249
11,745
That's where you're wrong; I categorically hate all x86 computers. It could come with a free money printer, have a negative power draw, singlehandedly reverse climate change, and make me a sandwich, and I still wouldn't buy it.
Alright, I get it. While your type might not be a minority, it doesn’t really matter much in the grand scheme of things. I don’t “like” either x86 or ARM; I just want to get the job done, ideally faster, and the M1 hasn’t reached that goal for me yet. The rest of “which one is superior” is decided by the silent majority.
 

macintoshmac

Suspended
May 13, 2010
6,089
6,994
No, just as I have said, I'm kind of tired of threads like "Intel Alder Lake beats M1 Pro" and the very intelligent analysis that follows, which is that the M1 sucks.

So the solution is childish mocking and name-calling and creating sensational but senseless threads?

If only you had presented the case without this fervour, citing facts and your thoughts simply, it would have made for a much more reasonable discussion.
 