
russell_314

macrumors 604
Feb 10, 2019
6,664
10,264
USA
It surely does, but there is engineering and there is engineering. Apple's outrageous performance per watt is not just the product of them using more advanced process nodes; it's also the design of the CPU itself.
Is there no danger of some of that being copied, or sort of copied just enough that it's not an exact copy? I'm asking because I don't know anything about CPU design. I just know when they work really well.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Is there no danger of some of that being copied, or sort of copied just enough that it's not an exact copy? I'm asking because I don't know anything about CPU design. I just know when they work really well.

Imagine that there is a very talented woodworker who uses innovative precision tools to make beautiful furniture. Others can purchase the same tools and they can study the furniture, but that alone won't allow them to replicate the technique. These designs are products of multiple years of hard work by hundreds of talented people; it's not something you can copy just like that. All AMD has access to is the tools that allow them to make finer circuits, but they will have to make their own designs.
 

russell_314

macrumors 604
Feb 10, 2019
6,664
10,264
USA
Imagine that there is a very talented woodworker who uses innovative precision tools to make beautiful furniture. Others can purchase the same tools and they can study the furniture, but that alone won't allow them to replicate the technique. These designs are products of multiple years of hard work by hundreds of talented people; it's not something you can copy just like that. All AMD has access to is the tools that allow them to make finer circuits, but they will have to make their own designs.
I’m glad to see there is competition because things were getting really stale with Intel at the top
 
  • Like
Reactions: MacCheetah3

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
Apple's strategy is to maximize efficiency. Performance per watt has always been the metric they view as most important.

Intel's strategy is to maximize pure performance. Benchmarks have always been the metric they view as most important.

AMD's strategy in the last five years or so has been somewhere in between the two.



Personally, I'll never buy an Intel device ever again unless they really start focusing on power consumption.
Fully agree. I would prefer an M1 Max that gets 10% less performance than a hot Intel. Not hearing fans is a great thing!
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
This has been on my mind for a while. How does Apple follow up the success of the M1, or does it even have to? I think their main bread and butter for Macs is the laptop sector, and this falls right into Apple's strengths with its ARM processors. The issue is they could be viewed as falling behind if they roll out an M2 Pro/Max/Ultra that can't compete with the latest offerings from Intel and AMD.

Performance-wise we're seeing significant gains from Intel and AMD. I could spend hours and hours going through each benchmark, but Cinebench R23 is one that is generally accepted. The Intel 13th-gen numbers fall into the unverified category, so take them with a grain of salt. Also, these numbers are not written in stone. I can run Cinebench 10 times and get slightly different results, so keep that in mind.

View attachment 2083674
Good that Intel and AMD are competitive. What is the power draw and, more importantly, what are the noise levels of the Intel/AMD machines?
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
Imagine that there is a very talented woodworker who uses innovative precision tools to make beautiful furniture. Others can purchase the same tools and they can study the furniture, but that alone won't allow them to replicate the technique. These designs are products of multiple years of hard work by hundreds of talented people; it's not something you can copy just like that. All AMD has access to is the tools that allow them to make finer circuits, but they will have to make their own designs.

IIRC, there were also some fairly novel tricks that Apple used to massively reduce power consumption from their ridiculously wide front end. That was one of x86's biggest pain points, partly because instruction ROBs are expensive and partly because x86 instructions are all inconveniently different lengths (which is a problem for the decoders, which won't know where the start of the next instruction is until the previous one is decoded). Intel addresses this by just straight up throwing a ton of decoders in parallel and assuming that each byte is the start of a new instruction. Once they are all decoded, it keeps the valid ones and throws out the bad ones. It's not a bad solution, really.
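To make that concrete, here's a toy sketch of the brute-force approach: decode at every byte offset, then keep only the decodes that land on real instruction boundaries. The opcode table and lengths are invented for illustration; real decoders do this in hardware on a fetch window, not in Python:

```python
# Hypothetical instruction table: opcode byte -> total length in bytes.
LENGTHS = {0x01: 1, 0x02: 2, 0x03: 3, 0x04: 4}

def decode_at(window: bytes, offset: int):
    """Try to decode an instruction at `offset`; None if not a valid opcode."""
    op = window[offset]
    if op in LENGTHS and offset + LENGTHS[op] <= len(window):
        return (offset, LENGTHS[op])
    return None

def speculative_decode(window: bytes, start: int = 0):
    # Step 1: optimistically decode at *every* offset ("in parallel").
    candidates = [decode_at(window, i) for i in range(len(window))]
    # Step 2: walk the real boundary chain from the entry point, keeping
    # the valid decodes and discarding everything else.
    valid, pc = [], start
    while pc < len(window) and candidates[pc] is not None:
        valid.append(candidates[pc])
        pc += candidates[pc][1]
    return valid

print(speculative_decode(bytes([0x03, 0x9A, 0x9B, 0x01, 0x02, 0x7F])))
# -> [(0, 3), (3, 1), (4, 2)]
```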

Apparently Apple was able to make their ROB queues and front-end schedulers significantly more efficient by splitting them up into several smaller queues for the back end rather than creating one gigantic queue. They patented a lot of the different optimizations they made; I'm not sure if this was one of them. This article has a lot of insight into how Apple managed to pull off the M1, and it's definitely been an incredibly interesting read (albeit hard to understand at times, as I'm not a hardware engineer; there will be folks who understand it far better than I do).
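I'm guessing at the details here, so treat this as a schematic of the split-queue idea only, with invented cluster names and queue sizes: micro-ops get dispatched into small per-cluster issue queues instead of one giant unified scheduler, so each queue only has to scan a handful of entries per cycle:

```python
from collections import deque

# Invented clusters and sizes: one small issue queue per execution group
# instead of a single monolithic scheduler window.
CLUSTERS = {"alu": deque(maxlen=24), "mem": deque(maxlen=16), "fp": deque(maxlen=16)}

def dispatch(uop: dict) -> bool:
    """Route a micro-op to its cluster's queue; return False (stall) if full."""
    q = CLUSTERS[uop["kind"]]
    if len(q) == q.maxlen:
        return False
    q.append(uop)
    return True

for i, kind in enumerate(["alu", "mem", "alu", "fp"]):
    dispatch({"id": i, "kind": kind})
print({name: len(q) for name, q in CLUSTERS.items()})
# {'alu': 2, 'mem': 1, 'fp': 1} -- each picker scans only its own few slots,
# which is the sort of thing that saves power vs. one huge unified queue.
```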
 

maflynn

macrumors Haswell
Original poster
May 3, 2009
73,682
43,740
I’m glad to see there is competition because things were getting really stale with Intel at the top
Intel hasn't been at the top for a while; AMD overtook them a few years ago. Intel has made some progress and its latest-generation CPUs are pretty impressive, but I think AMD is still out-innovating them.
 
  • Like
Reactions: russell_314

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
I take a different view on performance. My 12th-gen Intel desktop with a 3080 Ti is faster on benchmarks and on paper than my M1 Mac mini. Yet when I do video editing, my Mac mini blows it out of the water thanks to everything else the M1 SoC has. Someone mentioned it above, and that's similar to what I do: "does this computer perform my work faster?" I don't take the "ooohhhh, high numbers" approach.
 

ArkSingularity

macrumors 6502a
Mar 5, 2022
928
1,130
Isn’t Nuvia working on an ARM chip that is server grade?
There are several companies working on ARM server chips, and my guess is that they will be very, very popular within a decade. They are already fairly common on AWS as well (though certainly not a majority of the market).

ARM seems to have a lot of advantages in the server space, largely because servers don't really have to worry about maximizing single-core performance at all costs; multicore performance matters much more. ARM really shines here: it's easy to throw a ton of cores together with relatively low power consumption when you don't need to worry about cranking the cores up to 5 GHz.
 

maflynn

macrumors Haswell
Original poster
May 3, 2009
73,682
43,740
Good that Intel and AMD are competitive. What is the power draw and, more importantly, what are the noise levels of the Intel/AMD machines?
Nothing works in a vacuum; the CPU is one piece of the puzzle. But it seems that power supplies in the 1,000-watt range (maybe 800) are becoming the norm, at least with the power-hungry CPUs and high-end GPUs like the RTX 40 series from Nvidia.

As for noise, my desktop PC is whisper quiet. I have an i7-11700K with an RTX 2060, using an air cooler with Noctua fans.

Cooling the new CPUs is getting harder, so more fans may be needed, and of course that means more noise.

Qualcomm owns Nuvia, and ARM is suing them.
 

Ethosik

Contributor
Oct 21, 2009
8,142
7,120
As for noise, my desktop PC is whisper quiet. I have an i7-11700K with an RTX 2060, using an air cooler with Noctua fans.
My desktop is quiet too. I have a 12th-gen i9 and a 3080 Ti with Noctua fans as well. In fact, I prefer its fan noise to my Mac Studio, which is quieter overall but has a severely irritating high pitch. Good thing I have my headphones on 95% of the time, because the performance of the Mac Studio is amazing!
 
  • Like
Reactions: maflynn

mikethemartian

macrumors 65816
Jan 5, 2017
1,483
2,239
Melbourne, FL
I take a different view on performance. My 12th-gen Intel desktop with a 3080 Ti is faster on benchmarks and on paper than my M1 Mac mini. Yet when I do video editing, my Mac mini blows it out of the water thanks to everything else the M1 SoC has. Someone mentioned it above, and that's similar to what I do: "does this computer perform my work faster?" I don't take the "ooohhhh, high numbers" approach.
That's because the logic used in video editing is implemented directly in hardware, as it would be on an ASIC, and does not have to go through the general CPU. So if someone is mainly doing video work, it is a major advantage. But if you are doing computationally intensive work that can't use that hardwired logic, it doesn't provide a benefit.
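One easy way to see this difference yourself (an illustration, not something from the post: it assumes an ffmpeg build with VideoToolbox support on a Mac, and input.mov is a placeholder clip):

```python
import subprocess

# Encode the same clip twice: hevc_videotoolbox uses the fixed-function
# hardware encoder, libx265 does the same job on the CPU cores.
SRC = "input.mov"  # placeholder clip

for codec, out in [("hevc_videotoolbox", "hw.mp4"), ("libx265", "sw.mp4")]:
    subprocess.run(
        ["ffmpeg", "-y", "-i", SRC, "-c:v", codec, "-b:v", "10M", out],
        check=True,
    )
# Comparing wall-clock time and CPU usage between the two runs shows the
# ASIC-vs-CPU gap described above.
```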
 
  • Like
Reactions: Ethosik

mikethemartian

macrumors 65816
Jan 5, 2017
1,483
2,239
Melbourne, FL
Nothing works in a vacuum; the CPU is one piece of the puzzle. But it seems that power supplies in the 1,000-watt range (maybe 800) are becoming the norm, at least with the power-hungry CPUs and high-end GPUs like the RTX 40 series from Nvidia.

As for noise, my desktop PC is whisper quiet. I have an i7-11700K with an RTX 2060, using an air cooler with Noctua fans.

Cooling the new CPUs is getting harder, so more fans may be needed, and of course that means more noise.


Qualcomm owns Nuvia, and ARM is suing them.
Yeah, I heard there is some legal dispute over whether their ARM license is transferable with the acquisition.
 
  • Like
Reactions: ArkSingularity

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
The main problem with AMD and Intel CPUs is their default settings. You can improve efficiency considerably by tweaking them a bit.
 

maflynn

macrumors Haswell
Original poster
May 3, 2009
73,682
43,740
In fact, I prefer its fan noise to my Mac Studio, which is quieter overall but has a severely irritating high pitch
I don't hear a thing out of my MBP. I have it by my side most days and it's been excellent.
 

maflynn

macrumors Haswell
Original poster
May 3, 2009
73,682
43,740
The main problem with AMD and Intel CPUs is their default settings. You can improve efficiency considerably by tweaking them a bit.
It's also the motherboard makers. Intel advises them on settings, but they're not required to adhere to them, so some motherboard makers may change how long the turbo lasts, heat thresholds, etc. The same CPU on different motherboards could actually be markedly slower if the motherboard maker is very conservative.

I agree, however, that using the BIOS and/or XTU (in Intel's case) will help zero in on an optimal setting. I used those settings to squeeze out the most performance but balance the temps so it runs cooler.
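For anyone on Linux, where XTU isn't available, a rough equivalent is to read and cap the package power limits through the powercap interface. This is just a sketch, assuming the intel_rapl driver is loaded; the 65 W value is an example, and firmware can still clamp or override it:

```python
from pathlib import Path

# intel_rapl powercap interface: constraint_0 is the long-term limit
# (PL1), constraint_1 the short-term turbo limit (PL2), in microwatts.
PKG = Path("/sys/class/powercap/intel-rapl:0")

def read_watts(constraint: int) -> float:
    return int((PKG / f"constraint_{constraint}_power_limit_uw").read_text()) / 1e6

def set_watts(constraint: int, watts: float) -> None:
    # Needs root; the motherboard firmware may still override this.
    (PKG / f"constraint_{constraint}_power_limit_uw").write_text(str(int(watts * 1e6)))

print(f"PL1 = {read_watts(0):.0f} W, PL2 = {read_watts(1):.0f} W")
set_watts(0, 65.0)  # example: cap sustained package power at 65 W
```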
 

pshufd

macrumors G4
Oct 24, 2013
10,146
14,573
New Hampshire
That’s because the logic used in video editing is implemented directly in hardware as it would be on an ASIC and does not have to go through the general CPU. So if someone is mainly doing video work it is a major advantage. But if you are doing computationally intensive work that can’t use that hardwired logic it doesn’t provide a benefit in those instances.

One area where Apple has clearly provided benefits is the YouTube-creator category. I'm rendering a video right now on my M1 mini while running my other production work, and the CPU is still 59% idle. 4K or even 2K work would get the fans noisy on my Intel Macs, and they would slow down foreground processes too. I'd say that Google is a big beneficiary.
 

ChromeCloud

macrumors 6502
Jun 21, 2009
359
840
Italy
Adding to Xiao_Xi's insightful comment: leaked Geekbench scores put the i9-13900K in the same league as the M1 Ultra (around 24k multicore).

Given the likely 250+ watt TDP of the i9-13900K, I'd say Apple Silicon is still way ahead of the competition.

That said, Intel is on a trajectory to close the performance-per-watt gap in the next 2-3 generations if they keep improving at this pace (and if Apple Silicon performance keeps improving around 15-20% every year).
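As a back-of-envelope check (a sketch only: the ~24k multicore score and 250+ W TDP are the figures above; the M1 Ultra package power is an assumed ballpark, not a measured value):

```python
# Perf-per-watt from the numbers in this post, plus one assumption.
i9_score, i9_watts = 24_000, 250    # leaked score and TDP cited above
m1u_score, m1u_watts = 24_000, 90   # assumption: ~90 W all-core package power

for name, score, watts in [("i9-13900K", i9_score, i9_watts),
                           ("M1 Ultra", m1u_score, m1u_watts)]:
    print(f"{name}: {score / watts:.0f} points per watt")
# Similar scores at very different power draw: that's the gap described above.
```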
 

mi7chy

macrumors G4
Oct 24, 2014
10,622
11,294
What people look at is how fast it does things and how long the battery lasts. That is what matters for more professional people.

If Apple can keep this up, it's not going to matter what Intel can do with more power, because they won't be able to achieve the battery life, and if you unplug the laptop, it slows down.

Even non-professionals should know the basic equation for battery life:

battery capacity in watt-hours / workload in watts = battery life in hours (not accounting for the display, etc.)

Even an M1 MacBook Pro has consequences when running heavier or full workloads unplugged, which leads to about 1.5 hours or less at full CPU+GPU load. Professionals tend to run plugged in for full performance rather than short battery life. Stable Diffusion works the integrated GPU like gaming does.
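Plugging illustrative numbers into that equation (a minimal sketch; the ~100 Wh capacity is roughly where the big MacBook Pro batteries sit, and the load figures are assumptions, not measurements):

```python
def battery_life_hours(capacity_wh: float, load_watts: float) -> float:
    """The equation above: Wh capacity / W draw = hours (display etc. ignored)."""
    return capacity_wh / load_watts

# Illustrative numbers only: ~100 Wh battery (16" MacBook Pro class),
# with assumed package draws for heavy and light use.
print(battery_life_hours(100, 65))  # ~1.5 h at a heavy ~65 W CPU+GPU load
print(battery_life_hours(100, 8))   # ~12.5 h at a light ~8 W load
```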


Forward to 5:20
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
That's because the logic used in video editing is implemented directly in hardware, as it would be on an ASIC, and does not have to go through the general CPU. So if someone is mainly doing video work, it is a major advantage. But if you are doing computationally intensive work that can't use that hardwired logic, it doesn't provide a benefit.

Video also massively benefits from unified memory. Video data is big and PCIe is a bottleneck. Apple doesn’t have that problem.
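Some rough arithmetic behind that (all figures are ballpark assumptions for illustration, not numbers from this thread):

```python
# Bandwidth of one uncompressed 4K60 stream vs. a PCIe link.
width, height, fps = 3840, 2160, 60
bytes_per_pixel = 2.5                      # ~10-bit 4:2:2
stream_gbs = width * height * fps * bytes_per_pixel / 1e9
print(f"Uncompressed 4K60 stream: {stream_gbs:.1f} GB/s")   # ~1.2 GB/s

pcie_x16_gen4_gbs = 26.0                   # approximate usable bandwidth
print(f"That is {stream_gbs / pcie_x16_gen4_gbs:.0%} of a x16 Gen4 link")
# A few source streams plus intermediates, copied to the GPU and back,
# eat that up quickly; unified memory skips the copies entirely.
```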
 
  • Like
Reactions: Argoduck and ahurst

pshufd

macrumors G4
Oct 24, 2013
10,146
14,573
New Hampshire
Even non-professionals should know the basic equation for battery life:

battery capacity in watt-hours / workload in watts = battery life in hours (not accounting for the display, etc.)

Even an M1 MacBook Pro has consequences when running heavier or full workloads unplugged, which leads to about 1.5 hours or less at full CPU+GPU load. Professionals tend to run plugged in for full performance rather than short battery life. Stable Diffusion works the integrated GPU like gaming does.


Forward to 5:20

What percentage of Apple's base runs Stable Diffusion? Could they add custom silicon so that it runs more efficiently?

I like what Apple has done in making specific workloads run more efficiently. If I needed to run something that consumed more power, I'd run it on my desktops. I don't personally have any applications that would stop me from running a full day on my 2021 MacBook Pro.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Professionals tend to run plugged in for full performance rather than short battery life.

Not all professionals run sustained workloads 100% of the time. Most professional work is bursty, with the machine essentially idle the majority of the time.

BTW, if you care about Stable Diffusion (although I really don't understand what professional workload relies on running Stable Diffusion 24/7), a Mac is probably the worst computer you can buy.
 
  • Like
Reactions: iPadified

mi7chy

macrumors G4
Oct 24, 2014
10,622
11,294
What percentage of Apple's base runs Stable Diffusion? Could they add custom silicon so that it runs more efficiently?

I use my M1 MacBook as a Chromebook and play some PC retro game ports like Duke Nukem 3D, Quake maps, etc. It's too slow for anything else, like GPU compute, machine learning, AAA gaming, etc. I can't speak for how other people use it.

Perhaps the future of ultra-efficient computing is compiling software to FPGAs, which nets about 50x efficiency or more. Xilinx, whose FPGA is used in this demo, was acquired by AMD and its FPGAs may be integrated into future CPUs, while Intel acquired Altera.

https://www.cnx-software.com/2022/09/28/3d-game-fpga-50x-more-efficient-x86-hardware/

https://raw.githubusercontent.com/JulianKemmerer/PipelineC-Graphics/main/doc/Sphery-vs-Shapes.pdf

 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
I use my M1 MacBook as a Chromebook and play some PC retro game ports like Duke Nukem 3D, Quake maps, etc. It's too slow for anything else, like GPU compute, machine learning, AAA gaming, etc. I can't speak for how other people use it.

Perhaps the future of ultra-efficient computing is compiling software to FPGAs, which nets about 50x efficiency or more. Xilinx, whose FPGA is used in this demo, was acquired by AMD and its FPGAs may be integrated into future CPUs, while Intel acquired Altera.

https://www.cnx-software.com/2022/09/28/3d-game-fpga-50x-more-efficient-x86-hardware/

https://raw.githubusercontent.com/JulianKemmerer/PipelineC-Graphics/main/doc/Sphery-vs-Shapes.pdf
Hold up, we've seen multiple threads on this site about dramatic machine-learning speed-ups. You can't just throw out a whole field as not being possible; you'd have to be specific about which particular ML/AI models aren't well suited to Apple Silicon.
 
  • Like
Reactions: ahurst