The GPU with the higher performance/W ratio will incur a lower power bill.
Imagine two GPUs, X and Y. Y has twice the power requirement of X, but three times the performance. [So Y has a higher perf:W ratio than X.] Thus while Y consumes twice as much power, it only has to run for 1/3 the time to complete the task. Hence Y has a lower power bill, in spite of having twice the power requirement.

Of course, the real world is more complex, since tasks aren't typically GPU-only, so you're also running the CPU, etc., but this is correct to first order.
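A quick sketch of that arithmetic (the 300 W draw for X and the 3-hour task are made-up numbers, just to make the ratio concrete):

```python
# Hypothetical GPUs: Y draws twice the power of X but is three times as fast.
# The 300 W and 3 h figures are assumptions, only there to make the ratio concrete.
power_x_w = 300
power_y_w = 2 * power_x_w          # 600 W

task_hours_x = 3.0                 # assumed time for the task on X
task_hours_y = task_hours_x / 3    # Y finishes in 1/3 the time

energy_x_wh = power_x_w * task_hours_x   # 300 W * 3 h = 900 Wh
energy_y_wh = power_y_w * task_hours_y   # 600 W * 1 h = 600 Wh

print(energy_x_wh, energy_y_wh)    # 900.0 600.0 -> Y is cheaper per task
```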
 
Imagine two GPUs, X and Y. Y has twice the power requirement of X, but three times the performance. [So Y has a higher perf:W ratio than X.] Thus while Y consumes twice as much power, it only has to run for 1/3 the time to complete the task. Hence Y has a lower power bill, in spite of having twice the power requirement.

Of course, the real world is more complex, since tasks aren't typically GPU-only, so you're also running the CPU, etc., but this is correct to first order.
Not true. You don’t get billed on the ratio. If my 3080 Ti system is pulling 800 watts from the wall, it’s 800 watts. Doesn’t matter how amazing the performance is that makes the ratio better. It’s consuming 800 watts. Compare that to my older setup with a 5700XT that only pulled 400 watts from the wall.

Therefore, my electric bill has since increased due to pulling 800 watts from the wall.

This is why I have shifted to playing games mostly on my Macs now. I reserve my 3080 Ti for Windows exclusives. Games like Factorio still cause the 3080 to draw significantly more power than playing them on my MacBook Pro, where the M1 Max has a lower ratio (not as good performance).
 
M1/M2 are "SoCs" which stands for "System on a Chip". Apple designs the entire chip, basically everything on it, which includes the GPU. They started putting PowerVR GPUs from Imagination Tech in their first A-series SoCs a dozen years ago and currently license Imagination IP, so their GPUs can be described as PowerVR-derivatives.
the build itself is done by TSMC...but the design and the whole SoC is designed by Apple itself
These are essentially correct. But just to be a bit more precise, it's only the main processor die that is fabricated by TSMC (based on a design by Apple). Granted, that's by far the most important part of the SoC, and is what distinguishes Apple Silicon; it's a single piece of silicon that contains the CPU, GPU, coprocessors (neural engine, etc.), memory controllers (but not the RAM itself), etc.

But the other components on the SoC, which are separate chips, come from a range of different suppliers.

E.g., the RAM on the M1 SoC was made by SK Hynix, and is entirely Hynix's* design, unless there have been some Apple-requested customizations. [*Other than the parts of the RAM whose design is standard, for which Hynix shouldn't get design credit.]
 
The rumors are insane at how much power the RTX 4090 is going to require. If left unchecked, there could come a time when desktop machines use two power supplies.
And a dedicated circuit, like major home appliances!

Or maybe even three-phase 😁....
 
If you use a machine that consumes 100W for one hour, you are charged 100W * 1h = 100Wh.

If you use a more powerful and efficient machine (3 times faster but consuming only 2 times more) for the same task, you are charged 2*100W * 1h/3 = 66.6Wh.

If you use a less powerful, but more efficient machine (2 times slower, but consuming 3 times less) for that task, you are charged 100W/3 * 2*1h = 66.6Wh.

You may have other criteria to choose from, but you pay less when you use the more efficient machine.
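The same arithmetic as a quick sketch, using the 100 W / 1 h baseline above:

```python
# Same task on three hypothetical machines; baseline is the 100 W / 1 h case above.
baseline_wh = 100 * 1.0                      # 100 Wh

faster_wh = (2 * 100) * (1.0 / 3)            # 3x faster, 2x the power: ~66.7 Wh
slower_wh = (100 / 3) * (2 * 1.0)            # 2x slower, 1/3 the power: ~66.7 Wh

print(baseline_wh, round(faster_wh, 1), round(slower_wh, 1))
# 100.0 66.7 66.7
```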
 
If you use a machine that consumes 100W for one hour, you are charged 100W * 1h = 100Wh.

If you use a more powerful and efficient machine (3 times faster but consuming only 2 times more) for the same task, you are charged 2*100W * 1h/3 = 66.6Wh.

If you use a less powerful, but more efficient machine (2 times slower, but consuming 3 times less) for that task, you are charged 100W/3 * 2*1h = 66.6Wh.

You may have other criteria to choose from, but you pay less when you use the more efficient machine.
Things are rarely 2 or 3 times faster. These are mostly targeted towards gamers too. So a 1 hour gaming session at 800 watts costs more than a 1 hour gaming session at 400 watts. Only way to make it faster is to have ME be faster. So I can end my game in 30 minutes instead.

On the other side of the fence, I went from a 5700 XT to a 3080 Ti. Wall measurements are roughly 400 and 800 watts. Video editing hasn't really sped up in a decade with H.264; I've pretty much experienced a 1:1 ratio between video duration and export time ever since my 2010 Mac Pro. I kept severely overspending to help with this, but no luck until the M1. So I'm now consuming 800 watts for the same duration. I sometimes have 8-hour videos, and those took roughly 7.75 hours to export; the 3080 didn't help speed that up, unfortunately. It wasn't until the M1 that things suddenly got so much better, and now it's MUCH faster and I'm happy with it!
 
If you use a more powerful and efficient machine (3 times faster but consuming only 2 times more) for the same task, you are charged 2*100W * 1h/3 = 66.6Wh.
I think you're making an assumption that once the task is over you stop working. I am making the assumption that I'm using my computer for 8 hours a day. So even if I finish task A much, much quicker, I also have tasks B, C, D, etc. So while individual tasks are completed faster, I'm still working 8 hours a day, and having a GPU that is using 600 watts and a processor consuming something like 250 watts is a lot more power than, say, my current setup, where I'm only drawing 1/4 of that power.

Now shifting gears, let's talk about gaming; my point (I believe) remains the same. I'll play a game for 2 hours, and it doesn't matter how many more FPS I'm getting with the RTX 3040, I'm playing by time, not performance.
 
Not true. You don’t get billed on the ratio. If my 3080 Ti system is pulling 800 watts from the wall, it’s 800 watts. Doesn’t matter how amazing the performance is that makes the ratio better. It’s consuming 800 watts. Compare that to my older setup with a 5700XT that only pulled 400 watts from the wall.

Therefore, my electric bill has since increased due to pulling 800 watts from the wall.

This is why I have shifted to playing games mostly on my Macs now. I reserve my 3080 Ti for Windows exclusives. Games like Factorio still cause the 3080 to draw significantly more power than playing them on my MacBook Pro, where the M1 Max has a lower ratio (not as good performance).
That's only if you're playing games, and are maxing out the GPU regardless. I'm talking about doing actual GPU compute work.

And as for games, I don't know how they work, but can't you set a framerate cap on them to reduce GPU load? And if you compare the more and less efficient GPU at the same framerate, i.e., for the same player experience, won't the more efficient one use less power?

E.g., suppose you've got a more-efficient 800W GPU that only needs 400W to reach the frame cap, and you've got a less-efficient 600W GPU that needs 500W to reach the frame cap. If so, the more efficient 800W GPU will, for the same playing experience, draw less power.
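Concretely, for a 2-hour session at the cap (the session length is an assumption; the wattages are the hypothetical ones above):

```python
# Frame-capped gaming session; wattages are the hypothetical ones from the example above.
session_hours = 2.0   # assumed session length

gpus = {
    "more-efficient 800 W card": 400,   # draw needed to hold the frame cap
    "less-efficient 600 W card": 500,
}

for name, draw_w in gpus.items():
    print(f"{name}: {draw_w * session_hours:.0f} Wh for the same capped experience")
# more-efficient 800 W card: 800 Wh
# less-efficient 600 W card: 1000 Wh
```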
 
Do you shut your PC off once its done with the task?
No, but once you're done with the task, the computer is idling, and that 800W GPU isn't consuming 800W of power; it's only consuming a tiny fraction of that. That's why I didn't bother including it in the calculation. If you're curious how idling comes into it, I leave this as an exercise for you: repeat the calculation I did with the computer turned on for 24 hours per day, and add in the watts while idling.
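For anyone who wants to set that exercise up, a minimal sketch (the idle wattage, load wattages, and task times below are all assumed numbers; plug in your own):

```python
IDLE_W = 30   # assumed idle draw of the whole system, watts

def daily_energy_wh(load_w, task_hours):
    """One 24 h day: the task at full load, the rest of the day idling."""
    return load_w * task_hours + IDLE_W * (24 - task_hours)

# Less efficient GPU: 400 W for 3 h; more efficient GPU: 800 W but done in 1 h.
print(daily_energy_wh(400, 3.0))   # 1830.0 Wh
print(daily_energy_wh(800, 1.0))   # 1490.0 Wh -> still less per day
```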
 
That's only if you're playing games, and are maxing out the GPU regardless. I'm talking about doing actual GPU compute work.

And as for games, I don't know how they work, but can't you set a framerate cap on them to reduce GPU load? And if you compare the more and less efficient GPU at the same framerate, i.e., for the same player experience, won't the more efficient one use less power?

E.g., suppose you've got a more-efficient 800W GPU that only needs 400W to reach the frame cap, and you've got a less-efficient 600W GPU that needs 500W to reach the frame cap. If so, the more efficient 800W GPU will, for the same playing experience, draw less power.
Games are tricky, as a lot of them aren't well optimized. I have a few games that put both the 5700 XT and the 3080 Ti at 100% with the same settings and FPS due to engine limitations. Also, with more and more monitors supporting 120Hz, it essentially comes out the same: the older GPU looks like X but gets 60FPS, and the newer GPU still looks like X but gets 120FPS.

Also, games like Factorio and others that can run very well on Intel GPUs (minus mega bases, if you ever played this game) do surprisingly draw a fair amount from my 3080 Ti. Not 100%, but enough that I mostly play it on my MacBook Pro now.
 
No, but once you're done with the task, the computer is idling, and that 800W GPU isn't consuming 800W of power
I can only speak for what I do. When I work, I don't stop when a task is completed, so my PC goes from one activity to another. I know we can only talk generalities here, but if you need an RTX 4090 for work-related stuff, then odds are you will be using that machine non-stop. I don't think you'll spend a lot of money on one of those to let the PC idle for long periods of time.

I'll probably go out on a limb and say most of the RTX 4090 sales will be coming from gamers, and that's where the 4090 will not save you on power consumption.
 
For me, I'd rather not be spending a lot of money on electricity or incurring more heat. It's a personal preference, but I'm not inclined to have a desktop that is using kilowatts' worth of power.
I can only speak for what I do. When I work, I don't stop when a task is completed, so my PC goes from one activity to another.
OK you're saying, contrary to my example, that your machine isn't idling much at all, and in fact it's going full-bore most of the time while you're working. Which means if you've got a 500 W machine, it's using 500 W pretty much all the time, and the same for a 750 W machine. [E.g., you send a rendering task to the machine, wait for it to complete, and then immediately send it a new rendering task, and wait for it to complete, etc.]

Given that, and given the workflow you describe, your desire to avoid higher max power consumption (even if the higher-powered machine is more efficient) doesn't make logical sense to me. Suppose the 750 W machine is twice as fast as the 500 W machine. Let's assume you work 40 hours/week.

With the slow 500 W machine, you use 20 kWh in one week, and get 10 tasks done. With the fast 750 W machine, you use 30 kWh in one week, and get 20 tasks done.

Are you really saying you would rather get half as much work done each week just so you can reduce your power consumption by 1/3?

And don't you need to get that work done eventually? If so, you just end up working twice as long with the 500 W machine which means you end up using more power with it anyways (per set of tasks done).


Or suppose the above doesn't describe your workflow, and instead, while you're working continuously, the machine itself isn't going full-bore all the time, such that the average power use of either machine is well under its max. In that case, you can't compare the energy use of the 500 W and 750 W machines using those figures, since those are their max power draws! Instead, you need to compare their low-load power draws for the same task.

If the more efficient machine is also more efficient under low load, it will use less power for the same computational load, even if its peak power use is higher. And the individual high-load jobs you send to the more powerful but more efficient machine will be done faster, and with less total energy use, than is the case with the less powerful, less efficient machine.
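To put numbers on the first scenario above (both machines flat out for a 40-hour week, the 750 W machine assumed twice as fast, as described):

```python
HOURS_PER_WEEK = 40   # both machines assumed to run flat out all week

machines = {
    "500 W (slower)":    {"power_w": 500, "tasks_per_week": 10},
    "750 W (2x faster)": {"power_w": 750, "tasks_per_week": 20},
}

for name, m in machines.items():
    kwh = m["power_w"] * HOURS_PER_WEEK / 1000
    wh_per_task = 1000 * kwh / m["tasks_per_week"]
    print(f"{name}: {kwh:.0f} kWh/week, {wh_per_task:.0f} Wh per task")
# 500 W (slower):    20 kWh/week, 2000 Wh per task
# 750 W (2x faster): 30 kWh/week, 1500 Wh per task
```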
 
Why does it matter if it performs accordingly? The performance/watt is going to be much better than the RTX 3090. The new AMD graphics cards are also rumoured to have increased power requirements.

I mean, I'm thrilled that my machine (Mac Studio) only sips power comparatively to those desktop monsters but some people need all the performance money can buy, especially on a desktop machine.

I agree. Performance/watt only matters on portable devices.

Which is why I would never buy Apple Silicon for a desktop. Especially since it is not upgradable.
 
The GPU with the higher performance/W ratio will incur a lower power bill.

Not necessarily. As others have pointed out you are neglecting the baseline power usage. Your GPU doesn’t do intensive useful work all the time. Besides, efficiency will depend on the task. Nvidia GPUs are currently more efficient than Apple for many ML tasks for example, but you probably won’t be running an ML workload every second.

Apple uses PowerVR GPUs from Imagination Technologies that are custom made by Apple.

To be fair, there is not much left of the original PowerVR in current-day Apple Silicon. Some parts of the fixed-function hardware and setup, but the compute cores, dispatch, and memory are all fully custom Apple.


Which is why I would never buy Apple Silicon for a desktop. Especially since it is not upgradable.

Unless Apple decides to roll out desktop-optimized silicon one day.
 
Not necessarily. As others have pointed out you are neglecting the baseline power usage. Your GPU doesn’t do intensive useful work all the time.
You can address that. From ecoenergygeek.com:

[Attached charts from ecoenergygeek.com: measured power draw at load and at idle for the RTX 2080 and RTX 3080.]


Suppose you do one GPU-intensive and GPU-limited task per day that takes 3 hours on the 2080: 3 h x 212 W = 636 Wh. Add 5 hours doing other tasks that don't significantly tax the GPU: 5 h x 16 W = 80 Wh. This gives a total of 716 Wh/day for the 2080.

FP32 TFLOPS for the 2080 and 3080 are 13.45 and 34.1, respectively. Using this as a general measure of GPU compute, the 3080 should finish that 3 hr task in 3 h x 13.45/34.1 = 1.2 h.

This gives 1.2 h @ 340 W + 6.8 h @ 27 W ≈ 590 Wh/day for the 3080.

Yes, this doesn't account for CPU use, or the possibility that your run times won't differ as much as shown above once one accounts for the CPU, but that can be built in as well.

The bottom line is that if you actually do work that can make use of a more powerful but more efficient GPU, and you actually do make use of it, you can have a net energy saving.

And if the more powerful GPU was truly more energy efficient, including consuming less at idle (i.e., unlike the example above), then you would *always* save energy.
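For reference, the same 8-hour-day comparison as a small sketch, using the load/idle wattages from those ecoenergygeek.com figures and the FP32 TFLOPS quoted above (the 3-hour GPU-bound task is the same assumption as before):

```python
DAY_HOURS = 8.0
TASK_HOURS_ON_2080 = 3.0   # assumed GPU-bound work per day, measured on the 2080

gpus = {             # load W, idle W, FP32 TFLOPS (from the figures/text above)
    "RTX 2080": (212, 16, 13.45),
    "RTX 3080": (340, 27, 34.1),
}

def daily_wh(name):
    load_w, idle_w, tflops = gpus[name]
    task_h = TASK_HOURS_ON_2080 * gpus["RTX 2080"][2] / tflops   # scale by throughput
    return load_w * task_h + idle_w * (DAY_HOURS - task_h)

print(round(daily_wh("RTX 2080")))   # 716 Wh
print(round(daily_wh("RTX 3080")))   # 586 Wh (~590 with the rounded hours above)
```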
 
The bottom line is that if you actually do work that can make use of a more powerful but more efficient GPU, and you actually do make use of it, you can have a net energy saving.

What about a less powerful but much more efficient GPU though, like Apple? An M1 Max offers a bit under 1/3 of a 3080 FP32 throughput at 10 times lower power usage...

Still, it’s very difficult to do this kind of calculation. For ML-related tasks, for example, Nvidia will be much more efficient than Apple for most problem sizes. And there is also the matter of software maturity.
 
What about a less powerful but much more efficient GPU though, like Apple? An M1 Max offers a bit under 1/3 of a 3080 FP32 throughput at 10 times lower power usage...

Still, it’s very difficult to do this kind of calculation. For ML-related tasks, for example, Nvidia will be much more efficient than Apple for most problem sizes. And there is also the matter of software maturity.
Well, aside from cases like the example you gave in your 2nd para., a system that is more efficient at all power levels, like the M1 Max vs. a 3080 PC, will of course always use less energy.* But I was instead addressing the question of what happens if you have a system that is more efficient under load, but that uses more power at idle.

*Though if you really want to do a sophisticated treatment, you'd need to do a complete lifecycle analysis. I.e., suppose you needed to buy either 3 M1 Maxes or a single 3080 PC. Then you'd also need to account for the energy costs to produce and ship 3 M1 Maxes vs. one 3080 PC, etc...
 
Assuming single-precision (FP 32) TFLOPS are a good measure of general GPU compute performance, I'd say get a Mac if your primary concern is GPU performance/watt, or go PC-NVIDIA if your primary concern is GPU performance/$* (e.g., the M2 Ultra with full GPU should start at ~$5k, while a 4060-equipped PC should be about half that even when equipped with the top Intel Core i9, which will probably be slower MT but faster ST than the M2 Ultra CPU; though the M2 Ultra would be a much nicer machine, for other reasons):

TFLOPS, SINGLE-PRECISION (FP 32)
M1: 2.6
M2: 3.6
M1 MAX: 10.4
M2 MAX: 14 (?) (EXTRAPOLATING FROM M2/M1 x M1 MAX)
4050: 14 (?) (entry-level, ~$250?)
M1 ULTRA: 21
M2 ULTRA: 29 (?) (EXTRAPOLATING FROM M2/M1 x M1 ULTRA)
3080: 30
4060: 31 (?) (entry-level, ~$330?)
3080 TI: 34
3090: 36
3090 TI: 40
4070: 43 (?) (mid-level, ~$550?)
4080: 49 (?)
M2 EXTREME: 58 (?) (EXTRAPOLATING FROM 2 x M2 ULTRA)
4090: 80 (?)
4090 TI: 89 (??) (EXTRAPOLATING FROM 4090 x 3090 TI/3090)
M2 2X EXTREME: 116 (?)

[*I don't fall into either category; for me, the OS and my attendant user efficiency and experience are paramount, so I use a Mac.]
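As a rough illustration of the perf/$ side, here's how the list above translates into TFLOPS per dollar (every price is speculative, and the M2 Ultra TFLOPS is the extrapolation marked with a "?" above):

```python
# Rough TFLOPS-per-dollar using numbers from the list above; every price is
# speculative/rumoured, and the M2 Ultra TFLOPS is the extrapolated value.
candidates = {
    "M2 Ultra (full GPU, ~$5000)": (29, 5000),
    "RTX 4060 (~$330)":            (31, 330),
    "RTX 4070 (~$550)":            (43, 550),
}

for name, (tflops, usd) in candidates.items():
    print(f"{name}: {1000 * tflops / usd:.0f} TFLOPS per $1000")
# M2 Ultra: ~6, RTX 4060: ~94, RTX 4070: ~78 TFLOPS per $1000
```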
 
Assuming single-precision (FP 32) TFLOPS are a good measure of general GPU compute performance, I'd say get a Mac if your primary concern is GPU performance/watt, or go PC-NVIDIA if your primary concern is GPU performance/$*
You should compare the Mac Studio GPU with the Nvidia RTX A6000 not with the RTX 3090. The Nvidia RTX A6000 is a more efficient version of the RTX 3090.
 
You should compare the Mac Studio GPU with the Nvidia RTX A6000 not with the RTX 3090. The Nvidia RTX A6000 is a more efficient version of the RTX 3090.
Feel free to do that and let us know what numbers you find--after all, it was your suggestion.
 
Feel free to do that and let us know what numbers you find--after all, it was your suggestion.
The RTX A6000 has the same chip as the RTX 3090 Ti.

- RTX A6000 has 38.7 TFlops and consumes 300W
- RTX 3090 Ti has 40.0 TFlops and consumes 450W
 
The RTX A6000 has the same chip as the RTX 3090 Ti.

- RTX A6000 has 38.7 TFlops and consumes 300W
- RTX 3090 Ti has 40.0 TFlops and consumes 450W
And the 3090 Ti is ~$1200 vs the A6000 at ~$4600. Kinda ruins the PC-NVIDIA value proposition.

Edit: found better prices
 
The RTX A6000 has the chip as the RTX 3090 Ti.

- RTX A6000 has 38.7 TFlops and consumes 300W
- RTX 3090 Ti has 40.0 TFlops and consumes 450W

Right, but how do their performance:watt ratios compare to that of the GPU on the M1 Studio (say, the Ultra)? That was the comparison you said should be done:
You should compare the Mac Studio GPU with the Nvidia RTX A6000 not with the RTX 3090. The Nvidia RTX A6000 is a more efficient version of the RTX 3090.
 
how do their performance:watt ratios compare to that of the GPU on the M1 Studio (say, the Ultra)?
It seems that the M1 Ultra is more efficient than the A6000.

This video shows how the M1 Ultra renders just over 3 times slower than an RTX 3090 while consuming 70W.

This video shows how an RTX A6000 renders as fast as an RTX 3090, while consuming 270W.

The scenes are different, so the comparison is not entirely fair.
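Putting rough numbers on that, treating the 3090's render time as 1.0 and using the wattages quoted from the two videos (approximate, and again the scenes differ):

```python
# Energy per render relative to a 3090's render time (= 1.0), using the wattages
# quoted from the two videos; approximate, and the scenes are not the same.
renderers = {
    "M1 Ultra":  {"rel_time": 3.0, "gpu_w": 70},    # ~3x slower than a 3090
    "RTX A6000": {"rel_time": 1.0, "gpu_w": 270},   # ~same speed as a 3090
}

for name, r in renderers.items():
    print(f"{name}: {r['rel_time'] * r['gpu_w']:.0f} (watts x relative render time)")
# M1 Ultra: 210, RTX A6000: 270 -> the M1 Ultra comes out roughly 20% lower
```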
 