
eulslix

macrumors 6502
Dec 4, 2016
464
594
In GFXBench, the M1 is comparable to a 1650 Ti Max-Q, which is a 35 W Nvidia Turing GPU. Frankly, I doubt that result, since it is a bit too much (it would mean that Apple now has a 3x perf-per-watt lead over Nvidia, which would be crazy), but it can be used as an indicator. Even if the "real" M1 performance is closer to the MX450, it would be a great win for Apple and Mac users.

That’s not the point. The M1 is an incredible chip, offering unheard-of efficiency; no one doubts that. That $4,500 graphics card, though, was made for purposes that aren’t even tested in those benchmarks. If they were tested, the M1 couldn’t even score there, since it lacks that functionality in the first place. But that’s OK; they were never meant to compete with each other anyway. I dislike such comparisons made for the pure sake of fanboyism.
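
A rough way to sanity-check the perf-per-watt claim in the quote above is to divide a normalized score by each part's GPU power budget. This is only a sketch: the 10 W figure for the M1's GPU under load is an assumption for illustration, not a measured number; the 35 W is Nvidia's rated TGP for the Max-Q part.

Code:
// Back-of-the-envelope perf-per-watt comparison (illustrative only).
// Assumptions: equal benchmark scores, ~10 W M1 GPU power under load,
// 35 W TGP for the GTX 1650 Ti Max-Q. Real power draw varies by workload.
#include <cstdio>

int main() {
    const double score_m1     = 100.0;  // normalized benchmark score (assumed equal)
    const double score_1650ti = 100.0;
    const double watts_m1     = 10.0;   // assumed GPU-only power under load
    const double watts_1650ti = 35.0;   // Nvidia's rated TGP for the Max-Q part

    const double ppw_m1     = score_m1 / watts_m1;
    const double ppw_1650ti = score_1650ti / watts_1650ti;

    // With these assumptions the ratio comes out to 3.5x, in line with the
    // "roughly 3x perf-per-watt" figure mentioned in the quote.
    std::printf("perf/W ratio (M1 vs 1650 Ti Max-Q): %.1fx\n", ppw_m1 / ppw_1650ti);
    return 0;
}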
 
  • Like
Reactions: EugW

leman

macrumors Core
Oct 14, 2008
19,520
19,670
That’s not the point. The M1 is an incredible chip, offering unheard-of efficiency; no one doubts that. That $4,500 graphics card, though, was made for purposes that aren’t even tested in those benchmarks. If they were tested, the M1 couldn’t even score there, since it lacks that functionality in the first place. But that’s OK; they were never meant to compete with each other anyway. I dislike such comparisons made for the pure sake of fanboyism.

Ah yes, you are 100% right about that. Even running a gaming benchmark on a Tesla M10 is silly.
 
Last edited:
  • Like
Reactions: eulslix

Abazigal

Contributor
Jul 18, 2011
20,392
23,890
Singapore
Yep, those compute cards don’t do gaming. They don’t do many mainstream things.

In a comparison against a 5700 XT, which gets 200 fps in that benchmark, the M1 GPU is about three times slower.

It’s pointless to benchmark an integrated graphics chip against a desktop GPU anyway. The GFXBench app itself is ancient.

We aren’t going to see Call of Duty-level ray-traced 4K gaming any time soon. Use the M1 for being productive on a laptop, with reasonable expectations. If you think you will be able to load “thousands of layers” or “billions of polygons”, etc., then you are setting yourself up for disappointment.

Or in other words, continue to use an MBA for the tasks people normally do on an MBA, and it’s still going to be the same experience, now with faster performance and longer battery life.

I find it amusing that people are complaining about the entry-level MBA not being able to run triple-A games or perform other demanding tasks normally reserved for more powerful workstations, as though you could do them on an equivalent Windows ultrabook anyway.

Heck, that we are even having such a conversation shows that the M1 chip is going to punch above its weight and there is nothing the competition can do about it.
 

ArPe

macrumors 65816
May 31, 2020
1,281
3,325
Or in other words, continue to use an MBA for the tasks people normally do on an MBA, and it’s still going to be the same experience, now with faster performance and longer battery life.

I find it amusing that people are complaining about the entry-level MBA not being able to run triple-A games or perform other demanding tasks normally reserved for more powerful workstations, as though you could do them on an equivalent Windows ultrabook anyway.

Heck, that we are even having such a conversation shows that the M1 chip is going to punch above its weight and there is nothing the competition can do about it.


I’m telling people to have realistic expectations; otherwise we get endless moaning when reality hits. That GFXBench benchmark is ancient by modern standards of gaming graphics, and so are Apple Arcade games. So keep it real and you’ll be happy with the new machines.
 

Homy

macrumors 68030
Jan 14, 2006
2,506
2,456
Sweden
In graphics (not compute) tasks, the M1 could be on par with the Radeon Pro 570X (and way ahead of the 560X). The TBDR architecture of the M1 (for which Metal has been tailored) benefits graphics more than it benefits compute. Also, Apple GPUs can use 16-bit AND 32-bit numbers in shaders, for precision and to boost efficiency, which PC GPUs can't.

That's great! I suspected that since it can render more pixels/s:

M1 41 GPixel/s, 82 GTexel/s
Pro 560X 16.06 GPixel/s, 64.26 GTexel/s
Pro 570X 35.36 GPixel/s, 123.8 GTexel/s
Pro 580X 38.4 GPixel/s, 172.8 GTexel/s
Pro 5300 52.8 GPixel/s, 132 GTexel/s
Pro 5500 XT 56.22 GPixel/s, 168.7 GTexel/s
Pro 5700 86.4 GPixel/s, 194.4 GTexel/s
Pro 5700 XT 95.94 GPixel/s, 239.8 GTexel/s
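
For context on where numbers like these come from: the peak pixel and texel rates quoted on spec sheets are just unit counts multiplied by clock speed (ROPs × clock for GPixel/s, TMUs × clock for GTexel/s). The counts and clock in the sketch below are assumptions chosen to roughly reproduce the M1 line above; Apple does not publish ROP/TMU counts or official clocks for the M1.

Code:
// Theoretical fill-rate arithmetic as typically quoted on spec sheets.
// The unit counts and clock below are assumptions for illustration;
// Apple does not publish these figures for the M1.
#include <cstdio>

int main() {
    const double clock_ghz = 1.28;  // assumed GPU core clock in GHz
    const int    rops      = 32;    // assumed render output units
    const int    tmus      = 64;    // assumed texture mapping units

    // GPixel/s = ROPs * clock, GTexel/s = TMUs * clock
    std::printf("Pixel fill rate: %.1f GPixel/s\n", rops * clock_ghz);  // ~41
    std::printf("Texture rate:    %.1f GTexel/s\n", tmus * clock_ghz);  // ~82
    return 0;
}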
 
Last edited:

thingstoponder

macrumors 6502a
Oct 23, 2014
916
1,100
In graphics (not compute) tasks, the M1 could be on par with the Radeon Pro 570X (and way ahead of the 560X). The TBDR architecture of the M1 (for which Metal has been tailored) benefits graphics more than it benefits compute. Also, Apple GPUs can use 16-bit AND 32-bit numbers in shaders, for precision and to boost efficiency, which PC GPUs can't.

That's great! I suspected that since it can render more pixels/s:

M1 41 GPixel/s, 82 GTexel/s
Pro 560X 16.06 GPixel/s, 64.26 GTexel/s
Pro 570X 35.36 GPixel/s, 123.8 GTexel/s
Pro 580X 38.4 GPixel/s, 172.8 GTexel/s
Pro 5300 52.8 GPixel/s, 132 GTexel/s
Pro 5500 XT 56.22 GPixel/s, 168.7 GTexel/s
Pro 5700 86.4 GPixel/s, 194.4 GTexel/s
Pro 5700 XT 95.94 GPixel/s, 239.8 GTexel/s
I was always confident in their CPUs, and nothing I’ve seen performance-wise has surprised me, but if this is true then wow. I was always much more skeptical of their GPUs.
 
  • Like
Reactions: Homy

Kung gu

Suspended
Oct 20, 2018
1,379
2,434
Anyone who gets their M1 MBA, MBP, or Mac mini, please post your Cinebench R23 result.

Thanks
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,520
19,670
Just a few quick comments:

Also, Apple GPUs can use 16-bit AND 32-bit numbers in shaders, for precision and to boost efficiency, which PC GPUs can't.

Yes, they can. Modern Nvidia, AMD, and Intel GPUs fully support both single and half precision in the shaders, where half precision runs at double rate.

That's great! I suspected that since it can render more pixels/s:

M1 41 GPixel/s, 82 GTexel/s
Pro 560X 16.06 GPixel/s, 64.26 GTexel/s
Pro 570X 35.36 GPixel/s, 123.8 GTexel/s
Pro 580X 38.4 GPixel/s, 172.8 GTexel/s
Pro 5300 52.8 GPixel/s, 132 GTexel/s
Pro 5500 XT 56.22 GPixel/s, 168.7 GTexel/s
Pro 5700 86.4 GPixel/s, 194.4 GTexel/s
Pro 5700 XT 95.94 GPixel/s, 239.8 GTexel/s

These figures are meaningless. GPU makers stopped focusing on these figures a long time ago, for a good reason.
 

jido

macrumors 6502
Oct 11, 2010
297
145
It is the M1 (MacBook Pro 13"):

[Attached image: Apple M1 Cinebench.png]
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
On a busy day, I read around 1,000 pages in Word or PDF, or write around 20-50 pages in Word.

Can someone provide a benchmark for me?

Will the MBA be enough for my demanding tasks?
:)
I think this post nails the MacBook Air.
Take two steps back and look at it. It is one hell of a sweet little computer. It's not really designed to push the envelope; it's designed to fit into it, in all respects. It's utterly silent, with no fan to make noise and no moving parts at all except the keys and the hinge, and the screen and keyboard are really nice for text. The battery life largely frees you from anxiety and from having to carry around chargers, and it allows conservative charging, which greatly increases the longevity of the battery, as does simply needing fewer charge cycles.
All this while still comfortably having performance margins for personal photo and video editing, et cetera.
It's a great personal computer.

Switching away from x86 for personal computing is momentous, so of course tech geeks want to analyse this first foray into a brave new world. But that shouldn't be interpreted as detracting from what the MBA does, and does really well; it's just curiosity and a desire to have a better foundation for extrapolating into the future. We don't want to see how the M1 performs in, for instance, Cinebench R23 because we want it to replace render farms, but to complement our understanding of the architecture.

There is no conflict.
 

EugW

macrumors G5
Original poster
Jun 18, 2017
14,883
12,857
Here is the previous fanless MacBook, my 2017 Core m3-7Y32:

[Attached screenshot: Screen Shot 2020-11-15 at 10.30.24 PM.png]

[Attached screenshot: Screen Shot 2020-11-15 at 11.04.45 PM.png]


The performance improvement on the coming fanless MacBook Air is going to be unreal.
 
  • Wow
Reactions: NotTooLate

Homy

macrumors 68030
Jan 14, 2006
2,506
2,456
Sweden
Just a few quick comments:



Yes, they can. Modern Nvidia, AMD, and Intel GPUs fully support both single and half precision in the shaders, where half precision runs at double rate.



These figures are meaningless. GPU makers stopped focusing on these figures a long time ago, for a good reason.
I'm not an expert, but I was thinking of this article. It's almost two years old, so maybe things have changed.

“When the PC moved to unified shaders, the industry moved to FP32 for all GPU functions. This is as opposed to the mobile world, where power is an absolute factor for everything: Vertex shaders are typically 32bpc, while Pixel and Compute shaders can often be 16bpc. We’ve seen some movement on the PC side to use half-precision GPUs for compute, but for gaming, that’s not currently the case.”

Why is it meaningless for a GPU to be able to deliver higher pixel rate per second?
 

leman

macrumors Core
Oct 14, 2008
19,520
19,670
I'm not an expert, but I was thinking of this article. It's almost two years old, so maybe things have changed.

“When the PC moved to unified shaders, the industry moved to FP32 for all GPU functions. This is as opposed to the mobile world, where power is an absolute factor for everything: Vertex shaders are typically 32bpc, while Pixel and Compute shaders can often be 16bpc. We’ve seen some movement on the PC side to use half-precision GPUs for compute, but for gaming, that’s not currently the case.”

The article is not wrong. Mobile GPUs have a long history of using reduced precision to improve their power efficiency. But in recent years, desktop GPUs have started implementing double-rate half-precision ALUs as well. Intel has had them for a while, I think; Nvidia since Pascal and AMD since Vega. The main motivation for this was actually not graphics but the increased interest in machine learning, where reduced-precision calculations are frequently used. But there is nothing preventing modern desktop GPUs from using faster half-precision calculations where it makes sense (e.g. for color shading).
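
To make the half-precision point concrete, here is a minimal sketch in Metal Shading Language (which is C++-based) of the kind of per-pixel color math where FP16 is typically sufficient. The struct and function names are made up for illustration; on GPUs with double-rate FP16 ALUs, this path can run at up to twice the throughput of an all-float version while staying precise enough for 8-bit-per-channel output.

Code:
// Minimal MSL sketch: color shading in half precision.
// Names are illustrative, not from any real project.
#include <metal_stdlib>
using namespace metal;

struct FragmentIn {
    float4 position [[position]];   // keep positions in full precision
    half2  uv;                      // interpolators can usually be half
};

fragment half4 tintedTexture(FragmentIn in            [[stage_in]],
                             texture2d<half> colorTex [[texture(0)]],
                             sampler smp              [[sampler(0)]],
                             constant half4 &tint     [[buffer(0)]])
{
    // All of the per-pixel math here is half precision; only the texture
    // coordinate is widened to float2 for the sample call.
    half4 color = colorTex.sample(smp, float2(in.uv));
    return color * tint;
}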
Why is it meaningless for a GPU to be able to deliver higher pixel rate per second?

Because it's not a figure that says much. Not so long ago, this and other metrics like “triangles per second” were common marketing points, and everyone was competing to push higher and higher numbers. But we now live in the age of GPU-driven pipelines, unified shaders, sparse textures and complex rendering algorithms... so other figures like shader throughput, memory bandwidth, etc. are more important. Basically, any modern GPU is so good at basic tasks like primitive assembly or rasterization that boasting about these numbers doesn't make much sense.

Besides, how do you interpret that? What does 41 GPixel/s actually mean? What is a pixel? Which pixel format are we talking about? Does it mean that you can fill up a 4K frame buffer 5,000 times per second? Sounds impressive, but why would you want to do that? You actually want to draw something meaningful to it, don't you? So what is left of those GPixels in the end? One can ask similar questions about the texturing performance. Is that the filtering performance, or texture bandwidth, or both? Which filtering settings? Which pixel formats? What about compressed textures? And so on. Frankly, I'd rather see some benchmarks :)
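
For what it's worth, the rhetorical 4K figure above does check out arithmetically against the quoted peak rate; a quick sketch:

Code:
// Quick check of the "fill a 4K frame buffer ~5,000 times per second" remark:
// divide the quoted peak pixel rate by the number of pixels in a 4K frame.
#include <cstdio>

int main() {
    const double pixel_rate    = 41e9;            // quoted M1 peak rate, pixels/s
    const double pixels_per_4k = 3840.0 * 2160.0; // ~8.3 million pixels

    // Comes out to roughly 4,900 full-screen fills per second -- a purely
    // theoretical number that says nothing about real rendering work.
    std::printf("4K fills per second: %.0f\n", pixel_rate / pixels_per_4k);
    return 0;
}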
 
  • Like
Reactions: bernhard and Homy