
leman

macrumors Core
Oct 14, 2008
19,521
19,678
Do you know what the software FP64 performance of M1 is? I’m just curious.

No idea. But a double-float technique can usually be implemented at the cost of 4-5 FP32 operations.
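For the curious, here is a minimal sketch (my own illustration, not from any particular library) of the double-float idea in Swift: a value is carried as an unevaluated sum of two Floats, and each emulated operation expands into a handful of FP32 instructions.

```swift
// Hypothetical illustration of double-float ("double single") arithmetic:
// one value is stored as an unevaluated sum of two FP32 numbers.
struct DoubleFloat {
    var hi: Float   // leading component
    var lo: Float   // trailing rounding-error term
}

// Knuth's two-sum: computes a + b exactly as a hi/lo pair using only FP32 ops.
func twoSum(_ a: Float, _ b: Float) -> DoubleFloat {
    let s = a + b
    let bb = s - a
    let err = (a - (s - bb)) + (b - bb)
    return DoubleFloat(hi: s, lo: err)
}

// Adding two double-floats costs a small multiple of a plain FP32 add,
// which is roughly where the "4-5x FP32" estimate above comes from.
func add(_ x: DoubleFloat, _ y: DoubleFloat) -> DoubleFloat {
    var s = twoSum(x.hi, y.hi)
    s.lo += x.lo + y.lo
    return twoSum(s.hi, s.lo)
}

let a = twoSum(1.0, 1e-9)                    // holds ~1 + 1e-9, more precision than one Float
let b = add(a, DoubleFloat(hi: 2.0, lo: 0))
print(b.hi, b.lo)                            // hi carries ~3.0, lo carries the tiny remainder
```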


How I remember it being explained was that DLSS, and any AA technique that requires information on the whole scene (or a larger-than-a-tile area), effectively requires a second rendering pass on TBDR GPUs. I’ll see if I can dig up the link.

That’s correct. And that’s why neither has any advantage here. Of course, a bandwidth-constrained GPU like the base M1 will have more trouble.
 
  • Like
Reactions: crazy dave

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
No idea. But a double-float technique can usually be implemented at the cost of 4-5 FP32 operations.




That’s correct. And that’s why neither has any advantage here. Of course, a bandwidth-constrained GPU like the base M1 will have more trouble.

I’m a little confused by your last statement. Is it just that the bandwidth advantage of the TBDR GPU is canceled out by the second rendering step required by DLSS? I would’ve thought a second rendering pass would be way more expensive. Is that not so?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
I’m a little confused by your last statement. Is it just that the bandwidth advantage of the TBDR GPU is canceled out by the second rendering step required by DLSS? I would’ve thought a second rendering pass would be way more expensive. Is that not so?

Why would it be more expensive? That’s just a compute pass; neither TBDR nor IMR has any intrinsic advantage here. Both need to fetch the image data and process it.

TBDR might have a small advantage when collecting the scene data though, because it can stream out tiles more efficiently.
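As a rough back-of-the-envelope sketch (assumed resolutions and pixel format, purely illustrative, not measured data), the memory traffic of such an upscaling pass is the same whichever way the scene was rasterized:

```swift
// Illustrative only: assumed resolutions and pixel format.
let renderRes  = (w: 1920, h: 1080)   // internal render resolution
let outputRes  = (w: 3840, h: 2160)   // upscaled output resolution
let bytesPerPx = 8                    // e.g. an RGBA16F colour target

let bytesRead    = renderRes.w * renderRes.h * bytesPerPx
let bytesWritten = outputRes.w * outputRes.h * bytesPerPx
let mbPerFrame   = Double(bytesRead + bytesWritten) / 1_048_576

print("~\(Int(mbPerFrame.rounded())) MB of image traffic per upscaled frame")
// At 60 fps that is a few GB/s on top of everything else the frame needs,
// which is why a bandwidth-constrained part like the base M1 feels it more.
```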
 

crazy dave

macrumors 65816
Sep 9, 2010
1,453
1,229
Why would it be more expensive? That’s just a compute pass; neither TBDR nor IMR has any intrinsic advantage here. Both need to fetch the image data and process it.

TBDR might have a small advantage when collecting the scene data though, because it can stream out tiles more efficiently.

Hmmm, I’m still not following. What I was led to believe, and perhaps I misread or misunderstood, was that DLSS required an additional pass on TBDR GPUs that IMR GPUs didn’t need. Is that not correct?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,678
Hmmm, I’m still not following. What I was led to believe, and perhaps I misread or misunderstood, was that DLSS required an additional pass on TBDR GPUs that IMR GPUs didn’t need. Is that not correct?

I do not see why this would be the case.
 
  • Like
Reactions: crazy dave

Bug-Creator

macrumors 68000
May 30, 2011
1,783
4,717
Germany
The problem with such comparisons is that there are too many variables.
How do you select the x86 contender? Price? Form factor? Battery life?


Apple just doesn't offer a >100W SoC in a 14" laptop costing $1,500, and as such any comparison will be off.

Just as you can't buy anything x86 that will run a full day on battery without at least ending up with inferior performance (and often inferior pricing).
 
Last edited:

cnnyy20p

macrumors regular
Jan 12, 2021
229
317
I think the point of the video was that the M1 Max did not perform as Apple claimed. The product is already good on its own.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
I think the point of the video was that the M1 Max did not perform as Apple claimed
It seems Apple has made the same mistake Android phone manufacturers have been making for a while: using benchmarks that don't reflect daily use cases to prove they are better than the competition.
 
  • Like
Reactions: Boil

ingambe

macrumors 6502
Mar 22, 2020
320
355
It seems Apple has made the same mistake Android phone manufacturers have been making for a while: using benchmarks that don't reflect daily use cases to prove they are better than the competition.
Why wouldn't they do that? Every manufacturer cherry-picks its benchmark scores; it's perfectly normal.

The mistake some customers made was to blindly follow YouTubers who have a video-editing-oriented workflow.
As the M1 Pro/Max have hardware acceleration for that, those reviewers were of course impressed by the performance the machines delivered.
If your workflow is different, you might not see such a spectacular boost in performance.

But having owned one, no one can say the performance isn't impressive. These machines are beasts.
And in my own benchmark (software I'm working on, very CPU-demanding) I get better results than on some desktop CPUs.
Not bad, especially considering the battery life.
 
  • Like
Reactions: Argoduck

MauiPa

macrumors 68040
Apr 18, 2018
3,438
5,084
The benchmarks are lackluster in many applications besides gaming.
Particularly ones that aren't optimized. Let's just face it: the M1 simply doesn't run x86 software as well as x86 devices do. I would love to see an x86 machine run a program optimized for the M1 and see how it does. That would be a real test.
 

MauiPa

macrumors 68040
Apr 18, 2018
3,438
5,084
It seems Apple has made the same mistake Android phone manufacturers have been making for a while: using benchmarks that don't reflect daily use cases to prove they are better than the competition.
But why not choose appropriate software to perform the test? There are enough well-written dual-platform (no, not Adobe, I said well-written) programs out there, and in those tests the M1 meets its claims easily. But let's go with running software not designed for a platform against running it on the platform it was designed for. Does it take a high IQ to figure out which one wins?
 

ikir

macrumors 68020
Sep 26, 2007
2,176
2,366
They are incredibly good considering power consumption and performance. Most software is still not optimized for Metal and runs through Rosetta translation. In many professional tasks the M1 Pro and M1 Max destroy even high-end GPUs.

Linus Tech Tips is a popular channel but often very anti-Apple, and many comparisons it made in the past were absolutely crap.
 
Last edited:

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
There are enough well-written dual-platform programs out there, and in those tests the M1 meets its claims easily
I don't doubt the M1 CPU is better than Intel's CPUs, but I'm interested in the comparison between Apple's GPUs and Nvidia's GPUs.

A GPU without good software support is useless. AMD has GPUs as good as Nvidia's, but the lack of good software support makes AMD GPUs useful only for gaming.

In many professional tasks the M1 Pro and M1 Max destroy even high-end GPUs.
Besides video editing software, what other software performs as well as Apple claims? It seems video editing software uses the media engine, so it doesn't seem fair to compare GPU performance for pure GPU tasks using video editing software.
 
  • Like
Reactions: Luis Glez

MauiPa

macrumors 68040
Apr 18, 2018
3,438
5,084
I don't doubt the M1 CPU is better than Intel's CPUs, but I'm interested in the comparison between Apple's GPUs and Nvidia's GPUs.

A GPU without good software support is useless. AMD has GPUs as good as Nvidia's, but the lack of good software support makes AMD GPUs useful only for gaming.

Besides video editing software, what other software performs as well as Apple claims? It seems video editing software uses the media engine, so it doesn't seem fair to compare GPU performance for pure GPU tasks using video editing software.
It’s called an SoC for a reason (that is, System on a Chip). Who cares which piece performs which function; it’s about throughput. Apple does not sell GPUs separately. Other than video editing? Good question: did Apple target any other markets? I saw one chess engine written in assembly and optimized for x86, and definitely not optimized for the M1, so that doesn’t count (and assembly, seriously, that is so 1980). But since Apple is not selling GPUs, the only tests that count are optimized and workflow-related - it literally doesn’t matter what part of the SoC gets you there.
 
  • Like
Reactions: Technerd108

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,628
1,101
it literally doesn’t matter what part of the SoC gets you there
It is helpful for better predicting the performance of the M1 Pro/Max on other GPU-intensive tasks that don't use the media engine (e.g. 3D rendering or deep learning) and for making a purchase decision based on that.
 
  • Like
Reactions: Luis Glez

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
I hate to see Apple follow the trend of fudging benchmarks and cherry picking, but I guess it’s to be expected.

As I recall, most testing found the comparisons to be accurate when constrained to Apple’s methodology. But the charts were presented in a misleading way, so as to make people think the M1 Max was equivalent to a desktop 3080 in many if not all tasks (which is untrue).

Should’ve known when Apple kept touting perf/watt in the keynote.

Still, I stand by the statement that the M series is impressive when you have realistic expectations.
 
  • Like
Reactions: bobcomer

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Linus Tech Tips is a popular channel but often very anti-Apple, and many comparisons it made in the past were absolutely crap.
I’m no fan of LTT, but I find Anthony to be a very fair presenter. Dismissing him based on LTT’s track record is unfair, I think.

Likewise, MaxTech is far too positive towards Apple for my tastes, and I’m not convinced that he’s unbiased in his testing.

Though, as we’ve seen, we can test with all kinds of methods and come to all sorts of different conclusions based on that. It should’ve been obvious that eventually we’d find shortcomings in the M series in specific tasks.
 

ingambe

macrumors 6502
Mar 22, 2020
320
355
I hate to see Apple follow the trend of fudging benchmarks and cherry picking, but I guess it’s to be expected.

As I recall, most testing found the comparisons to be accurate when constrained to Apple’s methodology. But the charts were presented in a misleading way, so as to make people think the M1 Max was equivalent to a desktop 3080 in many if not all tasks (which is untrue).

Should’ve known when Apple kept touting perf/watt in the keynote.

Still, I stand by the statement that the M series is impressive when you have realistic expectations.
I'm sorry, but people who thought that integrated graphics could beat a top-of-the-line graphics card are very naive.
For sure, in certain tasks hardware acceleration will give a huge advantage, but for general computing the result was predictable.

Apple cherry-picked, but they also selected benchmarks that matter for their customers.
A lot of Apple pro users are creatives or developers, and it makes perfect sense to give them targeted benchmarks.
 
Last edited:

Digital_Sousaphone

macrumors member
Jun 10, 2019
64
63
Woah woah woah, real world isn't lining up with Apple's marketing material? I was assured in this very forum that their marketing was true and valid. I see the handwaving crew is fully activated in this thread.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Woah woah woah, real world isn't lining up with Apple's marketing material? I was assured in this very forum that their marketing was true and valid. I see the handwaving crew is fully activated in this thread.
You can run all sorts of tests until you find one or more where the processor underperforms. Frankly we were bound to find a few.

It’s not like this completely invalidates the new processors. The marketing material holds up, but people who expected it to be more than that were bound to be disappointed.

I'm sorry, but people who thought that integrated graphics could beat a top-of-the-line graphics card are very naive.
For sure, in certain tasks hardware acceleration will give a huge advantage, but for general computing the result was predictable.

Apple cherry-picked, but they also selected benchmarks that matter for their customers.
A lot of Apple pro users are creatives or developers, and it makes perfect sense to give them targeted benchmarks.
I don’t think it’s fair to compare an SoC to integrated graphics anymore. The architecture is fundamentally different from a standard CPU+GPU setup, and that’s where the M series found its efficiency and performance gains. Essentially they “cut out the middleman”.

Naturally, tasks that move data from storage or RAM to the GPU will see the major gains, but outside that situation it’s more in the expected range; i.e., if a task fits in the VRAM of a traditional GPU, it’s not likely to see a major boost in speed.
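A minimal sketch, assuming macOS and the Metal framework (buffer contents and sizes made up for illustration), of that “cut out the middleman” point: a shared-storage buffer is visible to both CPU and GPU with no upload step, which is exactly the staging copy a discrete card has to pay for.

```swift
import Metal

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("No Metal device available")
}

// Some CPU-side data the GPU is about to consume.
var samples = [Float](repeating: 1.0, count: 1_000_000)

// Unified memory: .storageModeShared means the GPU reads the very same pages
// the CPU just wrote -- no staging copy into dedicated VRAM is needed.
let shared = device.makeBuffer(bytes: &samples,
                               length: samples.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// On a discrete GPU the equivalent path is a .storageModePrivate buffer filled
// via a blit encoder, and that transfer is the cost unified memory removes.
print("Unified memory:", device.hasUnifiedMemory, "- buffer size:", shared.length, "bytes")
```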

I still stand by my opinion that it’s impressive. Apple focused on perf/watt and that’s what they achieved.
 