> I'm not a GFX card guru but that seems consistent.
> UserBenchmark: Nvidia RTX 2080 vs 3080 (Laptop)
> gpu.userbenchmark.com
> -d
UserBenchmark is trash. Don't use 'em.
> No, because the 3080 Laptop is a 3070 Ti, which scores higher than a 3070 (which scores equal to a 2080 Ti).
On Geekbench Vulkan the 2080 scores higher as far as I can see.
> On Geekbench Vulkan the 2080 scores higher as far as I can see.
Because Geekbench isn't testing rasterization, it is testing compute. The results will differ.
> There are two different wattage types for the 3080: ~80W and ~155W.
If you sort by GPU versus by device the extras go away. Then you get the "best score" for the card by the "best API". Which is weird, because the 6900XT tops the list if you do it by device but the 3090 tops if you do it by GPU.
> I genuinely am starting to believe that this is actually the 24-core GPU and NOT the full 32-core GPU.
Geekbench reports 32 compute units.
> Geekbench reports 32 compute units.
Errors in Geekbench do happen from time to time.
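One way to cross-check that figure outside of Geekbench would be to ask the OpenCL driver directly what it reports. Below is a minimal sketch in C, assuming Apple's deprecated-but-still-shipping OpenCL framework on macOS; whether CL_DEVICE_MAX_COMPUTE_UNITS maps one-to-one onto Apple's marketing "GPU cores" is itself an assumption worth questioning.

```c
/* Sketch: print the compute-unit count the OpenCL driver itself reports.
 * Build on macOS with: clang probe.c -framework OpenCL -o probe */
#include <stdio.h>
#include <OpenCL/opencl.h>

int main(void) {
    cl_device_id devices[8];
    cl_uint num_devices = 0;

    /* Ask for up to 8 GPU devices from the default platform. */
    if (clGetDeviceIDs(NULL, CL_DEVICE_TYPE_GPU, 8, devices, &num_devices) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL GPU devices found\n");
        return 1;
    }

    for (cl_uint i = 0; i < num_devices; i++) {
        char name[256] = {0};
        cl_uint units = 0;
        clGetDeviceInfo(devices[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS, sizeof(units), &units, NULL);
        printf("%s: %u compute units\n", name, units);
    }
    return 0;
}
```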
The layer likely translates OpenCL kernels directly into Metal IR, and the OpenCL API is trivially converted into Metal API calls. It should be fairly efficient overall. I have no idea how much overhead there is because of API mismatches…
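As a rough illustration of why such a layer can stay thin, an OpenCL C kernel and a hand-written Metal Shading Language equivalent line up almost one-to-one. This is only a sketch of the source-level correspondence, not Apple's actual translation output.

```c
/* OpenCL C kernel as an application would submit it. */
__kernel void saxpy(__global const float *x,
                    __global float *y,
                    const float a)
{
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}

/* A hand-written Metal Shading Language counterpart, shown as a comment.
 * Address-space qualifiers, the thread-index query and the body map almost
 * directly, which is why a translation layer does not need to do much work:
 *
 *   kernel void saxpy(device const float *x [[buffer(0)]],
 *                     device float *y       [[buffer(1)]],
 *                     constant float &a     [[buffer(2)]],
 *                     uint i [[thread_position_in_grid]])
 *   {
 *       y[i] = a * x[i] + y[i];
 *   }
 */
```

On the host side the story is similar: an OpenCL NDRange dispatch corresponds roughly to encoding a compute dispatch on a Metal command buffer, so most calls can be forwarded with little extra bookkeeping.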
> I worked as an engine / graphics programmer in the games industry for 8 years, and the main reason for it is that 99% of devs are in it for passion, not money.
> I was earning half what I could make outside of games, and the main reason I put up with it so long was that I could work on cutting-edge, interesting stuff and get paid to do it. Lots of people in the industry work on their own projects for free too, just because they love doing it. I guarantee that almost every game that currently supports ray tracing started with the programmers begging the business people to let them do it and not the other way around.
> It's the same with the artists. They want to make mind-blowing high quality art because they love making mind-blowing high quality art. If you asked them to spend all day quickly knocking up ugly low resolution art, most of them would quit.
Also, the main money makers are console games, and all the new ones support ray tracing. The PC games are ports.
That sounds like a Fusion problem that can be fixed.
Okay, I am not sure how I feel about this. I ran a quick test on my 6900XT and got vastly different scores depending on the API chosen.
[Attachment 1872574]
I wouldn't say the 3080 Laptop scores are suspect, but they could be missing results (if you sort, it appears the Vulkan and DX12 tests were not run).
> API overhead and implementation details have a non-trivial impact at these high framerates. That's also why GFXBench is not very good, it's simply not demanding enough. But it can be used to approximate things to a limited degree. It's best to compare the best score to the best score.
Which is also ridiculous, as there is a vanilla M1 with an 800 FPS score.
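To make the framerate point concrete, here is a toy calculation with a made-up per-frame overhead: the same fixed cost that is barely measurable at 60 FPS removes a big chunk of the score at several hundred FPS.

```c
/* Toy illustration (made-up overhead number): a fixed per-frame API/driver
 * cost distorts results far more at very high framerates. */
#include <stdio.h>

int main(void) {
    const double overhead_ms = 0.25;                   /* assumed fixed cost per frame */
    const double base_fps[] = { 60.0, 240.0, 800.0 };  /* framerates without that cost */

    for (int i = 0; i < 3; i++) {
        double frame_ms = 1000.0 / base_fps[i];
        double slowed   = 1000.0 / (frame_ms + overhead_ms);
        double loss_pct = 100.0 * (1.0 - slowed / base_fps[i]);
        printf("%4.0f FPS -> %6.1f FPS with %.2f ms overhead (%4.1f%% lost)\n",
               base_fps[i], slowed, overhead_ms, loss_pct);
    }
    return 0;
}
```

At 60 FPS the quarter-millisecond disappears into the noise; at 800 FPS it is a sixth of the whole frame, which is why near-identical hardware can post very different GFXBench numbers.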
> Returning to Geekbench Metal scores for a moment. The A14 gets just under 9000 Metal score with 4 cores, the M1 gets 21000 with 8 cores. That seems like great scaling. Why wouldn't the Max get around 80000? What am I missing?
See my post directly above this. I genuinely suspect we are looking at the 24-core GPU option and Geekbench is reporting it incorrectly.
> See my post directly above this. I genuinely suspect we are looking at the 24-core GPU option and Geekbench is reporting it incorrectly.
That does make sense to me.
> That does make sense to me.
Even further, we know that the 5600M is supposed to be about on par with the 16-core GPU. I guess we will have to wait and see what happens with the benchmarks. This is only one benchmark, so I wouldn't put too much weight on it, but the more I look into the figures directly provided by Apple, the more I think this is the 24-core GPU just being mislabeled in Geekbench.
> I'm not sure I understand the last comment about "appearing to not have been run", but the first part is expected.
> GPU benchmarks like these test: how optimally the benchmark was coded in the API; how optimally the driver for the API was written for the hardware; and the underlying hardware.
> Creating tests for just the last part is almost impossible. So you just accept that there's going to be more variation across different tests and more variables underlying the result. And, ultimately, when you are using the GPU, all three are indeed what you care about: how well the program was coded for the API, how well the API runs on your GPU, and how good the hardware is at that specific task.
I say that because on the score browser the 3080 Laptop GPU is missing Vulkan/DX12 scores, just like the 6900XT is missing those scores as well. It shows up as failed/not supported, which isn't true (clearly I ran the test).
> Even further, we know that the 5600M is supposed to be about on par with the 16-core GPU. I guess we will have to wait and see what happens with the benchmarks. This is only one benchmark, so I wouldn't put too much weight on it, but the more I look into the figures directly provided by Apple, the more I think this is the 24-core GPU just being mislabeled in Geekbench.
> EDIT: Alternatively, we could actually be looking at the 32-core GPU, and it just doesn't scale well for the M1 Max due to the deprecated graphics API being used here. That's also entirely possible. However, if we are indeed looking at the 24-core here, oh boy, this is gonna be a wild ride.
It's curious. The M1 score is 19000. Divided by 8, that's around 2400 per core. The 16-core yields 38000, again around 2400 per core. One would think the 32-core would be around 72000 in OpenCL score; 60000 must be the 24-core.
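For reference, here is the same back-of-the-envelope extrapolation as a tiny program, using the scores quoted in this thread and assuming perfectly linear per-core scaling, which is exactly the assumption under debate.

```c
/* Back-of-the-envelope check using the OpenCL scores quoted above:
 * where would each core count land if the score scaled linearly per core? */
#include <stdio.h>

int main(void) {
    const double m1_score = 19000.0;          /* 8-core M1 score quoted in the thread */
    const double per_core = m1_score / 8.0;   /* ~2375 per core, ~2400 rounded */
    const int cores[] = { 8, 16, 24, 32 };

    for (int i = 0; i < 4; i++)
        printf("%2d cores -> ~%.0f if scaling were perfectly linear\n",
               cores[i], per_core * cores[i]);

    /* Prints ~38000 for 16 cores (matching the quoted M1 Pro result),
     * ~57000 for 24 cores (close to the ~60000 result under discussion)
     * and ~76000 for 32 cores. */
    return 0;
}
```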
> See my post directly above this. I genuinely suspect we are looking at the 24-core GPU option and Geekbench is reporting it incorrectly.
That may be the case if the M1 Max was about 1.5x faster than the M1 Pro across the various subtests. But the M1 Max is twice as fast in some, and not even faster in others.
> Even further, we know that the 5600M is supposed to be about on par with the 16-core GPU. I guess we will have to wait and see what happens with the benchmarks. This is only one benchmark, so I wouldn't put too much weight on it, but the more I look into the figures directly provided by Apple, the more I think this is the 24-core GPU just being mislabeled in Geekbench.
> EDIT: Alternatively, we could actually be looking at the 32-core GPU, and it just doesn't scale well for the M1 Max due to the deprecated graphics API being used here. That's also entirely possible. However, if we are indeed looking at the 24-core here, oh boy, this is gonna be a wild ride.
Why would it scale from 8 to 16 but not to 32?
> It's curious. The M1 score is 19000. Divided by 8, that's around 2400 per core. The 16-core yields 38000, again around 2400 per core. One would think the 32-core would be around 72000 in OpenCL score; 60000 must be the 24-core.
Yeah, that's what I am personally leaning towards. I'm confused as to why the 32-core would see such terrible scaling relative to the rest.
> That may be the case if the M1 Max was about 1.5x faster than the M1 Pro across the various subtests. But the M1 Max is twice as fast in some, and not even faster in others.
Yep, good point. I'm honestly thinking we are looking at the 24-core here. It's possible this is also just extremely poor scaling for OpenCL or something.
The plot thickens.
> Why would it scale from 8 to 16 but not to 32?
My guess would be something to do with the different core complexes or some other detail. I'm not an engineer, so I'm just making educated guesses here. All I know is that OpenCL doesn't do too well on Macs.
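For what it's worth, here is a purely illustrative fixed-overhead (Amdahl-style) toy model. It is not a claim about what Apple's OpenCL driver actually does; it only shows that any non-parallel share of the work, such as dispatch or API-translation overhead, costs proportionally more as the core count grows, so each doubling of cores buys less than the last.

```c
/* Purely illustrative toy model (not a measurement): if a small fraction of a
 * benchmark's time is serial work that extra GPU cores cannot absorb, the
 * efficiency per core keeps dropping as the core count grows. */
#include <stdio.h>

int main(void) {
    const double serial = 0.01;               /* assumed 1% non-parallel overhead */
    const int cores[] = { 8, 16, 24, 32 };

    for (int i = 0; i < 4; i++) {
        /* Amdahl's law: speedup relative to a single core */
        double speedup = 1.0 / (serial + (1.0 - serial) / cores[i]);
        printf("%2d cores: %5.1fx of one-core throughput (%4.1f%% of ideal)\n",
               cores[i], speedup, 100.0 * speedup / cores[i]);
    }
    return 0;
}
```

A flat overhead alone does not fully reproduce near-linear 8-to-16 scaling followed by a sharp drop at 32, so it is at best one ingredient; memory bandwidth, the benchmark's problem size, or a genuinely mislabeled part are all still on the table.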