There is an incredible difference between a good port and a bad port, and something using Metal is no indication that it's any good. PC users are already used to being handed a shoddy console port where the expectation is that the PC's superior hardware will let the hastily put-together code run almost as well as it did on much weaker console hardware.

We're even further down the food chain, so for an M1 Max to perform at the same level as a 3080 it would either need far superior hardware (which it doesn't have) or a Metal port that had received as much love as the DirectX version (which is incredibly unlikely at this point in time).

The only "AAA" game I know of that has a really good Metal port is Baldurs Gate 3. I'd be really interested to see how that performs on the M1 Max.
 
No, definitely not. Removing Rosetta AND optimising for Apple Silicon could likely come close. After all, the Aztec benchmark is very good. The Mac versions of these games have been an afterthought; they're likely not very optimised.

That's not to say the benchmarks are useless; they aren't. They reflect the current performance of some of the most popular Mac games available, but not the potential performance of the hardware.
Isn't Aztec more of an iOS/Android cell phone GPU type benchmark so better optimized for TBDR?
 
Isn't Aztec more of an iOS/Android cell phone GPU type benchmark so better optimized for TBDR?
I don't know if it uses TBDR, but yeah, it can be run on phones, so maybe. But macOS games could leverage the same level of optimization/performance if developers worked on it.
 
Isn't Aztec more of an iOS/Android cell phone GPU type benchmark so better optimized for TBDR?
Aztec is a cross-platform benchmark, but yes, it has also been running on iOS for a while and probably has a Metal TBDR-optimized engine. So it represents what *can be*, even if it is not what *is*.
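To make "TBDR-optimized" a bit more concrete, here's a rough sketch (illustrative only, not taken from any shipping engine) of the kind of thing a Metal renderer tuned for Apple's tile-based GPUs does: keep intermediate render targets in on-chip tile memory instead of round-tripping them through DRAM.

```swift
import Metal

// Hedged sketch: on a TBDR GPU (Apple Silicon / A-series) an intermediate
// attachment can be marked .memoryless so it only ever lives in tile memory.
// Ports written for immediate-mode GPUs tend to store/reload such targets
// from DRAM, which wastes bandwidth on this hardware.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                    width: 1920, height: 1080,
                                                    mipmapped: false)
desc.usage = .renderTarget
desc.storageMode = .memoryless          // no system-memory backing (Apple GPUs only)
let intermediate = device.makeTexture(descriptor: desc)!

let pass = MTLRenderPassDescriptor()
pass.colorAttachments[0].texture = intermediate
pass.colorAttachments[0].loadAction = .clear       // nothing loaded from DRAM
pass.colorAttachments[0].storeAction = .dontCare   // nothing written back either
```

Whether a given port actually does this sort of thing is exactly the "good port vs. bad port" question from earlier in the thread.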
 
dada_dave is me - I'm asking the question there. ;) I wrote about it in the linked post, even tagged you!
Oh got it. That's neat!

I'm really hoping we can see some better performance out of the GPU for games. The M1 was actually quite decent: it would generally get around 30 FPS on medium settings in more or less any game that could run on it. So I'm rather surprised to see the performance this low, so far.
 
I think we can conclude that the M1 Max cannot be close to a 3080 mobile, except in very specific cases.
Let's not forget that the latter has much more compute power.
 
I think we can conclude that the M1 Max cannot be close to a 3080 mobile, except in very specific cases.
Let's not forget that the latter has much more compute power.

I'm not sure we can conclude that ... in graphics. For pure compute, yeah, it's not going to be close, but Apple didn't claim that - just 4x the M1, 10 TFLOPs of compute, and close to/even beating the mobile 3080 in graphics where the engine is optimized for each. Since we don't have many graphics-intensive games for the latter, it's difficult to confirm beyond synthetic benchmarks.
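For context on the compute gap, a back-of-the-envelope sketch (the core counts and clocks below are the commonly cited public figures, not anything from this thread, and real clocks vary with power limits):

```swift
// Peak FP32 throughput ≈ ALUs × 2 ops per FMA × clock
let m1MaxALUs = 32 * 128            // 32 GPU cores × 128 FP32 ALUs each
let m1MaxClockGHz = 1.27            // approximate GPU clock
print(Double(m1MaxALUs) * 2 * m1MaxClockGHz / 1000)                   // ≈ 10.4 TFLOPS

let rtx3080MobileALUs = 6144        // CUDA cores
let rtx3080MobileClockGHz = 1.6     // boost clock depends heavily on the laptop's TGP
print(Double(rtx3080MobileALUs) * 2 * rtx3080MobileClockGHz / 1000)   // ≈ 19.7 TFLOPS
```

So on paper the mobile 3080 has roughly twice the raw FP32 throughput, which is why the compute-vs-graphics distinction matters here.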
 
I'm not sure we can conclude that ... in graphics. For pure compute, yeah, it's not going to be close, but Apple didn't claim that - just 4x the M1, 10 TFLOPs of compute, and close to/even beating the mobile 3080 in graphics where the engine is optimized for each. Since we don't have many graphics-intensive games for the latter, it's difficult to confirm beyond synthetic benchmarks.
GFXBench puts the M1 Max neck and neck with the 3080 Mobile, but it's 30% slower in Wild Life Extreme. Which is still good.
 
I think we can conclude that the M1 Max cannot be close to a 3080 mobile, except in very specific cases.
Let's not forget that the latter has much more compute power.
Compute != Rasterization. It likely won't compete in compute; I don't think that's changed.

The issue right now seems to come down to 2 games with potentially poor ports. We need more data than just 2 games to conclude how this goes. The bigger question to me is whether or not developers will hop on board to use that power.
 
GFXBench puts the M1 Max neck and neck with the 3080 Mobile, but it's 30% slower in Wild Life Extreme. Which is still good.

Yeah, and I don't know which of those two scenarios is more likely to play out; probably closer to the latter in most code, if we're lucky. Buuuut .... ?

(also which 3080 mobile? :p)
 
Compute != Rasterization. It likely won't compete in compute; I don't think that's changed.

The issue right now seems to come down to 2 games with potentially poor ports. We need more data than just 2 games to conclude how this goes. The bigger question to me is whether or not developers will hop on board to use that power.
I've been wondering whether these benchmarks that measure compute take the Neural Engine into account. Most dGPUs have ML acceleration built in; on the M1 chips it's a separate part of the SoC. Would these tests take advantage of that?
 
Further to the questions about gaming performance on the M1/Pro/Max, Brad Oliver, who works for Feral, posted this on Ars:

 
So would that mean that some of the tests are missing out on the extra performance the M1/Pro/Max offer?
In theory, yes, but not in practice. It's already hard to find a workload that stresses both the CPU and the GPU; imagine how hard it would be to find a real-world workload that requires the CPU, the GPU, and a Neural Engine-capable machine learning algorithm at the same time. And the Neural Engine doesn't even work with all types of neural network layers, so it's not like you could translate every neural network into a CoreML implementation capable of running on the Neural Engine.

And for things that are not Machine Learning based, the Neural Engine simply cannot help.
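As a rough illustration of the "CoreML implementation capable of running on the Neural Engine" point: CoreML only lets you *allow* the Neural Engine, and unsupported layers quietly fall back to the GPU/CPU. The model path below is just a placeholder.

```swift
import Foundation
import CoreML

let config = MLModelConfiguration()
config.computeUnits = .all            // CPU + GPU + Neural Engine, where eligible
// config.computeUnits = .cpuAndGPU   // for comparison: keep the Neural Engine out of it

// Placeholder path; any compiled .mlmodelc would do.
let modelURL = URL(fileURLWithPath: "/path/to/SomeModel.mlmodelc")
do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Loaded model, allowed compute units:", config.computeUnits.rawValue)
} catch {
    print("Could not load model:", error)
}
```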
 
So would that mean that some of the tests are missing out on the extra performance the M1/Pro/Max offer?
Of course. Benchmark tools use general-purpose code that cannot, almost by definition, be processed by specific hardware like ML cores, tensor cores, ray tracing units, etc.
Hence, the M1 Pro/Max cannot show their full potential in machine learning and media editing in particular; other CPUs/GPUs don't do ProRes encoding and decoding. On the other hand, high-end RTX GPUs have much higher ML power than even the M1 Max, unless I'm mistaken, and that isn't reflected in synthetic benchmarks either.
 
Not too long ago people were claiming Rosetta 2 performance is faster than Windows native. So, which goal post position is it now?
You don't evaluate the impact of Rosetta 2 by comparing a macOS x86 app running on an M1 to the equivalent app running on Windows.

In case you didn't know, the impact of Rosetta 2 is typically a 25-30% CPU performance loss. But this should not have much impact on GPU performance.
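If anyone wants to verify which case they're actually measuring, here's a small sketch of how a process can check at runtime whether it's being translated by Rosetta 2 (the sysctl key is the documented one; error handling is kept minimal):

```swift
import Foundation

// Returns true when the current process is an x86_64 binary being translated
// by Rosetta 2 on Apple Silicon. On Intel Macs the key doesn't exist and
// sysctlbyname returns -1, so this reports false.
func isRunningUnderRosetta() -> Bool {
    var translated: Int32 = 0
    var size = MemoryLayout<Int32>.size
    let result = sysctlbyname("sysctl.proc_translated", &translated, &size, nil, 0)
    return result == 0 && translated == 1
}

print(isRunningUnderRosetta() ? "Running under Rosetta 2" : "Running natively")
```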
 
Not too long ago people were claiming Rosetta 2 performance is faster than Windows native. So, which goal post position is it now?
M1 Macs were running Rosetta 2 apps faster than Windows computers were running them natively. Some of them, anyways. Certainly not the i9-11980HK + NVIDIA 3080 Mobile of this particular comparison.
 
Not too long ago people were claiming Rosetta 2 performance is faster than Windows native. So, which goal post position is it now?
I recall some saying Rosetta 2 was faster than some apps on Windows because of the performance delta with some PCs. I don't think any of those apps were games, though. I'd be interested in any references you have for that.
 