leman

macrumors Core
Oct 14, 2008
19,522
19,679
Sorry if I am asking the obvious: is it known how Proton works? Apparently it is proprietary - how would the situation change if they released it as open source?

Proton is open source: https://github.com/ValveSoftware/Proton

It would probably make Vulkan obsolete; however, Vulkan and Proton are kind of redundant - it's just that the licenses are different, so Proton is not publicly available.

Or is there another issue I am unaware of?

Proton is a software layer that allows Windows games to run on Linux. Among other things, it converts DirectX calls to Vulkan calls so that the game can run on a Linux system with Vulkan drivers. Proton and Vulkan are not redundant; one relies on the other.
 
  • Like
Reactions: Romain_H

crazy dave

macrumors 65816
Sep 9, 2010
1,454
1,230
Proton is open source: https://github.com/ValveSoftware/Proton



Proton is a software layer that allows Windows games to run on Linux. Among other things, it converts DirectX calls to Vulkan calls so that the game can run on a Linux system with Vulkan drivers. Proton and Vulkan are not redundant; one relies on the other.

I think it’s worth mentioning that Proton is basically Wine + a little extra if I remember correctly.

Your point about the future of the APIs is a good one. Sadly, at times in history, interoperability/translatability with the more dominant platform has not led to the growth of the smaller one but has instead (it is thought) contributed to its demise. Still, in most cases it's probably the best hope for reaching that critical mass of development, though it can have downsides for exactly the reasons you mentioned.
 

Romain_H

macrumors 6502a
Sep 20, 2021
520
438
Proton is open source: https://github.com/ValveSoftware/Proton



Proton is a software layer that allows Windows games to run on Linux. Among other things, it converts DirectX calls to Vulkan calls so that the game can run on a Linux system with Vulkan drivers. Proton and Vulkan are not redundant; one relies on the other.
Thx. So Valve should have a vital interest in Vulkan. I'd like to see native ports, but Proton seems better than nothing.
 

playtech1

macrumors 6502a
Oct 10, 2014
695
889
The GPU speed increase of the A15 seems to be the most (only?) interesting aspect of this round of iPhone SoCs.

I think the focus on GPU year-on-year reflects that there is a gaming strategy at Apple - game DLC is after all the big App Store money-spinner.

They also pumped out Apple Arcade and must want it to be a big revenue stream (they are keeping GeForce Now and Game Pass off the App Store for reasons that are commercial, not technical!).

All the pieces of the gaming puzzle are pretty much in place except for AAA titles.

Perhaps AAA games on Apple Arcade?

Yeah... maybe not.
 

cmaier

Suspended
Original poster
Jul 25, 2007
25,405
33,474
California
I don’t see anything in the GPU geekbench to indicate higher bandwidth.

Ok. Wasn’t aware that Geekbench was useful for determining that. I’m relying on the fact that the entire memory path has been modified extensively: bigger SLC, new memory management unit, higher-power I/Os, etc. And on the fact that the reason the M1 was port-limited was that there just wasn’t enough bandwidth, which will not be the case this time.
 
  • Like
Reactions: Wolf1701

Jorbanead

macrumors 65816
Aug 31, 2018
1,209
1,438
I think the focus on GPU year-on-year reflects that there is a gaming strategy at Apple - game DLC is after all the big App Store money-spinner.
Maybe, but I also think Apple needs to do this regardless of gaming. Right now Apple is one of the leaders in CPU performance, but their GPUs still have a ways to go. Yes, they are amazing for integrated graphics, but Apple needs to compete with Nvidia and AMD, especially for their pro Macs.
 

altaic

Suspended
Jan 26, 2004
712
484
The iMac Pro was discontinued. That’s not to say they won’t bring it back, but we have no idea what the TDP would be. Surely they’re not going to use the old design if they do bring it back, so it’s likely the TDP would be much less. If they were going to do an iMac Pro, I would imagine it would look like the Pro Display XDR with a chin.
If they did use the Pro Display XDR design, I'd think the thermal budget would actually be quite high. It's 28.3 in x 16.2 in x 1.1 in, which is thick enough and voluminous enough to fit the logic board, vapor chamber heat pipes, fans, speakers, etc., even without a chin. That said, I'm not holding my breath on a new 300+ W TDP iMac-- but I think it is feasible.
 
  • Like
Reactions: Jorbanead

Tagbert

macrumors 603
Jun 22, 2011
6,261
7,285
Seattle
If they did use the Pro Display XDR design, I'd think the thermal budget would actually be quite high. It's 28.3 in x 16.2 in x 1.1 in, which is thick enough and voluminous enough to fit the logic board, vapor chamber heat pipes, fans, speakers, etc., even without a chin. That said, I'm not holding my breath on a new 300+ W TDP iMac-- but I think it is feasible.
I think the manufacturing cost of the Pro Display XDR design would be too high for an iMac. I would expect something closer to the small iMac, though not so thin. I'm sure that they will leave thermal room for an eventual iMac Pro even if they don't launch with one.
 

altaic

Suspended
Jan 26, 2004
712
484
I think the manufacturing cost of the Prodisplay XDR would be too high for an iMac. I would expect something closer to the small iMac though not so thin. I'm sure that they will leave thermal room for an eventual iMac Pro even if they don't launch with one.
I was talking about the feasibility of an uber-specced iMac Pro, and that, in general, iMacs can have a pretty high thermal budget due to their shape. But, yeah, for the regular iMac, I agree that it would make sense to see a similar design language as the M1 iMacs.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Might the increased memory bandwidth BE the reason the GPU is "faster" in the A15...?

It’s likely part of it, yes.

The faster GPU is explained by the A15 GPU tech video: https://developer.apple.com/videos/play/tech-talks/10876
A15 simply has higher FP32 throughput compared to A14. In-depth explanation below:

A-series GPUs have operated at half-rate FP32 for a while: each core has 32 ALUs, but only executed 16 FP32 operations per clock. The A14 is already capable of doing 32 FP32 operations per clock, but the capability is disabled in the iPhones (I assume for power-efficiency reasons), while the M1 has full FP32 throughput. The A15 brings full FP32 throughput to the iPhone GPU for the first time. I doubt this means a significant rework of the shader units; they probably just improved efficiency enough to unlock a capability that has been there for a while.
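To put rough numbers on that, here is some back-of-the-envelope arithmetic. The per-core ALU counts and FP32 rates come from the post above; the core counts and clock speed are purely illustrative assumptions, not confirmed figures:

```python
# Rough FP32 throughput estimate for a GPU, counting an FMA as 2 FLOPs.
def fp32_gflops(cores, fp32_lanes_per_core, clock_ghz, flops_per_op=2):
    """GFLOPS = cores * active FP32 lanes per core * 2 (FMA) * clock (GHz)."""
    return cores * fp32_lanes_per_core * flops_per_op * clock_ghz

CLOCK_GHZ = 1.2  # assumed clock, for illustration only

# Half-rate FP32: 32 ALUs but only 16 FP32 ops/clock per core (A14-style iPhone)
half_rate = fp32_gflops(cores=4, fp32_lanes_per_core=16, clock_ghz=CLOCK_GHZ)
# Full-rate FP32: all 32 ALUs doing FP32 every clock (A15-style)
full_rate = fp32_gflops(cores=4, fp32_lanes_per_core=32, clock_ghz=CLOCK_GHZ)

print(half_rate, full_rate)  # full rate is exactly 2x at equal cores and clock
```

Whatever the real clock is, unlocking full-rate FP32 doubles peak per-core throughput at the same frequency, which is consistent with the GPU gains showing up without a big architectural rework.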
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
It's going to be extremely disappointing if the A15 GPU improvements don't make their way to the M1X. Lossy compression and shuffle and fill (which should be excellent for post-processing / screen-space effects) are going to be fantastic ways to improve real-world 3D performance, especially at the resolutions Macs run at.

The performance benefits from those features won't be reflected in the Geekbench metal score improvements either, so I think GPU-wise the real world improvements from A14 -> A15 are really significant.
 
  • Like
Reactions: Vazor

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Lossy compression and shuffle and fill (which should be excellent for post-processing / screen-space effects) are going to be fantastic ways to improve real-world 3D performance, especially at the resolutions Macs run at.

Lossy compression should not affect performance; it’s a memory optimization technique. I don’t see much use for it in desktop gaming, to be honest. The M1 already does automatic lossless compression to save bandwidth anyway. Maybe you can get a bit more savings with lossy here, but nothing to make or break a game.

Shuffle and fill is an amazing addition to the compute shader repertoire. I looked around, and I can’t find any analogue in DX12 or CUDA, so this seems to be a feature where Apple is ahead of everyone else. Unfortunately, I doubt it will be used in games any time soon; it’s too niche an optimization and the hardware is too new. But pro software will adopt it quickly.
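For anyone unfamiliar with the feature: shuffle-and-fill lets each SIMD lane read a shifted neighbor’s value, and lanes whose read would fall off the end of the SIMD group pull replacement data from a second "fill" vector instead. A scalar model of the down-shift variant (this is an illustration of the semantics only, not Apple’s hardware implementation, and the function name is made up):

```python
def shuffle_and_fill_down(data, fill, delta):
    """Model of a SIMD-group 'shuffle and fill down':
    lane i receives data[i + delta]; lanes whose source index runs
    past the end of the group take values from the fill vector."""
    n = len(data)
    out = []
    for i in range(n):
        src = i + delta
        out.append(data[src] if src < n else fill[src - n])
    return out

# 8-lane example: sliding a window forward by 2 without touching memory
data = [0, 1, 2, 3, 4, 5, 6, 7]
fill = [8, 9, 10, 11, 12, 13, 14, 15]
print(shuffle_and_fill_down(data, fill, 2))
# -> [2, 3, 4, 5, 6, 7, 8, 9]
```

This is why it suits post-processing and convolution-style kernels: a sliding window can stay in registers across iterations instead of being re-fetched from memory for every shift.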

What can be really significant for games, however, are sparse depth and stencil. One can do some really cool things with them. And in general, sparse textures are one of Apple's underestimated strengths. They are the only company to ship this feature with good performance; last time I checked, sparse resources were so slow on Nvidia and AMD that they were practically useless.

At any rate, I fully expect the prosumer hardware to have these features.

The performance benefits from those features won't be reflected in the Geekbench metal score improvements either, so I think GPU-wise the real world improvements from A14 -> A15 are really significant.

Geekbench has multiple shaders that would benefit from shuffle and fill. Would be nice if they update them.
 
  • Like
Reactions: jdb8167

jmho

macrumors 6502a
Jun 11, 2021
502
996
Lossy compression should not affect performance, it’s a memory optimization technique. I don’t see much use of it on desktop gaming to be honest. M1 already does automatic lossless compression to save bandwidth anyway. Maybe you can get a bit more saving with lossy here, but nothing to make or break the game.

Perhaps. Historically, the faster you can get texture data into the shader units, the better. If lossy compression means more texture data can sit in the cache, that should be a big win.

I also think a lossy texture is significantly smaller than a lossless one, but we'll see. Hopefully my iPhone 13 arrives today so I can have a play :D

Shuffle and fill is an amazing addition to the compute shader repertoire. I was looking around, and I can’t find any analogue in DX12 or CUDA. So this seems to be a feature where Apple is ahead of everyone else. Unfortunately, I doubt it will be used in games any time soon, it’s too niche of an optimization and the hardware is too new. But pro software will adopt it quickly.

Fortunately, like lossy compression this is something that isn't too complex to implement. You're probably right though that very few people will use it unless they're specifically trying to make a great Mac port.

What can be really significant for games, however, are sparse depth and stencil. One can do some really cool things with them. And in general, sparse textures are one of Apple's underestimated strengths. They are the only company to ship this feature with good performance; last time I checked, sparse resources were so slow on Nvidia and AMD that they were practically useless.

Yeah, this looked really cool but it also looked incredibly complex to implement. I think we're only going to see this on incredibly polished games.

At any rate, I fully expect the prosumer hardware to have these features.



Geekbench has multiple shaders that would benefit from shuffle and fill. Would be nice if they update them.

I think Geekbench probably wants to keep a relatively basic feature set so they test raw compute power; letting the M1X do half the work thanks to a newer feature could be seen as cheating - although I guess that's the dilemma of synthetic benchmarks.
 

leman

macrumors Core
Oct 14, 2008
19,522
19,679
Perhaps. Historically anything you can do to get texture data faster into the shader units the better. If lossy compression means more texture data can sit in the cache that should be a big win.

I always thought that compression was done by the memory controller, i.e. the texture is fully decompressed in the cache but compressed/decompressed on the fly as data is transferred from RAM. I think Nvidia has a presentation on these kinds of techniques… but who knows what Apple is doing behind the scenes. In the dev video they were selling the lossy compression as a way to save RAM, not bandwidth.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
I always thought that compression was done by the memory controller, i.e. the texture is fully decompressed in the cache but compressed/decompressed on the fly as data is transferred from RAM. I think Nvidia has a presentation on these kind of techniques…. but who knows what Apple is doing behind the scenes. In the dev video they were selling the lossy compression as a way to save RAM, not bandwidth.
The diagram in the developer talk has the shader cores talking directly to the compression unit, so hopefully that means the textures stay compressed all the way until they're sampled.

They also state that lossy textures are 50% smaller than lossless ones, so I think there should be knock-on effects for saving bandwidth / increasing cache hits even if the main intent is saving RAM.
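The knock-on effect is easy to put rough numbers on. The 50% figure is from the developer talk as quoted above; the texture dimensions and cache budget below are made-up illustrative values:

```python
# Illustrative: how much of a texture fits in a fixed cache budget,
# with and without the ~50% lossy compression cited in the talk.
width, height, bytes_per_pixel = 4096, 4096, 4          # hypothetical RGBA8 texture
lossless_mb = width * height * bytes_per_pixel / 2**20  # 64 MB uncompressed
lossy_mb = lossless_mb * 0.5                            # 50% smaller per the talk

cache_mb = 32  # made-up cache budget available for texture data

print(cache_mb / lossless_mb)  # fraction resident, lossless: 0.5
print(cache_mb / lossy_mb)     # fraction resident, lossy:    1.0
```

Halving the footprint doubles how much of the working set stays cache-resident, so even a pure "save RAM" feature should cut trips to memory and therefore bandwidth, assuming the data really stays compressed that far up the hierarchy.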
 
  • Like
Reactions: Macintosh IIcx

leman

macrumors Core
Oct 14, 2008
19,522
19,679
The diagram in the developer talk has the shader cores talking directly to the compression unit, so hopefully that means the textures stay compressed all the way until they're sampled.

They also state that lossy textures are 50% smaller than lossless so I think there should be knock-on effects for saving bandwidth / increasing cache-hits even if the main intent is saving RAM.

Would be interesting to benchmark the effects…
 
  • Like
Reactions: jmho

Joelist

macrumors 6502
Jan 28, 2014
463
373
Illinois
I'm sure the A15's CPU, GPU, and ML cores will make their way to the M series. Seeing as the M1 already drives a 4.5K display with superb color characteristics without breaking a sweat, I am sure the M2 will be even better than that. It will probably also have scaled-up display controllers to drive multiple displays better.
 