Oh, that's strange they haven't built in decoding for these demanding codecs in these powerful GPUs. Thanks for the info.
Not all that strange at all.
Costs (along multiple dimensions):
1. The static, fixed-function decoders take up transistors (space on the die). For a given product's die allocation, more codec support means less room for the cores/cache that contribute to the "powerful" aspect the card is being sold on.
2. It is also competing with other codecs. AV1 got added to RDNA2 cards. The number of potential consumers of a codec is also a "cost" factor: more folks who need it pragmatically makes it cheaper to add (since more folks will buy the card to get it).
3. H.265 doesn't come free. If 4:2:2 10-bit entangles more patents than 4:2:0 10-bit, then it costs more to deploy.
Pile up all three factors and it isn't particularly surprising that it can lose out to AV1 (which is "free" and has more users).
Nvidia isn't bending over backwards to do 4:2:2 10-bit either. [So that is kind of a cherry-on-top factor. Apple does, but Apple also doesn't compete in the "very powerful" GPU space either.]
If Apple is on track to always put theirs (4:2:2 10-bit) embedded into every Mac sold ... it's probably not going to be a major driver for the Linux/Windows market.
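[Tangent: if you want to see what your own box actually does with one of these files, a rough check is just to ask ffmpeg to decode it with and without a hardware path. This is only a sketch: the file name and the hwaccel list are placeholders (videotoolbox on macOS, cuda/vaapi on the PC side), and ffmpeg can quietly fall back to software decode for some hwaccels, so don't treat a success as proof the GPU did the work.]

```python
# Rough sketch: try to decode a 4:2:2 10-bit HEVC clip through each hardware path,
# then fall back to plain CPU decode. "clip_422_10bit.mov" is a placeholder name.
import subprocess

SRC = "clip_422_10bit.mov"

def decodes_ok(hwaccel=None):
    """True if ffmpeg gets through the whole clip with the given hwaccel (CPU if None)."""
    cmd = ["ffmpeg", "-v", "error"]
    if hwaccel:
        cmd += ["-hwaccel", hwaccel]
    cmd += ["-i", SRC, "-f", "null", "-"]  # decode only, discard the frames
    return subprocess.run(cmd, capture_output=True).returncode == 0

for accel in ("videotoolbox", "cuda", "vaapi", None):
    if decodes_ok(accel):
        print("decode path that worked:", accel or "software (CPU)")
        break
```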
The AMD/Nvidia next-generation GPUs that get to a new process node (e.g., 6nm-5nm) would have a bigger chance of a "surprise" ("strange"): more die space (because everything else shrank), looking for new features to add (because AV1 is already present), and looking to keep the average selling price at least as high (more expensive components).
The fact that Apple is not looking to do a top-end, "most powerful" GPU means they are much more willing to allocate transistor budget to some niche that adds value to their mobile devices (e.g., the iPad product line). They spend more on smaller dies on the bleeding-edge process (e.g., 5nm for M1) and just pass the increased component costs on to customers, with a profit pad on top, every chance they get.