This is a perfectly normal tech-enthusiast response. My question though would be - to what purpose?

Markets change, and it's not like a competent RT implementation requires a massive transistor budget. If Apple were to deliver usable real-time RT acceleration, it would make them the first company offering this feature on a reasonable power budget (unlike Nvidia, where you need a 200W desktop GPU for RT in games to make any sense). That would truly bring raytracing to the masses (at least to the masses of Apple users) and put Apple in a unique position.
Even as someone who is interested in technology, who does play games, and who is theoretically capable of looking for specific lighting model peculiarities, I still don't see a meaningful upside.
But now you're in a different domain entirely - and in a hypothetical future. It's not that I can't see that there could be benefits - it's that I don't see such hypothetical benefits in 3D lighting modeling taking precedence over any other use of the available resources, or even over simply saving gates, engineering effort and money.

Apple already has hardware compression for both data and bandwidth, and it's not like upscaling requires any dedicated hardware. As I wrote before, a good hardware RT implementation has to start with a comprehensive work/memory-access reordering solution, which will have benefits well beyond RT alone. Getting good RT performance is not just about computing ray-triangle intersections quickly; it is first and foremost about solving the problem of control-flow and data divergence. If that problem can be solved, the GPU suddenly becomes a much more capable programmable processor.
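To make the divergence point concrete: one common software analogue of such reordering is binning rays by direction octant (or by hit material) before tracing them, so that neighbouring SIMD lanes follow similar control flow and touch nearby BVH nodes. The sketch below is a minimal, hypothetical CPU-side illustration of that idea only - it is not a description of any actual Apple or Nvidia hardware, and the `Ray` struct and `binByOctant` helper are invented for the example.

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical ray type used only for this illustration.
struct Ray {
    float ox, oy, oz;   // origin
    float dx, dy, dz;   // direction
};

// Classify a ray by the sign pattern (octant) of its direction.
// Rays in the same octant tend to traverse similar BVH subtrees,
// so grouping them restores control-flow and memory coherence.
static int octant(const Ray& r) {
    return (r.dx < 0.0f ? 1 : 0) | (r.dy < 0.0f ? 2 : 0) | (r.dz < 0.0f ? 4 : 0);
}

// Reorder an incoherent ray stream into eight coherent batches.
// A real GPU implementation would do this with hardware queues or a
// compaction/sorting pass; this is just the idea in scalar code.
static std::array<std::vector<Ray>, 8> binByOctant(const std::vector<Ray>& rays) {
    std::array<std::vector<Ray>, 8> bins;
    for (const Ray& r : rays) bins[octant(r)].push_back(r);
    return bins;
}

int main() {
    // A small incoherent batch, e.g. secondary rays after a diffuse bounce.
    std::vector<Ray> rays = {
        {0, 0, 0,  1,  1,  1}, {0, 0, 0, -1,  1,  1},
        {0, 0, 0,  1, -1,  1}, {0, 0, 0,  1,  1, -1},
        {0, 0, 0, -1, -1, -1}, {0, 0, 0,  1,  1,  1},
    };

    auto bins = binByOctant(rays);
    for (int i = 0; i < 8; ++i)
        std::printf("octant %d: %zu rays\n", i, bins[i].size());
    // Each bin can now be traced together, so adjacent lanes take the
    // same branches instead of diverging on every BVH traversal step.
    return 0;
}
```

The point of the sketch is that the win comes from the reordering itself, not from any ray-triangle intersection unit - which is why the same machinery would also help other divergent GPU workloads.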
Even as a tech nerd, the criteria I'd like to see optimized for are efficiency and real user benefit. And RT delivers neither.