M1 = no Media Engine
M1 Pro/Max/Ultra = Media Engine(s)
M2 = no hardware ray-tracing
M2 Pro/Max/Ultra/Extreme = hardware ray-tracing
The M2 filled in the Media Engine (ProRes).
Apple rolled out new software support for ray tracing at WWDC 2022. That doesn't mean hardware support is coming soon. It doesn't rule it out, but it doesn't enable it either. Apple has a keen interest in rolling out more efficient, effective ray tracing to the current platforms at least as much as in limiting it solely to some future ones.
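For context, that software support is the Metal ray-tracing API (acceleration structures plus an intersector type in shaders), which runs on current Apple GPUs with no dedicated RT hardware at all. A minimal host-side sketch in Swift, with `vertexBuffer` and `triangleCount` as placeholders for real scene data:

```swift
import Metal

// Minimal sketch of Metal's ray-tracing API on the host side: building a
// primitive acceleration structure. This runs on current Apple GPUs with no
// dedicated RT hardware. `vertexBuffer` (packed float3 positions, three per
// triangle) and `triangleCount` are placeholders for real scene data.
func buildAccelerationStructure(device: MTLDevice,
                                queue: MTLCommandQueue,
                                vertexBuffer: MTLBuffer,
                                triangleCount: Int) -> MTLAccelerationStructure? {
    // Describe one triangle geometry.
    let geometry = MTLAccelerationStructureTriangleGeometryDescriptor()
    geometry.vertexBuffer = vertexBuffer
    geometry.vertexStride = MemoryLayout<Float>.stride * 3   // packed float3
    geometry.triangleCount = triangleCount

    let descriptor = MTLPrimitiveAccelerationStructureDescriptor()
    descriptor.geometryDescriptors = [geometry]

    // Ask the device how much memory the build needs.
    let sizes = device.accelerationStructureSizes(descriptor: descriptor)
    guard let accel = device.makeAccelerationStructure(size: sizes.accelerationStructureSize),
          let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize,
                                          options: .storageModePrivate),
          let commands = queue.makeCommandBuffer(),
          let encoder = commands.makeAccelerationStructureCommandEncoder() else {
        return nil
    }

    // The GPU builds the BVH; shaders then intersect rays against it.
    // A hardware RT unit could speed those intersections up later without
    // this code changing.
    encoder.build(accelerationStructure: accel,
                  descriptor: descriptor,
                  scratchBuffer: scratch,
                  scratchBufferOffset: 0)
    encoder.endEncoding()
    commands.commit()
    commands.waitUntilCompleted()
    return accel
}
```

The point is that all of this already works today on the GPU compute pipeline; hardware support would make the same calls faster, not enable something new.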
And the media engine stuff rolled out incrementally over several years also. Afterburner came first. Apple rolled that out to a decently large end-user audience, worked out the bugs, and only then put it into fixed-function silicon.

Apple has spent substantial effort over the last 2-3 years helping 3D ray-tracing programs get their stacks adapted to Apple's Metal foundation. IMHO, it is a bit early for that to show up as fixed-function silicon. Apple really didn't get the kinks out of most of that until the last year or so on the software side. So if the API is still shifting and evolving, how are they going to do fixed-function hardware in the most optimal way? Probably not well. So waiting, like they did in the Afterburner case, until they have enough deployed API feedback would be a lower-risk path to rolling out hardware ray tracing (rather than being in a feature check-box race with Nvidia).
And the Apple hardware that is likely most pressed for hardware ray tracing is the tetherless VR/AR goggles, not the Mac Pro.
Or maybe Apple has the M2 Pro & M2 Max laptop SoCs on 5nm, without hardware ray-tracing; and the M2 Ultra & M2 Extreme SoCs are on 3nm, with some of the extra transistor capacity earmarked for said hardware ray-tracing...?
Again, the VR/AR goggles are in deeper need of TSMC N3 to pack hardware ray tracing into a highly constrained die area than the Mac Pro SoC is. Rumors early in 2022 were that the goggles were running way too hot (which very likely means the battery life was also 'way too short' to be competitive). Taking load off the GPU cores there with a more power-efficient ray-trace pipeline would buy a bigger 'bang for the buck'.
[ And a rather super-early, 'at risk' production N3 chip in early 2022 having heat problems wouldn't be that surprising. Rumors of Apple using an M1 Pro/Max in the goggles did not make much sense other than as a 'test mule' SoC just to power a prototype physical setup. And blowing the thermal budget in that context wouldn't be surprising either. ]
Apple's original plans were for the Mac Pro to be out in 2022. Loading it down with stuff that wasn't finished in 2020-2021 would not help that happen on time. The AR/VR goggles had no explicitly stated timeline. So if they slide, then they just slide. There is no user base there yet.
The plain M2 suffers from die bloat. It is incrementally faster, but it is also bigger. Smaller M2 Ultra / M2 Extreme die components will be easier to package. You do not want the biggest chiplets; generally you are going to want smaller ones.
So there's a pretty good chance Apple is not going to take the same M1 Max die size and stuff the maximum number of transistors into the same area. They won't have the same area budget. So the M2 architecture in a smaller space is going to 'spend' a relatively smaller transistor budget (relative to what it could have been if they had gone for the more expensive, larger die).
The same percentage of die bloat (M1 -> M2) applied to a 400+ mm^2 Max die would be bad. It would probably blow past the TSMC packaging tech used for the M1 Ultra, and the die cost would go up. They'll probably need a different packaging tech for the 'more than two' die setup regardless; it isn't going to help to use bigger-than-necessary chiplets.
A full-size E-core complex (4 cores), 2 more GPU cores per cluster, some cache increases (which N3 does not shrink as well), DisplayPort 2.1, and some NPU/media tweaks/improvements have a pretty good chance of using up most of the increased transistor budget in a smaller die size.
All of those will be more useful to a broader user base than just hardware ray tracing. More CPU and GPU cores, and keeping them fed with data (cache bump), is just better overall, general throughput. Broader display support matters for 2023+ higher-end displays (not everyone who needs more screen resolution needs those pixels ray-trace generated).
Hardware ray tracing can be very narrow in impact with a reasonably small increment in transistor budget. Apple could squeeze a limited (or single) function into that budget. I wouldn't expect some humongous impact there though (substantive improvements, but not 'revolutionary'). And it will likely be like the AVFoundation library's relation to the media engine: if you use the Apple RT API exactly how they laid it out, then you will get the uplift. Roll your own RT stack and you get nothing. What is being 'swapped out' is a call to something Apple wrote, not a general 'tool' for 3rd-party implementations.
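To make that AVFoundation-style 'swap' concrete, here is a rough sketch of the host side, with `pipeline`, `accel`, and `rayCount` assumed to come from earlier setup. The app only binds Apple's acceleration structure; the traversal itself runs through Apple's `intersector` type inside the Metal shader, and that intersector call is the piece a future fixed-function unit could take over transparently. A hand-rolled BVH stored in plain buffers would see none of that uplift.

```swift
import Metal

// Rough sketch: host code hands Apple's acceleration structure to a compute
// kernel. The kernel itself intersects rays via Apple's intersector type
// (metal_raytracing), which is the call that hardware could later back.
// `pipeline`, `accel`, and `rayCount` are assumed placeholders from setup.
func dispatchRays(queue: MTLCommandQueue,
                  pipeline: MTLComputePipelineState,
                  accel: MTLAccelerationStructure,
                  rayCount: Int) {
    guard let commands = queue.makeCommandBuffer(),
          let encoder = commands.makeComputeCommandEncoder() else { return }

    encoder.setComputePipelineState(pipeline)
    // Bind the Apple-built acceleration structure for the shader to traverse.
    encoder.setAccelerationStructure(accel, bufferIndex: 0)
    // One thread per ray; the kernel calls intersector.intersect(ray, accel).
    encoder.dispatchThreads(MTLSize(width: rayCount, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
    encoder.endEncoding()
    commands.commit()
}
```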
Ray tracing is likely not a 'big budget increase' driver, especially on a first iteration/generation implementation.
It is probably more important for Apple to have portable Metal RT code (the same code working on AR/VR goggles, iPhone, iPad, Mac) than to have some custom RT code that only works on >$5K systems with the RT hardware in it.
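Which is roughly what that portability looks like in practice: the same Metal code path gates on a run-time capability check and falls back where the feature is missing. A minimal sketch (the path names are purely illustrative):

```swift
import Metal

// Minimal sketch of the portability point: check ray-tracing support at run
// time so the same app logic can ship on a Mac, iPad, iPhone, or headset and
// simply fall back where acceleration structures aren't available.
func chooseRenderPath(device: MTLDevice) -> String {
    if device.supportsRaytracing {
        // Acceleration-structure / intersector path.
        return "metal-raytracing"
    } else {
        // Rasterization or a compute-based fallback on older GPUs.
        return "raster-fallback"
    }
}

let device = MTLCreateSystemDefaultDevice()!
print(chooseRenderPath(device: device))
```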