
diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
I have to wonder. How much horsepower do computers actually need? Where are we headed that an RTX 3090 isn't enough graphics power, or that the M1 Max is underpowered? Is the plan to completely fry the real world to create a virtual one?

What happened to computers de-materializing and reducing the environmental footprint of our work? Computers have made the problem way worse.
For rasterization, more is always better. Real-time ray tracing could use more performance uplift.
 
  • Like
Reactions: Andropov

JouniS

macrumors 6502a
Nov 22, 2020
638
399
I have to wonder. How much horsepower do computers actually need? Where are we headed that an RTX 3090 isn't enough graphics power, or that the M1 Max is underpowered? Is the plan to completely fry the real world to create a virtual one?
Virtual reality, for example. Assume that a single RTX 3090 is sufficient for "good enough" graphics at 4k and 60 fps. For virtual reality, you may want 8k resolution (because the field of view is larger), 120 fps (because people are more sensitive to the frame rate in VR than on screen), and separate images for each eye. Naively, you need 16x more GPU power just to have "good enough" graphics in VR.
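A rough back-of-the-envelope sketch of where that 16x comes from; the resolutions, frame rates, and two-views assumption are just the ones from the example above, and real VR renderers reuse a lot of work between eyes:

# Naive VR-vs-desktop GPU scaling estimate (numbers from the example above)
desktop_pixels = 3840 * 2160        # "4k"
vr_pixels_per_eye = 7680 * 4320     # "8k" to cover the wider field of view
desktop_fps, vr_fps = 60, 120
eyes = 2

scale = (vr_pixels_per_eye / desktop_pixels) * (vr_fps / desktop_fps) * eyes
print(scale)  # 4 * 2 * 2 = 16x the shading work, ignoring any sharing between eyes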

Then there are various professional applications where the power consumption of the hardware is measured in megawatts.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Virtual reality, for example. Assume that a single RTX 3090 is sufficient for "good enough" graphics at 4k and 60 fps. For virtual reality, you may want 8k resolution (because the field of view is larger), 120 fps (because people are more sensitive to the frame rate in VR than on screen), and separate images for each eye. Naively, you need 16x more GPU power just to have "good enough" graphics in VR.

Then there are various professional applications where the power consumption of the hardware is measured in megawatts.
Tech like XeSS and DLSS is going to be key here. "No need to render at native if upscaling can look just as good", or so they say.
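A rough sketch of the pixel math behind that claim; the 1440p internal resolution is just an illustrative assumption, and the upscaling pass itself adds back some per-frame cost:

# Approximate shading saving from rendering at 1440p and upscaling to 4K
native_4k = 3840 * 2160
internal_1440p = 2560 * 1440
print(internal_1440p / native_4k)  # ~0.44, i.e. roughly 2.25x fewer pixels shaded per frame
# DLSS/XeSS then spend some fixed GPU time reconstructing the 4K image, so the net win is smaller.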
 
  • Like
Reactions: Krevnik

JouniS

macrumors 6502a
Nov 22, 2020
638
399
Doesn't the Oculus Quest 2 already drive 4k/120 and run from battery? What trickery is it using?
The graphics are not as good as in the RTX 3090 / 4k / 60 fps example.

Even that baseline may not be sufficient. Trees look ugly in games, because there is not enough GPU power to render them in real time. Real trees are large and complex, and you often see many of them at the same time. I have no idea how much more GPU power you would need to render good-looking forest scenes without artificial restrictions.
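A very crude sketch of why forests blow the budget; every number here is an assumption for illustration only:

# Crude triangle-throughput estimate for a forest scene (all counts are illustrative assumptions)
trees_visible = 1_000
triangles_per_tree = 100_000   # a detailed tree model, before any LOD or instancing tricks
fps = 60

print(f"{trees_visible * triangles_per_tree * fps:,}")  # 6,000,000,000 triangles/s, before lighting or shadows
# Real engines get this down with LODs, impostors and instancing, which is exactly the kind of
# artificial restriction mentioned above.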
 

JouniS

macrumors 6502a
Nov 22, 2020
638
399
This concerns me. What applications need that much power?
Scientific computation in some situations, and some extreme cases of training deep learning models. Probably some simulations related to nuclear weapons, but the details are classified. Maybe some simulations in more mundane forms of engineering as well. Hypothetically some cryptanalysis tasks in major intelligence agencies.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,665
OBX
Forza Horizon 5 manages to draw some very realistic scenery even on my 1660 Super that I stuffed into a 2014 i5 desktop. Heck, even GTA 5 on my PS4 got ray tracing working. Despite that, I enjoy playing Mario Kart 64 more. Graphics just aren't all that important to me. If I want scenery, I go outside.

I have the unpopular opinion of "Good enough". I can wait a few minutes for a video to render or a couple hours to compress files. It's an automated process. Turn it loose and work on something else.


This concerns me. What applications need that much power?
Tesla's Dojo system will reportedly use 1.8 MW of power for training its AI driving models.
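For a sense of scale, a hedged back-of-the-envelope on what 1.8 MW of sustained draw means; the electricity price is an assumed illustrative figure, not from the report:

# What 1.8 MW of continuous draw adds up to (price per kWh is an assumed illustrative value)
power_mw = 1.8
assumed_price_per_kwh = 0.10  # USD, hypothetical industrial rate

kwh_per_day = power_mw * 1000 * 24
print(kwh_per_day, kwh_per_day * assumed_price_per_kwh)  # 43,200 kWh/day, roughly $4,320/day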
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
How much horsepower do computers actually need?
Some users need all the power they can get; others (I suspect the largest part here) just want to debate what is fast without really needing that much performance, or they need the performance just for gaming (which by itself is not a very good thing IMO... sorry, but I prefer my son to play outdoors with his real friends rather than in front of a computer with virtual ones).
 
  • Angry
Reactions: appleArticulate

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I understand why people expect that a Mac Pro would be a Mac Pro, but I still expect that you are going to be disappointed. They have lost a tremendous amount of their real pro business. They understand that they are not all things to all people. It's not hard to imagine how you can remove expansion slots and still support audio and video input and output.

You are presuming that stripping away expansion options was totally decoupled from them losing a substantive amount of pro business. That probably is not true. There are multiple contributing factors, but the "bet the farm on OpenCL and Thunderbolt" Mac Pro 2013 was a contributing problem. An even bigger problem was sitting on it for 6 years.


Not with today's expansion cards, but with a new kind.

Telling customers to dump their sunk-cost cards and buy a whole new set that may or may not cover any new value-add ground is probably going to cause as many problems as it supposedly solves. The higher the price of the card (e.g., a $3K audio card), the more annoyed they will likely be at that prospect.



I still believe that they were on the path they wanted to be on with the trash can. But they were ahead of the times. It was too early to go in that direction. But it's not too early now. I also said that there would be Thunderbolt 4 ports. I'm making the same mistake that I wonder if others are making. Maybe they have a new Thunderbolt? Thunderbolt 5? Maybe they have some other system that has an enormous amount of bandwidth.

The Mac Pro 2013 had six Thunderbolt sockets. That really didn't help dig it out of the hole, so more TB ports aren't going to solve it. Indeed, Apple even said that they "leaned on" Thunderbolt too much with the MP 2013...

Thunderbolt 4 is more or less USB 4 with much of the optional stuff made mandatory. Thunderbolt 5 is quite likely going to be the exact same USB 4 baseline with just a longer list of "optional" stuff made mandatory (e.g., DisplayPort 2.0 mandatory regardless of what USB 4.x does).

The hoped-for huge baseline bandwidth increase probably isn't coming quickly, because Thunderbolt is now hooked to USB evolution. USB is a bigger, broader committee that moves more slowly. Apple doesn't control either USB (the USB-IF) or Thunderbolt (which is even more solely Intel's).


Apple is primarily pursuing perf/watt, not "enormous bandwidth". "Bandwidth matters more than wattage consumed" is a path they have explicitly said they are moving away from. It is unlikely to come to an M-series powered "Mac Pro".





One should never count Apple out of going in a proprietary direction. But on the other hand, there would be no Thunderbolt today

Thunderbolt was never positioned as a proprietary solution. (Yes, it had a single vendor implementing the controllers for a long while, but that's like saying Apple moved to Intel x86 because it was proprietary. Pragmatically it was more of a standard with a smaller committee than something entirely proprietary.)


if it wasn’t for Apple.

Thunderbolt dragged almost exclusively off of fiber onto copper? Perhaps. But Thunderbolt was available to other system vendors. As long as folks abided by the rules (e.g., no "race to the bottom" products), Intel sold controllers to a wide variety of system vendors. If nobody but Apple had bought it, it would have failed.



So there are ways to build a workstation that works for the kind of people that Apple wants to sell to. And they don’t require you to build a traditional PC tower. I believe that Apple is not looking to take over the high-end workstation business.

Taking over the bulk of the workstation market never was the question. The pressing question has primarily been how they hold onto their Mac Pro 2008-2012 and Mac Pro 2019 user base.
The iMac Pro was primarily a follow-up for the Mac Pro 2013 users who were mainly happy with the non-modularity tradeoffs made with that product.

Apple doesn't particularly need a follow-up to that if it just gives the Mini (M1 Max) and iMac (M1 Max or M1 Max Duo) evolutionary updates. They have "lean on Thunderbolt" options throughout the rest of the product lineup.

Apple also left a gap, with the 100% entry-price increase of the Mac Pro 2019, for folks who were looking for "better than iMac" performance without an attached screen.




So why not take this opportunity to reimagine what a high-end Mac could be like? I know that's not what people want to hear, but Apple at its founding was not about giving people what they wanted to hear. What was that famous line that Steve Jobs said? "People don't know what they want until we show them."

Apple was founded more on the notion that users wanted their own smaller computer (as opposed to a refrigerator-sized or bigger "mainframe" or "mini" that was so expensive you had to amortize it across multiple users sharing it to justify the expense).

Is Apple going to try to lure more Mac Pro users onto smaller (but more capable than the towers of a couple of generations back) systems? Yes. A 16" MBP with an M1 Max doing video editing that is covered by the fixed-function ProRes/H.265 logic in the Max will do the job a Mac Pro would have been pressed into service for 3-6 years ago. Can Apple get more of those folks to migrate down the product line? Sure.

Will they lose folks who were at the top, bleeding-edge end of the Mac Pro user base with that focus? Sure, on that one too. There are a substantive number of folks at that end of the spectrum for whom the Mac Pro was one contributing component of the overall solution. If the system cost of the non-Mac-Pro stuff is much higher than the Mac Pro, then it is increasingly just a part.

Apple's dogma of doing all the system integration themselves and leaving users with smaller (and/or more clumsy) system design choices is a contributing reason why they have lost share over time. More dogma isn't really going to help them produce a better "Mac Pro". Some other product, perhaps, but not really a "Mac Pro".

Getting up on stage and saying "Can't innovate, my ass 2.0" is going to get about as many eyerolls and "we're done with them" reactions as it will get applause.



Edit: here's a system to add PCI cards if you really need them in such a system:

Same "lean too much on TB" and "one and only internal drive is good enough" mindset that was problematic for the Mac Pro 2013. Things haven't changed that much.

Outside of x4 PCIe add-in cards there isn't much traction for this, either performance/bandwidth/latency-wise or in terms of efficient use of rack/desktop space/volume.

This will help with racked M-series Minis and desktop M-series Minis more than it gets at the core Mac Pro problem-space issues.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
I have to wonder. How much horsepower do computers actually need? Where are we headed that an RTX 3090 isn't enough graphics power, or that the M1 Max is underpowered? Is the plan to completely fry the real world to create a virtual one?

What happened to computers de-materializing and reducing the environmental footprint of our work? Computers have made the problem way worse.
Need? I think we're already at, or passed a little while ago, the threshold of NEED for the majority of users. One of the reasons why mobile systems are increasing in market share is not because folks have drastically decreased their performance requirements, it's because the performance envelope for those systems that you can have with you all the time has exceeded their needs.

The top end will continue to incrementally climb upward in performance, and the folks that need that performance will decrease over time (though the curve is likely asymptotic). As a result, what’s going on at the top end will become more and more irrelevant to your average person.
 
  • Like
Reactions: appleArticulate

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Need? I think we're already at, or passed a little while ago, the threshold of NEED for the majority of users. One of the reasons why mobile systems are increasing in market share is not because folks have drastically decreased their performance requirements, it's because the performance envelope for those systems that you can have with you all the time has exceeded their needs.

Plateaued rather than decreased workloads is probably more accurate. Doing a virtual greenscreen videoconferencing backdrop isn't a decrease in workload. But if you're not doing it at 4-8K resolution and in an HDR color space, it probably isn't an increase either.

Home Internet going to 1 Gb/s is merely catching up with decades-old 1Gb Ethernet. Not really decreasing.


I think you are trying to cast "decreasing" as a smaller fraction of the bleeding-edge top end as it changes over time. That is something different than the workload demands of what folks are doing.
Also, moving to stuff like Electron and HTML/JavaScript apps isn't going "backwards" or "decreasing" workload demands either. There are more bloated apps now than 10-15 years ago. [Just not so bloated as to completely overwhelm the modern mainstream CPU-GPU packages.]



The top end will continue to incrementally climb upward in performance, and the folks that need that performance will decrease over time (though the curve is likely asymptotic). As a result, what’s going on at the top end will become more and more irrelevant to your average person.

This also ignores the fact that once some things become more mainstream, modern, larger transistor budgets allow fixed-function and/or highly specialized logic to address those problems. Video encode/decode for mature codecs these days can be done off the general-purpose cores.

Increasing the "top end" means moving out into increasingly esoteric workload zones, if not problems that already suffered from horsepower that wasn't even available (watered-down fidelity models to fit the hardware you had, as opposed to the hardware you wished you could have).

AES on specialized logic can hit wire-speed throughput on modern CPU packages. If you tried to do that with general-purpose cores, you would need a "high end" system just to cover the overhead. Compression has the same issues. The workload hasn't gone down... it has just been moved off the general-purpose cores.
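A hedged back-of-the-envelope on that wire-speed point; the per-core throughput figures below are rough assumptions for illustration, not measurements:

# Does AES keep up with a 10 Gb/s link? (throughput figures are illustrative assumptions)
link_gb_per_s = 10 / 8                  # 10 Gb/s of traffic = 1.25 GB/s to encrypt
assumed_hw_aes_gb_per_s = 2.0           # ballpark for dedicated/accelerated AES, per core
assumed_sw_aes_gb_per_s = 0.2           # ballpark for pure table-based software AES, per core

print(link_gb_per_s / assumed_hw_aes_gb_per_s)  # ~0.6 of a core: specialized logic keeps up easily
print(link_gb_per_s / assumed_sw_aes_gb_per_s)  # ~6+ cores: a "high end" box just to cover the overhead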
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Telling customers to dump their sunk-cost cards and buy a whole new set that may or may not cover any new value-add ground is probably going to cause as many problems as it supposedly solves. The higher the price of the card (e.g., a $3K audio card), the more annoyed they will likely be at that prospect.
But there's no dumping of their sunk cost if they just keep using that system, which is what they'd do. And if a future system doesn't suit their needs, they'd keep using the system they have until it's impossible to do so. By that time, the ROI would have made up for the cost of the $3K audio card and they'd be buying a new $3K solution for whatever new box they need to buy.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Plateaued rather than decreased workloads is probably more accurate. Doing a virtual greenscreen videoconferencing backdrop isn't a decrease in workload. But if you're not doing it at 4-8K resolution and in an HDR color space, it probably isn't an increase either.

Home Internet going to 1 Gb/s is merely catching up with decades-old 1Gb Ethernet. Not really decreasing.

I think you are trying to cast "decreasing" as a smaller fraction of the bleeding-edge top end as it changes over time. That is something different than the workload demands of what folks are doing.
Also, moving to stuff like Electron and HTML/JavaScript apps isn't going "backwards" or "decreasing" workload demands either. There are more bloated apps now than 10-15 years ago. [Just not so bloated as to completely overwhelm the modern mainstream CPU-GPU packages.]
I think you misread me. When I used “decreased” it was in

“not because folks have drastically decreased their performance requirements”

which is saying that folks have NOT drastically decreased their performance requirements. And they haven't. They've actually increased. BUT, the performance of low/mid-range systems has increased to meet and exceed those requirements for the vast majority of people.

In short, I'm saying the performance of mobile solutions has increased to the point where it meets or exceeds the vast majority of use cases, not that anything has decreased in any real measurable way. The performance of mobile systems will continue to increase and continue to exceed the use cases for more and more average folks.

This also ignores the fact that once some things become more mainstream, modern, larger transistor budgets allow fixed-function and/or highly specialized logic to address those problems. Video encode/decode for mature codecs these days can be done off the general-purpose cores.
No, it doesn’t ignore it, it takes it into account AND celebrates it. As the high end is moving up, so, too, is the low end. Intel and AMD both HAVE to disable their low end solutions to provide a nice price gradient and not cannibalize their high end systems. Even so, these low end systems perform very well for the stuff average folks need to do.

Again, I didn’t say workload has gone down, I’m saying performance has increased to the point where most folks using mobile systems are not experiencing a noticeable performance penalty anymore.
 

Boil

macrumors 68040
Oct 23, 2018
3,478
3,173
Stargate Command
Latest ASi Mac Pro rumor-mongering...?

"The Apple Silicon transition will end by Q4 of 2022. The Mac Pro will be the last device to be replaced. The Mac Pro’s processor will not be an extension of the M2. The processor of the Mac Pro will instead be a further extension of the M1 beyond the cores of the M1 Max."

Let's break it down...
  • "The Apple Silicon transition will end by Q4 of 2022. The Mac Pro will be the last device to be replaced."
Nothing really new or groundbreaking here...
  • "The Mac Pro’s processor will not be an extension of the M2."
Well duh, the entry-level M2 SoCs will just be showing up in the new MacBook (Air replacement) at that time; Apple would be nowhere near launching a dual or quad M2 Max SoC configuration...
  • "The processor of the Mac Pro will instead be a further extension of the M1 beyond the cores of the M1 Max."
So, a dual or quad M1 Max SoC configuration, like everyone has been saying since forever...?

Stellar reporting there, Dylan; cannot wait to see how you spin the same tired rumors next month, and every month until the ASi transition is complete...!
 
Last edited:

MayaUser

macrumors 68040
Nov 22, 2021
3,178
7,201
No one asked about the reports on what Dylan wrote.

Hopefully the cores in the upcoming Mac Pro silicon will be an extension of the A15 cores, and not of the A14 cores the M1 Pro/Max use.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Same "lean too much on TB" and "one and only internal drive is good enough" mindset that was problematic for the Mac Pro 2013. Things haven't changed that much.

Outside of x4 PCIe add-in cards there isn't much traction for this, either performance/bandwidth/latency-wise or in terms of efficient use of rack/desktop space/volume.

This will help with racked M-series Minis and desktop M-series Minis more than it gets at the core Mac Pro problem-space issues.
You could view the MP 2013 as an FCP editing machine, and for that it worked OK. Well, until the GPU burned out. That was 9 years ago, and I think many users have gotten accustomed to Thunderbolt breakout boxes, so I believe it is another playing field today.

Cutting-edge performance also translates to cutting-edge software, and in science and technology this is typically Windows or Unix based. No amount of internal expansion is going to change that.

The 2013 MP assumed that node shrinks would provide energy-efficient GPUs/CPUs, and we all know what happened with that assumption. The 2013 form factor (no internal expansion, relatively small) with AS, perhaps Jade 4C, plus a reasonable $2500 mini-LED 32-inch screen would be a very popular combination.

The winner of the upper high-end market, the part that needs an insatiable amount of compute, will be the vendor with the best performance/price. Apple could do that, but there is no indication that they will.
 

cmaier

Suspended
Jul 25, 2007
25,405
33,474
California
No one asked about the reports on what Dylan wrote.

Hopefully the cores in the upcoming Mac Pro silicon will be an extension of the A15 cores, and not of the A14 cores the M1 Pro/Max use.

They are definitely not. From the beginning we’ve known about all the Jade variations, which are tiled dies.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Thunderbolt 4 is about 32Gb/s ( not 'Bytes', but 'bits'. So that is 4GB/s ).

Minor nitpick: it's actually about 40 Gb/s. One TB3/TB4 lane is 20 Gb/s, and there are two lanes in the connector.

Sorry, I was being a bit sloppy with context. PCIe v3 throughput delivered via TB4 is about 32 Gb/s. That original error on units was in the context of "don't need internal PCI-e ... just use TB". In that specific context it is 32 Gb/s because x4 PCIe v3 is 32 Gb/s; factor in a small amount of overhead and delivered throughput tops out around there.

But yes, technically TB4 data transmission is 40 Gb/s. However, you can't get throughput faster than the on/off ramps.
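The arithmetic behind that, spelled out; the overhead figure is a rough assumption:

# Why ~32 Gb/s of usable data over a nominal 40 Gb/s TB4 link
tb4_link_gbps = 2 * 20          # two 20 Gb/s lanes in the connector
pcie_v3_lane_gbps = 8           # per lane, nominal
tunnelled_lanes = 4             # TB tunnels roughly x4 PCIe v3 worth of data

on_off_ramp_gbps = pcie_v3_lane_gbps * tunnelled_lanes    # 32 Gb/s
assumed_overhead = 0.03                                   # rough encoding/protocol overhead
print(on_off_ramp_gbps * (1 - assumed_overhead))          # ~31 Gb/s, about 4 GB/s, despite the 40 Gb/s link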




PCIe gen 5 might not happen as soon as M2, but it will happen sooner or later.

After all, even plain M1 already supports gen 4. Not sure what you meant by "extremely limited", either, it's there and is being used. One example: the M1 Mini 10 gig ethernet option uses an Aquantia 10GbE NIC connected to the M1 by x1 PCIe gen 4.

That usage is just about Apple's one trick with it. That is why it is limited.

1. Soldered to a single device.

2. No net overall increase in bandwidth to the SoC. The older Intel Mini could drive a 10GbE controller without major issues too. x2 v3 or x1 v4... the bulk of that is just "cheaper to implement" for Apple. The end user isn't seeing much additional utility at all. Apple doesn't even offer 10GbE on the vast majority of M-series systems; again, the pragmatic impact is that the bandwidth is buried.

The 8th-gen Core 8000 series Apple used in the Mini 2018 has x16 plus another x4 provisioned via DMI. That is an aggregate of 20 v3 lanes. No big "lift" on an M1, even if there were 4 TB ports (x16 v3) and x4 v4 (most of which is attached to v3 or slower devices); rough totals are sketched after this list. Modern AMD/Intel solutions are in the > x16 v4 range. Contemporary-wise it isn't a bandwidth uplift at all.



3. Even if it were provisioned out into a PCIe socket, the mismatch of legacy v3 x2, x4, or x8 devices hooked to one physical lane would actually be backsliding on bandwidth.
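A rough tally of the aggregate lane bandwidth being compared above; treating each TB4 port as roughly x4 of PCIe v3 worth of data, and ignoring encoding overhead, is a simplification:

# Rough aggregate PCIe-equivalent bandwidth comparison (simplified, nominal lane rates)
v3_lane_gbps, v4_lane_gbps = 8, 16

mini_2018 = (16 + 4) * v3_lane_gbps                         # x16 CPU + x4 via DMI = 160 Gb/s
m1_max_hypothetical = 16 * v3_lane_gbps + 4 * v4_lane_gbps  # 4 TB ports (~x16 v3) + x4 v4 = 192 Gb/s
modern_workstation = 16 * v4_lane_gbps                      # the ">x16 v4" floor = 256 Gb/s

print(mini_2018, m1_max_hypothetical, modern_workstation)   # 160, 192, 256: same ballpark for the Macs, well short of current workstations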



With M1/M2/M3 etc in particular, it's important to adopt high speed I/O standards quickly.

If Apple isn't provisioning the ability to hook x4 and x8 v4/v5 devices to it, where is the grand "importance"?

Everything on these chips that's only useful in a Mac is dead weight on iPad Pro. Moving to the latest and greatest PCIe spec helps Apple minimize overhead for iPads by reducing the number of SERDES required to support Mac-level I/O.

Which is a concern if that is the driving force behind provisioning for a Mac Pro. Apple "stressing" about putting a Wi-Fi 7 chip with 11 Gbps of throughput in a future iPad is what drives PCIe v4 adoption, rather than a four-M.2-socket PCIe v4 x4 SSD add-in card that needs x16 PCIe v4 just so it doesn't bottleneck the hosted SSDs. The primary focus is on limiting the amount of SERDES implementation work, not on provisioning contemporary workstation-level aggregate bandwidth.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Huh? They can already do that without much change. Make fragment and vertex functions available for Apple GPUs only (so the assumed UMA and TBDR still holds) and keep kernels available for external GPUs so they can be used for compute. Some other things like samplers may have to go too but it can be done.

If it is as easy as "falling off a log", why hasn't Apple done it? There are a substantive number of external TB devices with GPUs in them attached to Intel Macs, and it has been over a year of macOS on M-series.
We are probably about a month, maybe two, away from the macOS 13 feature set getting frozen, and there have been no leaks about anything like this. So it is looking likely to be two years in.

[There's a decent chance that a contributing factor is Apple "herding" some folks out of the OpenCL AMD implementation. If it is deprecated and abandoned, then they don't have to port it. Plus the upside is yet more folks driven deeper into the Apple GPU space.

The side effect of driving bigger profits by pushing folks into bigger Apple GPUs probably doesn't hurt either.]
 