Are you suggesting that the RTX 4090 will be 70-80% faster than 7900XT?
No, you are suggesting that. Almost nothing in what I wrote pins down a specific performance gap between the 4090 and the 7900. AMD doesn't desperately need a "4090 killer" right now; what they need more is market share.
In some narrow RT or ML niche that Nvidia (or Nvidia fanboys) would likely cherry-pick, that might be a decent figure. In overall, aggregate mainstream performance it very likely is not. And to put a Mac focus on it: Apple isn't particularly pressed about Nvidia's hardware-specific RT or ML wins. (Otherwise Apple would have folded when Nvidia tried to play "tail wags the dog" games.)
For me it makes no sense at all. Perhaps these 'new' rumours are about the 7700XT rather than the 7900XT?
The 7700XT is in the table on that first link I posted as a reply back in post 33. For Navi32, AMD drops the number of MCDs (from 8 to 6), lowering the cache and memory bandwidth while "reusing" all the same MCD work (fewer chiplets to get it out the door). It may or may not use the same GCD; it could be a smaller one.
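The bandwidth impact of dropping MCDs can be sketched with back-of-envelope arithmetic. Assuming each MCD in the rumored layout pairs a 64-bit GDDR6 controller with a slice of Infinity Cache (the 20 Gbps memory speed is an assumed speed grade for illustration, not a confirmed spec):

```python
# Toy model: bus width and memory bandwidth scale linearly with MCD count,
# since each MCD is assumed to carry one 64-bit GDDR6 memory controller.
# 20 Gbps per pin is an illustrative GDDR6 speed grade, not a real spec.

def mem_bw_gbs(mcds: int, bits_per_mcd: int = 64, gbps: float = 20.0) -> float:
    """Approximate peak memory bandwidth in GB/s."""
    return mcds * bits_per_mcd * gbps / 8   # bits -> bytes

for mcds in (8, 6):
    print(f"{mcds} MCDs -> {mcds * 64}-bit bus, ~{mem_bw_gbs(mcds):.0f} GB/s")
```

So under these assumptions, going from 8 to 6 MCDs cuts both the bus width and the peak bandwidth by 25% while reusing the identical MCD design.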
Putting out the 7700XT sooner doesn't make much sense in a context where a glut of AMD 6xxx and Nvidia 3xxx cards is hitting the market. Again, that may have been an "old" planned roll-out order (e.g., before the crypto craze ended and Nvidia pulled the 4090 release forward). The 7900 may not be a 4090 "killer", but it is far, far more competitive with the 4090 than a 7700XT will be. The 7700XT is supposed to land around a 6900. That was great when 6900s were priced well over $1K. They aren't anymore. And some rumors point to AMD shrinking the PCI-e lane width... so a bigger impedance mismatch with x16 PCI-e v3 systems.
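On the lane-width point, the gap is easy to quantify. This sketch uses the standard per-lane transfer rates (8 GT/s for PCIe 3.0, 16 GT/s for PCIe 4.0, both with 128b/130b encoding); an x8 card dropped into a PCIe v3 x16 slot only ever gets the x8 gen3 figure:

```python
# Back-of-envelope PCIe link bandwidth, illustrating why a narrower
# (e.g. x8) card hurts most on an older PCIe v3 host: the host can't
# make up the missing lanes with a higher gen speed.

def pcie_bandwidth_gbps(gen: int, lanes: int) -> float:
    """Approximate usable one-way bandwidth in GB/s."""
    gt_per_s = {3: 8.0, 4: 16.0}[gen]           # transfer rate per lane
    payload_fraction = 128 / 130                # 128b/130b line encoding
    return gt_per_s * payload_fraction / 8 * lanes  # bits -> bytes

for gen in (3, 4):
    for lanes in (8, 16):
        print(f"PCIe {gen}.0 x{lanes}: ~{pcie_bandwidth_gbps(gen, lanes):.1f} GB/s")
```

An x8 gen4 card on a gen4 board matches x16 gen3 (~15.8 GB/s either way), but that same card on a gen3 board falls to ~7.9 GB/s, which is the mismatch being described.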
Some other rumors (from that post) tag the 7700XT to the Navi33 monolithic implementation. (Presumably the 7800 would be the major consumer of the Navi32 package.) I'm not so sure a desktop dGPU makes much sense there either (unless AMD is chopping back the margins). The monolithic RDNA3 implementations are even less credible as a "two die" product.
The x8 PCI-e lane Navi33 is better aligned with the new 7000 Ryzens. Again, the 7000 Ryzens aren't getting huge uptake right now, before the more affordable motherboards ship in high volume. Navi33 in Q1-Q2 2023 would be far better aligned with the more mainstream AMD motherboards being available. Making it first out of the gate doesn't make much sense without a broader base to sell into. If the crypto craze and widespread high spending were still going on (instead of high inflation), that might make sense: AMD could throw out anything in volume and folks would buy it. That isn't the market right now.
A 7950XT could easily be done with just 3D L3 cache stacking, so it would have 384MB of cache instead of the 7900's 192MB. Tweak some clocks up for a higher power profile and they have a different card using the same foundational parts. It is more of a gimmick product for a relatively small pool of buyers. AMD doesn't really need that right now. They probably have a lot of dies ordered up that they now need to sell (or start eating losses).
AMD could shift some Navi33 into the high-end laptop market if they do have a big perf/watt advantage. Nvidia's solution is way down the schedule, so AMD has a long window. Intel has goofed there, so they aren't a problem either.
"Used" and inventory-bloat dGPU add-in cards are not in the way in new laptops.
P.S. Perhaps your "two die" rumors were based on smashing two Navi32 dies together.
7950 here is "2 GCD + ? MCDs "
www.gpumag.com
That is pretty doubtful. Picture from the same article:
As the Navi GCD die gets smaller, the available edge space also gets smaller. A "seamless", presents-as-a-single-GPU die-to-die interconnect is going to take more of that space, not less. The die still has to feed out the PCI-e x16 link and multiple DisplayPort outputs.
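The "shrinking beachfront" argument can be made concrete with a toy perimeter model; the die areas and the fixed I/O edge budget below are illustrative placeholders, not real Navi figures:

```python
import math

# Toy model: if a roughly square die shrinks in area, its perimeter
# (the "beachfront" available for PHYs and die-to-die links) only
# shrinks with the square root of the area. Fixed-size interfaces
# (PCIe x16, DisplayPort outputs, an inter-GCD link) therefore eat a
# growing fraction of the edge as the GCD gets smaller.

def edge_mm(area_mm2: float) -> float:
    """Perimeter of an idealized square die, in mm."""
    return 4 * math.sqrt(area_mm2)

fixed_io_mm = 30.0                  # hypothetical edge budget for PCIe/DP PHYs

for area in (300.0, 200.0):         # hypothetical GCD areas in mm^2
    e = edge_mm(area)
    print(f"{area:.0f} mm^2 die: {e:.1f} mm edge, "
          f"fixed I/O uses {100 * fixed_io_mm / e:.0f}% of it")
```

Under these made-up numbers, a one-third area shrink raises the fixed I/O's share of the edge from roughly 43% to 53%, leaving less room, not more, for any extra "seamless" inter-die connector.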
I think folks are trying to apply the desktop Ryzen approach to the graphics one, as if AMD were just going to throw multiple GCDs in there like CPU CCDs, and it is substantively different. While still generally a "hub and spoke" chiplet set-up, AMD has largely reversed the I/O roles here. For Ryzen, the I/O is in the hub and the cores are in the spokes. For GPUs, the memory I/O (not all the I/O, but the very high bandwidth I/O) is in the spokes and the cores are in the hub.
It isn't going to "scale with chiplets" the same way. The GCDs change here, just as the I/O hub changes between the desktop and server packages while the spokes stay common. GPU cores and CPU cores have different latency, caching, and bandwidth constraints, so pushing GPU cores off into another die is a somewhat different problem from pushing CPU cores off into another die. There have been multiple-die, unified-memory CPU systems for decades. That has not been true of GPUs.
A far more affordable path for AMD is to go with the MCD chiplets without 3D cache for the 7900, just stack more RAM on the 7900, and play with clocks. For those with their underwear in a twist over hardware RT performance: if the 3D-stacked cache manages to hold a substantially higher percentage of the BVH data structures that spill out of the L2 caches, then RT performance will go up substantially. Not revolutionarily better, but enough to make a difference and command a high card price point. It doesn't require any massive die space expansion on the GCD at all.

AMD is actually "shrinking" the L3 "Infinity Cache" with the 7000 series (because it is supposed to work more effectively with less capacity now). The 3D cache additions can probably be used to fill in the edge cases where that doesn't work quite as well and the limited RT updates don't cover. Decent chance RT is one of those edge cases. But that will require some finely tuned drivers for specific applications, which they probably won't have on day one (or day 30). Shipping the 7900, then getting the bugs out, and only then tweaking for the "bigger cache" 7950 makes far, far more sense software-wise.
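The "does the BVH spill out of cache" question can be roughed out with a toy working-set estimate; the node size, node-count formula, and triangle counts below are assumptions for the sketch, not AMD specifics:

```python
# Rough, purely illustrative working-set check: does a scene's BVH fit
# in the last-level cache? A binary BVH over N triangles has about
# 2N-1 nodes; 32 bytes is a common compact node size. The 192 MB and
# 384 MB capacities are the rumored 7900/7950 figures discussed above.

def bvh_size_mb(triangles: int, bytes_per_node: int = 32) -> float:
    """Approximate BVH footprint in MiB for a binary BVH."""
    nodes = 2 * triangles - 1       # interior + leaf nodes
    return nodes * bytes_per_node / 2**20

for tris in (1_000_000, 4_000_000, 8_000_000):
    size = bvh_size_mb(tris)
    for cache_mb in (192, 384):
        verdict = "fits" if size <= cache_mb else "spills"
        print(f"{tris:>9} tris -> BVH ~{size:.0f} MB: {verdict} in {cache_mb} MB")
```

Under these assumptions there is a band of scene sizes (around a few million triangles here) where the BVH spills out of 192 MB but sits entirely inside 384 MB, which is exactly the regime where a stacked-cache variant would show an outsized RT gain.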