No, not Arc ('DG2') two years ago.
You might be thinking of DG1.
https://www.theverge.com/2020/1/9/2...crete-gpu-graphics-card-announcement-ces-2020
DG1 was just the same iGPU from Tiger Lake slapped onto a package so Intel would have a public test mule to gather feedback on. However, in 20/20 hindsight it should have been an indicator that drivers were going to be a problem. It only worked with special firmware, and the driver very much wanted the hardware to be treated like an iGPU (tight coupling with the CPU package). At the very least, it signaled that resizable BAR (reBAR) was going to play a very big role.
However, Intel has been running the "we are seriously coming for the dGPU market" hype train for more than two years. They hired Raja Koduri back in 2017. There were projects in flight before Koduri arrived, but Intel has been pushing the visibility up each year since 2018. (It is a nice offset to the increasing problems they were having on the CPU side.)
DG2 was likely supposed to be out in Q4 2021 under pre-pandemic planning. (The pandemic likely shot giant holes in their coordinated, hyper-optimistic software efforts.) Intel has been running behind, so the 'sexy high mid-range' stuff will end up almost a year late.
That's a bit of a double-edged sword. Their driver stack has so many implicit iGPU assumptions piled into it that chasing down all of those assumptions is a liability as well as an asset (with iGPUs remaining the vast bulk of their 'GPU' business).
Intel probably would have been better served to just focus on mobile GPUs first. Maybe add some "Pro" GPU cards where they don't have to chase every quirky API corner case across dozens of different games and APIs, and where a bigger premium is put on stability. Leave the very large discrete card to the Xe-HPC (Ponte Vecchio) scope, where there is pragmatically no video out to worry about (it is primarily about GPGPU computational workloads).
By including the midrange gaming market they scooped up a major obligation to cover an extremely wide set of gaming issues. It is a balkanized space: DX11, DX12, and Vulkan.
Intel's iGPU drivers were "good" in part because the scope they tried to cover wasn't overly broad. (They were not particularly good in the sense of being highly optimized while also extremely stable.) Nor were they particularly early adopters of the Vulkan/DX12/Metal model of shifting lots of optimization decisions into the application (or, at best, a shared "game/render" engine).
Instead, what Intel did on the software side was go after CUDA's library strengths (with oneAPI), prioritize effort on DX12 (where their expertise was thinnest), and spend lots of time trying to couple the dGPU to new Intel CPUs/iGPUs in a discrete card context.
The margins on low-end cards are very thin. There's a pretty good chance their push to grab a larger share of the mid-range was meant to raise the overall aggregate margin across the GPU product line. Getting into GPUs was going to be expensive. All the production is contracted out (more expensive than doing it internally), and they are using a bunch of externally developed EDA tools (not that the internal ones were better, but probably 'cheaper' when viewed through penny-pinching internal accounting lenses). There is also lots more software, which likely means lots more bodies and overhead to pay for.
If the "start up" costs for the dGPU business is $700M then targeting 4M GPUs with average margin of $50 allows better amortization than targeting 2M GPUs with average margin of $25 . That would be true if don't count for the giant debacle a huge software blunder could ( is ) costing them. 2M very , very small GPUs wouldn't give them much negotiating leverage with TSMC either as it is a much smaller aggregate wafer order.
The very, very high end of computational data center cards? Yes. The high-end gaming card market? No.
Intel may have used "enthusiast" in a loose way to refer to 3060-3070/5600-5700 performance, but for folks not trying to wear rose-colored glasses, they haven't been shooting above the upper mid-range at all.
The initial talk was that there was a range of Xe products. In 2020, they had this chart:
www.anandtech.com
Before that there was just Xe-LP, Xe-HP, and Xe-HPC. (Go back and look at the Xe product range slide in the DG1 article linked earlier.)
Xe-HP and Xe-HPC were the more data-center-focused cards. HP skewed toward video encode/decode and server-room 'display' hosting. HPC was very much focused on supercomputer compute.
That Xe-HPG thing crept in later, when the Xe-HP card ran into problems. Some folks point to the diagram and say Xe-HPG was supposed to cover both Mid-range and Enthusiast in the first generation. I don't think so. At best, that "enthusiast" label was a long-term aspirational thing. There was little practical way they could do both Xe-HP and a whole set of high-end Xe-HPG parts at the same time. They might have hoped to push the server/workstation card down into the consumer market as a placeholder, but their plans were never about having a high-end Nvidia killer in generation one. When Xe-HP collapsed, that was almost certainly going to leave a short-term gap. They can't have whole GPU chips disappearing and not have gaps open up in the lineup, at least not in the first generation.
Even in generation 2 I doubt they will cover the whole top end. They are falling a bit short of 3070 land this round; just covering "3070-3080 land" would be an expansion. And there is tons of cruft to clean up at the Xe-HPC level... which would also take loads of resources (money, people, and time).
High-end gaming never really was on the roadmap until relatively recently. And even there it is squishy.