
FNH15

macrumors 6502a
Apr 19, 2011
822
867
Intel has a history of being so bad at making integrated GPUs.
I'm not getting my hopes up on them making dedicated GPUs.
Their Iris Pro was quite good - the iGPU in the 2013-15 15” MBPs.

They maddeningly pair their worst iGPUs with their “high end” processors - for example, the UHD 630 with their mobile i7 processors.

This is good for the Windows space, I guess.
 

TiggrToo

macrumors 601
Aug 24, 2017
4,205
8,838
I do but we are talking about now.
Hey, you're the one who created the incorrectly named thread.

This is far far far from Intel's first foray into the world of GPUs. It's their latest, sure, but their first?

Nope. Not. Even. Close.

Try first dedicated GPU and you may get closer.
 
Last edited:
  • Like
  • Haha
Reactions: JMacHack and sunny5

NotTooLate

macrumors 6502
Jun 9, 2020
444
891
I can't seem to find a single reason why anyone would berate Intel or be against them; I root for them to be competitive. Were they doing shady things in the past? Are they the only company that ever did shady things? .... welp .... getting away from the GPU duopoly is great for us consumers and would put pressure on Nvidia and AMD to release their lower-end cards sooner rather than later (which is what they are doing nowadays).
Wishing the blue team all the best to bounce back and provide great products!
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Prepare for the AMD CPU / Intel GPU meme PC builds.

For the mobile GPUs that probably won't happen. One of the value-adds the mobile GPUs have is 'Deep Link'.


That only works with an Intel CPU + Intel GPU combo. If Deep Link and a few of the other value-add features go out the window, then the value of using the Intel GPU drops.


For Intel's desktop discrete GPUs (dGPUs)..... there is probably less of a fall-off if you use an AMD CPU with no iGPU. Bigger power budgets, so less 'sharing' and likely no other GPU to split the workload with. But is Intel going to super-duper optimize the dGPU drivers for AMD CPU cores? Probably not. They will probably work while leaving a small amount of performance "on the table".

So yeah, there will be some "just because I can" builds for looky-loo video hits. For mass-production products? Probably not, if there are no large, sustained parts shortages. Decent chance this will promote more AMD/AMD and Intel/Intel pairings, and Nvidia takes the bigger hit.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
They maddeningly pair their worst iGPUs with their “high end” processors - for example, the UHD 630 with their mobile i7 processors.

It is a fixed transistor budget trade off.

Gen 10 (6 CPU cores, UHD 630 with 24 EUs)
Gen 11 (8 CPU cores, 32 EUs)
Gen 12 (6P+8E CPU cores, 96 EUs)

As the transistor budget went up, the iGPU got gains, but where the CPU core count is higher than the entry/middle offerings, the skew is to allocate die space to CPU cores rather than to the GPU.

If you look at where Intel deployed the "Iris Pro" (e.g. Iris Plus 650 with eDRAM cache), the CPU didn't really cross 4 cores, to keep the package (CPU-GPU + eDRAM + PCH) size more tractable.

The other primary issue was that i7s were very often deployed with a dGPU. The UHD 630 is a much better "low power sipper" for contexts where you don't need full power. End users seeking higher CPU horsepower are often seeking higher GPU horsepower as well. If they are mainly going to ignore the iGPU anyway, why put a larger, more expensive iGPU in there?


Those "Iris Pro" CPU packages that were more heavily skewed toward the GPU didn't get huge uptake from any system vendor other than Apple. Other vendors bought "some" but bought more of the non-Iris stuff. Several factors, including a much bigger hardcore Nvidia fanbase in Windows PC land.
Over the last several years, Apple has banished Nvidia GPUs.

This is good for the Windows space, I guess.

That is what largely pays the R&D bills for Intel CPU package products in the Core iX range. So yeah, it is skewed that way.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Are they trying to do the Smash Brothers Intro meme?

“A New Player has entered the game!”
 
Last edited:

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
Those "Iris Pro" CPU packages that were more heavily skewed toward the GPU didn't get huge uptake from any system vendor other than Apple. Other vendors bought "some" but bought more of the non-Iris stuff. Several factors, including a much bigger hardcore Nvidia fanbase in Windows PC land.
At one point (do they still?) Intel was requiring their CPU to be mated with an Intel GPU. Such that, even if the system had a discrete GPU, they still had to make room for Intel’s GPU. Will these new chips be the new “bottom tier” that vendors choose just because they HAVE to pick something?
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
At one point (do they still?) Intel was requiring their CPU to be mated with an Intel GPU.

That was DG1. DG1 was never meant to be sold in the retail market.


"... The Iris Xe discrete add-in card will be paired with 9th gen (Coffee Lake-S) and 10th gen (Comet Lake-S) Intel® Core™ desktop processors and Intel(R) B460, H410, B365, and H310C chipset-based motherboards and sold as part of pre-built systems. These motherboards require a special BIOS that supports Intel Iris Xe, so the cards won’t be compatible with other systems. ... "


Not only did it only work with an Intel CPU, you also needed one of a few PCH chipsets and specialized boot firmware for it to work.

Primarily, this was because it was (11th gen CPU) Tiger Lake's Gen 12 Xe-LP GPU complex that was 'yanked' out of a CPU SoC and had 'dGPU adapter' logic wrapped around it. It kind of gave Intel a "version 0.2" to work out driver-stack issues in the field with some end users (who had little choice).

That GPU core was probably never intended at the very start of design to be a dGPU. Intel just did a "Frankenstein" project as a 'warm up' to doing it right.

What is being released now is code-named DG2 (Xe-HPG).

When Intel first started talking about the "Xe" family, this DG2 wasn't there.

[Image: intel_dg1_slide_1.jpg]


https://www.theverge.com/2020/1/9/2...crete-gpu-graphics-card-announcement-ces-2020


Xe-HP (Arctic Sound) essentially died (media transcode was not as big a workstation/server demand as they thought, among other issues) and Xe-HPG (high performance gaming/graphics) took its slot with a different primary target (the mainstream dGPU market, with some workstation thrown in).
Intel probably could have gotten to this "Arc" Xe-HPG/DG2 release state faster if they were not tracking three different GPU approaches at the same time. [Xe-HPC is the super high-end data center compute card, Ponte Vecchio.]


Such that, even if the system had a discrete GPU, they still had to make room for Intel’s GPU.

It was more that the firmware and chipset required the DG1 to be present, just as if it were an iGPU. Pragmatically it was an iGPU ripped out, so it still needed the iGPU firmware.

These Arc family GPUs have been designed from the beginning to be discrete GPUs (although I suspect the next generation will allow some variants to be optionally placed on an iGPU tile in a "CPU" package for Gen 14, Meteor Lake).


Will these new chips be the new “bottom tier” that vendors choose just because they HAVE to pick something?

Technically, no. Pragmatically, it won't be surprising if Intel has some sales deals where, if you buy a certain i5/i3 chip + some specific PCH controller + an A350M, the A350M is thrown in 'at cost' (or lower). Vendors don't "have to" pick it, but any other option has higher bill-of-material (BOM) costs. [For laptops it is part of the fully functional, standard reference design package the vendors get. Maybe even some AIO system board designs with the details worked out for a soldered-on A350/370.]

Similar when the discrete A350/370 show up. Intel may price them so low (relative to AMD/Nvidia) that folks on tight budgets will pick them because there isn't anything else at that price point ($100-120 cards). Similar thing where Intel hands the card vendors a fully functional reference design and then offers up some "insanely great" BOM pricing if they just buy some fixed set of stuff.

The A500 and A700 series they would sell with some profit attached, but still price the MSRP better than AMD or Nvidia. Some of those will land in some AMD systems. However, most probably won't, because we're still at the point where most Windows systems have Intel processors in them.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,628
That was DG1. DG1 was never meant to be sold in the retail market.
No, I was speaking more about the MacBook Pro. At one point, Apple had worked with Nvidia to produce an Intel CPU+Nvidia GPU system on an Nvidia motherboard. This was the last Intel MacBook I remember having a discrete GPU only. The motherboards for the rest show an Intel CPU, an Intel GPU also located on the MB, taking up space, and whatever AMD discrete board Apple decided to put in. I was under the impression that the Intel GPU was there ONLY because Intel required OEMs to include it… even though it takes up valuable space.

These Arc family GPUs have been designed from the beginning to be discrete GPUs (although I suspect the next generation will allow some variants to be optionally placed on an iGPU tile in a "CPU" package for Gen 14, Meteor Lake).
Ah, so for right now, these are ONLY intended to be discrete. Is that discrete/board only or did they announce a mountable version for laptops?
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
No, I was speaking more about the MacBook Pro. At one point, Apple had worked with Nvidia to produce an Intel CPU+Nvidia GPU system on an Nvidia motherboard. This was the last Intel MacBook I remember having a discrete GPU only. The motherboards for the rest show an Intel CPU, an Intel GPU also located on the MB, taking up space, and whatever AMD discrete board Apple decided to put in. I was under the impression that the Intel GPU was there ONLY because Intel required OEMs to include it… even though it takes up valuable space.

Those are iGPUs, integrated onto the CPU die, not on the motherboard. That’s why they are so very common. And in laptops, with the CPU package being the same size with or without the iGPU enabled, you may as well use the iGPU for light loads to save power.
 
  • Like
Reactions: Unregistered 4U

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
No, I was speaking more about the MacBook Pro. At one point, Apple had worked with Nvidia to produce an Intel CPU+Nvidia GPU system on an Nvidia motherboard. This was the last Intel MacBook I remember having a discrete GPU only. The motherboards for the rest show an Intel CPU, an Intel GPU also located on the MB, taking up space, and whatever AMD discrete board Apple decided to put in. I was under the impression that the Intel GPU was there ONLY because Intel required OEMs to include it… even though it takes up valuable space.

You are talking about long, long ago in a galaxy far, far away. Pre-2010. Back when Intel still allowed 3rd-party Northbridge (memory controller, PCI, and maybe iGPU) and Southbridge (USB, general I/O, sound, etc.) chips to be used with their "CPU only" packages.

Around 2010 Intel started to integrate memory controllers into the CPU die (and package). Pragmatically, iGPUs came on-die along with those too. [It doesn't make much sense at all to decouple a GPU subsystem 'far away' from the required memory controllers.] Ever since Intel switched over to Core iX (i3, i5, i7) there have been GPUs on the same die as the CPU cores (and memory controllers). For over the last decade, the only Intel packages in the personal computer space that didn't have a GPU have been those based on the workstation/server dies (E5, W 2000/3000 series, Xeon SP, etc.).

In the pre-2010 era Nvidia sold a Northbridge with a shared memory controller and iGPU built in (e.g. the GeForce 9400M). Intel and AMD killed all that off over a decade ago.

Once Intel got to the point where most of the mainstream CPU packages they were selling went into laptops, they have had iGPUs in all their mainstream offerings (for both desktop and laptop). Just to confuse things, after a while Intel would label some HEDT desktop packages based on workstation/server dies as Core i7 xxxX or Core i9 xxxx. Those wouldn't have an iGPU, but the UHD 630 that some in this thread 'complained' about is an integrated GPU (iGPU), not a discrete GPU (dGPU). It is integrated onto the same die as the CPU cores and memory controller.

For example, the Core i 'Sandy Bridge' systems from 2010:

[Image: sandy_bridge_overview.svg.png]



That box in the middle of the diagram is a logical description of the die on the "CPU" package. It has more than just CPU cores. The PCH (platform controller hub) handles the more general I/O: USB/SATA/sound/etc. The "IMC" is the integrated memory controller.


Same article, an annotated die shot:
[Image: sandy_bridge_(quad-core)_(annotated).png]


Modern Intel mobile dies have Thunderbolt controllers built into the die also. The folks looking for ultimate CPU core counts and hyper-modular GPUs grumble at the 'wasted' space for the iGPU. Toss the display controller, the GPU, and the GPU I/O, and you could perhaps fit 6 CPU cores in that space on the same die. Intel has resisted that for more than several years.
In the laptop space it doesn't make much sense.

For the last 5-8 years, on Intel mainstream 'CPU' dies there has been as much die space allocated to "uncore" (non-x86-core functionality) as to the CPU cores. The 'CPU package' is really not only a CPU package anymore.




Ah, so for right now, these are ONLY intended to be discrete. Is that discrete/board only or did they announce a mountable version for laptops?

GPU packages are solder-mounted on add-in cards just as much as on main logic boards in laptops. Generally, in modern times the "mobile" version is a desktop package just run at slower clock speeds (and maybe slower memory clocks also) to save power. They aren't different dies, but may have different numbers assigned to them (Nvidia has labeled some x070 desktop parts as x080 mobile versions). There is also overhead if the add-in board enables complex overclocking, but that is complexity outside the base GPU package also.

Intel Arc chips have some features where they can distribute the workload over both the Intel iGPU and the Arc dGPU. For example, run the game on the dGPU but convert the display frames to an H.265 stream on the iGPU. There is some copying overhead, but it frees up some bandwidth for each also.
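Not Deep Link itself (that is Intel's own driver-level feature), but as a rough, hedged sketch of what leaning on the fixed-function media engine looks like in practice: an ffmpeg Quick Sync (QSV) H.265 encode driven from Python. This assumes an ffmpeg build with QSV support and an Intel GPU present; the file names are placeholders.

```python
# Minimal sketch: hand the H.265 (HEVC) encode off to Intel's fixed-function
# media engine via ffmpeg's Quick Sync (QSV) encoder instead of the CPU/shaders.
# Assumption: ffmpeg is on PATH and was built with QSV support.
import subprocess

def encode_hevc_qsv(src: str, dst: str) -> None:
    """Transcode `src` to HEVC on the GPU's media block; audio is passed through."""
    subprocess.run(
        [
            "ffmpeg",
            "-hwaccel", "qsv",    # use Quick Sync for decode where possible
            "-i", src,
            "-c:v", "hevc_qsv",   # hardware HEVC encoder
            "-c:a", "copy",       # leave the audio stream untouched
            dst,
        ],
        check=True,
    )

if __name__ == "__main__":
    # Hypothetical capture file; swap in whatever the game/stream produces.
    encode_hevc_qsv("gameplay_capture.mp4", "gameplay_capture_hevc.mp4")
```

The point isn't the exact flags; it's that the heavy lifting happens on dedicated encode hardware, so the shaders (and CPU) stay free for the game itself.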
 
  • Like
Reactions: Unregistered 4U

leman

macrumors Core
Oct 14, 2008
19,521
19,677
No, I was speaking more about the MacBook Pro. At one point, Apple had worked with Nvidia to produce an Intel CPU+Nvidia GPU system on an Nvidia motherboard. This was the last Intel MacBook I remember having a discrete GPU only. The motherboards for the rest show an Intel CPU, an Intel GPU also located on the MB, taking up space, and whatever AMD discrete board Apple decided to put in.

You mean the Nvidia 9400M? That was an iGPU, not a dGPU. By its very definition an iGPU is one that shares the RAM with the CPU. Integrating GPUs into the main board chipsets was a very common thing, since that's also where the memory controller and I/O were located. Bundling the GPU and CPU on the same die is a fairly recent innovation; if memory serves me right, Intel introduced it with Sandy Bridge in 2011.

This essentially put an end to third-party iGPUs because the memory controller etc. was integrated into the CPU itself, leaving no way to connect a different GPU. But it might change again in the near future, with vendors working on multi-chip tile interconnect standards. Probably in five years you could have an Intel CPU, AMD GPU and Nvidia matrix accelerator all in one package, connected to unified high-bandwidth RAM.


I was under the impression that the Intel GPU was there ONLY because Intel required OEMs to include it… even though it takes up valuable space.

That's just how these chips are made. There is enough consumer demand for integrated graphics, so it makes a lot of sense to dedicate a very small portion of the die to a GPU. Mobile iGPUs are bigger since energy efficiency matters.

P.S. @deconstruct60 beat me to it while I was typing, and as usual, provided more details :)
 
  • Like
Reactions: Unregistered 4U

Mcckoe

macrumors regular
Jan 15, 2013
170
352

Btw, does Apple Silicon support AV1 hardware encoding? They claimed that encoding speed is 50 times faster than before on Premiere Pro and DaVinci Resolve. Curious to know the actual performance compared to the media engine and ProRes encoder/decoder.

At least there will be another GPU manufacturer other than AMD and Nvidia.

I don't think anyone understands the purpose of this Intel GPU. It is an opportunity for Intel to get into the high-end graphics game, a goal they realistically should have started working on two or three years earlier; oh well. Basically: there is no way Intel can match Nvidia and AMD on graphics advancements; however, they can match them on raw horsepower. So they tailor their systems and advertising toward data mining to inflate their sales numbers, and use the funds to work on "borrowing" advancements from Nvidia and AMD, and at least have a component GPU option vs. competitors.

If that sounds familiar, that's because Intel has agreed to contract deals with both Nvidia and AMD in the past for "combined performance projects", only to come out with "re-engineered" graphical chipsets the next generation when the contracts ended; it's kind of Intel's move.

My guess is… these new GPUs from Intel will perform about as well as a scaled-up version of their current integrated graphics chips, which will be 15-25 percent behind comparable chipsets from Nvidia and AMD. But for data mining, they can open the chip and just sell it… using just the raw power, the cards should perform just as well as cards from Nvidia and AMD; possibly even better.

These aren't meant to compete: they are meant to take over an "unwanted by other companies" market share, and maybe start eating into their competitors' gaming/high-end market in a few generations.

::: Basically ::: Keep your expectations low for the 1st generation of these GPUs from Intel.
 
Last edited:

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
I don't think anyone understands the purpose of this Intel GPU. It is an opportunity for Intel to get into the high-end graphics game, a goal they realistically should have started working on two or three years earlier; oh well. Basically: there is no way Intel can match Nvidia and AMD on graphics advancements; however, they can match them on raw horsepower. ....

My guess is… these new GPUs from Intel will perform about as well as a scaled-up version of their current integrated graphics chips, which will be 15-25 percent behind comparable chipsets from Nvidia and AMD. But for data mining, they can open the chip and just sell it… using just the raw power, the cards should perform just as well as cards from Nvidia and AMD; possibly even better.

The Xe-HPG Arc GPUs aren't "scaled up legacy iGPU" cores.

"... Instead, starting with Alchemist (and Ponte Vecchio), Intel is introducing a new fundamental building block: the Xe Core. ... "
The previous generations and Xe-LP (Tiger Lake) never had matrix units.

The "Xe Core" of HPG is more a scaled-back "Xe Core" from the Ponte Vecchio product (8 vector engines at 512-bit width and 8 matrix engines at 4096-bit width per 'core'), dropping back to 16 vector engines at 256-bit and 16 matrix engines at 1024-bit: more vectors that are narrower, but a smaller aggregate matrix width.
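To make that trade-off concrete, here is a quick back-of-the-envelope comparison of the aggregate per-core widths implied by the figures above (just a sketch; the unit counts and widths are the numbers quoted in this post):

```python
# Aggregate per-Xe-Core widths, using the unit counts/widths quoted above (in bits).
configs = {
    "Xe-HPC (Ponte Vecchio)": {"vector": (8, 512),  "matrix": (8, 4096)},
    "Xe-HPG (Alchemist)":     {"vector": (16, 256), "matrix": (16, 1024)},
}

for name, units in configs.items():
    v_n, v_w = units["vector"]
    m_n, m_w = units["matrix"]
    print(f"{name}: vector {v_n} x {v_w} = {v_n * v_w} bits, "
          f"matrix {m_n} x {m_w} = {m_n * m_w} bits")

# Output:
# Xe-HPC (Ponte Vecchio): vector 8 x 512 = 4096 bits, matrix 8 x 4096 = 32768 bits
# Xe-HPG (Alchemist):     vector 16 x 256 = 4096 bits, matrix 16 x 1024 = 16384 bits
```

Same aggregate vector width per core, roughly half the aggregate matrix width, traded for more (narrower) vector engines.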



The Arc cards aren't necessarily pigeonholed into only "data mining". For video content creation they are probably better (compared in the same price class) than the AMD and Nvidia solutions. Lots of folks are producing/streaming video these days. Intel appears to have thrown more "transistor budget" and die space at fixed-function A/V and display output breadth than AMD and Nvidia have. Drag the testing into the niche of transcoding H.264 to H.265 and Intel will probably win. (Transcoding into AV1 they'll surely win, as long as the software taps the hardware to do the work.)
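That "as long as the software taps the hardware" caveat is the crux. A hedged sketch of what that looks like from an application's side: probe ffmpeg for Arc's hardware AV1 encoder (exposed as av1_qsv in recent ffmpeg builds) and fall back to a software encoder if it isn't there. The encoder names and the fallback choice (libsvtav1) are assumptions about a particular ffmpeg build, not anything Intel mandates.

```python
# Pick a hardware AV1 encoder if ffmpeg exposes one, else fall back to software.
# Assumes a recent ffmpeg on PATH; "av1_qsv" (Intel Arc) and "libsvtav1" are
# encoder names in current ffmpeg builds, but availability depends on the build.
import subprocess

def pick_av1_encoder() -> str:
    """Return the hardware AV1 encoder name if available, else a software one."""
    listing = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "av1_qsv" if "av1_qsv" in listing else "libsvtav1"

def transcode_to_av1(src: str, dst: str) -> None:
    encoder = pick_av1_encoder()
    subprocess.run(
        ["ffmpeg", "-i", src, "-c:v", encoder, "-c:a", "copy", dst],
        check=True,
    )

if __name__ == "__main__":
    transcode_to_av1("input.mp4", "output_av1.mkv")  # placeholder file names
```

If an application never does that probe (or hard-codes a software path), the fixed-function AV1 block sits idle and the benchmark tells you nothing about the hardware.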


Yeah, they'll lose the super-tuned-with-hackery-and-gimmicks max-FPS 'wars' on a wide variety of games, but most of that is fixable over time. Where Intel has spent their tuning resources and dollars, they will likely turn in results pretty close to a low single-digit (or no) loss in relative performance. That is good enough.


I think this is missing the forest for the trees. Intel getting into dGPUs means they can take a higher percentage of the overall system bill-of-material cost even if they lose modest CPU package market share to AMD (and Apple M-series). For example, make $2B less selling CPUs but add $1.9B in new dGPU sales, and net revenues haven't dropped noticeably. And Intel keeps their overall fab wafer production rate up pretty high (meaning it pays to be your own fab in the long term; they will need external products to make, but they have a potentially solid base to build upon).


These aren't meant to compete: they are meant to take over an "unwanted by other companies" market share, and maybe start eating into their competitors' gaming/high-end market in a few generations.

They are not meant to cover the entire GPU spectrum, but in their respective sub-categories they definitely are meant to compete. Intel has substantive interest in 'mobile' dGPU placements as much as in add-in cards. One reason they are aggressively deploying on TSMC's N6 node (it is easier to compete for mobile placements if you are getting decent Perf/Watt). And Intel is probably going to compete on Perf/Watt/$ where it is a matter of a system vendor choosing a dGPU to solder in.

Rumor has it they are looking to sell 4M units in 2022. That isn't going to put Nvidia or AMD out of business, but it is a substantial number.


::: Basically ::: Keep your expectations low for the 1st generation of these GPUs from Intel.

Xe-HPG is somewhat closer to being generation 1.25 as opposed to 1.0 (1st). It has refactored pieces from Xe-HP (which eventually got canceled), Xe-HPC (Ponte Vecchio), and foundational work done in Xe-LP (which shipped in Tiger Lake iGPUs back in 2020).

Expectations should also be kept low that much of the tech porn press is going to do balanced, objective evaluations of these GPUs. There are going to be several narrow benchmarks to cherry-pick to get to whatever preconceived outcome they want to reach. There is going to be lots of 'noise' in the measurements.
 

Mcckoe

macrumors regular
Jan 15, 2013
170
352
The Xe-HPG Arc GPUs aren't "scaled up legacy iGPU" cores.
I admire the admiration, but you're kind of proving my point in your post. Specialized chips are changing the game, and this GPU is Intel's chance to cash in on some of those advantages, while still selling their product as "the fastest chip".

I'll explain a bit... As better and better decoder/encoder blocks are added directly to chips, raw general-purpose horsepower becomes less and less required for video processing. That allows systems like the M1 variants to reach levels on par with, or even far beyond, their competitors, depending on how much die space the encoder/decoder portion of the chip is allotted.

While Intel's GPU chip will undoubtedly be fast, it will not be able to produce frame rates as high as similar AMD/Nvidia cards, nor will it be able to encode/decode as fast as large dedicated systems like the M1 variants. So why would you pay an Intel premium for the fastest chip that can't outpace a slower competitor? I'll tell you: because you don't care about gaming or encoding/decoding; you care about data mining and GPU processing offload, both of which these GPUs are being tailored toward.

Gaming and encoding/decoding might be a desired market for Intel, but this product won't seriously compete in either for at least the next 2 or 3 generations, and at that point it might not matter all that much; in which case this entire project was, in a way… pointless. Only time will tell whether this is a last-ditch effort to cash in on GPU fever, or a brilliant pivot that defines the company for the next century.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,677
I'll explain a bit... As better and better decoder/encoder blocks are added directly to chips, raw general-purpose horsepower becomes less and less required for video processing. That allows systems like the M1 variants to reach levels on par with, or even far beyond, their competitors, depending on how much die space the encoder/decoder portion of the chip is allotted.

Video processing is not just encoding/decoding, it’s also image transformation and processing (compute and ML). Of course, processing is bottlenecked by the encoding, so you need all aspects of your hardware to be fast.

That said, Apple Silicon has a definite advantage here - unified memory. Having the ability to apply heterogeneous computing to any data anywhere without additional delays is a big thing when working with non-trivial datasets. It makes little practical sense to have a very fast compute engine if you can't get the data to it quickly enough.
 
  • Like
Reactions: JMacHack