
FloatingBones

macrumors 68000
Jul 19, 2006
1,506
775
You tend to get the most flak when you are over the target. The fact that integrated graphics is the only option on a "pro" machine, with no dedicated card available, is half the problem, not a feature!
If you had been on the record complaining about the MBP having integrated graphics in the past, your opinion would have some weight.

In other words, the problem is the timing of your commentary. Did you really not notice until now that low-end MBP machines had an integrated graphics processor?

An issue Apple has been skimping on for years now.

Not really. Show us, please, where you have had an issue with this in the past. Put up, or....

So you will be buying an MBP 13", a machine whose graphics are four years out of date.

Compared to what? Not last year's MBP model that it's replacing. Repeating your nonsense doesn't make it true.

Try again: Nvidia and AMD both have dedicated mobile graphics cards. Yes, dedicated.

What exact model of AMD/Nvidia? What thermal footprint do they have? And why now? You never complained before about the MBP with integrated graphics, but you complain now. Makes no sense.

Wait for the MBP that has a separate GPU. Until then, don't be comparing apples to oranges. Stop trolling the discussion.

There are plenty of laptops with a similar form factor that pack both a dGPU and an iGPU.

Be specific. Name one that's comparable. And then explain your criteria for "comparable". Is the thermal footprint comparable? Battery size? Battery life?

No, no, and no.
 
Last edited:

Fomalhaut

macrumors 68000
Oct 6, 2020
1,993
1,724
They have only released the entry-level models, which all had only Intel's integrated graphics. All higher-end models will have AMD graphics cards, like the Intel models did, with the automatic graphics switching system.
It’s extremely unlikely that any future Macs will have discrete AMD GPUs. Apple will develop more powerful on-SoC GPUs, and may produce their own discrete GPU or accelerator similar to the Afterburner.
 

Serban55

Suspended
Oct 18, 2020
2,153
4,344
That's disappointing. Regardless of what "magic" Apple do to their SoCs, it's highly unlikely that they'll beat a dedicated desktop GPU for certain applications.
Says who? Apple is already working on a custom GPU card, so how do you know it won't be up there with the latest and greatest? Let's wait... they have already shown that they can be the best of the best with the GPU on the SoC.
Now let's see what they can do with a so-called "outside-the-SoC GPU", and I bet we won't have to wait too long (maybe March, or as a safer bet, WWDC).
Remember, Apple has one of the best chip brains in the industry.
 
  • Like
Reactions: cal6n

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Says who? Apple is already working on a custom GPU card, so how do you know it won't be up there with the latest and greatest? Let's wait... they have already shown that they can be the best of the best with the GPU on the SoC.
Now let's see what they can do with a so-called "outside-the-SoC GPU", and I bet we won't have to wait too long (maybe March, or as a safer bet, WWDC).
Remember, Apple has one of the best chip brains in the industry.
It will never be better than Nvidia's dedicated GPUs. Nvidia is a 300 billion dollar company making dedicated GPUs.
 

zakarhino

Contributor
Sep 13, 2014
2,611
6,963
Says who? Apple is already working on a custom GPU card, so how do you know it won't be up there with the latest and greatest? Let's wait... they have already shown that they can be the best of the best with the GPU on the SoC.
Now let's see what they can do with a so-called "outside-the-SoC GPU", and I bet we won't have to wait too long (maybe March, or as a safer bet, WWDC).
Remember, Apple has one of the best chip brains in the industry.

Oh, I didn't know Apple was making a dedicated desktop GPU; I thought they were just going to focus on SoC GPUs and a dGPU for devices like the MBP16, iMac Pro, etc. In that case it could be up there, but I'm still not sure it would beat GPUs dedicated to things like machine learning, mining, etc. (think Quadro graphics cards and the like); that's what I mean by dedicated cards. I don't really envision Apple dedicating much funding to building industry-specific GPUs the way Nvidia does, for example.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,459
953
Apple will still have no incentive to go with a third party GPU because their own one will always have two decisive advantages: TBDR and unified memory.
I'm curious. Do APUs from AMD and Intel have similar unified memory? For instance, the Intel Iris could only access 1.5 GB of RAM, though some say that's purely an artificial limitation. AFAIK, memory is also "unified" in console APUs.
 
  • Like
Reactions: Juraj22

leman

macrumors Core
Oct 14, 2008
19,521
19,675
I'm not suggesting that Apple go with a third-party GPU. I'm simply pointing out the fact that Apple isn't really that far ahead of the competition, contrary to what most people have been singing about. The reality is that Apple has now caught up to, and probably exceeded, Nvidia and AMD at the lower end. That's great.

I don't see how we are in disagreement on this one. The actual hardware utilization efficiency is comparable between all major vendors, and the main reason why Apple can punch above their power consumption is TBDR. In Geekbench compute, for example, the M1 is half the speed of the 1650 Ti Max-Q (same number of cores, running at the same frequency), and I have a strong suspicion that this is mainly because of the bandwidth limitation.

Eh you are quoting boost clocks which are (supposedly) heavily TDP constrained. I wouldn’t expect the GPU to sit at those frequencies for long (hence why game clocks are lower).

I tried to take the average between the base and the boost clock. But I agree with you that there are no clear numbers to go by. You can take the advertised max FLOPS and compute the clocks from there, but that's likely to be the peak anyway.

I'm curious. Do APUs from AMD and Intel have similar unified memory? For instance, the Intel Iris could only access 1.5 GB of RAM, though some say that's purely an artificial limitation. AFAIK, memory is also "unified" in console APUs.

I suppose they do, although the technical details on that are hazy. Having access to the same memory is one thing, but what you really want is certain data coherence between CPU and GPU. If I understand it correctly, Apple achieves it by using a large SoC-level memory buffer. I don't know how Intel or AMD do it (Intel's figures from architecture whitepapers suggest a comparable arrangement).

I think this is also the reason why integrating a third-party GPU on Macs is not really feasible. AMD can do it on a console since they can make custom silicon that combines their CPU and GPU. Building that for Apple CPUs and AMD GPUs... not quite the same.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
If you ignore the meaningless "Pro" labels and actually look at comparable laptops - 13" ultraportables, with premium "fit & finish", integrated graphics and low-power RAM, such as the Dell XPS13, Microsoft Surface Laptop, Asus Zenbook 13 - then a choice of 8 or 16GB of soldered-in RAM is pretty much par for the course - and is probably an LPDDR4 thing (can you even get LPDDR4 RAM in plug-in SODIMM form?).
You cannot. LPDDR4 has some significant protocol differences from regular DDR4. There's no way to wire up a SODIMM with LPDDR4 memory on it and have it function.

LPDDR4 is also specified to take advantage of short traces and soldered connections as seen in Apple's package-on-package design for M1, which is why it's able to run at 4266 Mbps. For regular socketed DDR4, the highest speed is 3200. (heading this off: Please, angry pc gamer dudes, do not link me gamer ddr4 which runs at higher clock speeds. I know it exists. 3200 is the highest JEDEC standard speed and Apple would not be likely to go above that.)
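To put rough numbers on that gap (a back-of-envelope sketch, assuming the commonly reported 128-bit memory interface on the M1; dual-channel DDR4 is also 128 bits wide):

$$4266\ \tfrac{\text{MT}}{\text{s}} \times 16\ \text{bytes} \approx 68.3\ \tfrac{\text{GB}}{\text{s}} \qquad \text{vs.} \qquad 3200\ \tfrac{\text{MT}}{\text{s}} \times 16\ \text{bytes} \approx 51.2\ \tfrac{\text{GB}}{\text{s}}$$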
 
  • Like
Reactions: theluggage

leman

macrumors Core
Oct 14, 2008
19,521
19,675
(heading this off: Please, angry pc gamer dudes, do not link me gamer ddr4 which runs at higher clock speeds. I know it exists. 3200 is the highest JEDEC standard speed and Apple would not be likely to go above that.)

To add to this, these gaming DDR chips are carefully selected and binned and they are exceedingly rare. It is always possible to make a faster computer by hand-testing tons of components and picking only the highest-performance ones, but you won't assemble too many computers this way. Apple has to sell millions of Macs — they won't be able to source enough chips that can sustain faster speeds.
 

09872738

Cancelled
Feb 12, 2005
1,270
2,125
They have only released the entry-level models, which all had only Intel's integrated graphics. All higher-end models will have AMD graphics cards, like the Intel models did, with the automatic graphics switching system.
Apple clearly stated they won't.
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,866
It will never be better than Nvidia's dedicated GPUs. Nvidia is a 300 billion dollar company making dedicated GPUs.
Intel is a 187 billion dollar company making dedicated CPUs. Ten years ago nobody believed Apple might someday design better CPU cores, or be able to manufacture them with process technology even equivalent to Intel's. Five years ago there were some interesting signs. Two years ago it was becoming obvious Apple's CPU cores had just about pulled even. Today, I don't think I have to tell you what has happened.

I think that people overestimate Nvidia's GPU dominance, and habitually underestimate Apple's GPUs because they've never been designed to compete in the desktop space before. There's been signs that Apple's GPUs are pretty good for several years now, and a lot of new post-M1 reasons to believe that competing with Nvidia and AMD is well within Apple's reach. I think leman's recent posts have covered them pretty well.
 

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Intel is a 187 billion dollar company making dedicated CPUs. Ten years ago nobody believed Apple might someday design better CPU cores, or be able to manufacture them with process technology even equivalent to Intel's. Five years ago there were some interesting signs. Two years ago it was becoming obvious Apple's CPU cores had just about pulled even. Today, I don't think I have to tell you what has happened.

I think that people overestimate Nvidia's GPU dominance, and habitually underestimate Apple's GPUs because they've never been designed to compete in the desktop space before. There's been signs that Apple's GPUs are pretty good for several years now, and a lot of new post-M1 reasons to believe that competing with Nvidia and AMD is well within Apple's reach. I think leman's recent posts have covered them pretty well.
Unlike Intel, Nvidia has greatly improved its GPU performance from generation to generation. And while Intel is stuck on its own 14nm/10nm process, Nvidia shares the same foundry as Apple. And there is no magical ARM instruction set in the GPU world.

The fastest GPU Apple has ever made does 2.6 TFLOPS. The fastest GPU Nvidia has ever made does 20 TFLOPS. The Neural Engine in the M1 does about 11 TFLOPS. The tensor cores on Nvidia's A100 do 312 TFLOPS.
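(For context, the 2.6 TFLOPS figure is just peak-FLOPS arithmetic; the sketch below assumes the commonly reported M1 GPU configuration of 8 cores with 128 ALUs each at roughly 1.28 GHz, counting a fused multiply-add as 2 FLOPs.)

$$8 \times 128 \times 2 \times 1.278\ \text{GHz} \approx 2.6\ \text{TFLOPS}$$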

Apple and Nvidia are in different stratospheres in terms of big GPU designs.

Do I believe that the iGPU on 16" Macbook Pros will be amazing and will rival AMD's mGPUs? Yes, I do. Do I believe that Apple can design a better dedicated GPU than Nvidia? Absolutely not.
 
Last edited:

howlingsun

macrumors newbie
Nov 18, 2020
13
3
Would an Apple dedicated GPU mean no unified memory? Or what is the definition of a dedicated GPU?
 

matrix07

macrumors G3
Jun 24, 2010
8,226
4,895
Even based on Apple's own graphs, they are showing that past the M1, power consumption will shoot up exponentially while performance may not rise as much.
You read the graph completely wrong. It's the performance that shoots up, not power consumption. In fact, from the graph, power consumption just stops at a certain threshold and doesn't increase further.
 
  • Haha
  • Like
Reactions: jido and 2Stepfan

theluggage

macrumors G3
Jul 29, 2011
8,011
8,444
Would an Apple dedicated GPU mean no unified memory? Or what is the definition of a dedicated GPU?
It's usually "discrete" GPU rather than "dedicated" GPU - i.e. implemented as a separate chip or chips c.f. "integrated" graphics where the GPU is on the same chip as the CPU.

The point about "unified memory" is that the integrated GPU and other on-chip accelerators all have fast, direct access to the RAM so they don't need their own "private" video RAM etc. Although the M1's RAM isn't actually directly on the chip die, it's built into the chip package and can have very short, fast connections to the CPU.

With dedicated video RAM, the GPU has very fast access to its specialised VRAM, but any external data has to be copied into that VRAM over a relatively slow PCIe bus - so it's a trade-off: the extra-fast VRAM-to-GPU access vs. the inefficiency of copying between system RAM and VRAM.
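As a rough illustration of that trade-off in Metal (a hedged sketch, not production code; the buffer size and values here are arbitrary):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let length = 64 * 1024 * 1024   // 64 MB, arbitrary

// Unified memory: one allocation that both CPU and GPU see.
// The CPU writes, the GPU reads, and no copy step is needed in between.
let shared = device.makeBuffer(length: length, options: .storageModeShared)!
shared.contents().storeBytes(of: Float(1.0), as: Float.self)

// Discrete VRAM: keep a GPU-private (VRAM-resident) buffer and explicitly
// blit the data across the PCIe bus before the GPU can use it.
let staging = device.makeBuffer(length: length, options: .storageModeShared)!
let vram    = device.makeBuffer(length: length, options: .storageModePrivate)!

let queue = device.makeCommandQueue()!
let cmd   = queue.makeCommandBuffer()!
let blit  = cmd.makeBlitCommandEncoder()!
blit.copy(from: staging, sourceOffset: 0, to: vram, destinationOffset: 0, size: length)
blit.endEncoding()
cmd.commit()
```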

(NB: using regular system RAM as VRAM is probably one reason why the lower-end Intel integrated GPUs were so slow - some of the better Intel iGPUs had an extra eDRAM "cache" on the chip to act as a buffer).

So if, hypothetically, Apple made a really mega-super zillion-core discrete GPU then it might make sense to go back to separate VRAM if the improved VRAM-to-GPU speed outweighed the time taken copying data to/from VRAM. I somehow doubt it - they're making a big deal of "unified memory" and I suspect they've put a lot of effort into optimising Metal for unified memory.

My guess is that this "new GPU" Apple are rumoured to be working on is either a new integrated GPU to go into a future M2/Whatever chip or a separate chip designed to be packaged with an M? CPU in the same way that the RAM is currently built into the M1 package.

Also - and this is pure speculation - I wonder about the expansion path for future Mac Pro/iMac Pro-class machines.

Making a single, Xeon-killer Super-M-chip with dozens of cores, zillions of PCIe lanes and support for oodlebytes of RAM, would be hugely expensive for something that will only sell in small quantities (c.f. the Xeon - or even the M1 which is going into mass-market laptops). Instead, they could add additional M-series CPUs - each with their own RAM, GPU, Neural Engine and other gubbins - and optimise their software to divide work amongst multiple "compute units".

So, I'm calling an M1/M1X/M2/whatever accelerator card in MPX format for the Mac Pro as the first step to moving the Mac Pro to Apple Silicon.
 

bill-p

macrumors 68030
Jul 23, 2011
2,929
1,589
I don't see how we are in disagreement on this one. The actual hardware utilization efficiency is comparable between all major vendors, and the main reason why Apple can punch above their power consumption is TBDR. In Geekbench compute, for example, the M1 is half the speed of the 1650 Ti Max-Q (same number of cores, running at the same frequency), and I have a strong suspicion that this is mainly because of the bandwidth limitation.

Oh no, we're not in disagreement. I might have disagreed with you if I didn't have my M1 MacBook Pro... but I have one now, and I can tell what level of performance it's at.

Let's just say... I'm cautiously excited about the next chip that's coming, and I'm very curious if Apple will be able to pull it off without incurring any drawback at all. We'll see.
 
  • Like
Reactions: leman

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Making a single, Xeon-killer Super-M-chip with dozens of cores, zillions of PCIe lanes and support for oodlebytes of RAM, would be hugely expensive for something that will only sell in small quantities (c.f. the Xeon - or even the M1 which is going into mass-market laptops). Instead, they could add additional M-series CPUs - each with their own RAM, GPU, Neural Engine and other gubbins - and optimise their software to divide work amongst multiple "compute units".

So, I'm calling an M1/M1X/M2/whatever accelerator card in MPX format for the Mac Pro as the first step to moving the Mac Pro to Apple Silicon.

My guess is still a NUMA-style system with multiple MPX cards that have their own CPU, GPU and local RAM clusters. An API would allow the developers to discover which processors share memory and run tasks on them so that stuff still stays fast. Additionally the modules can communicate via a very fast interconnect. Apple already has the initial version of such APIs in Metal to work with joined GPUs.
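For the curious, a minimal sketch of the existing Metal discovery APIs I'm referring to (peer groups for linked GPUs, plus the unified-memory flag); the grouping logic is just illustrative:

```swift
import Metal

// List every GPU and whether it shares memory with the CPU (unified memory).
let devices = MTLCopyAllDevices()
for gpu in devices {
    print("\(gpu.name): unified memory = \(gpu.hasUnifiedMemory)")
}

// Group GPUs that are joined by a fast interconnect (e.g. Infinity Fabric Link
// on the current Mac Pro). GPUs in the same peer group can transfer resources
// directly without a round trip through system memory.
var peerGroups: [UInt64: [MTLDevice]] = [:]
for gpu in devices where gpu.peerGroupID != 0 {   // 0 = not in any peer group
    peerGroups[gpu.peerGroupID, default: []].append(gpu)
}
for (groupID, gpus) in peerGroups {
    print("Peer group \(groupID): \(gpus.map(\.name))")
}
```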
 

cal6n

macrumors 68020
Jul 25, 2004
2,096
273
Gloucester, UK
Usually. Maybe unified memory could be accomplished with an Infinity Fabric.
That’s my thought too. Apple didn’t talk much about the “fabric” part of the M1. I expect their design philosophy going forward includes parallelism, and an obvious way to achieve that would be if the “fabrics” of their chips could be unified so that a number of M1s and dGPUs could behave as one big SOC.
 

Pressure

macrumors 603
May 30, 2006
5,179
1,544
Denmark
Unlike Intel, Nvidia has greatly improved its GPU performance from generation to generation. And while Intel is stuck on its own 14nm/10nm process, Nvidia shares the same foundry as Apple. And there is no magical ARM instruction set in the GPU world.

The fastest GPU Apple has ever made does 2.6 TFLOPS. The fastest GPU Nvidia has ever made does 20 TFLOPS. The Neural Engine in the M1 does about 11 TFLOPS. The tensor cores on Nvidia's A100 do 312 TFLOPS.

Apple and Nvidia are in different stratospheres in terms of big GPU designs.

Do I believe that the iGPU on 16" Macbook Pros will be amazing and will rival AMD's mGPUs? Yes, I do. Do I believe that Apple can design a better dedicated GPU than Nvidia? Absolutely not.
Your comparison leaves out a lot. For example, to reach viable performance at the entry level you do not need an 826mm² GPU that needs 400 watts for the top SKU. The comparison is laughable.

You are looking at two different markets which have nothing in common. The M1 is an entry level SoC with everything integrated to deliver a great user experience in a single package. The starting price is $699.
The other is available for around $12,500 and focuses on machine learning only.

The problem isn't scaling up but scaling down, to hit acceptable performance at 10W and 15W TDP for these entry-level machines that feature everything in a neat and cheap 120mm² package.

Apple didn't set out to compete with the 826mm² brute force behemoth GPUs but they wanted to change the user experience from the bottom up.

If you need to compare the extremes, why not go the other way? Who else delivers so much performance at such a low price point and power usage?
 
  • Like
Reactions: throAU and Ktmster

Anonymous Freak

macrumors 603
Dec 12, 2002
5,604
1,388
Cascadia
Nobody was honestly expecting it to outperform a 2060.

The fact that it beats the 1050 Ti and RX 560 is rather amazing though, considering the CPU+GPU draw less than 20 Watts.

Find me another GPU that can perform even half as well in 20 Watts for the GPU alone.

These are in the low-end consumer devices. (Yes, the entry-level MacBook Pro is a low-end consumer device - it always has been.) It's an order of magnitude faster than the iGPU in the Intel 13" MacBook Pro, and comes damn close to the discrete GPU in the 16" MacBook Pro.

Wait until the M2 or M1X, or whatever they call the next version. I'm sure they'll do something with a 35-45 Watt power rating that will probably be able to spank *EVERY* laptop GPU, and all but the >150 Watt desktop GPUs.
 
  • Like
Reactions: MysticCow and ArPe

senttoschool

macrumors 68030
Nov 2, 2017
2,626
5,482
Your comparison leaves out a lot. For example, to reach viable performance at the entry level you do not need an 826mm² GPU that needs 400 watts for the top SKU. The comparison is laughable.

You are looking at two different markets which have nothing in common. The M1 is an entry level SoC with everything integrated to deliver a great user experience in a single package. The starting price is $699.
The other is available for around $12,500 and focuses on machine learning only.

The problem isn't scaling up but scaling down, to hit acceptable performance at 10W and 15W TDP for these entry-level machines that feature everything in a neat and cheap 120mm² package.

Apple didn't set out to compete with the 826mm² brute force behemoth GPUs but they wanted to change the user experience from the bottom up.

If you need to compare the extremes, why not go the other way? Who else delivers so much performance at such a low price point and power usage?
Because we're talking about GPUs that will go into the Mac Pro, not the MacBook Air.
 