
jinnyman

macrumors 6502a
Sep 2, 2011
762
671
Lincolnshire, IL
I don't doubt that Apple can do a mid-range GPU.
I doubt whether Apple can match a high-end GPU with an SoC approach.

This may sound like the old consoles-vs-computers argument, but consoles are made for games, whereas computers are made for general computing. I believe it all comes down to thermals: having both the CPU and GPU on one SoC with shared memory (unless Apple goes to extremes to vastly increase bandwidth) will hamper the whole system in certain cases.

If your work is both CPU- and GPU-heavy, with a heavy memory workload (all of which MP users cite as necessities), then there will be a bottleneck. On a console, when you play a game, most of the hardware resources can focus on that one game, with only tiny communication and support features running alongside it. On a Mac, this is not the case. MP users are always vocal about their need for huge amounts of memory and memory-heavy work. Many CPU cores sharing the same memory with a 3080 Ti-level GPU? Nobody has been able to do that, and Apple is not a magician. Regardless of my suspicion, I welcome it if Apple can surprise me on this matter :).

So, Apple has to solve these issues:

1. Raw performance: although I have yet to see AS running macOS, I have high hopes in this area. With possible scale-up, AS can reach an amazing level, at least on the CPU side; the GPU I'm not so sure about. Apple is already a great chip maker, so let's hope their reputation carries over into the heavy-computing sector.

2. Memory bandwidth: I have yet to see how Apple will do this, so I have my doubts.

3. Thermals: for low-power mobile devices up to a mid-range iMac, this won't be a problem. But for high-end iMacs and Mac Pros, I doubt an SoC approach is the answer. They've got to separate the two most thermally stressed components to cool them effectively.


In addition, what has been discussed in this forum as Apple's revolutionary way to surpass other chip competitors is not an industry secret. Competition in this sector is so fierce that if incorporating such techniques vastly outperformed previous methods, I'd guess Intel, AMD, and Nvidia would have already done it. I'm no expert in this field, but those companies started from conventional computing. They must have a secret or two of their own for their competitiveness in that sector. Perhaps their origins are why they don't do well in mobile, where Apple leads in performance per watt. Will Apple lead in general computing, where performance per watt is not so important? No idea. Will Apple keep fighting in workstation computing? You know, with iOS and the Mac getting so close together, I doubt Apple will continue that fight. It's not their business style. They always look for maximum profit margin, so why spend so much on workstation-level computing when that market is so small for them?


As much as I admire Apple's achievements in its own silicon, they have yet to prove their competence in general computing. Bear in mind that scaling up the number of cores, increasing clock frequency, letting chips consume vastly more power, and so on are not trivial. The gap has never been this small, though, so I'm genuinely positive about the future of AS Macs. I guess I'm just trying to lower my expectations so that I either don't get too disappointed or get to be pleasantly surprised.

Oh crap, I'm rambling like an old man. Well, my apologies for the wall of text. All my worries seem to come from how little information is available. All of the above will become clearer when Apple formally introduces its hardware.

If done right, I really believe this will open up a whole new set of possibilities for Apple. I just hope Apple will not abandon workstations.
 
  • Like
Reactions: burgerrecords

Erehy Dobon

Suspended
Feb 16, 2018
2,161
2,017
No service
Many CPU cores sharing the same memory with a 3080 Ti-level GPU? Nobody has been able to do that, and Apple is not a magician.
SGI did Unified Memory Architecture (UMA) in the mid-Nineties in their O2 graphics workstations.

The technology has been there; however, that's not the issue.

The limiting factor is the reality that the high-end workstation audience wants modular components. That's why the cylindrical Mac Pro was a dead end and Apple returned to a standard modular 19" rack-mountable enclosure.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,516
19,664
If your work is both CPU- and GPU-heavy, with a heavy memory workload (all of which MP users cite as necessities), then there will be a bottleneck.

This could be solved with high-bandwidth system RAM (e.g. HBM), a large enough cache, and more memory controllers. Such an approach won't be cheap, but costs are less of an issue for Apple: their products are already priced at a premium, and they don't have to compete with other chip makers since they produce for themselves only.
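Just to put rough numbers on it, here's a back-of-the-envelope bandwidth comparison. The data rates, bus widths, and stack counts below are generic illustrative assumptions, not anything we know about Apple's design:

[CODE=python]
# Theoretical peak bandwidth = transfers/s x bytes per transfer x channels/stacks.
# All configurations below are illustrative assumptions, not Apple specs.

def peak_bandwidth_gb_s(data_rate_mt_s, bus_width_bits, stacks_or_channels=1):
    return data_rate_mt_s * 1e6 * (bus_width_bits / 8) * stacks_or_channels / 1e9

configs = {
    "LPDDR4X-4266, 128-bit (tablet/phone SoC class)": peak_bandwidth_gb_s(4266, 128),
    "GDDR6 14 Gbps, 256-bit (desktop dGPU class)": peak_bandwidth_gb_s(14000, 256),
    "HBM2 2.4 Gbps, 1024-bit/stack, 2 stacks": peak_bandwidth_gb_s(2400, 1024, 2),
}

for name, bw in configs.items():
    print(f"{name}: ~{bw:.0f} GB/s")

# ~68 GB/s vs ~448 GB/s vs ~614 GB/s: wider/faster memory plus more
# controllers is exactly where the "not cheap" part comes from.
[/CODE]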

In addition, what has been discussed in this forum as Apple's revolutionary way to surpass other chip competitors is not an industry secret. Competition in this sector is so fierce that if incorporating such techniques vastly outperformed previous methods, I'd guess Intel, AMD, and Nvidia would have already done it.

Are you referring to TBDR? That’s an interesting topic. From my layman understanding, TBDR renderers didn’t establish themselves in the desktop segment because they are much more complex and because with a larger thermal budget a forward renderer can just brute force its way through. A criticism often brought up with TBDR is poor geometry throughput - less of an issue with mobile applications and their traditionally lower polygon counts, but critical for high-poly PC games. But that was the state of the art ten years ago. Apple seems to have solved it by utilizing the unified shader pipeline - since geometry, compute and fragment processing runs asynchronously on the same hardware, it’s easier to balance out the eventual bottlenecks. As to why Nvidia and co don’t use it - well, probably because they were not interested in this tech. Their stuff works well enough and they did borrow some ideas like tiling (but without deferred fragment shading) to make their GPUs more efficient. Revolutions sometimes come simply because someone has tried (and succeeded) something that others thought would not work. Again, think about MacBook Air or HiDPI screens. Those were laughed at in the beginning.
 

Erehy Dobon

Suspended
Feb 16, 2018
2,161
2,017
No service
... or HiDPI screens. Those were laughed at in the beginning.
By ignorant Western media and technologists mostly.

The benefits of HiDPI were quite apparent to people who write using dense, complex characters: Chinese, Japanese, Korean, etc.

Even if you don't understand those languages, the advantages of a HiDPI/Retina screen are blatantly obvious to the naked eye.

A lot of people didn't get it when Retina displays debuted on iPhones and many of them still don't get it. HiDPI video playback isn't the primary benefit. HiDPI text rendering is.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
A lot of people didn't get it when Retina displays debuted on iPhones and many of them still don't get it. HiDPI video playback isn't the primary benefit. HiDPI text rendering is.

Exactly. HiDPI is a godsend for people who spend most of their time looking at text.
 
  • Like
Reactions: AlphaCentauri

dgdosen

macrumors 68030
Dec 13, 2003
2,817
1,463
Seattle
I totally agree, the skepticism is mostly unfounded.

Let's talk numbers by comparing Metal GPU performance figures...

Fastest Apple GPU to date (current iPad Pro) - Apple A12Z --> 9105.
Fastest GPU option available for current MacBook Pro 13" - Intel Iris Plus --> 8499 (~7% slower).
Fastest GPU option available for current MacBook Pro 16" - AMD Radeon Pro 5600M --> 40714 (~4.5x faster).
Fastest GPU option available for current iMac 27" - AMD Radeon Pro Vega 48 --> 49589 (~5.4x faster).
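Quick sanity check on those ratios, taking the quoted Metal scores at face value (just arithmetic, nothing fancy):

[CODE=python]
# Quick ratio check against the A12Z, using the Metal scores quoted above.
a12z = 9105
others = {
    'Intel Iris Plus (MBP 13")': 8499,
    'AMD Radeon Pro 5600M (MBP 16")': 40714,
    'AMD Radeon Pro Vega 48 (iMac 27")': 49589,
}

for name, score in others.items():
    ratio = score / a12z
    if ratio < 1:
        print(f"{name}: {(1 - ratio) * 100:.0f}% slower than the A12Z")
    else:
        print(f"{name}: {ratio:.1f}x faster than the A12Z")
# -> ~7% slower, ~4.5x faster, ~5.4x faster
[/CODE]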

It's useful to lay these out - now we have a good reason for Apple to ship the 13" machines first: the graphics performance isn't all the way there yet, so why not give themselves more time to get that better GPU performance?

This doesn't throw cold water on my wish for a redesigned MacBook-style machine with much better performance/battery life, smaller bezels, and no Touch Bar.
 

Waragainstsleep

macrumors 6502a
Oct 15, 2003
612
221
UK
And as has been pointed out numerous times, not every 'power' customer requires lots of graphics compute power. Therefore, Apple will likely design some fantastic CPUs no-doubt, but I don't believe that the embedded graphics will be anything in that power range.


The luxury Apple now has with their own silicon is that they can make one SoC with a giant GPU on the die for Mac Pro customers who need one, but they can also make an SoC that focuses more on raw CPU power, like a Xeon, for people who need that.

Apple can cater to the markets it perceives are there, or the ones it believes it can improve or disrupt. They could build a super-basic, super-efficient iMac for admin use in offices: something for the reception desk with a chip like the A12Z that runs MS Office, Safari, email, and calendar apps, and isn't built to do much else besides keep the electric bill down. They could also build a 30" gaming iMac with something more like a console-style, ~250W-TDP, GPU-heavy SoC that can play high-performance games on a really beautiful screen. If they want to.
 
  • Like
Reactions: Boil

burgerrecords

macrumors regular
Jun 21, 2020
222
106
Technically it’s highly plausible they can. It just doesn’t slot in from a product marketing standpoint.

1. An Apple machine with a GPU competitive with the consoles or Nvidia's Ampere isn't going to be a sleek device, so you'd have to create an ungainly consumer product for a small user base.

2. Apple also wants to be able to charge a minimum of $5K for boxes with that kind of power, since the market for those is people spending other people's money or buying with pre-tax dollars.

Why cannibalize "pro" revenue to sell the $2K-$3K boxes necessary to get enough gaming market share to really capture a PC gaming box sale?

It's not the '80s anymore; having a $1,500 gaming/home-office corporate Windows WFH PC, a $1,500 ARM Mac that acts like a super iPad Pro, and a $500 console is the way to go.
 
Last edited:

diamond.g

macrumors G4
Mar 20, 2007
11,435
2,658
OBX
Why should we buy a $199 Apple TV 4K when we can get a (potentially) more powerful gaming system that can do media duties for not a whole lot more money?
 

johngwheeler

macrumors 6502a
Original poster
Dec 30, 2010
639
211
I come from a land down-under...
No.

There are tons of similar GPU units in data centers all around the world right now.

High-performance GPU architecture made cryptocurrency mining on CPUs obsolete years ago.

What you are asking happened about five years ago.

Thanks. I was aware of GPGPU compute for Bitcoin mining and other high-performance compute tasks, but wasn't sure how this translated into the graphical performance of GPUs on SoCs.

It does sound like there is no technical limitation (other than TDP) to putting the equivalent of today's top desktop GPUs into an SoC. So why are there naysayers who claim "it can't be done"? Do they know something I don't?
what's your source for this?

Lots of specs on the web:


These don't appear to be rumors, but actual specifications.
 
Last edited:
  • Like
Reactions: burgerrecords

diamond.g

macrumors G4
Mar 20, 2007
11,435
2,658
OBX
Thanks. I was aware of GPGPU compute for Bitcoin mining and other high-performance compute tasks, but wasn't sure how this translated into the graphical performance of GPUs on SoCs.

It does sound like there is no technical limitation (other than TDP) to putting the equivalent of today's top desktop GPUs into an SoC. So why are there naysayers who claim "it can't be done"? Do they know something I don't?


Lots of specs on the web:


These don't appear to be rumors, but actual specifications.
No one has made a big TBDR part before, so it is hard to say how easily it can scale. Remember, there are two performance metrics folks are looking at here, compute and rasterization, and being good at one doesn't immediately make you good at the other.
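To make the two-axes point concrete, here's a toy calculation for a made-up GPU. The ALU count, clock, and ROP count are hypothetical, purely for illustration:

[CODE=python]
# Two different "performance" axes for the same hypothetical GPU.
# ALU count, clock, and ROP count below are made-up illustration values.

shader_alus = 1024      # hypothetical shader ALU count
clock_ghz   = 1.25      # hypothetical sustained clock
rops        = 32        # hypothetical raster output units

# Compute: assume each ALU retires one FMA (2 FLOPs) per clock.
fp32_tflops = shader_alus * 2 * clock_ghz / 1000

# Rasterization: pixel fill rate is bounded by ROPs x clock.
gpixel_per_s = rops * clock_ghz

print(f"theoretical FP32 compute: {fp32_tflops:.2f} TFLOPS")
print(f"theoretical fill rate:    {gpixel_per_s:.0f} Gpixel/s")
# A design can scale one of these without scaling the other, which is why
# being good at compute doesn't automatically mean being good at rasterization.
[/CODE]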
 

JacobHarvey

macrumors regular
Apr 2, 2019
118
107
Somewhere
I don’t know whether mobile graphics benchmarks use different precision than the desktop one (I hope not, that would be awkward...)

In the Anandtech article you linked before, it seems that the mobile version of the 3DMark Ice Storm Unlimited benchmark is indeed using different precision from the desktop version, as the authors state:

"On Windows, it uses DX11, and of course the precision is not the same across mobile and PC with the PC version running at 32-bit and OpenGL ES 2.0 only using 16-bit."
 

jinnyman

macrumors 6502a
Sep 2, 2011
762
671
Lincolnshire, IL
Are you referring to TBDR? That’s an interesting topic. From my layman understanding, TBDR renderers didn’t establish themselves in the desktop segment because they are much more complex and because with a larger thermal budget a forward renderer can just brute force its way through. A criticism often brought up with TBDR is poor geometry throughput - less of an issue with mobile applications and their traditionally lower polygon counts, but critical for high-poly PC games. But that was the state of the art ten years ago. Apple seems to have solved it by utilizing the unified shader pipeline - since geometry, compute and fragment processing runs asynchronously on the same hardware, it’s easier to balance out the eventual bottlenecks. As to why Nvidia and co don’t use it - well, probably because they were not interested in this tech. Their stuff works well enough and they did borrow some ideas like tiling (but without deferred fragment shading) to make their GPUs more efficient. Revolutions sometimes come simply because someone has tried (and succeeded) something that others thought would not work. Again, think about MacBook Air or HiDPI screens. Those were laughed at in the beginning.
Well, TBDR was one of the examples I was referring to, others being high bandwidth, P/E cores, etc.
I'm no expert in any of these topics, but as for TBDR (tile-based deferred rendering), some googling made me think almost all PC GPUs already utilize some form of tile-based rendering: https://en.wikipedia.org/wiki/Tiled_rendering

Sorry for quoting Wikipedia, and correct me if that's wrong, but it looks like all current Nvidia and AMD models use a tiled architecture with immediate-mode rendering running in tandem (at least based on those descriptions) to get the pros of both worlds. I don't know enough about the field to distinguish TBR from TBDR, but it looks like the industry is already using many aspects of TBR rather than relying on immediate-mode rendering only.

Anyhow, as for the MBA and HiDPI, I don't think any customers were against their introduction. When HiDPI first came to the Mac with the Retina MBP, almost everyone here welcomed it. The MBA was met with huge interest when first introduced in 2008.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
Sorry for quoting Wikipedia, and correct me if that's wrong, but it looks like all current Nvidia and AMD models use a tiled architecture with immediate-mode rendering running in tandem (at least based on those descriptions) to get the pros of both worlds. I don't know enough about the field to distinguish TBR from TBDR, but it looks like the industry is already using many aspects of TBR rather than relying on immediate-mode rendering only.

Yes, modern desktop GPUs started using a form of tiling very recently. This was responsible for a big jump in performance for both Nvidia and AMD. Tiling itself is an important memory locality optimization - you try to finish the work for one chunk before moving to the others, so you are likely to save on memory fetches. But they still do immediate rendering, it’s not TBDR. Immediate/forward rendering simply means that the geometry is drawn sequentially - if you have multiple overlapping triangles, you might end up painting the same pixel multiple times, even though only the “last” pixel write will be visible in the end.

TBDR takes it one step further though - it does not just tile, but it also effectively “sorts” the geometry to draw only the one that is visible. Each pixel is painted exactly once. This can bring massive savings in memory accesses and saves the shaders from doing useless work.

Doing tiling itself is relatively easy: you just maintain a list of primitives that touch a tile and dispatch rasterizer operations per tile. It does require additional hardware etc., but Nvidia and AMD were able to implement it fairly quickly. The deferred (DR) part is much more tricky. Apple can do it efficiently because they are leveraging IP from Imagination, who have been building TBDR GPUs for an eternity.
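As a toy illustration of what the deferred part buys you (idealized numbers; I'm ignoring early-Z and the other tricks immediate-mode GPUs use to claw some of this back):

[CODE=python]
# Toy overdraw comparison: immediate-mode rendering vs an idealized TBDR.
# Ignores early-Z and similar optimizations that real IMR GPUs use.
import math

WIDTH, HEIGHT = 1920, 1080
TILE = 32              # assumed tile size, just for the tile count
OPAQUE_LAYERS = 5      # five overlapping full-screen layers, drawn back to front

# Immediate mode, worst case: every covered pixel of every layer is shaded,
# even though only the front-most layer is visible in the end.
imr_fragment_shades = WIDTH * HEIGHT * OPAQUE_LAYERS

# Idealized TBDR: per tile, visibility is resolved before shading,
# so each pixel is shaded exactly once.
tbdr_fragment_shades = WIDTH * HEIGHT

tiles = math.ceil(WIDTH / TILE) * math.ceil(HEIGHT / TILE)
print(f"tiles per frame:             {tiles}")
print(f"immediate-mode shades/frame: {imr_fragment_shades:,}")
print(f"idealized TBDR shades/frame: {tbdr_fragment_shades:,} ({OPAQUE_LAYERS}x fewer)")
[/CODE]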

Anyhow, as for the MBA and HiDPI, I don't think any customers were against their introduction. When HiDPI first came to the Mac with the Retina MBP, almost everyone here welcomed it. The MBA was met with huge interest when first introduced in 2008.

I remember it differently. The forums were filled with complaints about how Retina Macs were laggy, blurry, too expensive for a gimmick, etc. The MBA was a great success, but the initial reaction was along the lines of "lol, Apple made a netbook".
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
In the Anandtech article you linked before, it seems that the mobile version of the 3DMark Ice Storm Unlimited benchmark is indeed using different precision from the desktop version, as the authors state:

"On Windows, it uses DX11, and of course the precision is not the same across mobile and PC with the PC version running at 32-bit and OpenGL ES 2.0 only using 16-bit."

If this is the case, then UL have messed things up. What’s the point of a benchmark if it does different things on different platforms?
 

nothingtoseehere

macrumors 6502
Jun 3, 2020
455
522
I remember it differently. The forums were filled with complaints about how Retina Macs were laggy, blurry, too expensive for a gimmick, etc.

Luckily enough, I didn't read forums then ;) but was simply excited by a high resolution display! It was Retina that brought me back to the Mac because of this:

HiDPI is a godsend for people who spend most of their time looking at text.

How true.
 

diamond.g

macrumors G4
Mar 20, 2007
11,435
2,658
OBX
If this is the case, then UL have messed things up. What’s the point of a benchmark if it does different things on different platforms?
To be fair I can’t seem to find the link that shows devices on their site. But based on the FAQ I don’t think they intend for desktop to be compared with smartphone (or tablet).
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
To be fair I can’t seem to find the link that shows devices on their site. But based on the FAQ I don’t think they intend for desktop to be compared with smartphone (or tablet).

Possible. I’m still confused why they have the same benchmark for mobile and desktop in that case.

Anyhow, to make matters worse, it is very much possible that some mobile GPUs do not support 32-Bit precision in the fragment domain at all. It is really difficult to compare things if you don’t know what you are comparing. Apple GPUs occupy this weird niche since they are basically a hybrid.

I believe that GFXbench might be the most representative benchmark at the moment. According to it, the iPad Pro trades blows with the GTX 1050.
 

diamond.g

macrumors G4
Mar 20, 2007
11,435
2,658
OBX
Possible. I’m still confused why they have the same benchmark for mobile and desktop in that case.

Anyhow, to make matters worse, it is very much possible that some mobile GPUs do not support 32-Bit precision in the fragment domain at all. It is really difficult to compare things if you don’t know what you are comparing. Apple GPUs occupy this weird niche since they are basically a hybrid.

I believe that GFXbench might be the most representative benchmark at the moment. According to it, the iPad Pro trades blows with the GTX 1050.
The mobile 1050, or desktop one?
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
The mobile 1050, or desktop one?

Mobile of course. For example:


BTW, the 1050 Ti Max-Q is the GPU in the 2018 Dell XPS 15", so we see here an Apple tablet GPU within striking distance of a contemporary 50W-TDP dedicated one.
 
Last edited:

diamond.g

macrumors G4
Mar 20, 2007
11,435
2,658
OBX
Mobile of course. For example:

Ah yeah, they had a ton of 1050s, so I wasn't sure which one to choose. I ran the benchmark on my PC and the DX12 renderer appears to be broken. Sigh. Testing Metal vs. DX11 isn't comparable, as Metal is lower-level, like DX12.
 

leman

macrumors Core
Oct 14, 2008
19,516
19,664
Ah yeah, they had a ton of 1050s, so I wasn't sure which one to choose. I ran the benchmark on my PC and the DX12 renderer appears to be broken. Sigh. Testing Metal vs. DX11 isn't comparable, as Metal is lower-level, like DX12.

Metal has different managed API layers and is not directly comparable to the DX version model. Since we don't know how GFXbench implements their tests, we need to treat all of these as approximations anyway. You won't get accurate results with any of these, but you can get a general idea. So yes, you can run the DX11 benchmark; I doubt that the rendering setup is that much different from the Metal one.
 

PortoMavericks

macrumors 6502
Jun 23, 2016
288
353
Gotham City
There appears to be a fair bit of skepticism that Apple Silicon can achieve decent graphics performance without a discrete GPU, with the implied unsuitability of new Macs for games or intensive graphical applications.

Given that both the upcoming Xbox Series X and PlayStation 5 will have powerful GPUs on an SoC, I am curious as to why some people think that Apple won't be able to do the same.

I understand that the Sony and Microsoft consoles will be running their AMD SoCs at a high TDP (maybe >200W?), but this wouldn't be a problem for the iMac, which already runs Intel Xeon + Radeon Pro GPU combinations at over 300W TDP.

If the Xbox Series X will allegedly have performance close to an Nvidia 2080 Ti on an SoC, then what would stop Apple from doing the same?

Obviously for the laptops a lower TDP would be needed. I think current MBP 16s have a TDP of about 100W for the combined CPU + dGPU, but this should still mean it's possible to have a pretty powerful 50-70W TDP GPU on the SoC, similar in capability to the current AMD dGPUs.

So I don't understand why there is doubt that it's possible to have a powerful GPU on Apple Silicon.

Or is it simply a lack of confidence that Apple has the expertise in this area to build one, compared to AMD and Nvidia, who have been producing GPUs for longer?

Intellectual property.
 
  • Like
Reactions: burgerrecords