
Zackmd1

macrumors 6502a
Oct 3, 2010
815
487
Maryland US
Yes, AMD supported it on the Xbox 360.

TBDR is mostly used on underpowered systems.


So the fact that nVidia is using TBDR on 10-20 series cards means nothing? TBDR is just another way of rendering a scene that has traditionally been used for lower power systems for efficiency reasons, not because it would hinder performance of higher power chips.
 
  • Like
Reactions: throAU

PortoMavericks

macrumors 6502
Jun 23, 2016
288
353
Gotham City

leman

macrumors Core
Oct 14, 2008
19,521
19,674
If you play a game on the iPad or the iPhone (very small screens), it looks OK, but when you start pushing across screen sizes... on a 16" MacBook, things start to get ugly.

Can you elaborate? What does TBDR have to do with screen sizes?

Guys, TBDR is like really, really bad news for gaming on the Mac.

TBDR has been used to circumvent bandwidth restrictions on old, underpowered hardware (like the Xbox 360 and then the Xbox One S and Fat), and the reconstruction is always very poor in action games.

TBDR is a sophisticated rendering approach that — as you say — has traditionally been used to optimize memory bandwidth usage. The Xbox 360 never used TBDR — it used a combination of a Z pre-pass and tiled forward rendering. Tiled forward rendering is also used in newer Nvidia and AMD GPUs and is responsible for much of their efficiency improvements.
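The Z pre-pass idea can be sketched as a toy model in Python (purely illustrative: real hardware does this with an EQUAL depth test, and all names and numbers here are made up):

```python
# Toy model of a Z pre-pass: a depth-only pass first, then a shading
# pass that only shades the fragment matching the stored depth.

def z_prepass_render(fragments):
    """fragments: list of (pixel, depth) pairs in submission order."""
    # Pass 1: depth only -- cheap, no shading work.
    depth = {}
    for pixel, z in fragments:
        if z < depth.get(pixel, float("inf")):
            depth[pixel] = z
    # Pass 2: shade only the fragment that won the depth test.
    shaded = sum(1 for pixel, z in fragments if depth[pixel] == z)
    return shaded

# Three overlapping full-screen layers over a 4-pixel "screen",
# drawn back to front (worst case for naive immediate rendering):
frags = [(p, z) for z in (3.0, 2.0, 1.0) for p in range(4)]
print(z_prepass_render(frags))  # 4: one shade per visible pixel
```

The payoff is one shade per visible pixel, bought by pushing the geometry through the pipeline twice, which is the trade-off that made it attractive on bandwidth-starved hardware.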

TBDR never caught on in desktop hardware — for reasons unknown to me (maybe because it is more complicated to implement). I suppose Apple is the first player that will make it work there. There is nothing about TBDR that makes it inherently less suitable for high-performance graphics.


At the end of the day, the message is: we have an underpowered GPU and we'll force you (dev) to use this trick. This is the final blow on the Mac as a gaming machine like the title suggests.

Forward rendering is a trick, TBDR is a trick, ray tracing is a trick — every rendering algorithm is a trick. TBDR is an especially neat trick that allows you to do things an immediate renderer will struggle with. Basically, anything an immediate renderer can do, a TBDR GPU can do too — but an immediate renderer does not give you guarantees about memory access patterns and therefore cannot implement some rendering approaches efficiently.
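To put a number on the overdraw point, here is a toy Python model (not any real GPU, just counting shading work) comparing an immediate renderer, which shades every fragment that passes the depth test, with a TBDR-style renderer that resolves visibility for the tile before shading anything:

```python
# Toy model of shading cost: immediate-mode rendering vs. TBDR.
# All names and numbers are illustrative, not any real GPU's behavior.

def immediate_mode(fragments):
    """Depth-test and shade fragments in submission order. A fragment
    that passes the depth test is shaded right away, even if a later
    fragment ends up covering it (overdraw)."""
    depth = {}
    shaded = 0
    for pixel, z in fragments:
        if z < depth.get(pixel, float("inf")):
            depth[pixel] = z
            shaded += 1  # shading work spent, possibly wasted
    return shaded

def tbdr(fragments):
    """Defer shading: resolve visibility for the whole tile first,
    then shade only the frontmost fragment per covered pixel."""
    depth = {}
    for pixel, z in fragments:
        if z < depth.get(pixel, float("inf")):
            depth[pixel] = z
    return len(depth)  # exactly one shade per covered pixel

# Three full-screen layers over a 4-pixel "tile", drawn back to front
# (the worst case for an immediate renderer):
frags = [(p, z) for z in (3.0, 2.0, 1.0) for p in range(4)]
print(immediate_mode(frags))  # 12 shades: every layer passes the test
print(tbdr(frags))            # 4 shades: one per pixel
```

With depth sorting or a Z pre-pass an immediate renderer can avoid much of the waste, but the deferred design gives that guarantee for free regardless of draw order.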
 
  • Like
Reactions: Zackmd1

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Lower polygon counts in mobile games are an ideal match for TBR. LOWER POLYGON COUNTS.


Are you really quoting a 2008 presentation in 2020? Apple TBDR GPUs have no issues whatsoever with high polygon counts. If you have high overdraw, their throughput will always outperform a forward renderer's.
What's your point? AMD and Nvidia also offer support for half-precision floating-point operations, the famous FP16, that no one uses in the PC gaming world.

FP16 is used routinely for color calculations, as you don't need much precision. Even if you don't use it yourself, there is a good chance that a driver optimization will rewrite your shaders for you.
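A quick NumPy check illustrates why half precision is enough for typical 8-bit color work: every 8-bit channel value survives a round trip through FP16, so storing intermediate colors at half precision introduces no visible banding on its own (HDR values above 1.0, where float16 spacing widens, are a different story):

```python
# Why FP16 is usually enough for color: every 8-bit channel value
# survives a round trip through half precision. Illustrative only.
import numpy as np

codes = np.arange(256)                   # all 8-bit channel values
normalized = codes / 255.0               # map to [0, 1] as a shader would
as_half = normalized.astype(np.float16)  # store/compute at half precision
recovered = np.rint(as_half.astype(np.float64) * 255.0).astype(int)

print(np.array_equal(codes, recovered))  # True: the round trip is lossless
```

The worst-case representation error near 1.0 is about 2^-11 of the value, which scales back to roughly 0.12 of an 8-bit step, well under the 0.5 needed for rounding to recover the original code.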
 
  • Like
Reactions: Zackmd1

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
Yes, AMD supported it on the Xbox 360.

TBDR is mostly used on underpowered systems.
Mostly, sure. But don’t mistake a “typical” cost-cutting application for proof of anything.

Most 4-cylinder cars are designed to be gas-conscious and economical. That doesn’t mean that *when you design the entire powertrain* you can’t make a monster 600+ HP 4-cylinder engine.

All you’ve done is take an assumption and pile it on top of another assumption without examining the entire rest of the pipeline.
 

Exit_74

macrumors newbie
Jun 25, 2020
24
22
Gizmodo summarized how it compares with Immediate Mode Rendering used by Intel, NVIDIA, and AMD GPUs.


TBDR captures the entire scene before it starts to render it, splitting it up into multiple small regions, or tiles, that get processed separately, so it processes information pretty fast and doesn’t require a lot of memory bandwidth. From there, the architecture won’t actually render the scene until it rejects any and all occluded pixels.
On the other hand, IMR does things the opposite way, rendering the entire scene before it decides what pixels need to be thrown out. As you probably guessed, this method is inefficient, yet it’s how modern discrete GPUs operate, and they need a lot of bandwidth to do so.
For Apple Silicon ARM architecture, TBDR is a much better match because its focus is on speed and lower power consumption – not to mention the GPU is on the same chip as the CPU, hence the term SoC [System on a Chip].
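The bandwidth point in that summary is easy to put rough numbers on. Here is a back-of-envelope sketch in Python, where the overdraw factor and buffer formats are assumptions chosen for illustration, not measurements of any real GPU:

```python
# Back-of-envelope framebuffer traffic, 1080p at 60 fps. The overdraw
# factor and byte sizes are assumptions, not measured numbers.

width, height, fps = 1920, 1080, 60
pixels = width * height
color_bytes, depth_bytes = 4, 4      # RGBA8 color, 32-bit depth
overdraw = 3                         # assumed average layers per pixel

# IMR: every overdrawn layer touches depth and color in DRAM.
imr_bytes = pixels * overdraw * (color_bytes + depth_bytes) * fps

# TBDR: depth and intermediate color stay in on-chip tile memory;
# only the final resolved color is written to DRAM, once per pixel.
tbdr_bytes = pixels * color_bytes * fps

print(f"IMR : {imr_bytes / 1e9:.1f} GB/s")
print(f"TBDR: {tbdr_bytes / 1e9:.1f} GB/s")
```

With these assumed numbers the immediate-mode framebuffer traffic comes out 6x higher, which is exactly the gap that on-chip tile memory is meant to close.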
 

Zackmd1

macrumors 6502a
Oct 3, 2010
815
487
Maryland US
Are you really quoting a 2008 presentation in 2020? Apple TBDR GPUs have no issues whatsoever with high polygon counts. If you have high overdraw, their throughput will always outperform a forward renderer's.


FP16 is used routinely for color calculations, as you don't need much precision. Even if you don't use it yourself, there is a good chance that a driver optimization will rewrite your shaders for you.

TBDR makes a ton of sense for an SoC due to the memory bandwidth limitations. Couple LPDDR5 with a TBDR-based GPU and I think Apple SoCs in mobile devices will match or exceed current Navi offerings in raw performance. How that translates to real-life performance will depend on developers.
 

PortoMavericks

macrumors 6502
Jun 23, 2016
288
353
Gotham City
Are you really quoting a 2008 presentation in 2020? Apple TBDR GPUs have no issues whatsoever with high polygon counts. If you have high overdraw, their throughput will always outperform a forward renderer's.

Yes I am, because Apple is about to release a Mac with a mobile GPU in 2020.


FP16 is used routinely for color calculations, as you don't need much precision. Even if you don't use it yourself, there is a good chance that a driver optimization will rewrite your shaders for you.

You are correct; however, you'll work your ass off for months optimizing the engine, depending on the scale of your game, to get a 15% performance gain. We all know game developers are always working under crunch conditions.

TBDR never caught up on desktop hardware — for reasons unknown to me

PC games are made for discrete GPUs.

Discrete GPUs = more bandwidth




I'll take another look at the WWDC videos to check whether Apple is offering other rendering techniques for games. If they're going TBDR *only*, it reveals how underpowered those GPUs will be.
 

cgsnipinva

macrumors 6502
Jan 29, 2013
494
446
Leesburg, VA
Do you think many high-end publishers will be willing to spend a lot of money porting their games over to a different OS and platform? Especially given Apple's long history of tepid support for gaming. They're going to go where the money is, and with only a tiny niche (I believe) of Apple's 10 percent market share focused on gaming, it doesn't make sense IMO. There really isn't much opportunity to make money.


That depends. One of the little-discussed attributes of the move to Apple Silicon is that development will now address a larger market than just the Mac. I have no doubt that the new silicon will have the performance horsepower -- and now developers can make games for the entire Apple market space -- macOS, iPadOS, tvOS -- not just the Mac.

That might provide the commercial incentive to port their games to the platform and leverage Metal. If Apple makes development as easy and economical as it can -- there might be a renaissance in Apple gaming.
 

NT1440

macrumors Pentium
May 18, 2008
15,092
22,158
Yes I am, because Apple is about to release a Mac with a mobile GPU in 2020.




You are correct; however, you'll work your ass off for months optimizing the engine, depending on the scale of your game, to get a 15% performance gain. We all know game developers are always working under crunch conditions.



PC games are made for discrete GPUs.

Discrete GPUs = more bandwidth




I'll take another look at the WWDC videos to check whether Apple is offering other rendering techniques for games. If they're going TBDR *only*, it reveals how underpowered those GPUs will be.
I don’t think you actually understand what you’re talking about here to be honest. You’re just making assumptions, and ones not even based on an actual technical understanding of the topic.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Yes I am, because Apple is about to release a Mac with mobile GPU in 2020.

...

I'll take another look at the WWDC videos to check whether Apple is offering other rendering techniques for games. If they're going TBDR *only*, it reveals how underpowered those GPUs will be.

Yes, they are finally bringing a TBDR GPU with its many benefits to the desktop. As to how "underpowered" they will be, we will see.


PC games are made for discrete GPUs.

Discrete GPUs = more bandwidth

Then it's finally time to do something new. The brute-force approach of forward rendering is reaching its limit. Smarter algorithms are the way to go.
 

PortoMavericks

macrumors 6502
Jun 23, 2016
288
353
Gotham City
Yes, they are finally bringing a TBDR GPU with its many benefits to the desktop. As to how "underpowered" they will be, we will see.




Then it's finally time to do something new. The brute-force approach of forward rendering is reaching its limit. Smarter algorithms are the way to go.

I totally agree with that.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
Yes, they are finally bringing a TBDR GPU with its many benefits to the desktop. As to how "underpowered" they will be, we will see.




Then it's finally time to do something new. The brute-force approach of forward rendering is reaching it's limit. Smarter algorithms are the way to go.
I wonder how flexible Apple's solution really is. AMD tends to do more things in shaders these days versus having a fixed-function unit. Which is why people are interested in how AMD will implement DXR versus the fixed-function units that Nvidia uses.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
I wonder how flexible Apple's solution really is. AMD tends to do more things in shaders these days versus having a fixed-function unit. Which is why people are interested in how AMD will implement DXR versus the fixed-function units that Nvidia uses.

Well, Apple GPUs have been fully async for a long while now, which is one of the reasons why they are fast. One of the reasons TBDR is perceived as slow (as @PortoMavericks points out rather nonchalantly) is that historically, low-powered GPUs indeed had limited front-end capacity. Their primitive throughput was low, and once you tried to draw something with a lot of triangles, they would bottleneck. Apple GPUs, however, can schedule vertex, fragment and compute shaders asynchronously on the same hardware (a feature they inherit from their PowerVR past), so they are able to get good shader core utilization in most cases. There is obviously plenty of fixed function left (the binner and clipper have to be fixed-function if you want remotely good performance). At the same time, the TBDR nature of the GPU allows Apple to move some other traditionally fixed-function stuff into the programmable pipeline (e.g. blending, which is fully programmable on Apple GPUs).

As far as ray tracing goes — good question. Currently, ray tracing in Metal seems to run entirely on programmable GPU hardware. Apple's ray tracing API seems straightforward enough, but I have to admit I am not very familiar with the DX12 API, so I can't draw parallels. My uneducated guess would be that the Metal API would be compatible with at least some degree of hardware-accelerated RT (like ray/object intersectors), so maybe we will see something like that in the future. Current performance is nothing to write home about. I believe it is currently targeted more towards professional rendering software than real-time games. You can do some interesting stuff with it though, such as pathfinding.
 

dogslobber

macrumors 601
Oct 19, 2014
4,670
7,809
Apple Campus, Cupertino CA
This TBDR vs. IMR question is all down to how easy it is to implement for. If it's too much hassle, games companies will just pass. The power of TBDR appears to be the parallelism, where IMR sounds like brute force. I'm not a GPU expert so I am speculating, but without doing it the TBDR way, the ARM GPU will end up pretty weak and pretty hot.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
This TBDR vs. IMR question is all down to how easy it is to implement for. If it's too much hassle, games companies will just pass. The power of TBDR appears to be the parallelism, where IMR sounds like brute force. I'm not a GPU expert so I am speculating, but without doing it the TBDR way, the ARM GPU will end up pretty weak and pretty hot.

TBDR is a property of the hardware itself; you don't have to do anything specific to take advantage of it. There are some pitfalls: for example, it is possible to draw your scene in a way that disables the deferred part of the pipeline, turning your GPU into a slow version of a forward renderer. But these things are suboptimal on traditional GPUs as well, so games usually avoid them.

However, a unique aspect of Apple's TBDR implementation is that it allows you to take precise control of the on-chip tile memory, which can improve performance while at the same time making your drawing code simpler. So you can invest some development time in making your games run more efficiently (=faster). And if spending a week or two tweaking the code would give your game a 30% performance boost — then suddenly it becomes possible to run it on more hardware.

And I think there might be some appeal here for a game developer. One reason the Mac does not have many games is simply that porting games to the Mac is difficult. The majority of Macs have only very mediocre GPUs, and driver quality is subpar. Only the most expensive Macs are suitable for gaming, and even then it's kind of "meh". All this makes Mac game development a bad investment.

But if the mainstream Mac suddenly had a decent GPU and a streamlined API across the board, and you could make your game run better than on a traditional GPU by tweaking your engine, things start getting more interesting. Don't forget that high-end gaming PCs are a minority. Looking at the Steam hardware survey, most PCs out there aren't too impressive in the GPU department either. With their TBDR architecture, Apple has the potential of bringing mid-range graphical performance to ultra-portable laptops. Of course, it all depends on how the hardware performs in the end. But if they can match a GTX 1660 in their entry-level 13" laptop — then suddenly every Mac is a mainstream gaming box.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
This TBDR vs. IMR question is all down to how easy it is to implement for. If it's too much hassle, games companies will just pass. The power of TBDR appears to be the parallelism, where IMR sounds like brute force. I'm not a GPU expert so I am speculating, but without doing it the TBDR way, the ARM GPU will end up pretty weak and pretty hot.
IIRC PowerVR3 and earlier were good cards. Two things changed: hardware lighting arrived, and ATI/Nvidia/Voodoo wrote better drivers. A lot of the performance you see in games today is due to AMD/Nvidia constantly tweaking their drivers to run games better (massaging how the HLSL runs). Apple could do the same if they chose to (they don't currently).
 
  • Like
Reactions: dogslobber

Erehy Dobon

Suspended
Feb 16, 2018
2,161
2,017
No service
That depends. One of the little-discussed attributes of the move to Apple Silicon is that development will now address a larger market than just the Mac. I have no doubt that the new silicon will have the performance horsepower -- and now developers can make games for the entire Apple market space -- macOS, iPadOS, tvOS -- not just the Mac.

That might provide the commercial incentive to port their games to the platform and leverage Metal. If Apple makes development as easy and economical as it can -- there might be a renaissance in Apple gaming.
It's not discussed because there is a longstanding, active game development environment for the iOS/iPadOS universe. Adding a few thousand Apple Silicon Macs to an existing user base of hundreds of millions of iOS device owners is not really a promise of much profit.

This doesn't open up a wider audience for game developers. It mostly gives existing mobile game developers a chance to add a handful of Mac users. Apple is leveraging their mobile device hegemony back to the Mac, not the other way around.

Remember that Macs are less than 10% of the world's total deployed PC market. The first wave of Apple Silicon Macs won't even amount to 0.1% of the world's total PCs. If Apple maintains the same PC market share, it would still take roughly 10 years for Apple users to migrate from Intel Macs to Apple Silicon Macs. It's not going to instantly create a 10% Apple Silicon Mac market share.

I simply don't see many game developers willing to put the required long-term effort into Apple Silicon Mac game development when the likely profits will be meager at best.
 

iindigo

macrumors 6502a
Jul 22, 2002
772
43
San Francisco, CA
Don't forget that high-end gaming PC's are a minority. Looking at the Steam hardware survey, most PC out there aren't too impressive in the GPU department either.

This either gets forgotten or goes unacknowledged in so, so many online discussions. People talk as if it's the norm to be running at least an RTX 2070 Super, but the reality is that, on average, computers being used for gaming sit somewhere between the GTX 950 and 1060, with the low end dipping down as far as the Intel HD 4000. Those with high-end gaming rigs are just disproportionately vocal.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
This either gets forgotten or goes unacknowledged in so, so many online discussions. People talk as if it's the norm to be running at least an RTX 2070 Super, but the reality is that, on average, computers being used for gaming sit somewhere between the GTX 950 and 1060, with the low end dipping down as far as the Intel HD 4000. Those with high-end gaming rigs are just disproportionately vocal.
And the consoles were also midrange (upon release). Even the next gen will be midrange, as Big Navi is expected to have ~60 or more CUs.

But that never stopped enthusiast PC folks from being able to take advantage of things not found elsewhere (see variable rate shading, ray-traced lighting, etc.). Those things eventually trickle down to the mainstream if popular enough (or become the new baseline in the next-gen consoles).
 

bousozoku

Moderator emeritus
Jun 25, 2002
16,120
2,399
Lard
If Apple would help port the greatest game engines to its technology, a lot of progress could be made. Since Apple is not going to abandon Metal, they could provide help converting code that uses OpenGL to Metal. I still remember in the 1990s when they tried to lure game developers with, what was it, GameKit, SpriteKit, etc. They've been a moving target.

The trouble for me is waiting for a Windows game to be ported and finding it at US$9.99 on that side, while the recently ported game is US$49.99.

If Apple threw serious money at a small studio like Deep Silver--the Exodus titles--they might be seen in a different light.
 