If anyone is confused about how this rendering technique works, here's a very detailed Beyond 3D article on the Xbox 360 GPU:
Yes, AMD supported it on the Xbox 360.
TBDR is mostly used on underpowered systems.
So the fact that nVidia is using TBDR on its 10- and 20-series cards means nothing? TBDR is just another way of rendering a scene; it has traditionally been used on lower-power systems for efficiency reasons, not because it would hinder the performance of more powerful chips.
If you play a game on an iPad or an iPhone, with their very small screens, it looks OK. When you start pushing across screen sizes... on a 16" MacBook, things start to get ugly.
Guys, TBDR is like really, really bad news for gaming on the Mac.
TBDR has been used to circumvent bandwidth restrictions on old, underpowered hardware (like the Xbox 360, and then the Xbox One S and the original "fat" model), and the reconstruction is always very poor in action games.
At the end of the day, the message is: we have an underpowered GPU and we'll force you (the dev) to use this trick. This is the final blow to the Mac as a gaming machine, like the title suggests.
These aren't weak mobile GPUs?
Lower polygon counts in mobile games are an ideal match for TBR. LOWER POLYGON COUNTS.
What's your point? AMD and Nvidia also support half-precision floating-point operations, the famous FP16, which no one uses in the PC gaming world.
Mostly, sure. But don't confuse "typical" cost-cutting application for proof of anything.
TBDR captures the entire scene before it starts to render it, splitting it up into multiple small regions, or tiles, that get processed separately, so it processes information pretty fast and doesn’t require a lot of memory bandwidth. From there, the architecture won’t actually render the scene until it rejects any and all occluded pixels.
On the other hand, IMR does things the opposite way, rendering the entire scene before it decides what pixels need to be thrown out. As you probably guessed, this method is inefficient, yet it’s how modern discrete GPUs operate, and they need a lot of bandwidth to do so.
For Apple Silicon ARM architecture, TBDR is a much better match because its focus is on speed and lower power consumption – not to mention the GPU is on the same chip as the CPU, hence the term SoC [System on a Chip].
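For anyone who wants to see the bandwidth argument in concrete terms, here's a minimal toy model in plain Swift (entirely made up for illustration; real GPUs bin triangles rather than ready-made fragments, and none of this is actual Metal API). It just counts how many fragments each approach has to shade for the same overdraw-heavy scene:

```swift
// Toy model of IMR vs. TBDR fragment work (hypothetical numbers, not Metal API).
// "Shading" a fragment is the expensive step; the question is how many fragments
// each approach ends up shading for the same scene.

struct Fragment {
    let x: Int, y: Int
    let depth: Float      // smaller = closer to the camera
}

let width = 8, height = 8, tileSize = 4

// An overdraw-heavy scene: three full-screen layers stacked on top of each other.
var fragments: [Fragment] = []
for layer in 0..<3 {
    for y in 0..<height {
        for x in 0..<width {
            fragments.append(Fragment(x: x, y: y, depth: Float(layer) * 0.1))
        }
    }
}

// Immediate-mode rendering: shade in submission order; the depth test can only
// reject a fragment if something *already drawn* happens to be in front of it.
func imrShadedCount(_ frags: [Fragment]) -> Int {
    var depthBuffer = [Float](repeating: .infinity, count: width * height)
    var shaded = 0
    for f in frags {
        let i = f.y * width + f.x
        if f.depth < depthBuffer[i] {
            depthBuffer[i] = f.depth
            shaded += 1   // work that may turn out to be wasted (occluded later)
        }
    }
    return shaded
}

// TBDR: bin the whole scene into tiles, resolve visibility per pixel first,
// then shade exactly one surviving fragment per covered pixel.
func tbdrShadedCount(_ frags: [Fragment]) -> Int {
    var shaded = 0
    for ty in stride(from: 0, to: height, by: tileSize) {
        for tx in stride(from: 0, to: width, by: tileSize) {
            let tile = frags.filter {
                $0.x >= tx && $0.x < tx + tileSize &&
                $0.y >= ty && $0.y < ty + tileSize
            }
            // Hidden surface removal: keep only the nearest fragment per pixel.
            var nearest = [Int: Float]()
            for f in tile {
                let key = f.y * width + f.x
                nearest[key] = min(nearest[key] ?? .infinity, f.depth)
            }
            shaded += nearest.count
        }
    }
    return shaded
}

// Worst case for IMR: back-to-front submission, so every layer passes the test.
let backToFront = fragments.sorted { $0.depth > $1.depth }
print("IMR shaded fragments:  \(imrShadedCount(backToFront))")   // 192
print("TBDR shaded fragments: \(tbdrShadedCount(backToFront))")  // 64
```

Back-to-front submission is the worst case for immediate mode: it shades all 192 fragments, while the deferred path shades only the 64 visible ones. On real tilers, the per-tile bookkeeping also lives in small on-chip memory rather than external DRAM, which is where the bandwidth savings come from.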
Are you really quoting a 2008 presentation in 2020? Apple TBDR GPUs have no issues whatsoever with high polygon counts. If you have high overdraw, their throughput will always outperform a forward renderer.
FP16 is used routinely for color calculations, as you don't need much precision. Even if you don't use it yourself, there is a good chance that a driver optimization will rewrite your shaders for you.
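For what it's worth, here's a quick back-of-the-envelope sketch in plain Swift showing why half precision is safe for this (my own toy example, not from any real engine; Swift's Float16 needs an ARM/Apple Silicon toolchain, and in an actual shader the equivalent type would be `half`):

```swift
// Illustrative only: color math in the 0...1 range done in half precision.

func blend(_ src: Float16, _ dst: Float16, alpha: Float16) -> Float16 {
    // Classic alpha blend, performed entirely in 16-bit floats.
    src * alpha + dst * (1 - alpha)
}

let h = blend(0.8, 0.25, alpha: 0.6)
let f = 0.8 * 0.6 + 0.25 * (1 - 0.6)   // the same math in Double

// The half-precision error is far below 1/255, i.e. invisible in 8-bit output.
print(h, f, abs(Double(h) - f))
```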
TBDR never caught on in desktop hardware, for reasons unknown to me.
Do you think that many high-end publishers will be willing to spend a lot of money porting their games over to a different OS and platform? Especially given Apple's long history of providing tepid support for gaming. They're going to go where the money is, and with only a tiny niche (I believe) of Apple's 10 percent market share focused on gaming, it doesn't make sense IMO. There really isn't much opportunity to make money.
I don't think you actually understand what you're talking about here, to be honest. You're just making assumptions, and ones not even based on an actual technical understanding of the topic.

Yes I am, because Apple is about to release a Mac with a mobile GPU in 2020.
You are correct. However, you'll work your ass off for months optimizing the engine, depending on the scale of your game, just to get a 15% performance gain. We all know how game developers are always working under crunch conditions.
PC games are made for discrete GPUs.
Discrete GPUs = more bandwidth
I'll take another look at the WWDC videos to check whether Apple is offering other rendering techniques for games. If they're going TBDR *only*, it reveals how underpowered those GPUs will be.
Yes, they are finally bringing a TBDR GPU with its many benefits to the desktop. As to how "underpowered" they will be, we will see.
Then it's finally time to do something new. The brute-force approach of forward rendering is reaching its limit. Smarter algorithms are the way to go.
I wonder how flexible Apple's solution really is. AMD tends to do more things in shaders versus having a fixed-function unit these days, which is why people are interested in how AMD will implement DXR versus the fixed-function units that Nvidia uses.
This TBDR vs. IMR question all comes down to how easy it is to implement, too. If it's too much hassle, games companies will just pass. The power of TBDR appears to be the parallelism, whereas IMR sounds like brute force. I'm not a GPU expert, so I'm speculating, but without doing it the TBDR way, that ARM GPU will end up pretty weak and pretty hot.
IIRC, PowerVR3 and earlier were good cards. Two things changed: hardware lighting, and ATI/Nvidia/Voodoo wrote better drivers. A lot of the performance you see in games today is due to AMD/Nvidia constantly tweaking their drivers to run games better (massaging how the HLSL runs). Apple could do the same if they chose to (they don't currently).
That depends. One of the little-discussed attributes of the move to Apple Silicon is that development will now address a larger market than originally just the Mac. I have no doubt that the new silicon will have the performance horsepower, and now developers can make games for the entire Apple market space -- macOS, iPadOS, tvOS -- not just the Mac. That might provide the commercial incentive to port their games to the platform and leverage Metal. If Apple makes development as easy and economical as it can, there might be a renaissance in Apple gaming.

It's not discussed because there is a longstanding, active game development environment for the iOS/iPadOS universe. Adding a few thousand Apple Silicon Macs to an existing user base of hundreds of thousands of iOS device owners is not really a promise of much profit.
Don't forget that high-end gaming PCs are a minority. Looking at the Steam hardware survey, most PCs out there aren't too impressive in the GPU department either.
And the consoles were also midrange (upon release). Even the next gen will be midrange, as Big Navi is expected to have ~60 or more CUs.

This either gets forgotten or goes unacknowledged in so, so many online discussions. People talk as if it's the norm to be running at least an RTX 2070 Super, but the reality is that, on average, computers being used for gaming sit somewhere between a GTX 950 and a 1060, with the low end dipping as far down as Intel HD 4000 graphics. Those with high-end gaming rigs are just disproportionately vocal.