It’ll be interesting to see what graphics capabilities Apple provides in their initial Mac offering. That certainly won’t use fast RAM, BUT the level of performance could reset folks’ expectations of TBDR.
I agree. 3DMark Wild Life (a benchmark I tend to trust, because it does exactly the same work across platforms) suggests that the 4-core GPU in the iPhone is a match for any AMD APU (and significantly faster than the Iris Plus in the current 13" MBP). A higher-clocked version with more cores will definitely be a big upgrade for Apple's entry-level systems.
No, they’re not super-dumb. I’d wager that TBDR is “dumber” in the sense that it’s simpler. With IMR there are a lot of things that need to be done, and done FAST (huge bandwidth), when you’re dealing with the entire screen at once (while also reducing the need for expensive, power-hungry fast RAM as much as possible). Even the games factor into it, as they spend additional processing time performing a depth check that wouldn’t be required for TBDR. There’s a LOT here that doesn’t apply directly to TBDR.
Not really. From an engineering (and system-design) perspective, TBDR is much more complicated. With TBDR, you need to deal with binning, tile rasterization (while tracking the front-most primitive for each pixel), and pixel shading that fetches primitive data for each pixel, all while handling tricky corner cases like transparent pixels and overflowing tile buffers. Basic IMR rendering is much simpler in comparison: you fetch a primitive, you rasterize it, you shade the rasterized pixels, done.
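The contrast can be caricatured with a toy sketch (Swift, with made-up types; nothing here is real GPU or Metal API). For a single pixel covered by three opaque triangles submitted back-to-front, an IMR-style per-fragment depth test still shades every primitive (overdraw), while TBDR-style per-tile tracking defers shading until only the front-most primitive is known:

```swift
// Toy model of one pixel covered by three opaque triangles,
// submitted back-to-front. Types are invented for illustration.

struct Fragment {
    let primitiveID: Int
    let depth: Double   // smaller value = closer to the camera
}

let fragments = [
    Fragment(primitiveID: 0, depth: 0.9),
    Fragment(primitiveID: 1, depth: 0.5),
    Fragment(primitiveID: 2, depth: 0.1),
]

// IMR: shade each primitive as it arrives; the depth test runs per
// fragment, so back-to-front submission shades all three (overdraw).
func imrShadedCount(_ frags: [Fragment]) -> Int {
    var depthBuffer = Double.infinity
    var shaded = 0
    for f in frags where f.depth < depthBuffer {
        depthBuffer = f.depth
        shaded += 1          // fragment shader would run here
    }
    return shaded
}

// TBDR: rasterize the tile first, tracking only the front-most
// primitive per pixel; shade once at resolve time.
func tbdrShadedCount(_ frags: [Fragment]) -> Int {
    var frontMost: Fragment?
    for f in frags where f.depth < (frontMost?.depth ?? .infinity) {
        frontMost = f        // no shading yet, just depth tracking
    }
    return frontMost == nil ? 0 : 1  // shade only the visible fragment
}

print(imrShadedCount(fragments))   // 3: every primitive gets shaded
print(tbdrShadedCount(fragments))  // 1: only the front-most is shaded
```

Real hardware muddies this picture (early-z, hierarchical z, and so on), but it captures why a depth pre-pass or front-to-back sorting helps an IMR and buys a TBDR GPU nothing for opaque geometry.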
And of course a TBDR GPU needs to do a depth check: how else would it detect which objects are in front and which are behind? If you mean a depth pre-pass, then yes, a TBDR GPU doesn't benefit from one (in fact, it suffers a performance penalty), but there are other things developers need to keep in mind, like correctly annotating render pass attachments or drawing transparent objects in the proper order.
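On the transparency point, here's a minimal sketch (Swift, toy single-channel types, not Metal) of why submission order matters: transparent fragments defeat hidden-surface removal, because the blend is order-dependent:

```swift
// Toy "over" compositing for one pixel, single color channel.
// Types and values are invented for illustration.

struct TransparentLayer {
    let color: Double   // 0...1
    let alpha: Double   // 0...1
}

// Standard "source over destination" blend.
func blend(dst: Double, src: TransparentLayer) -> Double {
    src.color * src.alpha + dst * (1 - src.alpha)
}

func composite(background: Double, layers: [TransparentLayer]) -> Double {
    layers.reduce(background) { blend(dst: $0, src: $1) }
}

let near = TransparentLayer(color: 1.0, alpha: 0.5)  // closer to camera
let far  = TransparentLayer(color: 0.2, alpha: 0.5)  // farther away

// Correct: draw back-to-front (far first, near last).
let correct = composite(background: 0.0, layers: [far, near])  // 0.55
// Swapping the order blends the same layers to a different result.
let wrong   = composite(background: 0.0, layers: [near, far])  // 0.35

print(correct, wrong)
```

Opaque geometry can be depth-sorted by the hardware per tile, but blending like this has to happen in the order the app submits it, on TBDR and IMR alike.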