SGI did Unified Memory Architecture (UMA) in the mid-Nineties in their O2 graphics workstations.

So many CPU cores sharing the same memory with a 3080 Ti-level GPU? Nobody has been able to do that, and Apple is not a magician.
AMD would call it an APU, but that may be semantics? https://en.wikichip.org/wiki/microsoft/scorpio_engine shows what the Xbox One X looks like, Series X (and PS5) will be similar.
If your work is both CPU and GPU heavy, with a heavy memory workload (all of which MP users have repeatedly mentioned as necessities), then there will be a bottleneck.
In addition, what has been discussed in this forum as Apple's revolutionary way to surpass its chip competitors is not an industry secret. Competition in this sector is so fierce that if such techniques vastly outperformed the previous methods, I'd guess Intel, AMD, and Nvidia would have already adopted them.
By ignorant Western media and technologists, mostly.

...or HiDPI screens. Those were laughed at in the beginning.
A lot of people didn't get it when Retina displays debuted on iPhones and many of them still don't get it. HiDPI video playback isn't the primary benefit. HiDPI text rendering is.
I totally agree, the skepticism is mostly unfounded.
Let's talk numbers by comparing Metal GPU performance figures...
Fastest Apple GPU to date (current iPad Pro) - Apple A12Z --> 9105.
Fastest GPU option available for current MacBook Pro 13" - Intel Iris Plus --> 8499 (7% slower).
Fastest GPU option available for current MacBook Pro 16" - AMD Radeon Pro 5600M --> 40714 (3.5x faster).
Fastest GPU option available for current iMac 27" - AMD Radeon Pro Vega 48 --> 49589 (4.4x faster).
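For what it's worth, here's the arithmetic behind those figures as a quick Swift sketch (scores exactly as quoted above; the "% slower" / "x faster" values are all relative to the A12Z score):

```swift
import Foundation

// Scores as quoted above; "slower"/"faster" are computed relative to the A12Z.
let a12z = 9105.0          // Apple A12Z (current iPad Pro)
let irisPlus = 8499.0      // Intel Iris Plus (MacBook Pro 13")
let radeon5600M = 40714.0  // AMD Radeon Pro 5600M (MacBook Pro 16")
let vega48 = 49589.0       // AMD Radeon Pro Vega 48 (iMac 27")

let irisSlower = (1 - irisPlus / a12z) * 100   // ≈ 6.7%  → "7% slower"
let r5600Faster = radeon5600M / a12z - 1       // ≈ 3.5   → "3.5x faster"
let vegaFaster = vega48 / a12z - 1             // ≈ 4.4   → "4.4x faster"

print(String(format: "Iris Plus: %.1f%% slower", irisSlower))
print(String(format: "Radeon Pro 5600M: %.1fx faster", r5600Faster))
print(String(format: "Radeon Pro Vega 48: %.1fx faster", vegaFaster))
```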
And as has been pointed out numerous times, not every 'power' customer requires lots of graphics compute. Apple will no doubt design some fantastic CPUs, but I don't believe that the embedded graphics will be anything in that power range.
No.
There are tons of similar GPU units in data centers all around the world right now.
High-performance GPU architecture made cryptocurrency mining on CPUs obsolete years ago.
What you are asking happened about five years ago.
what's your source for this?
No one has made a big TBDR part before, so it is hard to say how easily it can scale. Remember there are two performance metrics folk are looking at here: compute and rasterization. Being good in one doesn't immediately make you good in the other.

Thanks. I was aware of GPGPU compute for Bitcoin mining and other high-performance compute tasks, but wasn't sure how this translated into the graphical performance of GPUs on SoCs.
It does sound like there is no technical limitation (other than TDP) to putting the equivalent of today's top desktop GPUs into a SoC. So why are there naysayers who claim "it can't be done"? Do they know something I don't?
Lots of specs on the web:
Xbox Series X review (www.tomsguide.com)
Xbox Series X games, specs, price, how it compares to PS5, Xbox Series S (www.cnet.com)
These don't appear to be rumors, but actual specifications.
I don't know whether mobile graphics benchmarks use different precision than the desktop ones (I hope not, that would be awkward...).
Well, TBDR was one of the examples I was referring to. Others being hyper bandwidth, P/E cores, etc.

Are you referring to TBDR? That's an interesting topic. From my layman understanding, TBDR renderers didn't establish themselves in the desktop segment because they are much more complex and because, with a larger thermal budget, a forward renderer can just brute-force its way through. A criticism often brought up with TBDR is poor geometry throughput - less of an issue with mobile applications and their traditionally lower polygon counts, but critical for high-poly PC games. But that was the state of the art ten years ago. Apple seems to have solved it by utilizing the unified shader pipeline - since geometry, compute and fragment processing runs asynchronously on the same hardware, it's easier to balance out the eventual bottlenecks. As to why Nvidia and co. don't use it - well, probably because they were not interested in this tech. Their stuff works well enough, and they did borrow some ideas like tiling (but without deferred fragment shading) to make their GPUs more efficient. Revolutions sometimes come simply because someone has tried (and succeeded at) something that others thought would not work. Again, think about the MacBook Air or HiDPI screens. Those were laughed at in the beginning.
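For anyone following along, here's a toy Swift sketch of the "deferred" part - why a TBDR GPU ends up shading fewer fragments than an immediate-mode renderer on overlapping opaque geometry. It's purely conceptual (rectangles instead of triangles, no early-Z, no real binning hardware), and every number in it is made up for illustration:

```swift
// Toy model of the overdraw difference between an immediate-mode GPU and a
// tile-based deferred renderer (TBDR). Illustrative only.
struct Draw {
    let x0: Int, y0: Int, x1: Int, y1: Int  // covered pixel rectangle
    let depth: Float                        // constant depth, smaller = nearer
}

let width = 64, height = 64, tile = 16
// Three overlapping opaque layers submitted back-to-front (worst case for IMR).
let draws = [
    Draw(x0: 0,  y0: 0,  x1: 63, y1: 63, depth: 0.9),
    Draw(x0: 8,  y0: 8,  x1: 55, y1: 55, depth: 0.5),
    Draw(x0: 16, y0: 16, x1: 47, y1: 47, depth: 0.1),
]

// Immediate mode: depth-test and shade fragments in submission order.
var depthBuffer = [Float](repeating: .infinity, count: width * height)
var immediateShades = 0
for d in draws {
    for y in d.y0...d.y1 {
        for x in d.x0...d.x1 where d.depth < depthBuffer[y * width + x] {
            depthBuffer[y * width + x] = d.depth
            immediateShades += 1   // fragment shader runs, result may be overwritten later
        }
    }
}

// TBDR: bin work per tile, resolve visibility first, then shade each pixel once.
var deferredShades = 0
for ty in stride(from: 0, to: height, by: tile) {
    for tx in stride(from: 0, to: width, by: tile) {
        for y in ty..<(ty + tile) {
            for x in tx..<(tx + tile) {
                // Hidden-surface removal: only the nearest covering draw gets shaded.
                if draws.contains(where: { x >= $0.x0 && x <= $0.x1 && y >= $0.y0 && y <= $0.y1 }) {
                    deferredShades += 1
                }
            }
        }
    }
}

print("Immediate-mode fragment shades:", immediateShades)  // includes all the overdraw
print("TBDR fragment shades:", deferredShades)             // one per visible pixel
```

With the three layers submitted back-to-front, the immediate-mode count pays for every overdrawn fragment, while the deferred count is exactly one shade per covered pixel.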
Sorry for quoting Wikipedia, and correct me if that's wrong, but it looks like all current Nvidia and AMD models use a tiled architecture with immediate-mode rendering running in tandem (or at least that's how I read those descriptions) to get the pros of both worlds. I don't know enough about that field to distinguish TBR vs TBDR, but it looks like the industry is already using many aspects of TBR rather than immediate-mode rendering only.
Anyhow, as for the MBA and HiDPI, I don't think any customer was against those introductions. When HiDPI was first introduced with the Retina MBP on the Mac, almost everyone here welcomed it. The MBA was met with huge interest when first introduced in 2008.
In the Anandtech article you linked before, it seems that the mobile version of the 3DMark IceStorm Unlimited benchmark is indeed using different precision from the desktop version, as the authors state:
"On Windows, it uses DX11, and of course the precision is not the same across mobile and PC with the PC version running at 32-bit and OpenGL ES 2.0 only using 16-bit."
I remember it differently. The forums were filled with complaints about how Retina Macs were laggy, blurry, too expensive for a gimmick, etc.
HiDPI is a godsend for people who spend most of their time looking at text.
To be fair I can’t seem to find the link that shows devices on their site. But based on the FAQ I don’t think they intend for desktop to be compared with smartphone (or tablet).

If this is the case, then UL have messed things up. What’s the point of a benchmark if it does different things on different platforms?
Possible. I’m still confused why they have the same benchmark for mobile and desktop in that case.
Anyhow, to make matters worse, it is very much possible that some mobile GPUs do not support 32-bit precision in the fragment domain at all. It is really difficult to compare things if you don’t know what you are comparing. Apple GPUs occupy this weird niche since they are basically a hybrid.
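To make the 16-bit vs 32-bit point concrete, here's a tiny Swift illustration (Float16 stands in here for a GPU's half-precision fragment path; it needs an arm64 toolchain, e.g. Apple Silicon, to compile):

```swift
// Accumulating the same value at 16-bit vs 32-bit precision.
// Float16 runs out of significand bits long before Float does, so two
// "identical" workloads are not doing the same numerical work.
var half: Float16 = 0
var single: Float = 0
for _ in 0..<5_000 {
    half += 0.1     // rounding error grows, then the sum stalls near 256
    single += 0.1
}
print("16-bit sum:", half)    // far short of 500
print("32-bit sum:", single)  // ≈ 500 (small rounding error)
```

Same loop, very different answers - which is why a score produced at 16-bit precision isn't directly comparable to one produced at 32-bit.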
I believe that GFXBench might be the most representative benchmark at the moment. According to it, the iPad Pro trades blows with the GTX 1050.
The mobile 1050, or desktop one?
Mobile of course. For example:
GFXBench - Unified cross-platform 3D graphics benchmark database (gfxbench.com)
Ah yeah, they had a ton of 1050s, so I wasn’t sure which one to choose. I ran the benchmark on my PC and the DX12 renderer appears to be broken. Sigh. Testing Metal vs DX11 isn’t comparable, as Metal is lower level, like DX12.
There appears to be a fair bit of skepticism that Apple Silicon can achieve decent graphics performance without a discrete GPU, with the implied unsuitability of new Macs for games or intensive graphical applications.
Given that both the upcoming Xbox Series X and PlayStation 5 will have powerful GPUs on a SoC, I am curious as to why some people think that Apple won't be able to do the same.
I understand that the Sony and Microsoft consoles will be running their AMD SoCs at a high TDP (maybe >200W?), but this wouldn't be a problem for the iMac, which can already run Intel Xeon + Radeon Pro GPU combinations with over 300W TDP.
If the Xbox Series X will allegedly have performance close to an Nvidia RTX 2080 Ti on a SoC, then what would stop Apple from doing the same?
Obviously a lower TDP would be needed for the laptops. I think the current MBP 16" has a combined CPU + dGPU TDP of about 100W, but that should still leave room for a pretty powerful 50-70W GPU on the SoC, similar in capability to the current AMD dGPUs.
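Purely as back-of-the-envelope arithmetic in Swift (all figures are the rough estimates from this post, and the CPU/GPU split is my own assumption):

```swift
// Rough power-budget arithmetic using the estimates quoted in this post.
// The 35 W CPU share is a hypothetical split, not a measured figure.
let mbp16CombinedTDP = 100.0   // quoted estimate: CPU + dGPU in an MBP 16"
let assumedCpuShare = 35.0     // assumed headroom reserved for the CPU cores
let laptopGpuBudget = mbp16CombinedTDP - assumedCpuShare
print("Laptop SoC GPU budget: \(laptopGpuBudget) W")   // lands in the 50-70 W ballpark

let iMacComboTDP = 300.0       // quoted: Xeon + Radeon Pro iMac configurations
let consoleSoCTDP = 200.0      // quoted: alleged Series X / PS5 SoC range
print("Desktop headroom over a console-class SoC: \(iMacComboTDP - consoleSoCTDP) W")
```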
So I don't understand why there is doubt that it's possible to have a powerful GPU on Apple Silicon.
Or is it simply a lack of confidence that Apple has the expertise in this area to build one, compared to AMD and Nvidia, who have been producing GPUs for longer?