
IvanKaramazov

macrumors member
Jul 23, 2020
32
49
I couldn't find results for AMD APUs. The 3DMark database browser seems broken; the search filters don't appear to work.

EDIT: OK, I can filter for the best results and find devices that yield results close to the A14 numbers reported on other sites. Indeed, the best mobile AMD parts are about as fast as the A14 (score of 8600). The Intel Iris results are nowhere to be seen.
The 3DMark database browser is a mess. The A14, however, is faster than the Ice Lake iGPU (e.g. in the current 13" MacBook Pro), but significantly slower than the Tiger Lake iGPU. You can see the Tiger Lake number for the top-end spec here.

On the other hand, the current iPad Pro GPU is marginally faster than the new Tiger Lake chips, despite being far less power-hungry. And remember, that is an A12-based GPU as well. If we assume only the same 30% increase seen from the A12 to the A14 in the iPad Air, that number will be substantially higher in a theoretical A14X.

EDIT:

Also instructive is the comparison with dedicated GPUs, as seen here. The article is, if I may be frank, pretty stupid. But note that if the A14X does indeed see a 30% increase, it would score around 16,000. That is within spitting distance of the GTX 1650 Max-Q. If the Apple Silicon chip in the 13" Pro, for example, were to move from 8 GPU cores (à la the iPad Pro) to 12, it's not at all unreasonable that it would be a good deal faster than the 1650.
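For anyone who wants to check that arithmetic: the A14 score is the one quoted above, while the iPad Pro score is my own assumption, backed out from the ~16,000 projection (it is not a number pulled from the 3DMark database).

```python
# Back-of-envelope check of the A14X projection above.
a14_score = 8600         # A14 Wild Life score, as quoted earlier in the thread
ipad_pro_score = 12300   # assumed A12Z iPad Pro score, implied by the ~16,000 figure
uplift = 1.30            # the same ~30% A12 -> A14 increase seen in the iPad Air

a14x_projected = ipad_pro_score * uplift
print(round(a14x_projected))  # 15990, i.e. right around 16,000
```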

EDIT 2:

One final note: the A13, at least, throttles under the Wild Life Stress Test (which runs the benchmark multiple times back to back). Even once it throttles all the way down, it's still faster than competing Android phones. That said, I can't find Stress Test results for AMD's APUs or for Ice Lake / Tiger Lake, but I think it's safe to say those chips are all faster for sustained GPU use than the A13. Irritatingly, I can't find anyone who has run the Stress Test on an A14 to see if it improves the thermal situation. Nor for the iPad Pro, so I can't say whether its performance over long periods is less thermally constrained than the phones'. Anyone want to run this?

At least in theory, an actively cooled Apple Silicon machine could probably run for extended periods without throttling, much like the Intel and AMD iGPUs in laptops.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,463
958
The 3DMark database browser is a mess. The A14, however, is faster than the Ice Lake iGPU (e.g. in the current 13" MacBook Pro), but significantly slower than the Tiger Lake iGPU. You can see the Tiger Lake number for the top-end spec here.
OK. On GFXBench, we see "Intel Xe Graphics" (no detail on the precise GPU) being only as fast as the A13 (Aztec Ruins offscreen). But it seems that this test somehow favours mobile GPUs.

As for stress tests, I don't find them very relevant. Their results mostly reflect the cooling capacity of phones vs. laptops.
 

IvanKaramazov

macrumors member
Jul 23, 2020
32
49
OK. On GFXBench, we see "Intel Xe Graphics" (no detail on the precise GPU) being only as fast as the A13 (Aztec Ruins offscreen). But it seems that this test somehow favours mobile GPUs.

As for stress tests, I don't find them very relevant. Their results mostly reflect the cooling capacity of phones vs. laptops.
Yes, you're of course right about stress tests. I just think it would be interesting to see how the A12X or A14 in the iPad Air does on those tests with regard to throttling, as it might more closely reflect the cooling capacity of, for example, a new 12" fanless MacBook with an A14X. A fan-cooled Pro machine, like I said, would presumably not throttle.

Regarding GFXBench, I see the Intel Xe Graphics entry, but it's definitely incorrect. If you look at the offscreen results for Aztec Ruins High and search for the i7-1065, you'll see the top-end Ice Lake Iris Plus config (currently in the 13" MacBook Pro). It is indeed slightly slower than the A13, but all evidence so far suggests that Tiger Lake's iGPU is 2x as fast. Which, again, suggests that Tiger Lake's top-end iGPU is slightly slower than the current iPad Pro in Aztec Ruins (high, offscreen). In fact, GFXBench and 3DMark show extremely consistent results in that regard.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,463
958
The article is, if I may be frank, pretty stupid.
Indeed. The author blames Vulkan drivers for the poor scores of the Intel Iris and AMD APUs. But for the few games that are able to use both DirectX and Vulkan, we see nothing suggesting that Windows Vulkan drivers are problematic.
 

IvanKaramazov

macrumors member
Jul 23, 2020
32
49
Indeed. The author blames Vulkan drivers for the poor scores of the Intel Iris and AMD APUs. But for the few games that are able to use both DirectX and Vulkan, we see nothing suggesting that Windows Vulkan drivers are problematic.
Yep. He's a longtime veteran of the tech press and a super sharp guy. But the article is really weird, and it does the whole sleight-of-hand thing I see a lot of people doing, which is to say: "sure, the A-series chips appear to be just as fast as PCs, but look at how much more powerful this massive, actively cooled laptop with a 115W GPU is!" As if Apple were just going to take a 6W phone processor and shove it into a MacBook Pro. It's frankly astonishing that Apple's 2-year-old tablet GPU still outpaces the much-hyped Tiger Lake iGPU, and at lower operating power. Obviously we won't know the quality of Apple Silicon until it launches, but certainly all the indicators are that it will be, at worst, a reasonable improvement on both Intel's and AMD's latest and greatest.

Regarding Metal vs. DirectX and Vulkan: if anything, the GFXBench database suggests that DirectX tends to come out faster in the benchmark, while Vulkan and OpenGL tend to more or less equal Metal.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,463
958
Regarding GFXBench, I see the Intel Xe Graphics entry, but it's definitely incorrect. If you look at the offscreen results for Aztec Ruins High and search for the i7-1065, you'll see the top-end Ice Lake Iris Plus config (currently in the 13" MacBook Pro). It is indeed slightly slower than the A13, but all evidence so far suggests that Tiger Lake's iGPU is 2x as fast.
Maybe the Intel Xe variant that was tested is not the fastest one.
 

MikhailT

macrumors 601
Nov 12, 2007
4,583
1,327
That’s correct: in Apple Silicon Macs there’s a single pool of memory, as on iPhones, that both the CPU and GPU access. They call it Unified Memory.

It’s rumored that they will come with at least 16GB, which would be shared between the CPU and GPU.

Not exactly. Apple GPUs do have on-chip memory in addition to sharing the system memory; it's called tile memory. Source is this WWDC video: https://developer.apple.com/videos/play/wwdc2020/10602/

Apple processors are very power efficient, and they have this unified memory architecture, which means that the CPU and the GPU share System Memory.

Also, the GPU has a dedicated pool of on-chip memory, which we call Tile Memory.

Notice, though, that the GPU does not have video memory. So bandwidth could be a problem if the content has not been tuned.

In order to be fast and efficient without video memory, our GPUs have a unique architecture known as TBDR, or Tile Based Deferred Renderer. So let's talk about that. Today, we will review the rendering pipeline, as well as some of the features that make the GPUs so efficient.

 

leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
Also instructive is the comparison with dedicated GPUs, as seen here. The article is, if I may be frank, pretty stupid.

Right? I also found that article really irritating. "Comparing phones to desktops is clearly misleading as we literally don't believe that a phone can outperform a laptop..."

Irritatingly, I can't find anyone who has run the Stress Test on an A14 to see if it improves the thermal situation.

It's about 6500 (https://forums.macrumors.com/thread...enchmark.2262692/?post=29121388#post-29121388), so we still have a drop of around 30% for sustained performance.

Still, I'd say it looks very promising. It's only a quad-core GPU, thermally constrained by a phone enclosure, so probably running at or under a 5W TDP. An 8-core GPU with a 15W TDP should score about 15,000-16,000. For games in particular, given how fast the CPU is and the ridiculous amounts of cache Apple puts in their chips, I'd expect it to be trading blows with a GTX 1650 Max-Q, or at least a GTX 1050. Basically gaming performance similar to a Razer Blade Stealth, but with a faster CPU and better energy efficiency.
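A rough sketch of where the 15,000-16,000 figure comes from, assuming near-linear scaling from 4 to 8 GPU cores and no throttling at a 15W laptop TDP (the 90% scaling efficiency is my assumption):

```python
# Linear-ish scaling estimate for a hypothetical 8-core Apple GPU.
a14_peak = 8600            # unthrottled quad-core A14 Wild Life score
core_ratio = 8 / 4         # doubling the GPU core count
scaling_efficiency = 0.9   # assume ~90% of perfect scaling

estimate = a14_peak * core_ratio * scaling_efficiency
print(round(estimate))  # 15480, inside the 15,000-16,000 range
```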

Maybe the Intel Xe variant that was tested is not the fastest one.

If I were a more paranoid individual, I'd almost believe that Intel's marketing for Tiger Lake is purposely trying to confuse the customer. Starting with Tiger Lake, it's the OEM who decides the TDP of the CPU, so a Tiger Lake with the same model name can run 30% or more slower in a different laptop. To make it worse, most of Intel's marketing material prominently shows results from the 30W TDP config, while most of the actual laptops are getting a 15W config.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
Not exactly. Apple GPUs do have on-chip memory in addition to sharing the system memory; it's called tile memory. Source is this WWDC video: https://developer.apple.com/videos/play/wwdc2020/10602/

They also have some additional memory they call "threadgroup memory" for communication between different threads running compute tasks. There's not too much of it, but it's enough, and it's fast as heck (it's essentially GPU-controlled cache).
 

IvanKaramazov

macrumors member
Jul 23, 2020
32
49
It's about 6500 (https://forums.macrumors.com/thread...enchmark.2262692/?post=29121388#post-29121388), so we still have a drop of around 30% for sustained performance.
Hah, clearly I didn't search long enough. I should have known you'd be looking into that. Have you seen numbers for the iPad Pro? This seems like great news, honestly; they've needed to emphasize sustained performance, and the A13 already appeared to be a move in that direction from the A12. Guess they're continuing the trend. Eagerly awaiting the AnandTech deep dive.

And yeah, the Tiger Lake stuff is frustrating. Hardly anything aside from the MacBook Pro actually used the full-fat Ice Lake U-series chip, and I expect the situation will be similar with Tiger Lake. Probably just the Razer Blade and maybe a couple of Lenovo business laptops. Frankly, neither the CPU nor the GPU looks all that impressive in the 15W variants that have surfaced.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
Hah, clearly I didn't search long enough. I should have known you'd be looking into that. Have you seen numbers for the iPad Pro? This seems like great news, honestly; they've needed to emphasize sustained performance, and the A13 already appeared to be a move in that direction from the A12. Guess they're continuing the trend. Eagerly awaiting the AnandTech deep dive.

AnandTech's Andrei Frumusanu made it clear on Twitter that he was disappointed by all the existing chips on TSMC's new 5nm node (including the A14). I am very curious to read his findings; he writes very good articles. So far my impression is, as you say, that Apple is prioritizing sustained performance and energy efficiency (hence the smaller performance increases, and the smaller batteries on the iPhones with comparable battery life), but it could also be that the A14 is more power-hungry than expected. Even if the latter is the case, a 10-15W entry-level Mac laptop with a quad-core A14 CPU would still be much more powerful than anything else in that segment.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,463
958
Do we have numbers on the power consumption of the Intel Xe under load? The top-end Tiger Lake SoC is rated at 28W, but it can consume more than 50W for short periods. And that's the CPU part only.
 

leman

macrumors Core
Original poster
Oct 14, 2008
19,522
19,679
Do we have numbers on the power consumption of the Intel Xe under load? The top-end Tiger Lake SoC is rated at 28W, but it can consume more than 50W for short periods. And that's the CPU part only.

Not for the GPU, but for the CPU. A single core running at 4.8GHz draws about 20 watts (the Apple A14 delivers essentially the same performance at 5W and 3.0GHz). Four cores at a sustained 30W TDP seem to maintain around 4.5GHz.

Source: Andrei Frumusanu's post on www.realworldtech.com which I can't find right now.
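To spell out the comparison implied by those figures (equal single-core performance is the assumption here):

```python
# Power needed for roughly the same single-core performance.
tiger_lake_core_watts = 20.0  # one Tiger Lake core at 4.8GHz
a14_watts = 5.0               # A14 at 3.0GHz, per the figures above

print(tiger_lake_core_watts / a14_watts)  # 4.0 -> roughly 4x the power for the same work
```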

Thanks, I hadn't had an opportunity to go back over those since this morning. :) No “video RAM”, and I wonder how much “tile RAM” there would be. Any indication based on prior A-series chips?

It's 32KB (I think per core); it could be more, but that's what you can use simultaneously. Source: https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf

It doesn't sound like much, but it's plenty for what it's supposed to do. It's not really for textures or any other kind of user data; it's the working RAM for the code running on the GPU. I would compare it with CPU registers rather than with RAM. With Apple GPUs, the contents of this memory are fully programmable, by the way. 32KB gives you up to 128 bytes of data per pixel (for 16x16 tiles), which is enough even for some of the more advanced rendering algorithms.
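The 128-bytes-per-pixel figure falls straight out of the tile size:

```python
# 32KB of tile memory spread over one 16x16-pixel tile.
tile_memory_bytes = 32 * 1024  # 32KB, per the Metal feature-set tables
pixels_per_tile = 16 * 16      # 256 pixels in a 16x16 tile

print(tile_memory_bytes // pixels_per_tile)  # 128 bytes per pixel
```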
 

jeanlain

macrumors 68020
Mar 14, 2009
2,463
958
Not for the GPU, but for the CPU. A single core running at 4.8GHz draws about 20 watts (the Apple A14 delivers essentially the same performance at 5W and 3.0GHz).
Where did you find that 5W number? From RealWorldTech?
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,629
It's 32KB (I think per core); it could be more, but that's what you can use simultaneously. Source: https://developer.apple.com/metal/Metal-Feature-Set-Tables.pdf

It doesn't sound like much, but it's plenty for what it's supposed to do. It's not really for textures or any other kind of user data; it's the working RAM for the code running on the GPU. I would compare it with CPU registers rather than with RAM. With Apple GPUs, the contents of this memory are fully programmable, by the way. 32KB gives you up to 128 bytes of data per pixel (for 16x16 tiles), which is enough even for some of the more advanced rendering algorithms.
Thanks, I was wondering if tile!=texture :)
 

mr_roboto

macrumors 6502a
Sep 30, 2020
856
1,867
It was the peak power consumption for the previous iPhones; I doubt it has changed much.

To put the number in perspective, the iPhone 12 (regular or Pro) has a 10.78Wh battery. A continuous load of 5 watts would drain it from 100% to 0% in about 2 hours. (And here, 5 watts is the budget for everything, not just the A14 SoC; the display and cellular modem are two other substantial power draws.)
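The arithmetic, for the curious (battery capacity from Apple's spec; the continuous 5W load is the hypothetical):

```python
# How long a constant 5W draw would last on an iPhone 12 battery.
battery_wh = 10.78  # iPhone 12 / 12 Pro battery capacity in watt-hours
load_watts = 5.0    # hypothetical continuous system-wide load

hours = battery_wh / load_watts
print(round(hours, 2))  # 2.16 hours from full to empty
```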

Until there's some kind of fundamental revolution in battery tech, iPhone SoCs will always be designed to target somewhere around 5W peak, and substantially less than that on average.
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
That certainly won’t utilize fast RAM
Interestingly, this is an aspect of their GPU system that they highlighted in one of their ASi session videos. I'm simply assuming it'll apply to all of them. Or maybe LPDDR5 is included in their definition of high-bandwidth memory?
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,629
Interestingly, this is an aspect of their GPU system that they highlighted in one of their ASi session videos. I'm simply assuming it'll apply to all of them. Or maybe LPDDR5 is included in their definition of high-bandwidth memory?
Possibly. I just mean that their solution won't require HBM or GDDR.
 

Unregistered 4U

macrumors G4
Jul 22, 2002
10,610
8,629
GDDR has too much latency for a CPU. HBM would be novel, though; could it be made in the quantities Apple needs?
No, I think I read a post where someone said that there wouldn’t be enough capacity. Now, having said that, since the RAM isn’t swappable anyway, they could devise a unique solution...
 

thenewperson

macrumors 6502a
Mar 27, 2011
992
912
No, I think I read a post where someone said that there wouldn’t be enough capacity. Now, having said that, since the RAM isn’t swappable anyway, they could devise a unique solution...
I’ve wondered this as well. Why go with established solutions if your RAM is going to be (mostly) soldered anyway?
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
AnandTech's Andrei Frumusanu made it clear on Twitter that he was disappointed by all the existing chips on TSMC's new 5nm node (including the A14). I am very curious to read his findings; he writes very good articles.
Actually, he backtracked on that on Twitter. He had accidentally made his comparisons against Qualcomm's Snapdragon 865 rather than the 865+. With that corrected, efficiency was back to roughly a 15% improvement, as expected from TSMC's material.

I respect Andrei for being able to backtrack and correct himself. That's a mark of character, not just of being knowledgeable.

That said, the fact that such a simple mix-up, over such small differences in performance, constitutes the difference between "disappointing" and "expected" says something about the state of lithographic progress today.
 

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,665
OBX
Thing is, AMD's approach is different from Nvidia's. Getting to grips with what that means, and how to get the best out of either, is work that remains to be done.
All RT game code up until now has been financed and co-written by Nvidia. All RT in future AAA titles will target the next-generation consoles (AMD). It's very early days for RT as a way of dealing with some lighting issues, and I'm personally not convinced it's a great idea for consumers. The interest of the graphics IHVs in coming up with new stuff to sell doesn't necessarily align with the consumer's interest in energy-efficient, cheap, and performant graphics.
Hmm, wccftech is reporting some numbers (leaks from China) where the 6800 is actually faster at RT than the 3070 with DLSS disabled (i.e. at native resolution). The game tested was Shadow of the Tomb Raider. So far, the performance leaks folks heard about before the presentation have been (more or less) true. So this is an interesting plot twist.
 