
Realityck

macrumors G4
Nov 9, 2015
11,414
17,205
Silicon Valley, CA
Do you feel as if your late 2015 iMac is adequate for AAA gaming? If so, the A12Z in the DTK has a faster CPU (per that article) and a GPU very much in the ballpark of your iMac.

Naturally, newer iMacs are significantly faster. But recall that this DTK chip is a ~10 W iPad chip versus your 65 W CPU plus a high-wattage GPU, and the shipping ASi Macs will benefit from two years of architectural improvements to the CPU/GPU as well as a die shrink. A future 13" "Airbook" (assuming you mean a MacBook Air) would have at least the equivalent of an A14X in it, and will likely be 50 to 100% faster than your iMac in both CPU and GPU. The entire ASi line from there up will presumably have at least the rumored 8+4-core CPU and increased GPU core counts as well. We don't know what Apple can pull off GPU-wise in a large iMac yet, but at a minimum the shift to Apple Silicon will bring a large gain in potential gaming performance across the Mac line broadly. Raising the average gaming potential of a Mac is probably more valuable to AAA developers than raising the top end.
I ran the current Geekbench 5.2.1 to check my 4+ year old computer, leaving aside the inefficiency of running under Rosetta 2, which, as previously posted, you could see had more of an effect on the multi-core score.

i7 @ 4 GHz + AMD Radeon R9 M395X (native Intel)
Single-core score of 1111
Multi-core score of 4207
GPU OpenCL 28407
GPU Metal 31566

Compared to the A12Z @ 2.4 GHz (native tests)
single-core score of 1098
multi-core score of 4555
GPU Metal 12610

But note the efficiency of Rosetta (non-native tests):
single-core score of 800
multi-core score of ~2800 (see this URL)

I would say the current ARM chip in the DTK Mac mini is not that fast GPU-wise (Metal), but there you have a comparison. Given the speed improvements from my dated late 2015 iMac (Oct 2015) -> iMac 2017 -> iMac 2019 -> rumored iMac 2020, each paired with a newer GPU, we'll have to see whose horse is faster. Whatever it is, it has to be able to show 4K HDR video via the T2 chip. ;)
 
Last edited:

jeanlain

macrumors 68020
Mar 14, 2009
2,460
954
Geekbench measures the compute performance of the GPU, which is not the same as performance for graphics tasks, where Apple GPUs appear to have nice optimisations. On GFXBench, Apple GPUs perform as well as other GPUs that have much more compute power (FLOPS). This may in part be due to the fact that they can use half-precision shaders, but the result stands: they are far more efficient.
Apparently, PC GPUs are "stuck" with 32-bit shaders even though these consume more energy and don't make a visual difference in most cases. Apple can do without these limitations.
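For anyone who hasn't seen it, here is roughly what that half-precision point looks like in practice. This is only a sketch: the function names and the compile-from-a-string wrapper are made up, but the `half` arithmetic is the kind of shader math being discussed.

```swift
import Metal

let source = """
#include <metal_stdlib>
using namespace metal;

// Full precision: every value is a 32-bit float.
fragment float4 grade_fp32(float4 pos [[position]],
                           constant float3 &tint [[buffer(0)]]) {
    float3 c = tint * 0.8 + 0.1;
    return float4(c, 1.0);
}

// Half precision: 16-bit values cut register pressure and bandwidth roughly in half,
// which usually makes no visible difference in an 8-bit-per-channel output image.
fragment half4 grade_fp16(float4 pos [[position]],
                          constant half3 &tint [[buffer(0)]]) {
    half3 c = tint * 0.8h + 0.1h;
    return half4(c, 1.0h);
}
"""

let device = MTLCreateSystemDefaultDevice()!          // assumes a Metal-capable device
let library = try! device.makeLibrary(source: source, options: nil)
print(library.functionNames)                           // e.g. ["grade_fp32", "grade_fp16"]
```

The fp16 variant does the same work with narrower values, which is the efficiency win being described here.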
 

throAU

macrumors G3
Feb 13, 2012
9,199
7,353
Perth, Western Australia
Performance will not solve the problem. The iPad, for example, is already powerful enough to render graphics well, but what happened to iPad gaming? Yes, Fortnite; yes, Battlegrounds; but what other AAA games? iOS is already a really big market, but what happened to the iOS gaming world? Have you seen the list of top-grossing iOS game apps? They are all 'free' games with in-app purchases for pay-to-win stuff. Many people do enjoy that kind of game, hence the large revenue made from mobile games, but I don't think we are referring to those games as "real gaming" here. If the future of Mac gaming is only iOS-like pay-to-win mobile games running on macOS, then I can say it's over. PC guys are already running Android apps on Windows through virtualization.

The question we should ask is this: will ARM Macs encourage more porting of AAA titles than Intel Macs did? Actually, I'm worried. If a third-party vendor considers porting their games to ARM, they would rather make an iOS/iPad version (a much larger market) with some Mac-friendly features added than a Mac-centric, performance-focused game that can only be played on the Mac. This is based on the assumption that the Apple Silicon used in Macs will perform better (with more power put into it) than the iOS/iPad chips. Those ports would then look like games with gimped graphics and feel similar to PC games developed primarily for consoles, with awkward console-like UIs.

What happened to iPad gaming is that there WAS no controller support.
That changed recently.

The iPad (and the Apple TV, for that matter) are now legit gaming platforms; the software just needs to catch up. Both are more powerful than the Switch, by a lot. As is the iPhone.

With a single API, developers will now be able to target the living room, mobile, and desktop, and the potential size of that market is bigger than the console and PC markets combined.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
9to5Mac had this Apple Silicon benchmarks article (7/23), which gave some specifics on speed.
It's curious that the multi-core result is more than 4x the single-core result, given that others have reported the DTK uses only the four high-power cores, leaving the four low-power cores disabled (https://www.eejournal.com/article/whats-inside-apple-silicon-processors/). Maybe whatever they did to get GB5 running natively also allowed them to turn on all eight cores (?). Alas, the screenshot cuts off just above where GB reports the core count.

Having said that, GB results aren't very meaningful in making cross-platform comparisons (e.g., AS vs. Intel x86). Not to mention the usual caveat that we don't know how different the first Mac AS chips will be from the DTK chip, only that it will be different. We won't have a good idea how the new AS Mac chips actually perform until independent testing can be done on applications we actually use, which won't happen until after the first AS Mac is released.

Probably the most informative and interesting part of this is what it tells us about Rosetta vs. native (at least for GB), with the caveat that we don't really know exactly what modifications they made.

[Attached screenshot: Geekbench 5 results]
 

jerwin

Suspended
Jun 13, 2015
2,895
4,652
Apparently, PC GPUs are "stuck" with 32-bit shaders even though these consume more energy and don't make a visual difference in most cases.

Back when DirectX 11 was new and exciting, the Unified Shader Model meant that a card could render a scene with a "heavier geometry workload, but lighter pixel workloads", or "a lighter geometry workload but heavier pixel workloads". It was flexible. Plus, the DirectX 11 spec had hull shaders, domain shaders, geometry shaders and compute shaders. Mixed precision would have limited the flexibility available to developers.

http://download.nvidia.com/developer/cuda/seminar/TDCI_Arch.pdf

I'm not sure what Vulkan or DirectX 12 changed, or whether this flexibility proved to be of much use to game developers.

Apple Silicon, btw, uses Tile-Based Deferred Rendering (TBDR) instead of the more usual Immediate Mode Rendering (IMR). More buzzwords to learn...
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,060
Apple Silicon, btw, uses Tile-Based Deferred Rendering (TBDR) instead of the more usual Immediate Mode Rendering (IMR). More buzzwords to learn...
Interesting! Unfortunately that article is biased, b/c it's written by a company that uses TBDR instead of IMR, and thus only talks about the strengths of TBDR, not its weaknesses. Here's what appears to be a more balanced comparison, on anandtech. It's very old (2011), but still worth reading b/c of its clarity.

Essentially, the anandtech article agrees with the Imagination article that TBDR can be more efficient, since it eliminates overdraw (drawing objects that are hidden behind other objects and thus can't be seen anyways). But it mentions that TBDR can have problems with more complex geometries. Since you don't see very complex geometries in mobile games, it seems Apple's approach back then was optimized for that market (where efficiency matters).
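A toy model of that overdraw point might help (purely illustrative; real GPUs and drivers look nothing like this): an immediate-mode renderer can shade a fragment and then have a closer fragment overwrite it, while a deferred renderer resolves depth per tile first and shades each covered pixel exactly once.

```swift
struct Fragment {
    let pixel: Int     // flattened pixel index within a tile
    let depth: Float   // smaller = closer to the camera
    let material: Int  // stand-in for "which expensive shader to run"
}

func shade(_ material: Int) -> UInt32 {   // pretend this is expensive
    UInt32(truncatingIfNeeded: material &* 0x9E3779B1)
}

// Immediate mode: depth-test and shade fragments in submission order.
func renderImmediate(_ fragments: [Fragment], pixels: Int) -> (color: [UInt32], shadeCalls: Int) {
    var color = [UInt32](repeating: 0, count: pixels)
    var depth = [Float](repeating: .infinity, count: pixels)
    var calls = 0
    for f in fragments where f.depth < depth[f.pixel] {
        depth[f.pixel] = f.depth
        color[f.pixel] = shade(f.material)   // wasted work if a closer fragment
        calls += 1                           // shows up later (overdraw)
    }
    return (color, calls)
}

// Tile-based deferred: keep only the nearest fragment per pixel, then shade once.
func renderDeferred(_ fragments: [Fragment], pixels: Int) -> (color: [UInt32], shadeCalls: Int) {
    var nearest = [Fragment?](repeating: nil, count: pixels)
    for f in fragments where f.depth < (nearest[f.pixel]?.depth ?? .infinity) {
        nearest[f.pixel] = f                 // hidden-surface removal, no shading yet
    }
    var color = [UInt32](repeating: 0, count: pixels)
    var calls = 0
    for case let f? in nearest {
        color[f.pixel] = shade(f.material)   // exactly one shade per visible pixel
        calls += 1
    }
    return (color, calls)
}

// Worst case for immediate mode: fragments arrive back to front.
let backToFront = (0..<8).map { Fragment(pixel: 0, depth: Float(8 - $0), material: $0) }
print(renderImmediate(backToFront, pixels: 1).shadeCalls)  // 8 shades for 1 visible pixel
print(renderDeferred(backToFront, pixels: 1).shadeCalls)   // 1 shade
```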

But even at the time (2011), sophisticated IMR implementations featured aspects of TBDR to improve their efficiency. So, fast-forwarding to today, perhaps Apple's TBDR implementations have refinements that improve their performance with complex geometries.

Currently, according to this 2020 article from Gizmodo, "Apple GPU architecture is a tile-based deferred renderer (TBDR), and Intel, Nvidia, and AMD are immediate mode renderer GPUs (IMR)." https://www.gizmodo.co.uk/2020/07/apples-homegrown-chips-could-be-the-end-for-amd-graphics-in-macs/

So this is conceptually interesting: Just as the AS Mac CPUs will be qualitatively different from those of essentially everyone else in the PC space by having an ARM ISA instead of an x86 ISA, the AS Mac GPUs will be qualitatively different from everyone else's by using TBDR instead of IMR (again, referring to the PC space). The difference in ISA is a macroarchitecture difference, while that between TBDR and IMR is, IIUC, one of microarchitecture (of course, AS CPUs will have extensively customized microarchitectures as well).
 
Last edited:

thekev

macrumors 604
Aug 5, 2010
7,005
3,343
Apple said in their keynote "high-performance GPU", so I think their performance will be on par with some discrete GPUs, and will probably blow away Intel's iGPU performance

That isn't a meaningful comparison. It's unlikely that they could match a recent-model GPU that would be considered anything but low-end today. That hasn't been Apple's target in the past, though. If they beat Intel's iGPU numbers by 50% or more on the latest generation, then that would probably get significant interest. If they compare to an older chip that they happened to use in whatever machine, then it's more disingenuous marketing.


Back when DirectX 11 was new and exciting, the Unified Shader Model meant that a card could render a scene with a "heavier geometry workload, but lighter pixel workloads", or "a lighter geometry workload but heavier pixel workloads". It was flexible. Plus, the DirectX 11 spec had hull shaders, domain shaders, geometry shaders and compute shaders. Mixed precision would have limited the flexibility available to developers.

http://download.nvidia.com/developer/cuda/seminar/TDCI_Arch.pdf

I'm not sure what Vulkan or DirectX 12 changed, or whether this flexibility proved to be of much use to game developers.

Apple Silicon, btw, uses Tile-Based Deferred Rendering (TBDR) instead of the more usual Immediate Mode Rendering (IMR). More buzzwords to learn...

That's a stupid marketing term, but it's likely a response to increasing resolution in games and outputs. They omit the detail that if you go too small, you'll no longer saturate your pipeline with parallelizable tasks. This is uncommon enough today that they appear to be optimizing this down to problems that better fit in whatever their hardware uses as a cache.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Back when DirectX 11 was new and exciting, the Unified Shader Model meant that a card could render a scene with a "heavier geometry workload, but lighter pixel workloads", or "a lighter geometry workload but heavier pixel workloads". It was flexible. Plus, the DirectX 11 spec had hull shaders, domain shaders, geometry shaders and compute shaders. Mixed precision would have limited the flexibility available to developers.

Mixed precision does not limit flexibility in any way; in fact, it gives the developer more flexibility. It's another tool in the programmer's arsenal. Modern GPUs support full-precision floating point, half-precision floating point and integer data types natively, with different performance characteristics. For example, new Nvidia and AMD cards can execute single- and half-precision operations simultaneously on the same shader core.

As to the other point you mention, that's a complex one. Various shaders have run on the same hardware for a while now, but it's only recently that GPUs have become able to schedule tasks efficiently. Apple GPUs are fully asynchronous: they have dedicated hardware task controllers that try to use all the available cores efficiently. Older desktop GPUs rely on the driver to do task scheduling.



Interesting! Unfortunately that article is biased, b/c it's written by a company that uses TBDR instead of IMR, and thus only talks about the strengths of TBDR, not its weaknesses. Here's what appears to be a more balanced comparison, on anandtech. It's very old (2011), but still worth reading b/c of its clarity.

Essentially, the anandtech article agrees with the Imagination article that TBDR can be more efficient, since it eliminates overdraw (drawing objects that are hidden behind other objects and thus can't be seen anyways). But it mentions that TBDR can have problems with more complex geometries. Since you don't see very complex geometries in mobile games, it seems Apple's approach back then was optimized for that market (where efficiency matters).

As you say, that article is old. Ten years ago mobile GPUs were often front-end limited. I don't think this is the case anymore for modern Apple stuff. Asynchronous shaders take care of situations where you happen to have a lot of geometry. If you have too many primitives, the bin lists can spill, causing overdraw, but that's still more efficient than immediate rendering. And even if one argues that binning (sorting polygons into tiles) demands a lot of work - well, current Nvidia and AMD GPUs do it as well, so it must be worth it for them.

I would like to see some geometry throughput benchmarks; I don't think anyone has published data on that.



So this is conceptually interesting: Just as the AS Mac CPUs will be qualitatively different from those of essentially everyone else in the PC space by having an ARM ISA instead of an x86 ISA, the AS Mac GPUs will be qualitatively different from everyone else's by using TBDR instead of IMR (again, referring to the PC space). The difference in ISA is a macroarchitecture difference, while that between TBDR and IMR is, IIUC, one of microarchitecture (of course, AS CPUs will have extensively customized microarchitectures as well).

I wouldn’t call it a microarchitecture difference, it’s an algorithmic difference. I’m quite excited to get TBDR on the desktop, especially one with a programmable GPU cache as offered by Apple. It allows one to utilize the hardware in much more efficient ways.


That isn't a meaningful comparison. It's unlikely that they could match a recent-model GPU that would be considered anything but low-end today. That hasn't been Apple's target in the past, though. If they beat Intel's iGPU numbers by 50% or more on the latest generation, then that would probably get significant interest. If they compare to an older chip that they happened to use in whatever machine, then it's more disingenuous marketing.

A two-year-old Apple tablet GPU already outperforms any integrated graphics solution currently offered by Intel or AMD and is head to head with last-gen Nvidia 50-watt laptop GPUs. With more GPU cores, faster RAM, a newer architecture and a smaller process, AS has the potential to offer very competitive performance. I expect Radeon Pro 5300M-class performance in the 13” laptops at a 30 W combined SoC TDP.
I would say the current ARM chip in the DTK Mac mini is not that fast GPU-wise (Metal), but there you have a comparison. Given the speed improvements from my dated late 2015 iMac (Oct 2015) -> iMac 2017 -> iMac 2019 -> rumored iMac 2020, each paired with a newer GPU, we'll have to see whose horse is faster. Whatever it is, it has to be able to show 4K HDR video via the T2 chip. ;)

Again, who cares. You are comparing your iMac to an iPad. This is not representative of the upcoming Apple desktop hardware.
 

Waragainstsleep

macrumors 6502a
Oct 15, 2003
612
221
UK
Whatever it is, it has to be able to show 4K HDR video via the T2 chip. ;)

The iPad Pro can run 3 simultaneous 4K streams supposedly. I've never tried it myself though.


It's curious that the multi-core result is more than 4x the single-core result, given that others have reported the DTK uses only the four high-power cores, leaving the four low-power cores disabled (https://www.eejournal.com/article/whats-inside-apple-silicon-processors/). Maybe whatever they did to get GB5 running natively also allowed them to turn on all eight cores (?). Alas, the screenshot cuts off just above where GB reports the core count.

Is it possible that Rosetta 2 simply doesn't make use of the efficiency cores, since there aren't any x86 Apple devices that have any?
 

Janichsan

macrumors 68040
Oct 23, 2006
3,126
11,927
What happened to iPad gaming is that there WAS no controller support.
That changed recently.

The iPad (and the Apple TV, for that matter) are now legit gaming platforms; the software just needs to catch up. Both are more powerful than the Switch, by a lot. As is the iPhone.
Proper controller support has been in iOS for years already. The market doesn't move so slowly that it wouldn't have had time to adapt to that.

iOS gaming is held back by Apple's policies: there are strict size limitations, which are even lower than what you can use on the Switch, and you are not allowed to rely on the player having a controller, but always have to include Apple's own limited input options (i.e. touch, Siri Remote).
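For what it's worth, that policy is easy to see in code. A rough sketch (the class and method names are mine, not from any Apple sample): the game can use a gamepad whenever one is present, but the touch path has to exist regardless, because a controller may simply never show up.

```swift
import Foundation
import GameController

final class ControlSource {
    private(set) var pad: GCExtendedGamepad?
    private var tokens: [NSObjectProtocol] = []

    init() {
        // Pick up a controller that was already connected at launch...
        pad = GCController.controllers().first?.extendedGamepad

        // ...and react to controllers appearing or disappearing later.
        tokens.append(NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect, object: nil, queue: .main,
            using: { [weak self] note in
                self?.pad = (note.object as? GCController)?.extendedGamepad
            }))
        tokens.append(NotificationCenter.default.addObserver(
            forName: .GCControllerDidDisconnect, object: nil, queue: .main,
            using: { [weak self] _ in
                self?.pad = nil            // back to the mandatory touch controls
            }))
    }

    // Called once per frame by a (hypothetical) game loop.
    func movement(touchFallback: () -> SIMD2<Float>) -> SIMD2<Float> {
        if let stick = pad?.leftThumbstick {
            return SIMD2(stick.xAxis.value, stick.yAxis.value)
        }
        return touchFallback()             // this path always has to exist per Apple's rules
    }
}
```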

As long as this does not change, the vision of a unified Apple gaming platform competing with the market-leading consoles and PCs remains a pipe dream.
It's curious that the multi-core result is more than 4x the single-core result, ...
I wouldn't read too much into that. The multi-core benchmark rarely scales linearly with the single-core number. Usually, it's less, but there are a couple of examples with multi-core > single-core x cores, for instance this one.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,675
I ran the current Geekbench 5.2.1 to check my 4+ year old computer, leaving aside the inefficiency of running under Rosetta 2, which, as previously posted, you could see had more of an effect on the multi-core score.

i7 @ 4 GHz + AMD Radeon R9 M395X (native Intel)
Single-core score of 1111
Multi-core score of 4207
GPU OpenCL 28407
GPU Metal 31566

Compared to the A12Z @ 2.4 GHz (native tests)
single-core score of 1098
multi-core score of 4555
GPU Metal 12610

Another thing I just realized (assuming for a moment that Geekbench is representative): the 10-watt iPad chip offers 1/3 of the compute performance of a large GPU that consumes 10x more power. Not to mention that the CPU seems to outperform a ~90 W Skylake desktop CPU. That is crazy.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
I really want to see how well Apple Silicon can run this; it would be quite telling if they drop support for Lumen and Nanite on the Apple side.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
I really want to see how well Apple Silicon can run this; it would be quite telling if they drop support for Lumen and Nanite on the Apple side.

From Epic's news page:

Unreal Engine 5 will be available in preview in early 2021, and in full release late in 2021, supporting next-generation consoles, current-generation consoles, PC, Mac, iOS, and Android.

So I would expect Macs to support these features of UE5. But we will have to wait until next year to know for sure.
 

Rashy

Suspended
Jan 7, 2020
186
372
Without reading this whole thread:
So far, driver support has already been terrible, they abandoned OpenGL (while not offering CUDA or Vulkan either, just their proprietary Metal crap) and 32-bit, and now we have the pretty unnecessary shift to ARM, something the majority of end users will NOT benefit from (MacBooks already have great battery runtime, desktops rely on high performance rather than saving power). Only Apple will make even bigger profit margins and ensure they have everything under control in their golden macOS/iOS cage.

It's a shame, since all iMacs since 2017 (especially with the Radeon Pro 580/8 GB or Vega 48) and the current 16" MacBook Pro have pretty capable horsepower even for AAA gaming, as long as you play in Boot Camp at half resolution (like 1440p on the 27" iMac), set the frame cap to 60 fps, and keep some settings at medium or high instead of Ultra.

"Then get a gaming PC or Playstation!"
--> And those guys can s.t.f.u., as most people don't have the money (and space) for that. I want to do my creative work and some occasional gaming on my iMac 5K. With Boot Camp this is working pretty well, and I would rather have stayed with Intel, not caring whether they have 10 nm, 7 nm or whatever; I want things running without much hassle, period. Will keep my iMac with Mojave as long as I can, then switch back to Windows eventually. The Mac sucks at CAD already, and you can't even use GPU rendering in Blender 3D because of that OpenGL crap-move, a real shame for a multi-billion-dollar company not to offer the necessary drivers, at least via optional download, for the pro users that require them.

But hey, we're all gonna have fun with Apple Arcade subscription toy games at least, and don't forget even more Memojis, woohoo!
 
Last edited:
  • Like
Reactions: jerwin

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
From Epic's news page:



So I would expect Macs to support these features of UE5. But we will have to wait until next year to know for sure.
Yeah, but in later talks they made it seem like using Lumen and/or Nanite is optional, so there is no promise that they will come to Apple Silicon. It would behoove Apple to get Epic to show the same controllable demo running on their hardware, if they are as serious about gaming as folk seem to think they are.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Without reading this whole thread:
So far, driver support has already been terrible, they abandoned OpenGL (while not offering CUDA or Vulkan either, just their proprietary Metal crap) and 32-bit, and now we have the pretty unnecessary shift to ARM, something the majority of end users will NOT benefit from (MacBooks already have great battery runtime, desktops rely on high performance rather than saving power). Only Apple will make even bigger profit margins and ensure they have everything under control in their golden macOS/iOS cage.

Just few comments from a technical standpoint:

- OpenGL was the main reason for the terrible driver support, and it was years overdue to be dropped.

- Metal supports everything Vulkan can do (actually, Metal is more powerful in some key areas), while being significantly more user-friendly and way easier to learn. There are open-source implementations of a Vulkan layer on top of Metal, by the way, if you prefer Vulkan.

- There is no excuse for supporting 32-bit mode in 2020. This is not an embedded platform. Microsoft has been holding the industry back for way too long.

- Apple CPUs and GPUs are significantly more performant than the Intel/AMD ones, which is one of the main reasons behind the switch. You will see better battery life and performance on laptops, and better performance on desktops, with Apple ARM.

Overall, we are getting a high-performance, high-efficiency platform with state-of-the-art APIs and very good developer tools, especially in the GPU department (for the first time in Apple's history). It is a shame that we don't have native support for open standards, but on the other hand Vulkan is a bit of a disappointment. It makes way too many sacrifices in order to support the lowest common denominator, its support for Apple GPUs is fairly poor, and it is unnecessarily complicated. I was disappointed when Apple dropped out of the Vulkan design group, but now I can understand why.
 

jeanlain

macrumors 68020
Mar 14, 2009
2,460
954
Metal supports everything Vulkan can do
Well, except geometry shaders at least. I'm not sure how useful these are. Some say their absence is not a big deal, others say it is.
I've also read (from mark I think) that the way Metal implements tessellation prevents some modern rendering techniques, but I'm not competent enough to explain why.
So far, driver support has already been terrible
I think Metal drivers for Apple GPUs are quite good. I can't talk from first-hand experience, but a Unity developer said that porting Unity 5 to iOS Metal was a pleasure, and performance increased (suggesting that drivers were not an issue). By comparison, porting to DX12 was a pain.
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,675
Well, except geometry shaders at least. I'm not sure how useful these are. Some say their absence is not a big deal, others say it is.

Geometry shaders never worked properly because they are a bad abstraction given how modern hardware works. They don't exploit the parallel architecture of the GPU properly.

Metal does not have geometry shaders because it does not need them. You can use compute shaders to generate and draw the geometry entirely on the GPU. This also covers much, if not most, of the mesh shader functionality. It is your responsibility as a developer to make efficient use of the hardware for your specific use case.
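To make that concrete, here is a rough sketch of the encoding side (the names and buffer layout are made up, and the pipelines/buffers are assumed to be created elsewhere): a compute kernel writes vertices plus a MTLDrawPrimitivesIndirectArguments struct, and the render pass draws whatever came out, without a CPU round trip.

```swift
import Metal

func encodeGPUGeneratedGeometry(commandBuffer: MTLCommandBuffer,
                                renderPass: MTLRenderPassDescriptor,
                                genPipeline: MTLComputePipelineState,
                                drawPipeline: MTLRenderPipelineState,
                                vertexBuffer: MTLBuffer,
                                indirectArgs: MTLBuffer) {
    // 1. Compute pass: the kernel writes vertices into `vertexBuffer` and a
    //    MTLDrawPrimitivesIndirectArguments struct (vertexCount etc.) into `indirectArgs`.
    let compute = commandBuffer.makeComputeCommandEncoder()!
    compute.setComputePipelineState(genPipeline)
    compute.setBuffer(vertexBuffer, offset: 0, index: 0)
    compute.setBuffer(indirectArgs, offset: 0, index: 1)
    let threads = MTLSize(width: 1024, height: 1, depth: 1)
    let group = MTLSize(width: genPipeline.threadExecutionWidth, height: 1, depth: 1)
    compute.dispatchThreads(threads, threadsPerThreadgroup: group)
    compute.endEncoding()

    // 2. Render pass: the vertex count comes from the indirect buffer, so the CPU
    //    never needs to know how much geometry the kernel actually produced.
    let render = commandBuffer.makeRenderCommandEncoder(descriptor: renderPass)!
    render.setRenderPipelineState(drawPipeline)
    render.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    render.drawPrimitives(type: .triangle, indirectBuffer: indirectArgs, indirectBufferOffset: 0)
    render.endEncoding()
}
```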

I've also read (from mark I think) that the way Metal implements tessellation prevents some modern rendering techniques, but I'm not competent enough to explain why.

As far as I know, Metal's tessellation pipeline is equivalent in features to the more traditional setup; Apple just simplified how things work. Their setup makes more sense to me, intuitively at least.

By the way, MoltenVK is able to translate Vulkan tessellation pipelines to Metal ones, so I would assume that whatever is supported by Vulkan here is also supported by Metal.
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
Geometry shaders never worked properly because they are a bad abstraction given how modern hardware works. They don't exploit the parallel architecture of the GPU properly.

Metal does not have geometry shaders because it does not need them. You can use compute shaders to generate and draw the geometry entirely on the GPU. This also covers much, if not most, of the mesh shader functionality. It is your responsibility as a developer to make efficient use of the hardware for your specific use case.



As far as I know, Metal's tessellation pipeline is equivalent in features to the more traditional setup; Apple just simplified how things work. Their setup makes more sense to me, intuitively at least.

By the way, MoltenVK is able to translate Vulkan tessellation pipelines to Metal ones, so I would assume that whatever is supported by Vulkan here is also supported by Metal.
In your estimation, there is nothing DX12 or Vulkan can do that Metal cannot?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,675
In your estimation, there is nothing DX12 or Vulkan can do that Metal cannot?

Oh, I am sure there is. I don't have a good knowledge of DX12, and it has been a while since I last worked with Vulkan or read the specification in detail (it's a very dry read, in my defense, and it's not even my job).

It is kind of difficult to compare these APIs, especially when they choose different abstractions to implement similar things. Also, Metal is organized quite differently from other low-overhead APIs. For example, Microsoft moved from managed resource tracking in DX11 (meaning that the API will make sure that your data etc. are loaded on the GPU) to unmanaged resource tracking in DX12 (meaning that the developer has to take care of low-level stuff manually). But Metal includes both the managed (easier to use) and the unmanaged (better performance, lower overhead) APIs — and you can use them simultaneously. These things make comparisons even more difficult.
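One concrete illustration of the "both at once" point, mapping it onto Metal's hazard tracking and heaps (the sizes and names here are mine, purely a sketch): one buffer is left with Metal's automatic tracking, the other comes from an untracked heap, so ordering access to it becomes the app's job.

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

// "Managed" style: Metal tracks dependencies on this buffer for you.
let trackedBuffer = device.makeBuffer(length: 1 << 20, options: [.storageModeShared])!

// "Unmanaged" style: a private heap with hazard tracking turned off.
let heapDescriptor = MTLHeapDescriptor()
heapDescriptor.size = 16 << 20
heapDescriptor.storageMode = .private
heapDescriptor.hazardTrackingMode = .untracked
let heap = device.makeHeap(descriptor: heapDescriptor)!
let untrackedBuffer = heap.makeBuffer(length: 1 << 20, options: [.storageModePrivate])!

// For the untracked buffer, the encoders must be told explicitly what is in use
// (useHeap/useResource) and ordering is expressed with fences or events, e.g.:
//   computeEncoder.useHeap(heap)
//   computeEncoder.updateFence(fence)
//   renderEncoder.waitForFence(fence, before: .vertex)
// Both kinds of resources can be bound in the same command buffer, which is the
// "use them simultaneously" part.
_ = (trackedBuffer, untrackedBuffer)
```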

One area where Metal has a very obvious gap is high-precision computing. Metal does not support double-precision numbers, which can be useful for some scientific applications. I suppose it's because Apple GPUs don't have native support for double precision. But then again, double precision performs terribly on modern consumer GPUs: it's usually 1/32 or so of the single-precision performance. You can in fact emulate double precision in "software" using a couple of single-precision numbers and get better performance than that. So in a way, Metal is being "fair" here: if it exposes a feature, you know that it will be FAST on all the supporting hardware.
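In case anyone wants to see the "couple of single precision numbers" trick, here is a minimal sketch of double-single (sometimes called "df64") addition; the struct and function names are made up:

```swift
// A value is stored as an unevaluated sum hi + lo of two Floats, which roughly
// doubles the significand bits without any double-precision hardware.
struct DoubleSingle {
    var hi: Float
    var lo: Float
    init(_ x: Float) { hi = x; lo = 0 }
    init(hi: Float, lo: Float) { self.hi = hi; self.lo = lo }
}

// Knuth's TwoSum: returns the Float sum and the exact rounding error of that sum.
func twoSum(_ a: Float, _ b: Float) -> (sum: Float, err: Float) {
    let s = a + b
    let bb = s - a
    let err = (a - (s - bb)) + (b - bb)
    return (s, err)
}

// Add two double-single values; the rounding error is folded back into the low part.
func add(_ x: DoubleSingle, _ y: DoubleSingle) -> DoubleSingle {
    let (s, e) = twoSum(x.hi, y.hi)
    let err = e + x.lo + y.lo
    let (hi, lo) = twoSum(s, err)
    return DoubleSingle(hi: hi, lo: lo)
}

// Accumulate 0.1 ten thousand times: plain Float drifts, while the two-Float
// accumulator keeps far more of the precision.
var plain: Float = 0
var extended = DoubleSingle(0)
for _ in 0..<10_000 {
    plain += 0.1
    extended = add(extended, DoubleSingle(0.1))
}
print(plain)
print(Double(extended.hi) + Double(extended.lo))
```

The same arithmetic can be written in a shader, which is why emulated "double" can end up competitive with the crippled native double-precision rates on consumer cards.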
 

Waragainstsleep

macrumors 6502a
Oct 15, 2003
612
221
UK
There are a lot of complaints here that are predicated on two things:

That Apple's Metal sucks;
That Apple's new in-house GPUs will suck;

Given all the info posted (much of which I don't really care to understand), it sounds like Metal is actually pretty good, so the first complaint seems like it's not something to worry about, other than the reluctance of some developers to learn a new system. I'd say they will be less reluctant to learn it if the second point also turns out to be false.

We have very little to go on regarding exactly how good Apple GPUs will be. That said, they seem to be leading the industry more often than not in phones and tablets, so that has to be a good sign. At the very least it's not a bad one.

There are some other details that may clue us in about Apple's enthusiasm when it comes to games:

They released new Metal dev tools for Windows, which suggests they want people to port more games to Mac/iOS;
Apple Arcade;
The rumoured Apple games console and controller: It makes a lot of logical sense that Apple would want to move into this space. It's a missing component of the Apple ecosystem that now overlaps substantially with a lot of their existing gear and services. This in itself is not always enough for Apple to move. They typically only do so if they can shake things up and make a big impact. Any new player would have to do this to challenge the duopoly of Sony and Microsoft.

So what is it that Apple thinks they can bring? I have two 'clues':

Spatial Audio: This new feature on AirPods Pro sounds really cool and while it would doubtless be nice while watching movies, not many people really watch movies with earbuds in. Games on the other hand often include headset usage and are popular among kids. And if you have a house with several of them all playing their own noisy games, you want them wearing headphones.

Touch-friendly interface: It has been suggested that Apple will bring touchscreens to Macs, and beyond that, that Apple has ideas about an AR interface. They have fancy LiDAR cameras already in use, and rumours persist of AR glasses, so it's not a huge stretch at all to think this is coming. While the haters will complain that AR and VR already exist elsewhere and no one really uses them, it's standard form for Apple to tie a few existing technologies together into something so usable it's brilliant.

I remember reading ages ago about eye-tracking cameras that allow the GPU to render in higher detail the areas of the screen where the eyes are actually focused and slack off on the periphery. Seems like something that would quite significantly save on power, or on the need for massive performance.

So maybe they have a whole new class of games up their sleeves that will become new AAA titles. Or maybe they have an absolutely killer GPU in the works. Maybe both.

At any rate, why not err on the side of optimism for once instead of being miserable killjoys?
 
  • Like
Reactions: unsui_grep

leman

macrumors Core
Oct 14, 2008
19,521
19,675
I remember reading ages ago about eye-tracking cameras that allow the GPU to render in higher detail the areas of the screen where the eyes are actually focused and slack off on the periphery. Seems like something that would quite significantly save on power, or on the need for massive performance.

You can already do this. Metal on newer iOS devices supports variable rate shading (Apple calls it "rasterization rate maps"), and you can do eye tracking via ARKit.
 
  • Like
Reactions: unsui_grep

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Will keep my iMac with Mojave as long as I can, then switch back to Windows eventually.
I'm guessing by the time you need to switch the transition hiccups will be over. It's always a smart move to use what you got as long as you can, and see what's available when you need to upgrade. We'll see down the line.

Either way, chill. You're probably not gonna have to leave the platform.
 

PortoMavericks

macrumors 6502
Jun 23, 2016
288
353
Gotham City
Metal does not have geometry shaders because it does not need them. You can use compute shaders to generate and draw the geometry entirely on the GPU. This also covers much if not most of the mesh shader functionality. It is your responsibility as a developer to use efficient use of the hardware for your specific use case.

Yes you can, but then you need to keep all of that in VRAM. In this case, Apple Silicon will use unified memory, likely LPDDR5, which is slow.

I get you, you’ll come back and say something about the vertex data, and that the cache means it will have zero impact because it’s all on the same silicon.

It’s a pipe dream; the Mac line will get a huge performance boost over the current Intel line, but it will never, ever reach the level of performance of a discrete GPU for gaming.

Like, never.

Is that a bad thing? I don’t think so; with much more performance available on the entry-level machines, it can entice developers. I’m sure if I were running a studio I’d consider very seriously bringing my game to the Mac. The iPad not so much, because of the mandatory touch controls, which in my opinion is something that Apple should rethink.
 