
Appletoni

Suspended
Mar 26, 2021
443
177
I've seen a lot of people, including myself, wishing to use Unreal Engine natively on Apple Silicon Macs, but so far Epic doesn't seem interested at all. Maybe because of the lawsuit between Epic and Apple? Hmmm... Someone mentioned that Apple Silicon's GPU isn't up to it because it can't run Unreal Engine's ray tracing features, and that therefore there won't be native support for AS Macs. That's not an official statement, but the lack of ray tracing and Unreal Engine support is quite disappointing so far. I don't know if Apple is going to build a GPU for ray tracing, since they haven't been interested in the gaming industry, but for 3D work it would be useful.


I know that Apple has been working on ray tracing since 2018, but they still haven't added any hardware ray tracing features so far. Some people say the Apple Silicon chip doesn't have dedicated ray tracing hardware, or that the GPU isn't powerful enough for ray tracing. Since both Nvidia and AMD have ray tracing technologies, it's a big question that Apple needs to answer. Not just for gaming but also for 3D work.

Do you think Apple will add and support ray tracing with Apple Silicon chip?
I hope that Apple will fix this fast so we can have amazing ray tracing.
It's a shame to buy a device, for example a MacBook Pro, without an M1 Ultra and without amazing ray tracing.
 

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,659
OBX
I hope that Apple will fix this fast so we can have amazing ray tracing.
It's a shame to buy a device, for example a MacBook Pro, without an M1 Ultra and without amazing ray tracing.
Apple has ray tracing, it's just done with shaders/the CPU instead of being all GPU-driven. I don't think Apple would ever offer the M1 Ultra in a notebook (thermals and battery life), but I could be wrong.
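To put that in concrete terms: one existing GPU path goes through MetalPerformanceShaders, where ray/triangle intersection is encoded as ordinary compute work rather than hitting any dedicated RT units. A minimal Swift sketch of that path (the triangle, buffer sizes, and ray count are placeholder values, not anything from this thread):

```swift
import Metal
import MetalPerformanceShaders

// Rough sketch of the pre-Metal-3 path: ray/triangle intersection runs as
// ordinary GPU compute work via MetalPerformanceShaders, no RT units needed.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// One placeholder triangle, stored as SIMD3<Float> positions.
let vertices: [SIMD3<Float>] = [SIMD3(0, 0, 0), SIMD3(1, 0, 0), SIMD3(0, 1, 0)]
let vertexBuffer = device.makeBuffer(bytes: vertices,
                                     length: vertices.count * MemoryLayout<SIMD3<Float>>.stride,
                                     options: [])!

// The acceleration structure is built by MPS as regular CPU/GPU work.
let accel = MPSTriangleAccelerationStructure(device: device)
accel.vertexBuffer = vertexBuffer
accel.vertexStride = MemoryLayout<SIMD3<Float>>.stride
accel.triangleCount = 1
accel.rebuild()

// Ray and hit buffers; in a real renderer a compute kernel would generate the rays.
let rayCount = 1
let rayBuffer = device.makeBuffer(
    length: rayCount * MemoryLayout<MPSRayOriginMinDistanceDirectionMaxDistance>.stride,
    options: [])!
let hitBuffer = device.makeBuffer(
    length: rayCount * MemoryLayout<MPSIntersectionDistancePrimitiveIndexCoordinates>.stride,
    options: [])!

// The intersector encodes a compute pass that tests every ray against the structure.
let intersector = MPSRayIntersector(device: device)
intersector.rayDataType = .originMinDistanceDirectionMaxDistance
intersector.intersectionDataType = .distancePrimitiveIndexCoordinates

let commandBuffer = queue.makeCommandBuffer()!
intersector.encodeIntersection(commandBuffer: commandBuffer,
                               intersectionType: .nearest,
                               rayBuffer: rayBuffer, rayBufferOffset: 0,
                               intersectionBuffer: hitBuffer, intersectionBufferOffset: 0,
                               rayCount: rayCount,
                               accelerationStructure: accel)
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
```

Everything above goes through the regular compute pipeline, which is why it works on GPUs without any ray tracing hardware at all.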
 

sunny5

macrumors 68000
Original poster
Jun 11, 2021
1,835
1,706
Apple has ray tracing, it's just done with shaders/the CPU instead of being all GPU-driven. I don't think Apple would ever offer the M1 Ultra in a notebook (thermals and battery life), but I could be wrong.
The CPU itself is fine since it's only 60W, but the GPU's power consumption is 120W. Unless Apple wants to make a powerful laptop despite the battery issue it's possible, but since they seem to be limiting themselves to around 100W, I guess it's not going to happen.
 
  • Like
Reactions: satcomer

l0stl0rd

macrumors 6502
Jul 25, 2009
479
412
The CPU itself is fine since it's only 60W, but the GPU's power consumption is 120W. Unless Apple wants to make a powerful laptop despite the battery issue it's possible, but since they seem to be limiting themselves to around 100W, I guess it's not going to happen.
Yeah, I don't think so; battery life is one of their selling points.

I think the 64-core Ultra would throttle, considering that you can make the M1 Max throttle if you put a lot of load on both the GPU and CPU.

I guess if people want better, they'll need to wait for the M2 Max.
 

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command
I think the 64-core Ultra would throttle, considering that you can make the M1 Max throttle if you put a lot of load on both the GPU and CPU.

That is in the 14" & 16" MBP laptops; I feel the Mac Studio chassis is the perfect solution, as it should allow the M1 Max SoC to reach full power without any thermal throttling issues...?
 

l0stl0rd

macrumors 6502
Jul 25, 2009
479
412
That is in the 14" & 16" MBP laptops; I feel the Mac Studio chassis is the perfect solution, as it should allow the M1 Max SoC to reach full power without any thermal throttling issues...?
Yes, I guess so; I'm even hoping they will let the GPU run at a slightly higher clock rate.
 

Ethosik

Contributor
Oct 21, 2009
8,141
7,119
Ray tracing will come. It will become a standard feature of all discrete GPUs. After that it will even come to all the iGPUs too.

Right now it is a new feature and very energy intensive. For heat management reasons it is not good to enable RT on a laptop or thin AIO desktop.

In a couple of generations computers will be powerful enough and efficient enough to do RT with a much smaller energy footprint.
Yep, I am surprised people don't know how Apple works at this point. While the haters will say "Apple is slow to introduce feature X", they do it in an organized way. Only a handful of games use RT, and a lot of people can't really tell the difference. It's getting more popular thanks to the new consoles. Apple will include it when it's not a major performance hit and it's more popular.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Yep, I am surprised people don't know how Apple works at this point. While the haters will say "Apple is slow to introduce feature X", they do it in an organized way. Only a handful of games use RT, and a lot of people can't really tell the difference. It's getting more popular thanks to the new consoles. Apple will include it when it's not a major performance hit and it's more popular.

High-performance ray tracing is mostly about reordering memory access to properly utilize the massive memory and computational parallelism of modern GPUs. Nvidia has some magic sauce here (investigations into how their hardware works concluded that the RT cores are probably tightly integrated with the texture units). AMD doesn't - their "hardware RT" boils down to trivial fixed-function ray/box intersection, which is probably also why their RT is much slower in practice. Apple wants very programmable RT, so they probably need a more general memory coalescing solution. But I'm sure they are working on it and I hope it can ship with the M2 series.
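For a sense of how "trivial" the fixed-function ray/box part really is, this is the standard slab test that such hardware accelerates; a plain-Swift version (purely illustrative, not tied to any vendor's implementation) looks like this:

```swift
import simd

// The classic "slab" ray/AABB test: intersect the ray with each pair of
// axis-aligned planes and keep the overlapping parameter interval.
func rayHitsBox(origin: SIMD3<Float>, invDirection: SIMD3<Float>,
                boxMin: SIMD3<Float>, boxMax: SIMD3<Float>,
                tMin: Float, tMax: Float) -> Bool {
    let t0 = (boxMin - origin) * invDirection          // per-axis entry distances
    let t1 = (boxMax - origin) * invDirection          // per-axis exit distances
    let near = simd_max(simd_min(t0, t1), SIMD3<Float>(repeating: tMin))
    let far  = simd_min(simd_max(t0, t1), SIMD3<Float>(repeating: tMax))
    // Hit if the per-axis intervals still overlap after clamping to [tMin, tMax].
    let enter = max(near.x, max(near.y, near.z))
    let exit  = min(far.x, min(far.y, far.z))
    return enter <= exit
}

// Example: a ray marching along +Z from z = -5 towards a unit box at the origin.
let direction = SIMD3<Float>(1e-9, 1e-9, 1)            // tiny epsilon avoids divide-by-zero
let hit = rayHitsBox(origin: SIMD3(0, 0, -5),
                     invDirection: SIMD3<Float>(repeating: 1) / direction,
                     boxMin: SIMD3(-0.5, -0.5, -0.5),
                     boxMax: SIMD3(0.5, 0.5, 0.5),
                     tMin: 0, tMax: 1000)
print(hit)   // true
```

The math itself is cheap; the hard part, as described above, is feeding millions of these tests to the GPU in a memory-access order that keeps it busy.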
 

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,659
OBX
High-performance ray tracing is mostly about reordering memory access to properly utilize the massive memory and computational parallelism of modern GPUs. Nvidia has some magic sauce here (investigations into how their hardware works concluded that the RT cores are probably tightly integrated with the texture units). AMD doesn't - their "hardware RT" boils down to trivial fixed-function ray/box intersection, which is probably also why their RT is much slower in practice. Apple wants very programmable RT, so they probably need a more general memory coalescing solution. But I'm sure they are working on it and I hope it can ship with the M2 series.
Not yet, or at least not the base M2. Let's see if they release an M2 Pro/Max in Q4 or if they will wait till Q4 of 2023 to update them.
 

dugbug

macrumors 68000
Aug 23, 2008
1,929
2,147
Somewhere in Florida
Metal 3 brings improvements to ray tracing performance for Apple Silicon. I haven't watched the session yet, but it was talked about in the State of the Union.

Edit from Metal 3 improvements:

New Ray Tracing features


The latest advancements in Metal Ray Tracing mean less GPU time is spent building acceleration structures, work like culling can move to the GPU to reduce CPU overhead, and both intersection and shading can be optimized with direct access to primitive data.


As mentioned, I have not watched any Metal 3 sessions yet, so that's all I know.
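For reference, the acceleration-structure side of that API looks roughly like the sketch below: describe the geometry, ask the device for sizes, then encode the build on the GPU. The geometry and sizes are placeholders, and this shows the general shape of Metal's ray tracing API rather than the new Metal 3 additions specifically.

```swift
import Metal

// Rough sketch: building a primitive (bottom-level) acceleration structure
// with Metal's ray tracing API. Vertex data and counts are placeholders.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// One placeholder triangle, packed float3 positions.
let vertices: [Float] = [0, 0, 0,  1, 0, 0,  0, 1, 0]
let vertexBuffer = device.makeBuffer(bytes: vertices,
                                     length: vertices.count * MemoryLayout<Float>.size,
                                     options: [])!

// Describe the triangle geometry that goes into the acceleration structure.
let geometry = MTLAccelerationStructureTriangleGeometryDescriptor()
geometry.vertexBuffer = vertexBuffer
geometry.vertexStride = MemoryLayout<Float>.size * 3
geometry.triangleCount = 1

let accelDescriptor = MTLPrimitiveAccelerationStructureDescriptor()
accelDescriptor.geometryDescriptors = [geometry]

// Ask the device how much memory the build needs, then encode the build on the GPU.
let sizes = device.accelerationStructureSizes(descriptor: accelDescriptor)
let accelStructure = device.makeAccelerationStructure(size: sizes.accelerationStructureSize)!
let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize,
                                options: .storageModePrivate)!

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeAccelerationStructureCommandEncoder()!
encoder.build(accelerationStructure: accelStructure,
              descriptor: accelDescriptor,
              scratchBuffer: scratch,
              scratchBufferOffset: 0)
encoder.endEncoding()
commandBuffer.commit()
// The finished structure can then be bound to compute or render pipelines that
// perform intersection tests in Metal Shading Language.
```

The Metal 3 changes described in the quote are about making this build step cheaper and moving work like culling onto the GPU, not about changing the overall flow.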
 
Last edited:

l0stl0rd

macrumors 6502
Jul 25, 2009
479
412
[Screenshot of a ray tracing slide from Apple's presentation]

From their presentation, I guess there is hope.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
The CPU itself is fine since it's only 60W, but the GPU's power consumption is 120W. Unless Apple wants to make a powerful laptop despite the battery issue it's possible, but since they seem to be limiting themselves to around 100W, I guess it's not going to happen.

The power consumption won't totally disappear, but it will get mitigated over time. The "Max"-class die will get better at this when Apple goes to TSMC N3. They will likely crank up the horsepower of the P cores and GPU cores while also growing the shared on-die memory workspace (more megabytes), and increase the number of GPU cores.

All of that will make the Metal 3-based ray tracing features rolling out now perform better, so they'll be able to do more with 80W than they can now. (Metal 3 will bring some of that to the 'old' M1 Max.)

The sizable fab shrink after that would perhaps be the point where some dedicated fixed-function logic kicks in. Apple's ambitions in the AR/VR space are likely where they have more intermediate-term need for fixed-function logic, rather than in the laptops.

Metal 3 is supposed to have support for GPU-hosted ray tracing hardware, but likely more so for the minimal support that AMD added to RDNA2, which Apple is "leaving on the floor unused" right now. Again, not a full fixed-function implementation of their own. Currently Apple is shipping hardware where the ray trace logic isn't being used; they need to fix that first before moving on to potentially their own stuff later.
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
[Screenshot of a ray tracing slide from Apple's presentation]
From their presentation, I guess there is hope.

The AMD RDNA2 GPUs that Apple is already shipping have unused ray trace logic. That is most likely just 'catch up' to hardware already in the field.

I wouldn't read lots of new RDNA3 AMD GPUs into that.

Metal 3 covers Apple GPUs, newer Intel iGPUs, and newer AMD GPUs. (They are leaving older 3rd-party GPUs behind but are still making contributions to the ones that are left.)
 

l0stl0rd

macrumors 6502
Jul 25, 2009
479
412
The AMD RDNA2 GPUs that Apple is already shipping have unused ray trace logic. That is most likely just 'catch up' to hardware already in the field.

I wouldn't read lots of new RDNA3 AMD GPUs into that.

Metal 3 covers Apple GPUs, newer Intel iGPUs, and newer AMD GPUs. (They are leaving older 3rd-party GPUs behind but are still making contributions to the ones that are left.)
Yes, perhaps; I guess we won't know for sure until the M2 Pro/Max show up.
Should they not have any RT hardware, we won't see it until the M3, I bet.

It could also mean that their Metal ray tracing was running on the CPU and is now running on the GPU 🤷‍♂️
 
Last edited:

jmho

macrumors 6502a
Jun 11, 2021
502
996
There are different parts to ray tracing: the low-level part of testing millions of mathematical rays against various mathematical objects. Apple was definitely already doing this on the GPU.

There are higher-level things like building and traversing the bounding volume hierarchy and various acceleration structures (which are used to help cull this massive list of ray intersection tests) that Apple potentially wasn't doing on the GPU, but will now in Metal 3.
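As a rough picture of that higher-level part, a BVH traversal is basically a stack-based walk over nested bounding boxes that throws away whole subtrees the ray misses. A toy CPU-side sketch (the names and layout here are made up for illustration, nothing to do with Apple's actual implementation):

```swift
import simd

// Toy BVH node: internal nodes store a bounding box and two children,
// leaves store a range of triangle indices. Purely illustrative.
struct BVHNode {
    var boxMin: SIMD3<Float>
    var boxMax: SIMD3<Float>
    var leftChild: Int       // index into the node array, -1 for leaves
    var rightChild: Int
    var firstTriangle: Int   // valid for leaves only
    var triangleCount: Int
}

// Walk the hierarchy with an explicit stack, skipping any subtree whose
// bounding box the ray misses; only surviving leaves get triangle tests.
func traverse(nodes: [BVHNode],
              rayHitsBox: (BVHNode) -> Bool,
              testTriangle: (Int) -> Void) {
    var stack = [0]                                   // start at the root node
    while let nodeIndex = stack.popLast() {
        let node = nodes[nodeIndex]
        guard rayHitsBox(node) else { continue }      // cull the whole subtree
        if node.leftChild < 0 {
            // Leaf: hand the candidate triangles to the low-level intersection test.
            for i in node.firstTriangle ..< node.firstTriangle + node.triangleCount {
                testTriangle(i)
            }
        } else {
            stack.append(node.leftChild)
            stack.append(node.rightChild)
        }
    }
}
```

The divergent, pointer-chasing nature of this walk is exactly the memory-access problem described earlier in the thread, which is why building and traversing these structures efficiently matters as much as the raw intersection math.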
 

throAU

macrumors G3
Feb 13, 2012
9,198
7,346
Perth, Western Australia
There are different parts to ray tracing: the low-level part of testing millions of mathematical rays against various mathematical objects. Apple was definitely already doing this on the GPU.

There are higher-level things like building and traversing the bounding volume hierarchy and various acceleration structures (which are used to help cull this massive list of ray intersection tests) that Apple potentially wasn't doing on the GPU, but will now in Metal 3.

For current implementations, to make it run fast enough there's also denoising and upscaling, which the Apple processor's neural engine capabilities can potentially be utilised for.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
There are different parts to ray tracing: the low-level part of testing millions of mathematical rays against various mathematical objects. Apple was definitely already doing this on the GPU.
For current implementations, to make it run fast enough there's also denoising and upscaling, which the Apple processor's neural engine capabilities can potentially be utilised for.
Does any software use it?
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
Does any software use it?
I don't think anyone is using the neural engine for de-noising and upscaling just yet, although maybe that's part of the new MetalFX library, or something that could be added to it in the future.

I also have no idea if anyone is actually using the Metal APIs for ray tracing. I believe Blender is using its own custom compute kernels, which will likely be just as fast as the Metal APIs until we get hardware RT.
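On the upscaling half, MetalFX (new with Metal 3) does expose a spatial upscaler; whether it touches the neural engine isn't documented. A rough Swift sketch, assuming the MTLFXSpatialScalerDescriptor API shown in the WWDC material (the resolutions and pixel formats below are placeholders):

```swift
import Metal
import MetalFX

// Hedged sketch of MetalFX spatial upscaling: render at a lower resolution,
// then let MetalFX scale the frame up to the display size.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

let scalerDescriptor = MTLFXSpatialScalerDescriptor()
scalerDescriptor.inputWidth = 1280           // placeholder render resolution
scalerDescriptor.inputHeight = 720
scalerDescriptor.outputWidth = 2560          // placeholder display resolution
scalerDescriptor.outputHeight = 1440
scalerDescriptor.colorTextureFormat = .rgba16Float
scalerDescriptor.outputTextureFormat = .rgba16Float

guard let scaler = scalerDescriptor.makeSpatialScaler(device: device) else {
    fatalError("MetalFX spatial scaling not supported on this device")
}

// Each frame: point the scaler at the low-res color target and the full-res
// output, then encode it like any other GPU pass.
func upscale(lowResColor: MTLTexture, fullResOutput: MTLTexture) {
    let commandBuffer = queue.makeCommandBuffer()!
    scaler.colorTexture = lowResColor
    scaler.outputTexture = fullResOutput
    scaler.encode(commandBuffer: commandBuffer)
    commandBuffer.commit()
}
```

The library treats the scaler as a black box, which ties in with the point below about Apple being free to move the work to whatever hardware block it likes later.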
 
  • Like
Reactions: throAU

leman

macrumors Core
Oct 14, 2008
19,517
19,664
I don't think anyone is using the neural engine for de-noising and upscaling just yet, although maybe that's part of the new MetalFX library, or something that could be added to it in the future.

Looking at the API design, it's flexible enough that it could be moved to some other coprocessor in the future. That's the advantage of wrapping the APIs into a black box instead of letting the devs do things themselves I suppose.
 

lcubed

macrumors 6502a
Nov 19, 2020
540
326
I don't think anyone is using the neural engine for de-noising and upscaling just yet, although maybe that's part of the new MetalFX library, or something that could be added to it in the future.
DxO's PhotoLab 5 DeepPRIME uses the ANE for noise reduction; it works amazingly well.
 
  • Like
Reactions: throAU and jmho

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Yes, perhaps; I guess we won't know for sure until the M2 Pro/Max show up.
Should they not have any RT hardware, we won't see it until the M3, I bet.

The M2 Pro/Max hardware isn't really the point. The Metal API didn't have a way to semantically 'talk' about GPU ray tracing capabilities at all. You couldn't say that the GPU did or did not have ray tracing abilities, or that you wanted to use them or not. If you can't "talk" about it, you can't do it; whether the hardware has it or not is immaterial if you can't express it in the code.


It could also mean that their Metal ray tracing was running on the CPU and is now running on the GPU 🤷‍♂️

Apple enabled that back in 2019 with indirect command buffers. That isn't new. But it is more robust now.

The actual reveal in the Metal ray tracing session was rather limited. Basically you get to say:

"Allow you to encode GPU work independently on the GPU

... icbDescriptor.supportRayTracing = true ... "




A properly implemented shader compiler and library toolchain for AMD RDNA2 should now attempt to map an optimized library call onto the ray tracing hardware on the GPU; if not, the compute still gets done on the GPU. All the indirect command buffer work is being done on the GPU. It's really up to the opaque "black box" of the Apple/AMD compilers and library code to get some traction here with the actual fixed-function calls or not. And if there isn't much work put into that "black box" layer, then there won't be much additional traction here.

Before, you couldn't even say whether an indirect command buffer had ray tracing support or not. It is at least a distinguishable attribute now, and it's an indicator that Apple is probably going to spend time and effort optimizing it. But that doesn't necessarily mean Apple ray tracing hardware is imminent.

There is zero mention of non-Apple GPUs in the whole session. All the speed-up metrics are with the current Apple Silicon GPUs, so most of the ray tracing upgrades in this session are probably driven by lessons learned over the first two years of indirect command buffers being fed back into an incrementally better Metal 3.

The improvements outlined here probably do work better on Apple's largest GPUs even without specific fixed-function ray intersection logic. They probably work well on AMD GPUs with a big enough Infinity Cache too. It doesn't look like a 'killer' move against DirectX 12 DXR or Nvidia's RT software/hardware; just an incrementally better Metal than the last iteration.
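For what it's worth, the flag quoted above is about all there is to it on the API surface. A minimal sketch of an indirect command buffer created with it (the command types and counts below are placeholder choices, not from the session):

```swift
import Metal

// Minimal sketch of the Metal 3 flag discussed above: an indirect command
// buffer that is allowed to encode ray tracing work from the GPU side.
let device = MTLCreateSystemDefaultDevice()!

let icbDescriptor = MTLIndirectCommandBufferDescriptor()
icbDescriptor.commandTypes = .concurrentDispatch    // GPU-encoded compute dispatches
icbDescriptor.maxKernelBufferBindCount = 4          // placeholder binding count
icbDescriptor.supportRayTracing = true              // the attribute quoted from the session

// The ICB itself; a GPU kernel can later fill it with dispatches that use
// acceleration structures and intersection work.
let icb = device.makeIndirectCommandBuffer(descriptor: icbDescriptor,
                                           maxCommandCount: 1024,
                                           options: .storageModePrivate)!
```

Whether that flag ends up routing work to fixed-function hardware or just to well-tuned compute paths is exactly the "black box" question raised above.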
 

diamond.g

macrumors G4
Mar 20, 2007
11,437
2,659
OBX
Do we think Apple will use AI for denoising like Nvidia is (with DLSS 3.5)? Seems like a free IQ win.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
A user on the Blender forum ran an experiment comparing the speedup that ray tracing hardware provides on Intel, AMD and Nvidia GPUs in Blender. You can't draw many conclusions from it, but the charts are very interesting. For instance:

[Chart from the Blender forum comparison of ray tracing hardware speedups on Intel, AMD and Nvidia GPUs]


I wonder what those charts would look like once Apple GPUs have ray tracing hardware.
 
Last edited: