
leman

macrumors Core
Oct 14, 2008
19,521
19,674
So value provided by RT hardware would appear to be [...]

One additional consideration is simply the evolution of graphics. Just as programmable shading (once niche, expensive, and slow) became a fundamental technology, it is very likely that ray tracing will become the default rendering technology of tomorrow. A company that does not invest early might be left out.

- (very tentative...) the hardware required for ray tracing MAY be useful for other GPU tasks (I've suggested this in the context of walking large pointer-based data structures) in ways that benefit tasks apparently totally unrelated to RT. This may be present on day one; it may be a goal Apple is aware of but was unable to fit into this year's design; or it may be a crazy idea that will never make sense!

One area where it is useful today, interestingly enough, is complex positional audio. Apple, for example, has a nice environmental audio library called PHASE (GPU-accelerated, from what I understand). Hardware ray tracing can be used to trace audio signals bouncing between occluders.
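
To make the idea concrete, here is a minimal CPU-side sketch in Swift of what "tracing audio" amounts to. This is not the PHASE API (the triangle list, the lossPerHit factor, and the function names are made up for illustration); it's just the same ray-versus-occluder query that RT hardware is built to accelerate, with sound standing in for light:

```swift
import simd

// Minimal CPU-side sketch of the idea (not the PHASE API): treat occluders as
// triangles, cast a ray from the audio source to the listener, and attenuate
// the signal if anything blocks the direct path.
struct Triangle { let a, b, c: SIMD3<Float> }

// Möller–Trumbore ray/triangle intersection; returns the hit distance, if any.
func intersect(origin: SIMD3<Float>, direction: SIMD3<Float>, tri: Triangle) -> Float? {
    let e1 = tri.b - tri.a
    let e2 = tri.c - tri.a
    let p = cross(direction, e2)
    let det = dot(e1, p)
    if abs(det) < 1e-6 { return nil }          // ray is parallel to the triangle
    let invDet = 1 / det
    let s = origin - tri.a
    let u = dot(s, p) * invDet
    if u < 0 || u > 1 { return nil }
    let q = cross(s, e1)
    let v = dot(direction, q) * invDet
    if v < 0 || u + v > 1 { return nil }
    let t = dot(e2, q) * invDet
    return t > 1e-6 ? t : nil
}

// Attenuate the direct source-to-listener path by a fixed factor per occluder hit.
// lossPerHit is an illustration constant, not a physical model.
func directPathGain(source: SIMD3<Float>, listener: SIMD3<Float>,
                    occluders: [Triangle], lossPerHit: Float = 0.5) -> Float {
    let toListener = listener - source
    let distance = length(toListener)
    let direction = toListener / distance
    var gain: Float = 1
    for tri in occluders {
        if let t = intersect(origin: source, direction: direction, tri: tri), t < distance {
            gain *= lossPerHit                 // something solid sits between source and listener
        }
    }
    return gain
}
```

In a real engine you would also trace indirect paths (reflections off walls and so on), which multiplies the ray count and is exactly where hardware acceleration starts to pay off.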
 

diamond.g

macrumors G4
Mar 20, 2007
11,438
2,664
OBX
One area where it is useful today, interestingly enough, is complex positional audio. Apple, for example, has a nice environmental audio library called PHASE (GPU-accelerated, from what I understand). Hardware ray tracing can be used to trace audio signals bouncing between occluders.
*tangent* Sony thought the same thing with the PS5. So far no takers. Seems like Dolby Atmos “won that war”.
 

altaic

macrumors 6502a
Jan 26, 2004
711
484
Ray tracing may be important for AR, to properly "ground" artificial objects in the real world. In earlier years Apple showed demos of how, if you don't include things like shadows, it's hard to see exactly where an artificial AR object is supposed to be in space; it may look like it is floating a foot above the ground.

So value provided by RT hardware would appear to be

RT hardware is also useful in the context of Neural Radiance Fields (NeRF) and Plenoxels for the reconstruction of 3D scenes from 2D images (or other data). It's a very active field of research, and some of the results are shocking, almost magical. I can list/link some papers if anyone is interested.
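
For anyone wondering what the reconstruction actually does with all those rays: a NeRF renders every pixel of every training image by marching a camera ray through the scene and compositing the predicted densities and colors, which is why the workload is so ray-heavy. Here is a minimal Swift sketch of that standard volume-rendering step (the RaySample struct and the names are illustrative, not taken from any particular codebase):

```swift
import Foundation
import simd

// One sample along a camera ray: the density and color a NeRF's network predicts
// at that 3D point, plus the spacing to the next sample.
struct RaySample {
    let sigma: Float            // volume density at the sample
    let color: SIMD3<Float>     // RGB radiance at the sample
    let delta: Float            // distance to the next sample along the ray
}

// Standard NeRF volume-rendering quadrature:
//   C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
//   T_i = exp(-sum_{j<i} sigma_j * delta_j)
func renderRay(_ samples: [RaySample]) -> SIMD3<Float> {
    var color = SIMD3<Float>(repeating: 0)
    var transmittance: Float = 1                   // fraction of light not yet absorbed
    for s in samples {
        let absorption = exp(-s.sigma * s.delta)
        color += transmittance * (1 - absorption) * s.color
        transmittance *= absorption                // attenuate for everything behind this sample
        if transmittance < 1e-4 { break }          // early exit once the ray is saturated
    }
    return color
}
```

Training repeats this for millions of rays while backpropagating into whatever representation predicts the densities and colors, so anything that speeds up ray traversal or sampling helps.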
 
Last edited:

leman

macrumors Core
Oct 14, 2008
19,521
19,674
RT hardware is also useful in the context of Neural Radiance Fields (NeRF) for the reconstruction of 3D scenes from 2D images (or other data). It's a very active field of research, and some of the results are shocking, almost magical. I can list/link some papers if anyone is interested.

I would be interested! BTW, Apple seems to file a lot of patents on this very topic (along with point cloud data compression and mesh data compression)
 

Rnd-chars

macrumors 6502
Apr 4, 2023
256
237
RT hardware is also useful in the context of Neural Radiance Fields (NeRF) for the reconstruction of 3D scenes from 2D images (or other data). It's a very active field of research, and some of the results are shocking, almost magical. I can list/link some papers if anyone is interested.
Yes, please. I’m curious how well that could be applied to creating virtual environments in Vision Pro from static images (or, I imagine, video with sufficient compute).
 

komuh

macrumors regular
May 13, 2023
126
113
If you want to read more about NeRFs, Neuralangelo is a pretty easy paper to follow, and there's a video as well. (You can also just follow Thomas Müller; he's the guy of the NeRF world.) I'm not sure ray-tracing accelerators are super useful in the case of NeRFs; they can for sure sometimes help with generating the initial data, but it's still a "pure power" and memory-limited process.
 
Last edited:

altaic

macrumors 6502a
Jan 26, 2004
711
484
I would be interested! BTW, Apple seems to file a lot of patents on this very topic (along with point cloud data compression and mesh data compression)
Awesome, I did not know about Apple's interest! It's been a few months since I was actively researching this and I have something like a hundred papers, so I'll try to list the best ones. Be sure to check out the supplementary data and project pages for these, but definitely don't miss the videos for NeRF in the Wild and NeRF in the Dark!

Recent Overview:
NeRF: Neural Radiance Field in 3D Vision, A Comprehensive Review

Implicit Representations:
NeRF in the Dark: High Dynamic Range View Synthesis from Noisy Raw Images
Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields
4K-NeRF: High Fidelity Neural Radiance Fields at Ultra High Resolutions
Zip-NeRF: Anti-Aliased Grid-Based Neural Radiance Fields

Explicit Representations:
Plenoxels: Radiance Fields without Neural Networks
HDR-Plenoxels: Self-Calibrating High Dynamic Range Radiance Fields
TensoRF: Tensorial Radiance Fields
K-Planes: Explicit Radiance Fields in Space, Time, and Appearance
HexPlane: A Fast Representation for Dynamic Scenes

Unposed Input Data:
NeRF in the Wild: Neural Radiance Fields for Unconstrained Photo Collections
BARF: Bundle-Adjusting Neural Radiance Fields
DBARF: Deep Bundle-Adjusting Generalizable Neural Radiance Fields
 
Last edited:

altaic

macrumors 6502a
Jan 26, 2004
711
484
If you want to read more about NeRFs, Neuralangelo is a pretty easy paper to follow, and there's a video as well. (You can also just follow Thomas Müller; he's the guy of the NeRF world.) I'm not sure ray-tracing accelerators are super useful in the case of NeRFs; they can for sure sometimes help with generating the initial data, but it's still a "pure power" and memory-limited process.
Oh, right, and there's also this paper by Müller et al. that's really good: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
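
The core trick in that paper is surprisingly compact. Here is a rough Swift sketch of the per-level hashed-grid lookup (the three prime constants are the ones given in the paper; the struct, the 2-component feature size, and the single-cell lookup are simplifications for illustration, not Müller et al.'s implementation):

```swift
import simd

// The spatial hash from the Instant-NGP paper: XOR the integer grid coordinates,
// each multiplied by a large prime, then wrap into a table of size T.
func hashGridIndex(_ v: SIMD3<UInt32>, tableSize: UInt32) -> UInt32 {
    let primes: SIMD3<UInt32> = [1, 2_654_435_761, 805_459_861]
    let h = (v.x &* primes.x) ^ (v.y &* primes.y) ^ (v.z &* primes.z)
    return h % tableSize
}

// One resolution level: a small table of learned feature vectors, indexed by
// hashing the grid cell that contains the query point. (Simplified: the real
// method interpolates the 8 surrounding corners instead of taking one cell.)
struct HashGridLevel {
    let resolution: Float           // grid cells per axis at this level
    let features: [SIMD2<Float>]    // learned feature vector per hash-table slot

    func lookup(_ p: SIMD3<Float>) -> SIMD2<Float> {   // p assumed in [0, 1)^3
        let scaled = p * resolution
        let cell = SIMD3<UInt32>(UInt32(scaled.x), UInt32(scaled.y), UInt32(scaled.z))
        let index = hashGridIndex(cell, tableSize: UInt32(features.count))
        return features[Int(index)]
    }
}
```

The full method uses on the order of 16 such levels at geometrically increasing resolutions, trilinearly interpolates the corner features at each level, and feeds the concatenated result to a tiny MLP; that's where most of the speedup over the original NeRF comes from.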

Also, I have a few papers specifically about compressing implicit representations (MLP based), but the explicit representations were so much more efficient that I ended up focusing on those.
 
Last edited:
  • Like
Reactions: Rnd-chars

komuh

macrumors regular
May 13, 2023
126
113
Oh, right, and there's also this paper by Müller et al. that's really good: Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

Also, I have a few papers specifically about compressing implicit representations (MLP based), but the explicit representations were so much more efficient that I ended up focusing on those.
It's news to me that explicit representations are faster (I assume the fastest is still the hash-based MLP); mostly they just provide an easier way to train on the data and produce somewhat higher-quality results (especially for scenes that the I-NGP paper doesn't handle well, such as dynamic movement).

I've never seen a faster method with results at least as good across a variety of scenes compared to I-NGP. I'd love it if you could point me to something that can train a scene in 2-3 minutes with good results, as I'm not that deep into the planes and Plenoxels field.
 

sirio76

macrumors 6502a
Mar 28, 2013
578
416
I was referring to OptiX vs. CUDA performance. Nvidia can achieve a 3x improvement on the same hardware with hardware RT enabled.
Besides specific situations, that's not the case; on an average scene it's never 3x, it's more like 30%.
 
Last edited:

sack_peak

Suspended
Sep 3, 2023
1,020
959
Given that the A17 Pro has ray tracing and the M3 will likely have it as well, perhaps a new thread dedicated to that should be made, independent of this 2020 one, as we enter 2024?
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Besides specific situations, that's not the case; on an average scene it's never 3x, it's more like 30%.

I am referring to Blender benchmarks. Your mileage with other software or more complex scenes might vary. Nvidia GPUs will have issues with very complex scenes anyway, as they lack the RAM.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101

aytan

macrumors regular
Dec 20, 2022
161
110
Well, tbh, CPU rendering is still the best choice for massive things.
I mostly hope for some really smart XPU renderer that supports the Mac. The shared memory should be a game changer, right?
From my point of view, after over a year I am quite happy with the M1 Ultra, but the M2 improvement was not good enough for an upgrade. Based on recent leaks, the CPU-count and GPU improvements of an M3 Ultra over the M1 Ultra could be quite good. An overall 50-60% performance improvement compared to the M1 Ultra would be really nice. The M3 Ultra could be a good point for me to upgrade from the M1 Ultra.
 

mikas

macrumors 6502a
Sep 14, 2017
898
648
Finland
Not to flame, and this is an ASi thread, but I just ran the Cinebench 2024 GPU test on a 2009 6-core cMP with two Radeon VIIs from early 2019. Result = 8201. It didn't beat the M2 Ultra's 76-core GPU (8860), but it was pretty close.

I think Apple would need 3rd-party GPU support in the context of this thread title, one way or another. AMD might not want to play anymore, Nvidia is out of the question, and Intel can't perform. So either they do their own discrete GPUs, or they bend, beg, and pay.

GPUs scale quite well when you add more of them, in a PC or in a Mac. On an SoC you cannot do that in the same way, it seems. It seems you barely get entry-level 3D rendering crunch out of the most advanced SoC in the world (the ASi Ultra).

I've got this single-CPU, 13-14-year-old cMP with two GPUs released in 2019, and I almost line up with Apple's most powerful GPU in their newest lineup as of today, the M2 Ultra with 76 GPU cores.

That's not working out so well, unfortunately. I'd like Apple to shine on workstation computers too; for me, the 3D/render workstations, please.

PCIe support is the way to go, for GPUs too.
[Attachment: Cinebench 2024 GPU results screenshot]
 
Last edited:
  • Like
Reactions: ssgbryan

bcortens

macrumors 65816
Aug 16, 2007
1,324
1,796
Canada
Not to flame, and this is an ASi thread, but I just ran the Cinebench 2024 GPU test on a 2009 6-core cMP with two Radeon VIIs from early 2019. Result = 8201. It didn't beat the M2 Ultra's 76-core GPU (8860), but it was pretty close.

I think Apple would need 3rd-party GPU support in the context of this thread title, one way or another. AMD might not want to play anymore, Nvidia is out of the question, and Intel can't perform. So either they do their own discrete GPUs, or they bend, beg, and pay.

GPUs scale quite well when you add more of them, in a PC or in a Mac. On an SoC you cannot do that in the same way, it seems. It seems you barely get entry-level 3D rendering crunch out of the most advanced SoC in the world (the ASi Ultra).

I've got this single-CPU, 13-14-year-old cMP with two GPUs released in 2019, and I almost line up with Apple's most powerful GPU in their newest lineup as of today, the M2 Ultra with 76 GPU cores.

That's not working out so well, unfortunately. I'd like Apple to shine on workstation computers too; for me, the 3D/render workstations, please.

PCIe support is the way to go, for GPUs too.
View attachment 2279781

PCIe is not the answer, nor is the SoC design the problem. NVIDIA, AMD, and Intel are all building SoC-style chips, as unified memory offers powerful advantages.

Additionally, I don’t want game engines and 3D rendering systems wasting time optimizing for a tiny niche of Apple’s market. The Mac Pro is the smallest part of Apple’s market; the Mac Studio is likely the bulk of Apple’s high-end desktop sales, and it won’t support PCIe.

An SoC built from multiple different types of tiles can grow larger than Apple’s current approach allows. The implementation style Apple has chosen (vs. Intel, for example) is what is holding them back from building a super-high-end GPU, rather than their having opted not to use PCIe…
 
  • Like
Reactions: sirio76 and leman

avkills

macrumors 65816
Jun 14, 2002
1,226
1,074
Apple needs to find a way to separate the CPU portion of the SoC from the GPU portion in their pro workstation, if they decide to continue to offer one. Or, at the bare minimum, allow some sort of motherboard/SoC upgrade path. But Apple will not, because they are "green", right! Right!?

Let's hope the M3 turns the tide a bit.
 

quarkysg

macrumors 65816
Oct 12, 2019
1,247
841
Apple needs to find a way to separate the CPU portion of the SoC from the GPU portion in their pro workstation, if they decide to continue to offer one. Or, at the bare minimum, allow some sort of motherboard/SoC upgrade path. But Apple will not, because they are "green", right! Right!?

Let's hope the M3 turns the tide a bit.
I'm honestly curious: do you think the majority of Apple's customer base will upgrade their Macs?

How is it "green" if Apple were to expend engineering, manufacturing, shipping, warehousing resources, etc., so that maybe a very small portion of their customer base can enjoy upgrading their Macs?
 
  • Like
Reactions: bcortens

leman

macrumors Core
Oct 14, 2008
19,521
19,674
Apple needs to find a way to separate the CPU portion of the SoC from the GPU portion in their pro workstation, if they decide to continue to offer one. Or, at the bare minimum, allow some sort of motherboard/SoC upgrade path. But Apple will not, because they are "green", right! Right!?

They need to make their GPUs faster, that’s it. How they do it doesn’t really matter. If they can improve compute by 4x while implementing hardware RT, the Mac Pro would be a rendering powerhouse.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
Blender and Apple developers go the extra mile because Metal features are tied to the OS. I hope Apple can decouple Metal updates from OS updates (a sketch of the kind of version gating this forces is below).
  • Metal shadow implementation
    • The Metal version we are targeting doesn’t support texture atomics, which is required for shadow rendering. For tile-renderer-based GPUs there is a new approach in development that improves performance on Apple Silicon; Intel/AMD GPUs still need a workaround that emulates texture atomics.
    • Metal 3.1 would support texture atomics, but would require a higher OS version. There are still some options to consider, as we don’t want to push users into an OS update.
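
To illustrate what that coupling looks like from the application side, here is a rough Swift sketch of the kind of runtime gating a renderer ends up writing. The specific conditions (the macOS 14 cutoff for Metal 3.1 and the GPU-family check) are assumptions for illustration, not Blender's actual logic:

```swift
import Metal

// Which shadow-rendering path to take. The real decision in Blender may differ;
// this only sketches the OS + GPU gating that "Metal features tied to the OS"
// forces on a renderer.
enum ShadowPath {
    case nativeTextureAtomics   // assumed to require Metal 3.1, i.e. a macOS 14+ runtime
    case tileBasedApproach      // the in-development path for Apple Silicon's tile-based GPUs
    case emulatedAtomics        // workaround path for Intel/AMD GPUs
}

func pickShadowPath(for device: MTLDevice) -> ShadowPath {
    if #available(macOS 14.0, *) {
        // Metal 3.1 era: assume native texture atomics can be used here.
        return .nativeTextureAtomics
    }
    if device.supportsFamily(.apple7) {
        // Apple Silicon (tile-based deferred renderer): use the TBDR-friendly approach.
        return .tileBasedApproach
    }
    // Older macOS on Intel/AMD GPUs: emulate texture atomics.
    return .emulatedAtomics
}

// Usage:
// let device = MTLCreateSystemDefaultDevice()!
// let path = pickShadowPath(for: device)
```

The point is the shape of the code rather than the exact cutoffs: every Metal feature that ships with a new OS version turns into another branch like this, which is why decoupling Metal from OS updates would help.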
 

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
Not to flame, and this is an ASi thread, but I just ran the Cinebench 2024 GPU test on a 2009 6-core cMP with two Radeon VIIs from early 2019. Result = 8201. It didn't beat the M2 Ultra's 76-core GPU (8860), but it was pretty close.

I think Apple would need 3rd-party GPU support in the context of this thread title, one way or another. AMD might not want to play anymore, Nvidia is out of the question, and Intel can't perform. So either they do their own discrete GPUs, or they bend, beg, and pay.

GPUs scale quite well when you add more of them, in a PC or in a Mac. On an SoC you cannot do that in the same way, it seems. It seems you barely get entry-level 3D rendering crunch out of the most advanced SoC in the world (the ASi Ultra).

I've got this single-CPU, 13-14-year-old cMP with two GPUs released in 2019, and I almost line up with Apple's most powerful GPU in their newest lineup as of today, the M2 Ultra with 76 GPU cores.

That's not working out so well, unfortunately. I'd like Apple to shine on workstation computers too; for me, the 3D/render workstations, please.

PCIe support is the way to go, for GPUs too.
View attachment 2279781
I know wattage is beating a dead horse, but in the context of making Apple’s GPU situation more competitive, I think it’s important context.

Two Radeon VIIs are power-hungry monsters, and newer cards from both top brands are even more so. If Apple were to build some sort of GPU at that wattage, I’d imagine it would be very competitive.

I think it’s more correct to say
They need to make their GPUs faster, that’s it. How they do it doesn’t really matter. If they can improve compute by 4x while implementing hardware RT, the Mac Pro would be a rendering powerhouse.
The real issue is: how do they do this, exactly? A discrete GPU is off the table given ASi’s particular architecture, and an SoC with enough cores to compete with top GPUs would be enormous.

They could put a separate GPU die on an interposer, but that adds latency between the unified memory and the GPU cores, and that’s a key part of ASi’s speed. And if the M2 Ultra is any indication, it won’t scale linearly.
 

leman

macrumors Core
Oct 14, 2008
19,521
19,674
The real issue is: how do they do this, exactly? A discrete GPU is off the table given ASi’s particular architecture, and an SoC with enough cores to compete with top GPUs would be enormous.

They could build a larger multi-SOC module (the rumoured Extreme). Just last week they published a patent describing a quad-SoC setup with high-bandwidth interconnect. They could increase the clocks. They could redesign the GPU cores to have more SIMD units. There are many options.

They could put a separate GPU die on an interposer, but that adds latency between the unified memory and the GPU cores, and that’s a key part of ASi’s speed.

Hardly. The latency is already super high. Not that latency matters that much for GPUs; they can hide it fairly well.

And if the M2 Ultra is any indication, it won’t scale linearly.

The M2 Ultra scales much better than the M1 Ultra. This is technology in development; the scaling will improve as the GPUs improve. BTW, the problem of work distribution in GPUs has been a topic of multiple patents published last year (after the RT patents), so it's possible that this tech is already present in the A17 cores.
 

MRMSFC

macrumors 6502
Jul 6, 2023
371
381
They could build a larger multi-SOC module (the rumoured Extreme). Just last week they published a patent describing a quad-SoC setup with high-bandwidth interconnect. They could increase the clocks. They could redesign the GPU cores to have more SIMD units. There are many options.



Hardly. The latency is already super high. Not that latency matters that much for GPUs; they can hide it fairly well.



The M2 Ultra scales much better than the M1 Ultra. This is technology in development; the scaling will improve as the GPUs improve. BTW, the problem of work distribution in GPUs has been a topic of multiple patents published last year (after the RT patents), so it's possible that this tech is already present in the A17 cores.
I wasn’t aware of the patents, hopefully we’ll see some big improvements over the next few generations then.
 