
EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
Markets change, and it's not like a competent RT implementation requires a massive transistor budget. If Apple were to deliver usable real-time RT acceleration, it would make them the first company offering this feature on a reasonable power budget (unlike Nvidia, where you need a 200W desktop GPU for RT in games to make any sense). That would truly bring raytracing to the masses (at least to the masses of Apple users) and put Apple in a unique position.
This is a perfectly normal tech-enthusiast response. My question though would be - to what purpose?
Even as someone who is interested in technology, who does play games, and who is theoretically capable of looking for specific lighting model peculiarities, I still don't see a meaningful upside.

Apple already has hardware compression for both data and bandwidth, and it's not like upscaling requires any dedicated hardware. As I wrote before, a good hardware RT implementation has to start with a comprehensive work/memory-access reordering solution, which will have benefits way beyond RT alone. Getting good RT performance is not just about computing ray-triangle intersections quickly, but first and foremost about solving the problem of control-flow and data divergence. If this problem can be solved, the GPU suddenly becomes a much more competent programmable processor.
But now you're in a different domain entirely - and in a hypothetical future. It's not that I can't see that there could be benefits - it's that I don't see such hypothetical benefits in lighting models for 3D rendering taking precedence over any other use of the available resources, or even over simply saving gates, engineering effort and money.
Even as a tech nerd, efficiency and real user benefit are the criteria I'd like to see optimized for. And RT demonstrates neither.
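
(For reference, the "work/memory access reordering" idea quoted above boils down to something like this - a minimal CPU-side sketch with hypothetical types, nothing vendor-specific: rays that hit different materials want different shading code, and a SIMD machine pays for every divergent branch unless the work is grouped first.)

```cpp
// Toy model: each RayHit is a work item; material decides which "shader" runs.
// On a SIMD GPU, neighbouring items taking different branches forces both
// branches to execute. Sorting (reordering) items by material first keeps
// each contiguous batch on the same branch. Hypothetical types only.
#include <algorithm>
#include <cstdio>
#include <vector>

struct RayHit { int material; float t; };

float shade_diffuse(const RayHit&) { return 0.5f; }  // stand-in shaders
float shade_metal(const RayHit&)   { return 0.9f; }

int main() {
    // Incoherent stream: materials interleaved -> divergent batches.
    std::vector<RayHit> hits = {{1, 2.f}, {0, 1.f}, {1, 3.f}, {0, 2.f}, {1, 1.f}, {0, 4.f}};

    // The "reordering" step: group work by material before shading.
    std::stable_sort(hits.begin(), hits.end(),
                     [](const RayHit& a, const RayHit& b) { return a.material < b.material; });

    float total = 0.f;
    for (const RayHit& h : hits)
        total += (h.material == 0) ? shade_diffuse(h) : shade_metal(h);
    std::printf("shaded %zu hits, total = %.2f\n", hits.size(), total);
}
```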
 

leman

macrumors Core
Oct 14, 2008
19,518
19,668
This is a perfectly normal tech-enthusiast response. My question though would be - to what purpose?
Even as someone who is interested in technology, who does play games, and who is theoretically capable of looking for specific lighting model peculiarities, I still don't see a meaningful upside.



But now you're in a different domain entirely - and in a hypothetical future. It's not that I can't see that there could be benefits - it's that I don't see such hypothetical benefits in lighting models for 3D rendering taking precedence over any other use of the available resources, or even over simply saving gates, engineering effort and money.
Even as a tech nerd, efficiency and real user benefit are the criteria I'd like to see optimized for. And RT demonstrates neither.

Rasterization is an incredibly successful method, but it’s not the end of the line. I have little doubt that rasterization will be replaced as the primary means of producing 3D graphics within the next few decades. RT-like approaches seem like a logical candidate because it just makes sense for graphics. I mean, I don’t know whether RT is the future of computer graphics or whether we will just use generic compute and everything will go back to the good old “software” days. But RT makes things conceptually simpler, more logical and more flexible. At any rate, future graphics applications will require more flexible hardware, and whoever ceases to innovate in this direction will find themselves irrelevant on the market before long…

And regarding efficiency and end user benefits… these are moving targets. As the hardware gets faster and resources get cheaper, things that were once impractical become feasible. Computer graphics is essentially the art of hacking together a solution that works, and in the last two decades we went through generations of such hacks. I mean, not too long ago people were saying that per-pixel lighting is a waste of resources and that fixed-function per-vertex processing is enough. We used rectangular alpha-tested textures to approximate 3D geometry because from afar it looks acceptable. We had a hardware-accelerated Phong lighting model because that was the efficient thing to do. Today we are doing rasterization of micropolygons in compute shaders and per-pixel sorting of geometry layers to do order-independent transparency. Neither of these things is efficient. Nor would they have been possible ten years ago.
 
  • Like
Reactions: T'hain Esh Kelch

jujoje

macrumors regular
May 17, 2009
247
288
So is it worth it for Apple to add dedicated RT hardware exclusively for the benefit of those few who use Macs for professional rendering (as opposed to benchmarketing using Blender)?

Wouldn't raytracing be useful for AR, particularly for compositing models into the environment? If Apple were to add raytracing, that seems like a use for it outside games and Blender that might justify the investment. At the moment the lighting is what breaks the illusion more than anything else with AR...
 

EntropyQ3

macrumors 6502a
Mar 20, 2009
718
824
Rasterization is an incredibly successful method, but it’s not the end of the line. I have little doubt that rasterization will be replaced as the primary means of producing 3D graphics within the next few decades. RT-like approaches seem like a logical candidate because it just makes sense for graphics. I mean, I don’t know whether RT is the future of computer graphics or whether we will just use generic compute and everything will go back to the good old “software” days. But RT makes things conceptually simpler, more logical and more flexible. At any rate, future graphics applications will require more flexible hardware, and whoever ceases to innovate in this direction will find themselves irrelevant on the market before long…

And regarding efficiency and end user benefits… these are moving targets. As the hardware gets faster and resources get cheaper, things that were once impractical become feasible. Computer graphics is essentially the art of hacking together a solution that works, and in the last two decades we went through generations of such hacks. I mean, not too long ago people were saying that per-pixel lighting is a waste of resources and that fixed-function per-vertex processing is enough. We used rectangular alpha-tested textures to approximate 3D geometry because from afar it looks acceptable. We had a hardware-accelerated Phong lighting model because that was the efficient thing to do. Today we are doing rasterization of micropolygons in compute shaders and per-pixel sorting of geometry layers to do order-independent transparency. Neither of these things is efficient. Nor would they have been possible ten years ago.
My not too well expressed point is that the next couple of decades of lithographic process evolution are not going to produce the same kind of advances as the previous couple. Not by a long, long shot. Predictions based on the assumption that we’ll see progress “as usual” are, I feel, in for a rude awakening, particularly for mobile devices, where the power draw is fixed or preferably even reduced in the future.
Hardware efficiency is key for future advancement, and the software side of the solution will in a sense grow in importance - but it will no longer be able to rely on lithographic advancement to solve its efficiency issues.

I won’t pretend to know what this means for realtime RT specifically. At present, even dedicated graphics cards with power budgets of hundreds of Watts have issues with adequate RT performance even for a subset of the lighting, and the hacks used to bolster performance to acceptable levels carry their own costs in visual compromises. It’s difficult for me to see this as a promising way forward in an application space with mobile computing constraints and a customer base that has little to no reason to care how things are done under the hood. It may happen anyway - in a stagnating market, manufacturers are pushed to come up with sales arguments, valid or not. But if I wear my consumer hat, I’d prefer folded camera optics, or longer battery life, or faster networking, or … for my money.
 

jav6454

macrumors Core
Nov 14, 2007
22,303
6,263
1 Geostationary Tower Plaza
Not at all, I don’t need that stuff. I mean, I do some GPU graphics programming as a hobby and RT is fun to play with, but that’s about it.




Ah, but don’t discount user psychology. RT, overhyped as it might be, is the latest fashion. Even ARM GPUs do RT now, even if it’s a joke. Apple not providing hardware RT makes them look like they are behind in technology, and that’s a problem for them.

Besides, what other GPU features would you consider a priority right now? Apple is already ahead of the curve working on everything else, especially when you consider the software stack. A competent hardware RT solution might even bring benefits beyond RT, as it needs some sort of sophisticated work item reordering unit. I’m only speculating at this point, but maybe the same hardware can be used to make compute more efficient.
I don’t discount user hype. But why waste resources on R&D and then increase the price of a chip just to include a feature 99% of people won’t use? Like I said, it’s a nice-to-have feature, not a must-have.
 

jav6454

macrumors Core
Nov 14, 2007
22,303
6,263
1 Geostationary Tower Plaza
It's not about how many times you use it, it's about whether 3D artists could make more money with a Mac Studio with hardware-based ray tracing and whether Apple would sell more Mac Studios.

What else do you want Apple to develop?
Again, you are talking about highly specific users that don’t use MacBooks, iPads or even iPhones for such things. These are users on the Mac Pro line. There I would argue RT should stick to the Mx Ultra and above chips.
 

leman

macrumors Core
Oct 14, 2008
19,518
19,668
My not too well expressed point is that the next couple of decades of lithographic process evolution are not going to produce the same kind of advances as the previous couple. Not by a long, long shot. Predictions based on the assumption that we’ll see progress “as usual” are, I feel, in for a rude awakening, particularly for mobile devices, where the power draw is fixed or preferably even reduced in the future.
Hardware efficiency is key for future advancement, and the software side of the solution will in a sense grow in importance - but it will no longer be able to rely on lithographic advancement to solve its efficiency issues.

That's even more reason to work smarter rather than harder. Apple already does this for rasterization, which allows them to compete with much larger GPUs. Why not try to improve things for complex compute as well?

I don’t discount user hype. But why waste resources on R&D and then increase the price of a chip just to include a feature 99% of people won’t use? Like I said, it’s a nice-to-have feature, not a must-have.

You can make the same argument for most features Apple currently ships in their GPUs or CPUs. Why bother with AMX? It just adds more complexity and is only useful for people doing large matrix multiplication. Sparse textures? They only bring unnecessary complexity to the memory controller. Tile shaders? An Apple-specific gimmick at best. GPU matmul intrinsics? Why bother, too niche anyway. GPU-driven rendering? A waste of driver developers' salaries, just like mesh shaders (no games use those either). Even tile-based deferred rendering - why invest so much money into the R&D and making it all work right if only games will benefit from it? It's not like there are any demanding games on the iPhone or the Mac. Android does just fine with much simpler approaches :)

Jokes aside, I fully agree that hardware RT is not worth it if it will make the GPUs more expensive and require a tremendous amount of energy to be even remotely useful. But if hardware RT means more efficient complex compute shaders and the possibility to use new real-time rendering approaches while simultaneously simplifying the code, why not?
 

jav6454

macrumors Core
Nov 14, 2007
22,303
6,263
1 Geostationary Tower Plaza
[...]
You can make the same argument for most features Apple currently ships in their GPUs or CPUs. Why bother with AMX? It just adds more complexity and is only useful for people doing large matrix multiplication. Sparse textures? They only bring unnecessary complexity to the memory controller. Tile shaders? An Apple-specific gimmick at best. GPU matmul intrinsics? Why bother, too niche anyway. GPU-driven rendering? A waste of driver developers' salaries, just like mesh shaders (no games use those either). Even tile-based deferred rendering - why invest so much money into the R&D and making it all work right if only games will benefit from it? It's not like there are any demanding games on the iPhone or the Mac. Android does just fine with much simpler approaches :)

Jokes aside, I fully agree that hardware RT is not worth it if it will make the GPUs more expensive and require a tremendous amount of energy to be even remotely useful. But if hardware RT means more efficient complex compute shaders and the possibility to use new real-time rendering approaches while simultaneously simplifying the code, why not?
I didn't discard RT entirely, just make it specific to the Mx Ultra and above. Anything Mx- or Ax-related is wasted.
 

leman

macrumors Core
Oct 14, 2008
19,518
19,668
I didn't discard RT entirely, just make it specific to the Mx Ultra and above. Anything Mx- or Ax-related is wasted.

It's possible that Apple will introduce different hardware capabilities for the consumer and prosumer lines, but IMO it's quite unlikely. Not their style, and probably too much work to design two separate lines. I doubt that Apple will be interested in an RT solution that requires significant die space and only offers limited functionality. They like things to synergise and work together and they have been merciless in purging everything superfluous from their hardware. They have almost no traditional fixed-function rendering hardware (save for the rasterizer and texture filtering), no FP64 — almost all the cruft has been culled. If we ever see hardware RT from Apple, it will probably be in the form of a redesigned hardware thread scheduler and memory subsystem, with some changes to the GPU execution model as well.

Not that we don't have precedent. There are many features one could see as wasteful on A- and M-series hardware (AMX, matmul GPU intrinsics, the wide FP backend, 64-bit GPU integer support), and yet they were first introduced on the iPhone. And it's not like they take up that much die space. BTW, I have a suspicion that Apple is going to remove AMX in the future and replace it with low-latency GPU programs.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
I'd rather see Apple spending their transistor budget on features that have wide immediate usage or wide applicability. If that went into 3D graphics at all, then spending it on features that, for instance, lighten the load in terms of asset storage, or enable more efficient high-quality upscaling, seems a lot more useful.
Are you also against the hardware-based ProRes encoder/decoder?

Even as a tech nerd, efficiency and real user benefit are the criteria I'd like to see optimized for. And RT demonstrates neither.
Is a 40% reduction in rendering time a real benefit for 3D artists? Because Nvidia's ray tracing cores reduce rendering time in Blender by 40%.
 

Gerdi

macrumors 6502
Apr 25, 2020
449
301
Jokes aside, I fully agree that hardware RT is not worth it if it will make the GPUs more expensive and require a tremendous amount of energy to be even remotely useful.

So what would be the alternative to hardware RT? If there are efficiency concerns, I doubt that software RT would be the solution.
Rasterization has simply hit a wall - it is inherently limited. The only information you can get in the fragment shader comes from texture lookups or some limited data passed on from the geometry stage. Therefore, in order to even roughly approximate lighting you have to resort to multiple render passes - which at some point gets more expensive than raytracing - and that is before you reach the image quality potential and precision of raytracing.
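
To make the contrast concrete, here is a toy CPU-side sketch (hypothetical one-sphere scene, illustration only): a raster-style shader can only fall back on whatever was baked or passed in, while a ray query can ask the actual scene what a reflected ray hits.

```cpp
// Toy contrast between the two models. The "raster" path stands in for a
// fragment shader that can only read precomputed data (an environment value
// here); the "traced" path can query the scene itself for a mirror reflection.
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Sphere { Vec3 c; float r; float brightness; };

// Closest-hit query against the "scene" (a single sphere). Assumes |d| == 1.
static std::optional<float> hit(const Sphere& s, Vec3 o, Vec3 d) {
    Vec3 oc = sub(o, s.c);
    float b = dot(oc, d);
    float c = dot(oc, oc) - s.r * s.r;
    float disc = b * b - c;
    if (disc < 0.0f) return std::nullopt;   // ray misses
    float t = -b - std::sqrt(disc);
    if (t <= 0.0f) return std::nullopt;     // hit is behind the ray origin
    return t;
}

static float env_lookup(Vec3 /*dir*/) { return 0.2f; }  // baked approximation

int main() {
    Sphere bright_object{{0, 0, -5}, 1.0f, 1.0f};  // what we'd like to see reflected
    Vec3 p{0, 0, 0}, reflected{0, 0, -1};          // shaded point, reflected direction

    // Raster-style shading: no scene access, fall back on the baked value.
    float raster = env_lookup(reflected);

    // Ray-traced shading: ask the scene what the reflected ray actually hits.
    float traced = hit(bright_object, p, reflected) ? bright_object.brightness
                                                    : env_lookup(reflected);

    std::printf("raster approximation = %.2f, ray traced = %.2f\n", raster, traced);
}
```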
 

leman

macrumors Core
Oct 14, 2008
19,518
19,668
So what would be the alternative to hardware RT? If there are efficiency concerns, I doubt that software RT would be the solution.
Rasterization has simply hit a wall - it is inherently limited. The only information you can get in the fragment shader comes from texture lookups or some limited data passed on from the geometry stage. Therefore, in order to even roughly approximate lighting you have to resort to multiple render passes - which at some point gets more expensive than raytracing - and that is before you reach the image quality potential and precision of raytracing.

If @EntropyQ3 is right and we’ve hit a transistor wall, there simply isn’t any. If there won’t be any performance increases going forward, current rasterization is as good as it gets.

Now, I don’t share their pessimism. Even if progress within the semiconductor industry slows down to a crawl (and I’m not convinced it will), I doubt that the industry is out of tricks to try. Just look at innovations like mesh shaders, which completely change how one approaches traditional rasterization.
 

stevemiller

macrumors 68020
Oct 27, 2008
2,057
1,607
This all reminds me of the early days of video streaming, and people claiming that watching videos over the internet was too niche of an audience vs cable/Blu-ray, and how impossible it would be to get a 1080p stream of viewable quality.

While I’m not personally sold on the future of spatial computing as currently imagined, there’s lots of chatter that Apple is looking to release some version of an AR/VR device in the near future. Whether they do a direct analog of RTX or some other technology, I doubt any of the major tech firms are resting on their laurels with regards to visualizing and interacting with virtual objects these days. Even if their first use cases will be Memoji nonsense.

Lastly, Apple’s historical roots largely targeted various types of digital artists. So I’m personally still holding out hope they add 3D artists under the umbrella of efforts they currently extend to musicians, photographers, graphic designers and video editors.
 
  • Like
Reactions: singhs.apps

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
This all reminds me of the early days of video streaming, and people claiming that watching videos over the internet was too niche of an audience vs cable/Blu-ray, and how impossible it would be to get a 1080p stream of viewable quality.

While I’m not personally sold on the future of spatial computing as currently imagined, there’s lots of chatter that Apple is looking to release some version of an AR/VR device in the near future. Whether they do a direct analog of RTX or some other technology, I doubt any of the major tech firms are resting on their laurels with regards to visualizing and interacting with virtual objects these days. Even if their first use cases will be Memoji nonsense.

Lastly, Apple’s historical roots largely targeted various types of digital artists. So I’m personally still holding out hope they add 3D artists under the umbrella of efforts they currently extend to musicians, photographers, graphic designers and video editors.
They have been targeting the 3D folks for a while now. One of the earliest bits of hype for AS was showcasing a scene in the Maya viewport. And if I remember correctly, the iMac Pro launch too showcased some 3D VFX.
The tcMP also showcased Mari way back a decade ago.

Apple’s GPU effort is partly aimed at its AR/VR ambitions, and I think part of that relies on accurate and believable graphics, where RT would show its worth. They wouldn’t invest in their own Metal API if graphics weren’t a high area of focus for Apple.

Adobe itself wouldn’t be investing in 3D if it wasn’t an area they see growth in.
The real world is 3D, that’s what humans and others have evolved to experience.

Think of what AR/VR would do to the world of education.
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
They like things to synergise and work together and they have been merciless in purging everything superfluous from their hardware. They have almost no traditional fixed-function rendering hardware (save for the rasterizer and texture filtering), no FP64 — almost all the cruft has been culled.
I don't mean to take the thread too OT, but I wouldn't describe FP64 as "cruft" generally—it's still central to a lot of scientific HPC (even with the occasional moves to FP32, e.g., https://www.irishtimes.com/news/sci...r-predictions-by-reducing-precision-1.4548084).

Then again, while Macs are particularly popular among scientists for their personal/professional use, and many scientists thus use Macs to do development work for programs they send to computer clusters*, I suppose Apple doesn't see scientific computing as the main market for its upper-tier machines. [*This would be more for CPU computing than GPU computing. For the latter, they'll probably use NVIDIA/CUDA.]
 

jujoje

macrumors regular
May 17, 2009
247
288
They have been targeting the 3D folks for a while now. One of the earliest bits of hype for AS was showcasing a scene in the Maya viewport. And if I remember correctly, the iMac Pro launch too showcased some 3D VFX.
I think the majority of the iMac Pro promo videos were done in Houdini; that played no small part in me getting an iMac Pro :)

I suspect that was the point (or a year or so earlier) where they started the push for 3D apps on macOS (also being around the time they committed to a new Mac Pro and formed a more structured pro apps team). Houdini on AS has been surprisingly great - the typical 20-40% increase from moving away from Rosetta, and rock stable. Maya unfortunately is still MIA, with Autodesk claiming it is looking into a native version but not committing. Autodesk and The Foundry are really the two big holdouts from a VFX perspective.

GPU-wise, other than raytracing, I think the main thing that would benefit pro apps is compatibility with Vulkan - depressingly, I suspect that when/if Houdini/Maya et al. move away from OpenGL viewports it will be to Vulkan across the board rather than a Mac-specific Metal viewport.

Apple’s GPU effort is partly aimed at its AR/VR ambitions, and I think part of that relies on accurate and believable graphics, where RT would show its worth. They wouldn’t invest in their own Metal API if graphics weren’t a high area of focus for Apple.

Adobe itself wouldn’t be investing in 3D if it wasn’t an area they see growth in.
The real world is 3D, that’s what humans and others have evolved to experience.

Think of what AR/VR would do to the world of education.

Definitely agree that Apple's AR ambitions are driving their investment in 3D - all that content has to be generated somewhere. I'm not particularly convinced by VR (still get a 3D TV vibe; prove me wrong, Apple), but I think that AR has a lot of potential, as you say, particularly for education, but also across a fair few other industries. And, as mentioned before, improved lighting models are central to making it believable, which is where the raytracing comes in. Although, like the year of the Linux desktop, the year of AR always seems to be a year or two in the future...
 

theorist9

macrumors 68040
May 28, 2015
3,880
3,059
Maya unfortunately is still MIA, with Autodesk claiming it is looking into a native version but not committing. Autodesk and The Foundry are really the two big holdouts from a VFX perspective.
Naive question: Is part of the reason we don't yet see Autodesk on AS that Autodesk incorporates tools like Moonray*, which is designed for Intel Advanced Vector Extensions, making it challenging to create an optimized port?


"In our shading system, each vector lane represents an intersection point to be shaded. For example, on AVX2 hardware, 8 different intersection points are passed into the shade function by the renderer...."

[*The author says that Maya uses Moonray, so I assume that includes Autodesk, but he wasn't specific.]
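
For anyone unfamiliar with the quoted "one intersection point per vector lane" approach, the general shape is roughly the following (structure-of-arrays batches so eight hits can be shaded in lockstep; hypothetical types, not Moonray's actual API):

```cpp
// Sketch of lane-per-hit shading: lay hit data out as structure-of-arrays so
// a vectorizing compiler (or explicit AVX2/ISPC code) can shade 8 points at once.
#include <cstddef>
#include <cstdio>

constexpr std::size_t kLanes = 8;  // AVX2: 8 x float32 per register

// SoA layout: one array slot per lane, i.e. per intersection point to shade.
struct HitBatch {
    float nx[kLanes], ny[kLanes], nz[kLanes];   // shading normals
    float lx[kLanes], ly[kLanes], lz[kLanes];   // unit direction to the light
    float out[kLanes];                          // shaded result
};

// Same straight-line code for every lane: no divergence, easy to vectorize.
void shade_lambert(HitBatch& b) {
    for (std::size_t i = 0; i < kLanes; ++i) {
        float ndotl = b.nx[i] * b.lx[i] + b.ny[i] * b.ly[i] + b.nz[i] * b.lz[i];
        b.out[i] = ndotl > 0.0f ? ndotl : 0.0f;
    }
}

int main() {
    HitBatch b{};
    for (std::size_t i = 0; i < kLanes; ++i) {
        b.nz[i] = 1.0f;                    // all normals facing +z
        b.lz[i] = 1.0f - 0.1f * float(i);  // light direction varies per lane
    }
    shade_lambert(b);
    std::printf("lane 0 = %.2f, lane 7 = %.2f\n", b.out[0], b.out[7]);
}
```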
 
Last edited:

jujoje

macrumors regular
May 17, 2009
247
288
I might be wrong, but from a quick Google I think that Moonray is DreamWorks' proprietary renderer that plugs into Maya, rather than being integrated into Maya (first time I'd heard of Moonray, so I might be wrong - the pdf was pretty interesting :)).

I think RenderMan has a whole lot of x86/AVX2 optimisations as well - I'm not sure how much that affects render speed, but it must be a consideration when porting things to Apple Silicon. As a side note, RenderMan XPU is heavily Nvidia-focused, so I'm pretty sure we're not going to see that any time soon. Given Pixar's history, it sucks somewhat.

The other thing that might be holding up Autodesk is third-party libraries. From what I understand, Houdini uses a fair few open-source libraries, so part of the delay in getting the Apple Silicon port was getting those libraries built for Arm. I suspect that Maya has a lot more technical debt and legacy to contend with as well...
 
  • Like
Reactions: theorist9

throAU

macrumors G3
Feb 13, 2012
9,198
7,346
Perth, Western Australia
  1. The vast majority of apple computers sold are ultraportable laptops
  2. Ray tracing is not computationally cheap
Metal ray tracing is coming, but the hardware still isn't there to make it worthwhile in a 30-100 watt ultra-portable from any vendor just yet. Especially when rasterization will give you much better results at the low end (in terms of frame rate and power consumption). I've got RT hardware and have played games with RT on, and whilst it's nicer... it's a big frame rate hit even on high-end hardware.


Edit:
Not saying RT is bad or that we won't have it in coming years, but the power budget required to do a good job in a portable is still too much. It's still a struggle on 200-300 watt desktop-class discrete GPUs. Even Nvidia is still relying on hackery with image upscaling and limited use of RT to get it over the line on the desktop.

It's coming, just not yet.
 
Last edited:
  • Like
Reactions: jujoje

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
I suspect that when/if Houdini/Maya et al. move away from OpenGL viewports it will be to Vulkan across the board rather than a Mac-specific Metal viewport.

Blender's experience can help us predict what other programs can do. Although the Blender developers wanted to modernize the viewport using Vulkan, it looks like they will use Metal and OpenGL 4 for now because developing a Vulkan backend is difficult.

Unlike OpenGL, Metal and Vulkan have ray tracing capabilities. What does this mean for the viewport?
 

exoticSpice

Suspended
Original poster
Jan 9, 2022
1,242
1,952
Thanks for the replies so far guys. Hopefully Apple adds RT to the Pro and Max chips either for the M2 Gen or M3 Gen.
 

leman

macrumors Core
Oct 14, 2008
19,518
19,668
I don't mean to take the thread too OT, but I wouldn't describe FP64 as "cruft" generally—it's still central to a lot of scientific HPC (even with the occasional moves to FP32, e.g., https://www.irishtimes.com/news/sci...r-predictions-by-reducing-precision-1.4548084).

Then again, while Macs are particularly popular among scientists for their personal/professional use, and many scientists thus use Macs to do development work for programs they send to computer clusters*, I suppose Apple doesn't see scientific computing as the main market for its upper-tier machines. [*This would be more for CPU computing than GPU computing. For the latter, they'll probably use NVIDIA/CUDA.]

FP64 support requires a non-trivial amount of precious transistor budget, so Nvidia's and AMD's solution has been to implement it as part of the special function unit, with dramatically lower throughput compared to the normal ALUs. In other words, it’s very, very slow unless you get a specialized data center GPU like the A100, but that’s a completely different category.

If you need extended precision and want to use the GPU, you are almost always better off using numerical precision-extending algorithms like the double-float technique. It’s going to be faster than FP64 on GPUs, and you can tweak it to suit your numerical needs. And Metal is C++, so creating custom types with overloaded operators and compile-time transformations is easy.
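
As a rough illustration of the double-float idea (an unevaluated sum of two floats plus error-free transforms), here is a plain C++ sketch - not any particular library's implementation, and a real version needs care around fast-math flags and FMA:

```cpp
// "Double-float": represent a value as an unevaluated sum of two floats
// (hi + lo) and use error-free transforms to carry the rounding error.
// Sketch only; assumes strict IEEE float evaluation (no fast-math).
#include <cstdio>

struct dfloat { float hi, lo; };

// Knuth two-sum: returns a + b exactly as the pair (hi, lo).
static dfloat two_sum(float a, float b) {
    float s   = a + b;
    float bb  = s - a;
    float err = (a - (s - bb)) + (b - bb);
    return {s, err};
}

static dfloat df_add(dfloat a, dfloat b) {
    dfloat s = two_sum(a.hi, b.hi);
    s.lo += a.lo + b.lo;
    return two_sum(s.hi, s.lo);   // renormalize
}

int main() {
    // 1e-9 vanishes when added to 1.0f in plain float,
    // but survives in the lo component of a double-float.
    dfloat x   = {1.0f, 0.0f};
    dfloat eps = {1e-9f, 0.0f};
    dfloat y   = df_add(x, eps);
    std::printf("hi = %.9g, lo = %.12g (plain float: %.9g)\n",
                y.hi, y.lo, 1.0f + 1e-9f);
}
```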
 

tmoerel

Suspended
Jan 24, 2008
1,005
1,570
AMD, Nvidia and Intel have it on their GPUs, and now ARM itself is coming out with a HW RT GPU next year.
Hopefully the A16 has it, otherwise Apple is going to lag behind.

HW RT is really useful in Blender and games.
Blender is only used by a small minority and gaming is irrelevant. That is why RT is not implemented.
 
  • Like
Reactions: diamond.g