
iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
Regarding energy: currently, a top-of-the-line Threadripper consumes around 280W under all-core load, while a 3090 consumes around 360-370W. Yet the 3090 can only best the Threadripper by a thin margin (based on tests of Vray's CUDA port for CPUs).
But price gouging notwithstanding, you can get almost three 3090s for the price of a Threadripper WX… and get 3x the rendering speed (assuming your render job can fit on the 3090) with 4x the power consumption.
Prices of CPUs are horrible. Perhaps I am stupid, but the fabrication cost of a Xeon/Threadripper and a GPU should be the same as long as they have the same physical size and the same node, so it is either IP costs (licensing) or just poor competition in the CPU market that explains the price. I think it is the latter.
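
Putting back-of-the-envelope numbers on that (the wattages and the "thin margin" are the figures above; the 3090 count comes from the price comparison):

```python
# Back-of-the-envelope perf/watt maths using the figures quoted above.
tr_watts, tr_speed = 280, 1.0        # top-end Threadripper, all-core load
gpu_watts, gpu_speed = 365, 1.1      # RTX 3090, besting the TR by a thin margin

n_gpus = 3                           # "almost 3 3090s for the price of a TR WX"
multi_speed = n_gpus * gpu_speed     # rendering scales ~linearly across GPUs
multi_watts = n_gpus * gpu_watts

print(f"3090 perf/watt vs TR: {(gpu_speed / gpu_watts) / (tr_speed / tr_watts):.2f}x")
print(f"{n_gpus}x 3090: {multi_speed / tr_speed:.1f}x speed "
      f"at {multi_watts / tr_watts:.1f}x power")   # ~3.3x speed at ~3.9x power
```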
 

singhs.apps

macrumors 6502a
Oct 27, 2016
660
400
Prices of CPUs are horrible. Perhaps I am stupid, but the fabrication cost of a Xeon/Threadripper and a GPU should be the same as long as they have the same physical size and the same node, so it is either IP costs (licensing) or just poor competition in the CPU market that explains the price. I think it is the latter.
It’s all-round horrible in tech ATM, not just CPUs and GPUs. But yeah, once more factories come online, things might ease up... might. Because you can never have enough compute power... not the way data is exploding exponentially.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
Things are not exactly like this :)
About path tracing, well... thank God modern engines use all sorts of trickery to avoid having the light emitted by a bulb travel kilometres away, which would result in immense computing times. Once a ray reaches a certain threshold and no longer contributes significantly to the scene, it is cut off.
About memory usage, only progressive rendering needs to keep all the assets loaded in memory at all times. Vray's bucket sampler, for example, can split the scene into smaller parts and assign each part to a CPU thread or a GPU. This saves RAM, since not all assets need to be in memory constantly; they can be loaded and unloaded on demand depending on what the bucket is rendering. Incidentally, bucket rendering not only consumes less RAM, it is also a bit faster than progressive rendering.

Buckets are a way of batching individual samples that has absolutely nothing to do with splitting up the scene itself (though it may aid cache coherence, because rays that are coherent when fired are likely to remain somewhat coherent).

The amount of memory used at this stage is trivial compared to the scene itself.

It's like I said, if you can fit the scene in memory, then the task becomes embarrassingly parallel and you can delegate individual samples to as many GPUs or CPUs as you like. The problem comes when the scene itself doesn't fit in memory.
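
A minimal sketch of that "embarrassingly parallel" structure, assuming the scene fits in memory - a bucket here is nothing more than a batch of pixels handed to a worker, and every worker shares one read-only copy of the scene (the scene dict, trace() and render_bucket() are invented stand-ins, not any renderer's API):

```python
# Each worker gets a bucket (a batch of pixels), never a piece of the scene.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

WIDTH, HEIGHT, BUCKET = 64, 64, 16
SCENE = {"triangles": [...], "lights": [...]}   # one copy, shared read-only

def trace(scene, x, y):
    return 0.5  # placeholder radiance for pixel (x, y)

def render_bucket(origin):
    bx, by = origin
    return [(x, y, trace(SCENE, x, y))
            for x in range(bx, bx + BUCKET)
            for y in range(by, by + BUCKET)]

if __name__ == "__main__":
    buckets = product(range(0, WIDTH, BUCKET), range(0, HEIGHT, BUCKET))
    with ProcessPoolExecutor() as pool:   # workers could just as well be GPUs
        image = [px for bucket in pool.map(render_bucket, buckets) for px in bucket]
```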
 

jujoje

macrumors regular
May 17, 2009
247
288
Buckets are a way of batching individual samples that has absolutely nothing to do with splitting up the scene itself (though it may aid cache coherence, because rays that are coherent when fired are likely to remain somewhat coherent).

Well, they can (hello, micropolygon renderers), but I guess that's probably a thing of the past :)

Much of the time, the difference between fitting things in memory and having epic render times comes down to fairly trivial optimisations. You can render billions of polygons, thousands of lights and hundreds of textures on relatively little memory just through good optimisation - I've had render times for some shots go from 24 hours a frame to ~1 hour just by handing the lighting to someone who knew how to optimise well for the render engine (a bit of instancing, mipmapped textures, optimising RAM usage, etc. - the sketch below puts toy numbers on the instancing part).
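
To put toy numbers on the instancing part of that (all figures invented for illustration):

```python
# Replacing full copies of a heavy asset with one shared mesh plus
# per-instance transforms is often the single biggest RAM win.
verts_per_mesh = 1_000_000
bytes_per_vert = 24                 # float3 position + float3 normal
n_instances = 10_000                # e.g. houses on a gigantic wall

duplicated = n_instances * verts_per_mesh * bytes_per_vert
instanced = verts_per_mesh * bytes_per_vert + n_instances * 64  # one 4x4 matrix each

print(f"duplicated: {duplicated / 2**30:.1f} GiB")  # ~223.5 GiB
print(f"instanced:  {instanced / 2**30:.3f} GiB")   # ~0.023 GiB
```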

The goal is the real-time (always has been) illusion of reality for CG. Ray tracing is currently the best option for mimicking light behaviour, which is where the dumb GPU cores, orders of magnitude greater in number, can brute-force the vectors - camera-based or bidirectional tracing.

So cheating is built into CG… but it can get difficult if you aren’t ‘rendering’ a scene for a fixed, controlled POV (that is, in games, VR, etc.).

Speaking of real time, I've always been impressed and moderately perplexed by the optimisations that go into game dev and getting everything working in real time, from lighting to asset prep - it's a really next-level form of cheating.

I went to WWDC in 2018 and they had a talk by one of the artists from Pixar who had worked on Coco, where the backdrop to most scenes was a gigantic wall of thousands of houses, each one covered in hundreds of tiny coloured decorative lights.

I remember seeing some of a presentation on that; it was pretty crazy, and the scene looked awesome. Filed it under "Pixar's gonna Pixar": doing something crazy just because they can, much like deciding to render all the sand grains in Piper as instances with SSS. It looked amazing, but still… crazy.
 
Reactions: singhs.apps

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
What is the use case for Manifold Next Event Estimation? It seems Blender could get it soon.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
What is the use case for Manifold Next Event Estimation? It seems Blender could get it soon.
There are essentially two ways of path-tracing. "Forward tracing" is similar to how things work in reality: you fire rays from lights, bounce them around the scene, and when they hit the "film" of your camera, you record the light contribution.

The issue with this is that, as you can imagine, 99% of rays never hit the camera, which is a huge waste. That's why the standard approach is backward tracing, where you start from the camera and bounce around the scene until you hit a light.

This suffers from the opposite problem: most rays never actually hit the light. That's manageable for ordinary surfaces, because you can sample lights using probability distributions, but for stuff like caustics it's a huge problem.

You can try to do caustics by tracing in both directions and then combining the results, but it tends to be slow and noisy. An algorithm like MNEE tries to give the light rays a helping hand, a bit like an aim-bot: if a ray almost hits something interesting like a light, MNEE will "fix" its aim and shoot the bounce directly at the light.
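
Here's a toy sketch of plain next event estimation (NEE), the simpler idea MNEE builds on: at every bounce you also fire one deliberate "shadow ray" straight at the light rather than waiting for a lucky hit. MNEE's extra trick (not shown) is bending that connection through refractive surfaces so it still reaches the light. Everything below (the 2D scene, sample_light(), trace_path()) is invented for illustration:

```python
# Toy 2D diffuse path tracer with next event estimation at each bounce.
import math
import random

LIGHT_POS, LIGHT_POWER = (0.0, 5.0), 50.0

def visible(p, q):
    return True  # toy scene with no occluders; a real tracer casts a shadow ray

def sample_light(point):
    """Aim deliberately at the light - the 'aim-bot' part."""
    dx, dy = LIGHT_POS[0] - point[0], LIGHT_POS[1] - point[1]
    if not visible(point, LIGHT_POS):
        return 0.0
    return LIGHT_POWER / (4 * math.pi * (dx * dx + dy * dy))  # inverse-square falloff

def trace_path(point, albedo=0.7, max_bounces=4):
    radiance, throughput = 0.0, 1.0
    for _ in range(max_bounces):
        radiance += throughput * sample_light(point)  # NEE: connect to the light
        throughput *= albedo
        if throughput < 0.01:   # the contribution-threshold cutoff mentioned earlier
            break
        # Random diffuse bounce (stand-in for proper BSDF sampling).
        angle = random.uniform(0.0, 2 * math.pi)
        point = (point[0] + math.cos(angle), point[1] + math.sin(angle))
    return radiance

print(sum(trace_path((0.0, 0.0)) for _ in range(1_000)) / 1_000)
```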
 

jujoje

macrumors regular
May 17, 2009
247
288
Building refractive caustics:
https://www.ics.uci.edu/~yug10/projects/translucent/papers/Hanika_et_al-2015-Computer_Graphics_Forum.pdf is one of many attempts to compute this class of rays quickly. My experience tells me that no matter what you use, "easy" refractive caustics methods are always slow when it comes to rendering real scenes.

Still feel that baking caustics is probably the way to go. I don't think any renderer really has good non-baked caustics that render in reasonable time (although I think the new version of Octane was boasting massive improvements there).

What is the use case for Manifold Next Event Estimation? It seems Blender could get it soon.

The practical use case for MNEE is blood, sweat and tears (usually what rendering caustics leads to :D); it seems primarily designed for the small-scale refractive caustics you get in droplets of liquid, or the meniscus at the corner of the eye.

I think Renderman has had it since v22 (I vaguely recall ILM talking about it being used in one of the Star Wars films).
 
Reactions: Xiao_Xi

Boil

macrumors 68040
Oct 23, 2018
3,477
3,173
Stargate Command

Beat me to it...! ;^p

Am I wrong, or are these three the apps that have been assisted by Apple in some way to become ASi / Metal native...?

I guess all the other 3D DCC apps are taking a wait and see approach to optimizations...?
 

chouki

macrumors member
Oct 24, 2015
35
5
If I'm not wrong, it's not native yet.
From Beppe (of Otoy) on the Octane X forum, December 08:
"Hi,
PR13 is working here, under macOS 12.0.1, and C4D R25, if Open with Rosetta is enabled."

And it's not really stable; the denoiser is also still not working on M1.

I just received my M1 Max MBP. I haven't installed Octane X yet; I'll see how it behaves, but I'm not very confident - Octane X has just been a mess till now...
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
It depends what you mean by native, I guess. Vray appears to be compiled for ARM but doesn't support GPU rendering on macOS. Octane X appears to be using Rosetta, but uses Metal / GPU rendering.
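
For anyone who wants to check this themselves on macOS: lipo reports which architectures a binary contains, and the sysctl.proc_translated flag reports whether a process is running under Rosetta 2. A small sketch using both standard tools (the Safari path is just an example):

```python
# Check whether an app binary ships arm64 code and whether we run translated.
import subprocess

def binary_archs(path):
    """Architectures baked into a Mach-O binary, e.g. ['x86_64', 'arm64']."""
    out = subprocess.run(["lipo", "-archs", path], capture_output=True, text=True)
    return out.stdout.split()

def running_translated():
    """True if the current process is running under Rosetta 2."""
    out = subprocess.run(["sysctl", "-n", "sysctl.proc_translated"],
                         capture_output=True, text=True)
    return out.stdout.strip() == "1"

print(binary_archs("/Applications/Safari.app/Contents/MacOS/Safari"))
print("translated:", running_translated())
```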
 
Reactions: Macintosh IIcx

sirio76

macrumors 6502a
Mar 28, 2013
578
416
Native means native, no matter whether it uses the CPU or GPU. If someone is interested in GPU rather than CPU computing, that's a very different story. Vray runs native ARM code, stable and quite fast for a laptop, tested on an M1 Pro.
 

Xiao_Xi

macrumors 68000
Oct 27, 2021
1,627
1,101
Am I the only one tired of seeing these sterile YouTubers regurgitating the same benchmarks over and over? Why does nobody show real-world tests on real production scenes?
Out of curiosity, what kind of benchmarks would you like to see?

Viewport performance is as important as rendering performance, but I can imagine it is much harder to test.
 

jmho

macrumors 6502a
Jun 11, 2021
502
996
The problem is that, unlike most manufacturers, Apple doesn't send out review units (except maybe to a handful of mega YouTubers). That means specialist 3D YouTubers don't get review units, and they aren't going to buy them with their own money, because a) they aren't very good for 3D, and b) their audience isn't going to watch multiple Mac videos.

The only YouTubers who are buying M1 Max / Pro machines for making videos will be tech channels with an audience of Apple fans who can make 25 different videos with a single machine.
 

jujoje

macrumors regular
May 17, 2009
247
288
Out of curiosity, what kind of benchmarks would you like to see?

The Animal Logic ALab test scene would be a good one. It opens in most DCC apps, from Blender to Maya, and uses production assets.

Not going to work for GPU renderers though :p It will render in anything that supports USD, and it's a good viewport benchmark.
 

Lone Deranger

macrumors 68000
Apr 23, 2006
1,899
2,141
Tokyo, Japan
Found this on the substance painter forums
Nice, thanks for sharing.

I was trying to find out some more about any possibility of Metal API support for the Substance apps on macOS the other day, so I tagged Sebastien DeGuy, Adobe VP of 3D and founder of Allegorithmic in a Tweet. Happily he replied saying: "Hehe sorry about the lack of info. We are indeed working on it. I have seen things ;) a little bit more time and we should be able to share native / close enough versions soon!"
Very exciting!

 

BootLoxes

macrumors 6502a
Apr 15, 2019
749
897
Nice, thanks for sharing.

I was trying to find out some more about any possibility of Metal API support for the Substance apps on macOS the other day, so I tagged Sebastien DeGuy, Adobe VP of 3D and founder of Allegorithmic in a Tweet. Happily he replied saying: "Hehe sorry about the lack of info. We are indeed working on it. I have seen things ;) a little bit more time and we should be able to share native / close enough versions soon!"
Very exciting!

If they release a native version with Metal support, I will upgrade my Steam license that day.
 

iPadified

macrumors 68020
Apr 25, 2017
2,014
2,257
The Animal Logic ALab test scene would be a good one. It opens in most DCC apps, from Blender to Maya, and uses production assets.

Not going to work for GPU renderers though :p It will render in anything that supports USD, and it's a good viewport benchmark.
Something with lots of glass (transparent, with different refractive indices) and metal reflections. Also lots of objects (transparent, of course). Looking forward to benchmarking an M1 Pro against my 2020 iMac.
 