
Looking good.
Wish I could understand what he was saying. It doesn't seem like the RTX PC is using the GPU; I would expect much better results. When I compared the Classroom render between my 3070 laptop and the M1, the 3070 blew it away with GPU Cycles. Apple didn't even compare it to a DESKTOP 3080.
The C4D example definitely shows a huge difference in user experience and lag.
Even in Blender it's not obvious from my videos, but the user experience is much better than on my G15 laptop; everything is so snappy and smooth. I never noticed that the Windows laptop UX wasn't great until I started using Blender on the Mac.
 

Looking good.

Something is fishy with his results. A desktop 3080 should be a lot faster than a 70W laptop 3060 and should finish in under a minute.

https://openbenchmarking.org/test/pts/blender&eval=6f888dc8f9cee25018f6623e5cdbd3676850f1c9#metrics

And his M1 Max results are much faster than official Blender benchmark results, with the fastest officially recorded run at 580 seconds, or 9:40.

https://opendata.blender.org/benchmarks/query/?device_name=Apple M1 Max&benchmark=classroom&blender_version=2.93

His Classroom: 2:58
RTX 3060 70W Classroom: 3:10.60

His Junk Shop: 1:55
RTX 3060 70W Junk Shop: 0:47.42
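To put numbers on the gap, here's a quick sketch (plain Python, using only the times quoted above) that converts those times into ratios:

```python
# Convert the quoted render times into ratios: how long the M1 Max
# takes relative to the 70W laptop RTX 3060, per scene.
def to_seconds(t):
    """Parse an 'M:SS' or 'M:SS.ss' time string into seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + float(seconds)

results = {
    # scene: (M1 Max time, RTX 3060 70W time)
    "Classroom": ("2:58", "3:10.60"),
    "Junk Shop": ("1:55", "0:47.42"),
}

for scene, (m1_max, rtx) in results.items():
    ratio = to_seconds(m1_max) / to_seconds(rtx)
    print(f"{scene}: M1 Max takes {ratio:.2f}x the 3060's time")
```

So the M1 Max edges out the 70W 3060 on Classroom (0.93x) but takes about 2.4x as long on Junk Shop, which is part of why his desktop 3080 numbers look fishy.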
 
Some feel the memory bandwidth will somehow magically double with a dual-SoC model, and quadruple with a quad-SoC model; but that doesn't seem like the way it would work. I don't have knowledge of how these things work, so I remain skeptical of the 800GB/s & 1.6TB/s memory bandwidths in the M1 Max Duo & M1 Max Quadro systems.
There's nothing magic about it. If you build a large system by connecting two SoC die which are individually similar to M1 Max, each die will have its own 512-bit DDR5 (LPDDR5?) memory interface, doubling memory bandwidth.

The real question is what they're going to do for interconnect between the die. That interface needs to be very fast to take advantage of the memory BW provided by a remote node.
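For the bandwidth math, a rough sketch (assuming LPDDR5-6400, i.e. 6400 MT/s, on a 512-bit bus per die — the figures generally reported for the M1 Max; treat them as assumptions):

```python
# Peak DRAM bandwidth = bus width in bytes * transfer rate.
def bandwidth_gb_s(bus_bits, transfers_per_sec):
    return bus_bits / 8 * transfers_per_sec / 1e9

per_die = bandwidth_gb_s(512, 6400e6)       # ~409.6 GB/s, in line with Apple's 400 GB/s claim
print(f"one die:   {per_die:.0f} GB/s")
print(f"two dies:  {2 * per_die:.0f} GB/s")  # hypothetical Duo
print(f"four dies: {4 * per_die:.0f} GB/s")  # hypothetical Quadro
```

Each die brings its own memory controllers and its own DRAM, so the aggregate bandwidth really does scale linearly; the catch, as noted, is the die-to-die interconnect.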

I would think nearly half of that US$6k figure would be the WAY overpriced 8TB SSD and the "laptop stuff" (chassis / display / keyboard / trackpad / batteries / etc.).

So a M1 Max Duo Mac Pro might start at the same price as the 2019 Mac Pro, about six grand; and a M1 Max Quadro Mac Pro might be about ten grand or so?
They might actually reduce the price of the base machine, believe it or not. There's a real opportunity for them to do so; Intel's Xeon W series is expensive and if they're truly all-in on Apple GPU there's no need to design the chassis to handle a couple huge 400W MPX modules.
 
There's nothing magic about it. If you build a large system by connecting two SoC die which are individually similar to M1 Max, each die will have its own 512-bit DDR5 (LPDDR5?) memory interface, doubling memory bandwidth.

The real question is what they're going to do for interconnect between the die. That interface needs to be very fast to take advantage of the memory BW provided by a remote node.


They might actually reduce the price of the base machine, believe it or not. There's a real opportunity for them to do so; Intel's Xeon W series is expensive and if they're truly all-in on Apple GPU there's no need to design the chassis to handle a couple huge 400W MPX modules.

So we could see 800GB/s for the M1 Max Duo, and (a mind-boggling) 1.6TB/s for the M1 Max Quadro...!?!

I would LOVE to see a US$5k M1 Max Duo (20c CPU / 64c GPU / 32c NPU / 128GB RAM / 1TB SSD / Dual 10Gb Ethernet) Mac Pro Cube...!

But I would be more than happy with a US$3k M1 Max (10c CPU / 32c GPU / 16c NPU / 64GB RAM / 1TB SSD / 10Gb Ethernet) Mac mini...!
 
When do you all think we'll see Blender 3.1 & Cycles with Metal support? I was super annoyed that absolutely none of the rendering apps make use of the GPU yet, but then I realized the CPU rendering on my M1 Max is still faster than the GPU rendering on my 2019 Intel MBP with the 5500M 🤣. I just want to make the most of the 32 cores noooOOooowWwww
 
When do you all think we'll see Blender 3.1 & Cycles with Metal support? I was super annoyed that absolutely none of the rendering apps make use of the GPU yet, but then I realized the CPU rendering on my M1 Max is still faster than the GPU rendering on my 2019 Intel MBP with the 5500M 🤣. I just want to make the most of the 32 cores noooOOooowWwww

Maybe we'll get a Blender 3.1 (Cycles Metal renderer & MoltenVK viewport) demo at the (possible) Apple Spring 2022 event (featuring the M1 Pro / Max SoC powered all-new Mac mini & all-new 27" iMac)?
 
When do you all think we'll see Blender 3.1 & Cycles with Metal support? I was super annoyed that absolutely none of the rendering apps make use of the GPU yet, but then I realized the CPU rendering on my M1 Max is still faster than the GPU rendering on my 2019 Intel MBP with the 5500M 🤣. I just want to make the most of the 32 cores noooOOooowWwww
Agreed, we are waiting for Metal support. My 8-core i7 iMac from 2020 runs equally fast on the CPU as on the GPU (5700). You need a really beefy GPU to see any gain over the CPU on Macs. 3.0 should be released in December, so early spring, March/April, for 3.1.
 
There's nothing magic about it. If you build a large system by connecting two SoC die which are individually similar to M1 Max, each die will have its own 512-bit DDR5 (LPDDR5?) memory interface, doubling memory bandwidth.

The real question is what they're going to do for interconnect between the die. That interface needs to be very fast to take advantage of the memory BW provided by a remote node.


They might actually reduce the price of the base machine, believe it or not. There's a real opportunity for them to do so; Intel's Xeon W series is expensive and if they're truly all-in on Apple GPU there's no need to design the chassis to handle a couple huge 400W MPX modules.
Why did they design the MPX modules in the first place? I think they will have MPX modules containing M1 Max chips (or Duo or Quad configs) for scaling. Especially for highly parallel workloads like 3D rendering, this would be useful. Multiple GPUs scale quite well in performance. A quad M1 Max would fit the 400 W envelope of the MPX module (barely).
 
Why did they design the MPX modules in the first place? I think they will have MPX modules containing M1 Max chips (or Duo or Quad configs) for scaling. Especially for highly parallel workloads like 3D rendering, this would be useful. Multiple GPUs scale quite well in performance. A quad M1 Max would fit the 400 W envelope of the MPX module (barely).
The MPX modules' bandwidth is too narrow compared to what the M1 Pro has (32GB/s vs 200GB/s). I don't think this is going to happen.
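For reference, that ~32GB/s figure is roughly PCIe 3.0 x16 counting both directions (the 2019 Mac Pro's slots are gen 3 — an assumption worth checking for any future machine). A quick sketch of the math, including the 128b/130b encoding overhead:

```python
# Per-direction PCIe throughput: lanes * per-lane rate (GT/s) * encoding efficiency / 8.
def pcie_gb_s(lanes, gt_per_s, efficiency=128 / 130):
    return lanes * gt_per_s * efficiency / 8

gen3_x16 = pcie_gb_s(16, 8)    # ~15.75 GB/s each way, ~31.5 GB/s combined
gen4_x16 = pcie_gb_s(16, 16)   # ~31.5 GB/s each way
print(f"PCIe 3.0 x16: {gen3_x16:.2f} GB/s per direction")
print(f"PCIe 4.0 x16: {gen4_x16:.2f} GB/s per direction")
```

Either way, it's an order of magnitude below the 200-400 GB/s the M1 Pro/Max get from on-package memory.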
 
Agreed, we are waiting for Metal support. My 8-core i7 iMac from 2020 runs equally fast on the CPU as on the GPU (5700). You need a really beefy GPU to see any gain over the CPU on Macs. 3.0 should be released in December, so early spring, March/April, for 3.1.
5-6 months? What the 🤬? Will ANY render programs support Metal and fully take advantage of the GPU before next spring?
 
The MPX modules' bandwidth is too narrow compared to what the M1 Pro has (32GB/s vs 200GB/s). I don't think this is going to happen.
Not so sure. A 3D render can be split into buckets, each bucket being relatively small. A render farm works that way, and render farms certainly do not have 200 GB/s between nodes. The same logic applies to multiple GPU cards, each on its own PCIe slot.
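To illustrate the bucket idea, a toy sketch (the function names and tile size are made up for illustration, not any renderer's actual API): split the frame into tiles and deal them out round-robin, so each node only needs its tile coordinates plus one upfront copy of the scene, with no fast link needed while rendering.

```python
# Split a frame into fixed-size tiles ("buckets") and assign them
# round-robin across render nodes.
def make_buckets(width, height, tile=256):
    # Each bucket is (x, y, w, h); edge tiles are clipped to the frame.
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

def assign(buckets, n_nodes):
    # Node i gets buckets i, i+n, i+2n, ...
    return {node: buckets[node::n_nodes] for node in range(n_nodes)}

buckets = make_buckets(1920, 1080)   # 8 x 5 grid = 40 buckets
work = assign(buckets, 4)            # 10 buckets per node
```

Each node returns only its finished pixels, so the inter-node traffic is tiny compared to the memory traffic inside a node.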
 
Not so sure. A 3D render can be split into buckets, each bucket being relatively small. A render farm works that way, and render farms certainly do not have 200 GB/s between nodes. The same logic applies to multiple GPU cards, each on its own PCIe slot.
Don't render farm GPUs usually use bridges (NVLink / Infinity Fabric) between them, allowing huge bandwidth not possible over PCIe?
 
Redshift looks like a cost-effective option too, as the monthly subscription allows for 8 GPUs as far as I can see, whereas Octane costs extra per node.
 
Also, Octane X is not reliable yet; it's not stable enough for production. If you're not already into Octane and are wondering what to choose, go Redshift.
I love Octane, but Octane X is not ready.
 
I’ve been trying to work out what might be the best value for multiple Macs for a single user in terms of hardware, and it looks like matching a 32-core M1 Max MBP with a few M1 minis is the best option 😁 (or even multiple MBPs for the more affluent).

If only you could use a second M1 MBP as a second monitor (a single application running over the displays of multiple Macs) while keeping its CPU or GPU usable as a render node… kind of like the old, discontinued Target Display Mode but going further… maybe in a future Universal Control upgrade…
 
So this guy uploaded a video comparing the M1 Max in Octane to a desktop 1080 Ti, a desktop 2080 Ti, and a desktop 3080 Ti. His results show the M1 Max performing in line with the 1080 Ti.
In his video and in replies to many of his comments, he keeps saying Apple lied in their marketing and it's nowhere near as powerful as they said. But the 1080 Ti is equivalent in performance to a laptop 3080 GPU... so his results back up Apple's claims directly. Unfortunately he doesn't even understand this and accused me in the comments of "spreading false information".
smh

 
So this guy uploaded a video comparing the M1 Max in Octane to a desktop 1080 Ti, a desktop 2080 Ti, and a desktop 3080 Ti. His results show the M1 Max performing in line with the 1080 Ti.
In his video and in replies to many of his comments, he keeps saying Apple lied in their marketing and it's nowhere near as powerful as they said. But the 1080 Ti is equivalent in performance to a laptop 3080 GPU... so his results back up Apple's claims directly. Unfortunately he doesn't even understand this and accused me in the comments of "spreading false information".
smh

Yeah, I saw that. Not sure what to say about that guy. 😏
But if you step back a bit and look at things, I'm actually impressed that the Apple GPU performs like a desktop 1080 Ti in regards to compute performance in Octane. Raw compute performance isn't the top strength of the Apple GPU architecture in my book, and I have no idea why some people think it would be.

FYI @leman
 