I feel like I'm getting deja vu of Apple claiming that the M1 Ultra is also faster than a 3090*.
* in one particular benchmark **
** when the moon is aligned correctly
You feel like what?
All I know is (and I'm waiting for him to post it again) -- but his W6800 Duo renders Octane X in 1 second flat, so that "estimated" render time of 14 seconds above made me lol.
Granted, the 4090 is still very impressive (and a reminder to us of what apple is competing against -- don't forget this, Mac Studio fanbois), but so is that 1 second flat render time on the Duo.

Like I said...we built a house. He just got here and kicked in the door, demanding answers to questions nobody asked LOL. I'm gonna let him explore the rooms. We'll be at the dining table waiting to serve dinner 😂
@prefuse07 Octane X has no official benchmark. The most-used scene has been the super simple "chessboard scene" that has been shared in the Octane forums at otoy.com. It is, however, extremely simple by modern standards and only gives a rough estimate of performance.
This "trench scene" is even simpler.
I have continually tried using Octane on Mac and have my own scenes. On one of them, a 6900 XT in an eGPU was almost exactly twice the speed of a Vega 64.
Now, with no eGPU support on M1, rendering speed for a single M1 Max is back at Vega 64 levels, but with more VRAM.
On my PC with a 3090, there is just no comparison. It feels 10x faster and makes totally different workflows available. Even large scenes are almost instant. I did some tests with my scenes early on, but at that time Octane X always crashed in addition to being super slow, so I just dropped it. And now Octane X 2022 doesn't even support AMD graphics anymore, so you're stuck on Octane X 2021 (called PR14).
Now I'm just waiting for an M2 Ultra or something better to be released to finally know if I could get back to using Mac only. I doubt it but really hope to be surprised.
I will not buy anything that is slower than a 3090 in actual real scenes. I'd of course prefer faster than that, but at about that speed I won't be limited in what I do for the time being.
It would be great if @maikerukun could run that new Octane X scene I linked to earlier in this thread (the volume scene on the forum). At least we'd have up-to-date results for that one.

I will definitely do so for ya once I am available to do so. As you know, I was in Germany the last couple of weeks and I'm now in Florida on vacation at Busch Gardens, Islands of Adventure, and Disney World until next week. Remind me next Wednesday when I'm back home and I'll fire her up and let her have a go of it.
A6000s are $4K on the first US site I looked at, Amazon.
Also, what's not 'pro' enough about the RTX 4090 that a 6800X Duo has? Neither has ECC RAM or Genlock support, and macOS's OGL drivers are ancient.

24% off, and as I mentioned in my post, you can find them for cheap if you look for them. I literally said that, but I appreciate you confirming it for me <3
I didn't go looking for a great deal, I just went straight to the most mainstream store. Whereas you appear to have gone on a mission to find the most expensive A6000 you could. Thanks for conceding that better deals might be available though!
And I didn't say "PRO", I said "ENTERPRISE"...that's the difference between the A6000 and the 4090.

OK, but what is 'ENTERPRISE' about the 6800X Duo? It's not the features I mentioned, so why would it disqualify the 4090? Either neither is an enterprise card, or both are.
Wouldn't it be great if apple sent an invite to the press today about an event next Tuesday? 😂

Isn't it a few days early? I would expect it more like Thursday or Friday of this week.
Actually hollup, I'm gonna let HIM cook! Even if we ignore real-world usage and just go by the article he posted: my machine is 28% faster than TWO RTX A6000's, LMFAO!
My GPU setup = $10k
TWO RTX A6000's? = $20k
SO...BY HIS LOGIC, COMPARING APPLES TO APPLES, with TWO ENTERPRISE-RATED GPUs ("3090s and 4090s are NOT"), you have to spend $20,000 MINIMUM to match my Mac Pro...and that's not even including the rest of the system...don't forget power bricks to handle that wattage, and very likely a nice $6k rig to put them in. You're looking at $30k or so all said and done. In the meantime, on the Apple side, you can grab a 16-core Mac Pro for $8k right now and two W6800X Duos for $8k and walk away with a system 33% faster than an equivalent enterprise PC for just $16k...a full $4k LESS than just the GPUs in the PC LOLOLOL (rough math sketched after this post).
Now, to be fair, you can get them used or find them for around $5k via 3rd parties if you want, and can possibly get yourself a system near $16k, but not direct from the manufacturer.
I could have another here on Monday 🥰
But this of course is all according to HIM...so I'm not gonna say anything because he kind of just said it all for me
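For what it's worth, here is that arithmetic laid out as a tiny Python sketch. Every figure is just a number quoted in the post above (the $20k A6000 pair, the roughly $10k of "rest of system" to reach the ~$30k guess, the $8k + $8k Mac side), not market data, so this only restates the post's own assumptions:

```python
# Back-of-envelope check of the pricing argument in the post above, using only
# the dollar figures quoted there (street prices vary, so treat them as rough).
mac_pro_16_core = 8_000      # 16-core Mac Pro, as quoted
w6800x_duo_pair = 8_000      # two W6800X Duos, as quoted
mac_total = mac_pro_16_core + w6800x_duo_pair          # $16,000

a6000_pair = 20_000          # "TWO RTX A6000's? = $20k", as quoted
pc_rest = 10_000             # rig, PSU, etc. -- to reach the post's ~$30k all-in guess
pc_total = a6000_pair + pc_rest                        # ~$30,000

print(f"Mac Pro + 2x W6800X Duo:  ${mac_total:,}")
print(f"PC GPUs alone:            ${a6000_pair:,}")
print(f"PC all-in (post's guess): ${pc_total:,}")
print(f"GPUs alone cost ${a6000_pair - mac_total:,} more than the whole Mac setup")
```

Whether the underlying "28% faster" benchmark claim holds up is a separate question; the sketch only shows that, at the post's assumed prices, the two A6000s alone exceed the quoted Mac configuration by $4k.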
So the rumours are now saying:
Tower with M2 Ultra integrated GPU, then additional M2 Max or M2 Ultra 'Compute' Cards taking up MPX slots, which include both CPU and GPU cores for additional compute performance and additional RAM. Actually sounds pretty awesome, and potentially a unique solution to the RAM and GPU limitations.
Except:
a) To access the RAM on different cards, you'd need to go through the PCIe bus. This has much lower speed / higher latency than a traditional pool of RAM (see the rough bandwidth sketch after this list).
b) You'd always add extra CPU cores when you add GPU cores. So adding powerful GPU capability would require paying for a ridiculous number of CPU cores.
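To put point (a) in rough perspective, here is a minimal back-of-envelope sketch. The bandwidth numbers are approximate published peaks (PCIe 4.0 x16 around 32 GB/s, M2 Max around 400 GB/s, M2 Ultra around 800 GB/s), the 16 GB working set is an arbitrary example, and latency and protocol overhead are ignored entirely:

```python
# Rough illustration of point (a): streaming data that lives in RAM on another
# card over PCIe vs. reading it from the local unified-memory pool.
# Figures are approximate peak numbers, not measurements.
PCIE4_X16_GBPS = 32        # ~PCIe 4.0 x16 peak, GB/s
M2_MAX_MEM_GBPS = 400      # ~M2 Max unified memory bandwidth, GB/s
M2_ULTRA_MEM_GBPS = 800    # ~M2 Ultra unified memory bandwidth, GB/s

working_set_gb = 16        # hypothetical scene/texture data sitting on the "other" card

def transfer_seconds(size_gb: float, bandwidth_gbps: float) -> float:
    """Idealized transfer time, ignoring latency and protocol overhead."""
    return size_gb / bandwidth_gbps

print(f"Over PCIe 4.0 x16:     {transfer_seconds(working_set_gb, PCIE4_X16_GBPS):.2f} s")
print(f"Local M2 Max memory:   {transfer_seconds(working_set_gb, M2_MAX_MEM_GBPS):.3f} s")
print(f"Local M2 Ultra memory: {transfer_seconds(working_set_gb, M2_ULTRA_MEM_GBPS):.3f} s")
# ~0.50 s vs ~0.04 s vs ~0.02 s -- an order-of-magnitude-plus gap, before latency.
```

Even with latency ignored, a memory pool sitting behind PCIe is more than an order of magnitude slower to stream from than local unified memory, which is the limitation point (a) is getting at.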
Sounds like an expensive way of re-implementing a traditional PC tower + PCIe GPUs…

Expensive for us, but quite a good re-use of existing hardware for Apple. It simplifies their chip architecture range by re-using the same chips as the main boards of other Macs. No need to manufacture or design unique cores or layouts, etc., since it's literally just chucking an M2 Max/Ultra onto a card.
Except we have no way of knowing if this type of interfacing is actually doable with current Apple Silicon.