Could somebody explain to me the benefits of getting two Vega IIs vs getting the Vega II Duo? I'm struggling to grasp the difference.
90% sure they can be used for anything. That's part of why the GPUs come in an MPX module and not a regular card form factor: there's hardware in there to enable use of the ports for regular TB duties. So the TB3 ports on a GPU can be used for anything, not just graphics / display related things.
https://support.apple.com/no-no/HT210392

Is a single Vega suitable for 6K video workflows? Or do you think getting two or a duo is far more beneficial?
Assuming you're talking about 6K footage and not running a 6K display, I have the same question. I'm sure hoping so.
No, this one is working just fine. Is everyone getting the XDR display too?
I think he's asking about 6K video editing, not running the display.
"I would say though the duo is a better choice in that it keeps the door open for getting another duo down the road, and communication between two GPUs on one card will be faster than between two cards."

I think this is wrong. Infinity Fabric Link works both ways: attaching two single Vega IIs, or one Vega II Duo. I don't think there would be a difference in communication speed. That's a big benefit, in my mind, of going with a Vega II over the Radeon W5700X card that is coming out at some point. It doesn't sound like two W5700X cards will be able to use Infinity Fabric Link to optimize speed, so that limits the upgrade path a bit there.
Not sure how the Infinity Fabric would then be connected between two separate MPX modules, but sure. Even if that is the case, it'd still be marginally slower due to the greater physical distance the data has to travel, though likely not to any meaningful degree if it's still over IF.
It's a little bridge widget that spans the two modules; there are pictures on support.apple.com.
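For what it's worth, Metal actually exposes those links in software. Here's a quick sketch, assuming macOS 10.15 and cards that are actually bridged:

```swift
import Metal

// Devices that share a non-zero peerGroupID are wired together (e.g. over
// Infinity Fabric Link) and can do direct GPU-to-GPU transfers without
// bouncing through system memory.
for device in MTLCopyAllDevices() where device.peerGroupID != 0 {
    print("\(device.name): peer \(device.peerIndex + 1) of \(device.peerCount)",
          "in group \(device.peerGroupID)")
}
```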
I think it's because the Duo still shows up as two GPUs in your system, not just one super-powerful one. In Geekbench 5 you can select which GPU you want to test from your system. The same thing happened with the 2013 Mac Pro, which had dual GPUs. If someone has a Duo, please download Geekbench 5 and test this.
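If anyone wants to sanity-check this outside Geekbench, here's a minimal sketch of listing what macOS sees (each GPU is its own MTLDevice, so a Duo should print two lines):

```swift
import Metal

// Each GPU macOS sees is a separate MTLDevice, so a Vega II Duo should
// appear as two entries here, each reporting its own 32 GB of HBM2.
for device in MTLCopyAllDevices() {
    let vramGiB = Double(device.recommendedMaxWorkingSetSize) / 1_073_741_824
    print("\(device.name): ~\(Int(vramGiB)) GiB VRAM")
}
```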
"That's what I ended up going with. Expecting a Geekbench update. Vega II Duo here."

So your Duo shows up as two GPUs, correct? (I have a single Vega II.) I highly doubt they will update the software, since dual GPUs have been available for a long time and this is the fifth version of Geekbench. If your software can utilize both GPUs well, then you just duplicate your score of roughly 85,000 (mine is 86,200).
Holy pancakes, Max. I didn't realise you had an account on here. Love your YT content, and very, very often refer people to your videos when they have questions here. Great work, mate. There are occasionally some inaccuracies, but with such fast turnaround times for video output, small mistakes are bound to happen, and it's never something that harms your overall points. Great to see you on MR.
Oh, and regarding what you posted right here: I agree that that is the most likely explanation.
I will also add that GeekBench in general isn't really that good an estimator of GPU performance. GPUs have many, many "types of performance". GeekBench is a narrow testing platform, and the Metal tests in particular actually aren't very well made. Metal allows for a lot of low-level optimisations, if you know how the GPU you're working with is built and what its characteristics are. But these optimisations are in the hands of the developers. With something like OpenCL, more of this (though not all) was handled by the drivers.
The issue with optimisations in a benchmarking tool is that you don't want to over-optimise for any product, but you also don't want to under-optimise for anything. Everything also needs to be controlled, so where most developers would probably use Apple's pre-optimised Metal Performance Shaders for their calculation kernels, GeekBench needs to write its own for consistency between tests. And though they of course try to avoid it, these will be written in a way that benefits certain designs over others, even if those designs aren't slower on a raw performance level. Without digging too deep into the GPU-programming rabbit hole: their Metal shader kernels are generally quite un-optimised across the board, which also shows in the fact that they often report worse scores than OpenCL, even though Metal has much greater performance headroom.
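To make that concrete, here's a minimal sketch of the pattern (the saxpy kernel is a made-up stand-in, not GeekBench's actual code): a benchmark compiles its own kernel from source instead of calling into MPS, and then has to pick a threadgroup size itself:

```swift
import Metal

// Hypothetical micro-benchmark kernel, compiled from source at runtime the
// way a cross-vendor benchmark would, rather than using Apple's pre-tuned
// Metal Performance Shaders.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void saxpy(device float *y       [[buffer(0)]],
                  const device float *x [[buffer(1)]],
                  constant float &a     [[buffer(2)]],
                  uint i                [[thread_position_in_grid]]) {
    y[i] = a * x[i] + y[i];
}
"""

let device = MTLCreateSystemDefaultDevice()!
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "saxpy")!)

let n = 1 << 20
let x = device.makeBuffer(length: n * MemoryLayout<Float>.stride)!
let y = device.makeBuffer(length: n * MemoryLayout<Float>.stride)!
var a: Float = 2.0

let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(y, offset: 0, index: 0)
enc.setBuffer(x, offset: 0, index: 1)
enc.setBytes(&a, length: MemoryLayout<Float>.stride, index: 2)

// This is where per-architecture tuning lives: a threadgroup width that
// fits one GPU's SIMD width can leave another's shader arrays half idle.
let width = pipeline.threadExecutionWidth
enc.dispatchThreads(MTLSize(width: n, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: width, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()
```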
But again, given how complex GPUs are, with many different performance metrics depending on what is important, "real world" testing in the relevant applications is the best approach. A Vega 64 is an amazing option for quickly smashing through a huge number of hash functions. But in 3D rendering, a lot of the shaders will sit idle, because the geometry engine can't keep up with all 64 CUs (which is also why the Radeon VII, with only 60 CUs, beats the Vega 64 in most tasks: the extra clock speed means more than the extra shaders when those extra shaders are so often doing nothing). Navi cards like the 5700 XT have overcome the geometry-engine roadblock, and thus perform much, much better in games, but they don't perform as well as high-CU Vegas in pure compute. That is also why the Vega II is still so very, very strong even against the newer Navi architecture: Vega is incredible in pure compute, even if it falters in geometry. And with the massive HBM2 bandwidth the Vega II has, it's even more of a beast. Plus, if memory serves, it's a 64 CU edition of the second-generation Vega die: like a Radeon VII with 4 more compute units, more memory bandwidth, and the ability to hold its max boost frequency better. Ultimately that gives the Vega II roughly 500 GFLOPS more FP32 performance on average.
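For rough numbers (peak figures, assuming the usual published boost clocks rather than measured ones): FP32 throughput is 2 FLOPs per lane per clock, so the Vega II's 4096 lanes at ~1.72 GHz come out to about 14.1 TFLOPS, while the Radeon VII's 3840 lanes at ~1.8 GHz peak boost come out to about 13.8 TFLOPS. Since the Vega II holds its boost clock better, an average gap of around 500 GFLOPS seems plausible.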
But now I'm really out on a tangent. Anyways, cool videos, Max. Love from Denmark
No tangent at all, I love reading about this stuff. I had the same basic understanding of Vega and Navi, but you explained it well.
Not sure if you've seen the new upgrade video on Max Tech, but towards the end we tested the 5700 XT, Vega 64, and Radeon VII in the Mac Pro, and the GB5 Metal results are disappointing with the VII. I think it's the drivers.
Yup, the Duo shows up as two units of 32GB each.

If that's the case, Geekbench's Metal Benchmark Chart scores are inaccurate, as they list both GPUs as equal in performance, which is absurd.