
Shadow Puppets

macrumors regular
Original poster
Nov 28, 2016
153
78
Could somebody explain to me the benefits of getting 2 Vegas vs getting the Vega Duo? I'm struggling to grasp the difference.
 

shokunin

macrumors regular
Jun 7, 2005
218
48
Better cooling, less noise and more TB3 ports? Each MPX module has its own massive heatsink and a slice of the fan's airflow. A single Duo will have to share the thermal load, and it's possible the fans will have to ramp harder to provide adequate cooling.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
And the TB3 ports on a GPU can be used for anything, not just graphics / display related things.
I'm 90% sure they can be used for anything. That's part of why the GPUs come in an MPX module and not regular form: there's hardware in there to enable use of the ports for regular TB duties.

I would say, though, the Duo is a better choice in that it keeps the door open for getting another Duo down the road, and communication between two GPUs on one card will be faster than between two cards. But there are valid reasons for both still.
 

Shadow Puppets

macrumors regular
Original poster
Nov 28, 2016
153
78
Is a single Vega suitable for 6k video workflows? Or do you think getting two or a duo is far more beneficial?
 
  • Like
Reactions: moab1

thisisnotmyname

macrumors 68020
Oct 22, 2014
2,439
5,251
known but velocity indeterminate
I bought all three permutations (dual solo, single duo, and dual duo). I bought dual solo for myself because I wanted the extra ports and won't have a need for more GPU (at least not as long as I'll keep this; I don't try to extend hardware for a decade, I'll replace when there's a refresh). I bought single duo for those who don't need maximum GPU; now they'll have an empty MPX slot in case some day they need more. I bought dual duo for those who need a lot of GPU grunt.
 

moab1

macrumors member
Dec 12, 2019
56
33
Is a single Vega suitable for 6k video workflows? Or do you think getting two or a duo is far more beneficial?
Assuming you're talking about 6K footage and not running a 6K display, I have the same questions. I'm sure hoping so.
I ordered a single Vega and own a RED Dragon 6K camera, so I'm hoping it's enough. I was on the fence and would have liked to go with a single Duo, but figured I'd try this first; if needed, I can add another single Vega later, since I have the room.
The benefit of the single Duo now is that you have a whole MPX bay open for other expansion while still having two GPUs.
 
  • Like
Reactions: OkiRun

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
Is everyone getting the XDR display too?
No, this one is working just fine.

(attached image)
 

moab1

macrumors member
Dec 12, 2019
56
33
I think he's asking about 6K video editing, not running the display.
I would say, though, the Duo is a better choice in that it keeps the door open for getting another Duo down the road, and communication between two GPUs on one card will be faster than between two cards.
I think this is wrong. Infinity Fabric Link works both ways: attaching two single Vega IIs or one Vega II Duo. I don't think there would be a difference in communication speed. That's a big benefit in my mind of going with a Vega II over the Radeon W5700X card that is coming out at some point. It doesn't sound like 2 x W5700X cards will be able to use Infinity Fabric Link to optimize speed, so that limits the upgrade path a bit there.
 
Last edited:

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
I think this is wrong. Infinity Fabric Link works both ways: attaching two single Vega IIs or one Vega II Duo. I don't think there would be a difference in communication speed. That's a big benefit in my mind of going with a Vega II over the Radeon W5700X card that is coming out at some point. It doesn't sound like 2 x W5700X cards will be able to use Infinity Fabric Link to optimize speed, so that limits the upgrade path a bit there.

Not sure how the Infinity Fabric would then be connected between two separate MPX modules, but sure. Even if that is the case, though, it'd still be (marginally) slower due to a greater physical distance for the data to travel, though likely not in any meaningful capacity if it's still over IF.
 

thisisnotmyname

macrumors 68020
Oct 22, 2014
2,439
5,251
known but velocity indeterminate
Not sure how the Infinity Fabric would then be connected between two separate MPX modules, but sure. Even if that is the case, though, it'd still be (marginally) slower due to a greater physical distance for the data to travel, though likely not in any meaningful capacity if it's still over IF.

It's a little bridge widget that spans the two modules; there are pictures on support.apple.com.
 

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
It's a little bridge widget that spans the two modules; there are pictures on support.apple.com.

I see. Well, that is cool. Based on what it says on Apple's support page, though, it sounds like apps explicitly need to send data over IF instead of PCIe. There's probably an abstraction to help with that, but still.

In any case, it should still be slower to bridge like that than having them on the same card, even if the speed gap is perhaps not anywhere near noticeable. But it is nice to see that there clearly will be an option for extremely high-bandwidth inter-GPU communication.
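For what it's worth, Metal does expose an abstraction along those lines: IF-linked GPUs report a shared "peer group", and a buffer on one GPU can be exposed to the other as a remote view for blits over the link. A minimal sketch, assuming two linked Vega IIs show up as peers (the buffer size and names are just for illustration):

```swift
import Metal

// Find two GPUs in the same (non-zero) peer group, i.e. GPUs joined by
// Infinity Fabric Link: the two dies of a Duo, or two modules joined by
// the bridge connector.
let gpus = MTLCopyAllDevices()
guard let a = gpus.first(where: { $0.peerGroupID != 0 }),
      let b = gpus.first(where: { $0.registryID != a.registryID &&
                                  $0.peerGroupID == a.peerGroupID }),
      let src = a.makeBuffer(length: 64 << 20, options: .storageModePrivate),
      // Expose A's buffer to B as a "remote view", so B can blit from it
      // over the link instead of bouncing through system memory.
      let remote = src.makeRemoteBufferView(b),
      let dst = b.makeBuffer(length: src.length, options: .storageModePrivate),
      let queue = b.makeCommandQueue(),
      let cmd = queue.makeCommandBuffer(),
      let blit = cmd.makeBlitCommandEncoder()
else { fatalError("no Infinity Fabric peer group found") }

blit.copy(from: remote, sourceOffset: 0, to: dst, destinationOffset: 0, size: src.length)
blit.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()
```

So the "abstraction" exists, but as casperes says, the app still has to opt into it explicitly; nothing routes over the link automatically.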
 

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Could somebody explain to me the benefits of getting 2 Vegas vs getting the Vega Duo? I'm struggling to grasp the difference.

Two Vega II "solo" MPX modules soak up twice as much room. If you don't have anything else to put into the PCIe slots, that might not matter much.

But if you need to get one or two more x16 PCIe cards inside the system, the Duo is better (in MPX bay 1, which frees up MPX bay 2 for some x16 double-wide cards: one higher-power card and perhaps a second, a multiple-SSD PCIe carrier). Similarly if you want the 4-HDD storage MPX module. Once you stuff two full-sized MPX modules into the Mac Pro, there is only a single-width x16 slot left, and that single width may not work for the card you need.

Conceptually, each solo MPX module, with pretty much a dedicated fan of its own, could run cooler on longer runs.

As far as having two Vega-class GPUs hooked together via Infinity Fabric, there is zero difference (other than the physical space used). The Duo does allow you to get to the 4-GPU zone. If you think that is where you're heading eventually, it doesn't make sense to temporarily get a solo only to toss it later.


The Duo fully enables video provisioning to the host's four default TB3 ports. The solo shortchanges those TB controllers by providing only one DisplayPort stream (hence you can't use any >4K monitor on the regular system ports). The Duo's second GPU fills in that shortfall.
(For example, you can hook one XDR to the card edge, one XDR to an I/O-card TB3 port, and still use the HDMI socket on the Duo card, and possibly get another sub-4K video feed on a TB bus 0 port. With a solo you'll need a DP-to-HDMI dongle, because the solo's HDMI will be dead once all the video feeds are consumed running two locally attached XDRs.) The Duo drives more 5K and XDR monitors by itself.

Two solos can drive more total monitors: one XDR/5K on each, along with four other smaller ones just off the card edges.
 
Last edited:
  • Like
Reactions: OkiRun and Zwhaler

deconstruct60

macrumors G5
Mar 10, 2009
12,493
4,053
Is a single Vega suitable for 6k video workflows? Or do you think getting two or a duo is far more beneficial?

Do the GPUs have to do all the grunt work of RAW transcoding and/or decoding? If yes, then two are better than one.

If you're already in some form of ProRes, then Afterburner may be the better incremental spend.

Additionally, if you have older and/or hobbled software that can't see or take advantage of two GPUs, two won't help.
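For reference, here's a minimal sketch of what "seeing" the GPUs means in Metal terms; an app that only ever grabs the system default device will never touch the second GPU:

```swift
import Metal

// List every GPU macOS exposes. A Duo (or a pair of solos) shows up as
// two separate MTLDevices; an app must target each one explicitly.
for gpu in MTLCopyAllDevices() {
    print(gpu.name,
          "- VRAM budget: \(gpu.recommendedMaxWorkingSetSize >> 30) GB",
          "- headless: \(gpu.isHeadless)")
}
```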
I think this is wrong. Infinity Fabric Link works both ways: attaching two single Vega IIs or one Vega II Duo. ...

It also works for attaching two Duos. (The Duo has two connectors, so you can have 4-way sharing with at most two hops to the most remote GPU.)

I suppose the solo could also attach to just one of the connectors on the Duo (a 3-GPU lash-up). Not sure that would be one of the normally mapped modes, though. But if you put the primary "display" GPU in the middle, you could have two other GPUs to farm work out to.
 
Last edited:

Romanesco

macrumors regular
Jul 8, 2015
126
65
New York City
If Vega II Duo is supposed to double the performance, why are Geekbench scores reporting the same values for both Vega II and Vega II Duo? (~84,000)
 
  • Like
Reactions: OkiRun

bsbeamer

macrumors 601
Sep 19, 2012
4,313
2,713
Because the drivers are not (yet) optimized in Catalina and the GB5 tests are horribly inaccurate.
 
  • Like
Reactions: OkiRun

MaxYuryev

macrumors member
Oct 25, 2015
40
134
I think it's because the Duo still shows up as two GPUs in your system, not just one super-powerful one. In Geekbench 5 you can select which GPU you want to test from your system. The same thing happened with the 2013 Mac Pro, which had dual GPUs. If someone has a Duo, please download Geekbench 5 and test this.
 

Romanesco

macrumors regular
Jul 8, 2015
126
65
New York City
I think it's because the Duo still shows up as two GPUs in your system, not just one super-powerful one. In Geekbench 5 you can select which GPU you want to test from your system. The same thing happened with the 2013 Mac Pro, which had dual GPUs. If someone has a Duo, please download Geekbench 5 and test this.

That’s what I ended up going with. Expecting a Geekbench update. Vega II Duo here.
 
  • Like
Reactions: OkiRun

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
I think it's because the Duo still shows up as two GPUs in your system, not just one super-powerful one. In Geekbench 5 you can select which GPU you want to test from your system. The same thing happened with the 2013 Mac Pro, which had dual GPUs. If someone has a Duo, please download Geekbench 5 and test this.

Holy pancakes, Max. I didn't realise you had an account on here. Love your YT content, and very, very often refer people to your videos when they have questions here. Great work, mate. Occasionally there are some inaccuracies, but with such fast turnaround times for video output, small mistakes are bound to happen, and they never harm your overall points. Great to see you on MR.

Oh, and regarding what you posted right here: I agree with the assessment that that is the most likely explanation.

I will also add that GeekBench in general isn't really that good an estimator of GPU performance. GPUs have many, many "types of performance", and GeekBench is a narrow testing platform; the Metal tests especially aren't very well made. Metal allows for a lot of low-level optimisations if you know how the GPU you're working with is built and what its characteristics are, but those optimisations are in the hands of the developers. With something like OpenCL, more of this (though not all) was handled by the drivers.
The issue with optimisations in a benchmarking tool is that you don't want to over-optimise for any product, but you also don't want to under-optimise for anything. Everything also needs to be controlled, so where most developers would probably use Apple's pre-optimised Metal Performance Shaders for their compute kernels, GeekBench needs to write its own for consistency between tests. And though they of course try to avoid it, those will be written in a way that benefits certain designs over others, even if those designs aren't faster on a raw performance level. Without digging too deep a GPU-programming rabbit hole: their Metal shader kernels are generally quite un-optimised for everything, which also shows in the fact that they often report worse scores than OpenCL, even though Metal has much greater performance headroom.
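To make the MPS point concrete: a typical app would reach for Apple's per-GPU-tuned Metal Performance Shaders rather than hand-rolling a kernel, which is exactly what a cross-vendor benchmark can't afford to do. A minimal sketch of a matrix multiply through MPS (the size is arbitrary and the matrices are left uninitialised; it's just a skeleton):

```swift
import Metal
import MetalPerformanceShaders

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// Three n-by-n float32 matrices backed by plain MTLBuffers.
let n = 1024
let rowBytes = n * MemoryLayout<Float>.stride
let desc = MPSMatrixDescriptor(rows: n, columns: n, rowBytes: rowBytes, dataType: .float32)
func matrix() -> MPSMatrix {
    MPSMatrix(buffer: device.makeBuffer(length: n * rowBytes, options: .storageModePrivate)!,
              descriptor: desc)
}
let a = matrix(), b = matrix(), c = matrix()

// Apple's kernel, tuned per GPU by the framework. A benchmark like GB5
// ships its own generic kernels instead so results stay comparable
// across vendors; the cost is leaving per-architecture speed on the table.
let matmul = MPSMatrixMultiplication(device: device,
                                     transposeLeft: false, transposeRight: false,
                                     resultRows: n, resultColumns: n, interiorColumns: n,
                                     alpha: 1.0, beta: 0.0)

let cmd = queue.makeCommandBuffer()!
matmul.encode(commandBuffer: cmd, leftMatrix: a, rightMatrix: b, resultMatrix: c)
cmd.commit()
cmd.waitUntilCompleted()
```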

But again, given how complex GPUs are, with many different performance metrics depending on what is important, "real world" testing in the relevant applications is the best approach. A Vega 64 is an amazing option for quickly smashing through a huge number of hash functions. But in 3D rendering, a lot of the shaders will be idle, because the geometry engine can't keep up with all 64 CUs (which is also why the Radeon VII with only 60 CUs beats the Vega 64 in most tasks: the extra clock speed means more than the extra shaders when the extra shaders are so often doing nothing). Navi cards like the 5700 XT have overcome the geometry-engine roadblock, and thus perform much, much better in games, but don't perform as well as high-CU Vegas in pure compute, which is also why the Vega II is still so very strong even against the newer Navi architecture. Vega is incredible in pure compute, even if it falters in geometry. And with the massive HBM2 bandwidth the Vega II has, it's even more of a beast. Plus, if memory serves, it's a 64-CU edition of the second-generation Vega die: like a Radeon VII with four more compute units, more memory bandwidth, and the ability to hold its max boost frequency better, ultimately giving the Vega II roughly 500 GFLOPS more FP32 throughput on average.
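A quick back-of-the-envelope check on those numbers (peak FP32 is just 2 ops per FMA x shader count x clock; the clocks below are approximate published boost figures, so treat the output as ballpark only):

```swift
import Foundation

// Peak FP32 throughput = 2 ops (FMA) x shader count x clock.
let cards: [(name: String, shaders: Double, boostGHz: Double)] = [
    ("Vega 64",    4096, 1.55),
    ("Radeon VII", 3840, 1.80),
    ("Vega II",    4096, 1.72),
    ("5700 XT",    2560, 1.90),
]
for c in cards {
    print(c.name, String(format: "~%.1f TFLOPS", 2 * c.shaders * c.boostGHz / 1000))
}
// Roughly: Vega 64 ~12.7, Radeon VII ~13.8, Vega II ~14.1, 5700 XT ~9.7,
// i.e. a few hundred GFLOPS between Vega II and Radeon VII, and about a
// 45% pure-compute gap down to the 5700 XT.
```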

But now I'm really out on a tangent. Anyways, cool videos, Max. Love from Denmark
 
  • Like
Reactions: OkiRun

MaxYuryev

macrumors member
Oct 25, 2015
40
134
That’s what I ended up going with. Expecting a Geekbench update. Vega II Duo here.
So your Duo shows up as two GPUs, correct? (I have a single Vega II.) I highly doubt they will update the software, since dual GPUs have been available for a long time and this is the fifth version of Geekbench. If your software can utilize both GPUs well, then you effectively double your score of roughly 85,000 (mine is 86,200).
Holy pancakes, Max. I didn't realise you had an account on here. Love your YT content, and very, very often refer people to your videos when they have questions here. ...

Thanks for the kind words! We try our best :)

No tangent; I love reading about this stuff. I had the same basic understanding of Vega and Navi, but you explained it well.

Not sure if you’ve seen the new upgrade video on Max tech:
but towards the end we tested the 5700XT, Vega 64, and Radeon VII in the Mac Pro and the GB5 Metal tests are disappointing with the VII. I think it’s drivers..
 
Last edited:
  • Like
Reactions: JedNZ and OkiRun

Romanesco

macrumors regular
Jul 8, 2015
126
65
New York City
So your Duo shows up as two GPUs, correct? (I have a single Vega II.) I highly doubt they will update the software, since dual GPUs have been available for a long time and this is the fifth version of Geekbench. If your software can utilize both GPUs well, then you effectively double your score of roughly 85,000 (mine is 86,200).

Yup, shows up as two units of 32GB each. If that’s the case, Geekbench’s Metal Benchmark Chart scores are inaccurate as they list both GPUs as equal in performance, which is absurd.
 
  • Like
Reactions: OkiRun

casperes1996

macrumors 604
Jan 26, 2014
7,599
5,771
Horsens, Denmark
No tangent; I love reading about this stuff. I had the same basic understanding of Vega and Navi, but you explained it well.

Not sure if you've seen the new upgrade video on Max Tech, but towards the end we tested the 5700 XT, Vega 64, and Radeon VII in the Mac Pro, and the GB5 Metal tests are disappointing with the VII. I think it's drivers.

Of course I did ;). I regularly check my subscription feed and watch anything on either of your channels the moment it pops up ;).

It's hard to tell the exact reason without having the source code, but drivers are one plausible explanation.

However, I will state that the Vega II should actually, in pure compute, be almost 50% (about 46%) faster than the 5700 XT, before even considering memory bandwidth. That delta is calculated from AMD's own numbers. Between the Radeon VII and the Vega II, however, the performance difference should only be about 4-6% in favour of the Vega II. And of course Navi makes up for its lesser pure-compute power in other aspects, like better utilisation of the hardware in most workloads, but if you fully utilise the grunt in the cards, the Vega II has the most punch to give, though by a narrow margin over the Radeon VII.

Glad you enjoy my tangents ;).
Yup, shows up as two units of 32GB each. If that’s the case, Geekbench’s Metal Benchmark Chart scores are inaccurate as they list both GPUs as equal in performance, which is absurd.

Wait, what? Why would that be absurd? The two being equal in performance is exactly what I would expect, barring run-to-run margin of error.
 

bsbeamer

macrumors 601
Sep 19, 2012
4,313
2,713
An RX 5700 XT in an eGPU in Catalina scores 15-20% lower in GB5 than an RX 580 in Mojave. Drivers need to be fixed/tweaked for sure, but GB5 is not a good Metal test. It's inconsistent at best. You need to run multiple passes at timed intervals and take an average at this point; that's how variable it can be.
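A trivial harness for that kind of repeated-pass averaging (a sketch; runBenchmark() is a hypothetical stand-in for whatever GPU test is being timed):

```swift
import Foundation

// Hypothetical stand-in for the GPU benchmark being timed.
func runBenchmark() { /* dispatch the GPU work and wait for completion */ }

var samples: [Double] = []
for pass in 1...5 {
    let start = DispatchTime.now().uptimeNanoseconds
    runBenchmark()
    let seconds = Double(DispatchTime.now().uptimeNanoseconds - start) / 1e9
    samples.append(seconds)
    print("pass \(pass): \(seconds) s")
    Thread.sleep(forTimeInterval: 60)  // cool-down between timed passes
}
print("average: \(samples.reduce(0, +) / Double(samples.count)) s")
```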
 