
DrEGPU

Original poster
TL;DR: Remove the main outer plastic shell of the Zotac Trinity RTX 3090 and it will fit in the 2019 Mac Pro 7,1

My Mac Pro recently came in and, sure enough, it seems like almost every RTX 3090 out there is too big. I'm really more of a data science type, so I got 2x Zotac Trinity RTX 3090's because they were the easiest to get (they seem to be the least desirable for some reason) to use in my non-Mac Pro box. As I posted in the other thread, the Zotacs don't fit either. No surprise to anyone, I'm sure. Silly me. In that thread, someone suggested shaving off the black plastic rails in the Mac Pro to make more room. I was a little skittish about doing that on such an expensive machine. Instead, I wondered if it would be possible to reduce the size of the GPU. Sure enough, after careful examination you can see that there are a lot of plastic parts that don't seem to do much (maybe something to do with airflow/thermals?). So I started taking the card apart to see what was going on. It turns out that removing the main outer plastic casing and the rear (ass end?) plastic covering allows the card to fit, albeit a bit snugly. I'll try to explain what I did.

IMG-3256.jpg


Figure 1. The red dots mark the screws you need to remove to release the plastic housing, fans, and lighting from the heatsink. The fans and lighting are still connected to the GPU circuit board for electrical power, so take care that you don't rip any wires.





IMG-3257.jpg

IMG-3258.jpg

Figure 2. Make sure to remove the screws that hold the housing to the heatsink and not the ones that hold the fans to the housing!



IMG-3259.jpg

Figure 3. After the plastic housing comes off, remove the 12 screws you see on the plastic housing (right). The fans are mounted to an inner plastic bracket. It will take some effort and fidgeting to get the two separated. There are plastic grooves and notches that help keep the inner bracket secured to the outer housing.


IMG-3260.jpg

Figure 4. From left to right: inner plastic fan bracket (very flimsy!); GPU with heatsink; rigid outer plastic housing. At the very top is the rear plastic covering that adds a few millimeters of length (it's held on by 2 screws). Carefully peel off the top LED lighting and unplug it (there are no window panels to see it through in a Mac Pro anyway!).


IMG-3255.jpg

Figure 5. Remember the 6 screws in Figure 1? Screw the flimsy inner bracket back onto the heatsink and you're done! The back plate has a flexible plastic rear section. As you can see here, if you insert it just right, the back plate slides into the grooves meant for the MPX modules. The overhanging lip of the back plate is juuuust high enough that it isn't obstructed by the plastic rails of the Mac Pro.



IMG-3265.jpg
Figure 6. My crazy/weird/stupid setup. Slots 1 & 3 have the 2x RTX 3090's (bonus NVlink bridge pics below). This gives them 16 PCIe lanes each. Slot 5 houses my Highpoint 4x NVMe card for my fast RAID scratch/working partition. Slot 6 has 2x NVMe drives, one for a Linux partition and the other for more data (i.e., games). Slot 7 has an old Nvidia GT 710 to drive the display for macOS and Linux, since RTX GPUs aren't supported in macOS, and this frees up the RTX cards for compute purposes (Gnome/X takes up ~500MB of VRAM). Unfortunately, there wasn't a way to include the MPX RX580. Oh well! Anyway, above all that, you can see the Promise J2i with some longer-term data storage.
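For reference, here's a minimal sketch of how that compute/display split can be scripted on the Linux side, assuming PyTorch is installed; the GPU indices are illustrative and depend on how the driver enumerates the cards, so check `nvidia-smi -L` first.

```python
# Minimal sketch (assumes PyTorch under Linux; indices are hypothetical).
import os

# Hide the GT 710 from CUDA so frameworks only see the two 3090's.
# "0,1" assumes the 3090's enumerate first; confirm with `nvidia-smi -L`.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0,1")

import torch  # imported after setting the env var so it takes effect

print(torch.cuda.device_count())  # expect 2
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))
```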

Some additional thoughts: It's actually been quite pleasant to work on the Mac Pro. I've built many, MANY PC's over the years, and the Mac Pro 7,1 is probably one of the most well-planned, thoughtful designs I've worked on. Sure, I would have liked it if Apple had used an AMD Threadripper or Epyc CPU and had PCIe 4.0 slots. Of course, who wouldn't? It's also somewhat of a pain to unplug everything from the back in order to get inside the Mac Pro. That said, once you're inside, the thumb screws, the easy PCIe lock switch, and the nice cable management are a joy to work with. My setup gives the RTX GPU's and the NVMe drives all the PCIe lanes they need (yes, I'm aware of PLX switching, and this statement isn't 100% accurate). Only the GT 710 is somewhat hampered by an 8x electrical slot, but I doubt that's the limiting factor on performance for that GPU!


Bonus NVlink bridge pics! So the EVGA NVlink has a plastic casing that bumps into the power supply on the bottom GPU in slot 1. What to do? Time to take it apart too, lol!
IMG-3263.jpg

The top (black) part is just clipped on, with the actual NVlink bridge (middle) taped to the bottom (gray) piece. Just carefully pop off the top and gently peel the NVlink bridge off.

IMG-3264.jpg

The NVlink bridge itself is surprisingly simple. All I see are some capacitor-like things and one or two small chips. I'm not sure what those copper rectangles are for. Possibly space for extra circuitry? Is that where the enterprise/server stuff goes? It seems like this could easily be incorporated into a flexible cable so we don't have to deal with this 3-slot, 4-slot, 2-slot nonsense IMHO.
 
All that seems left is for you to post performance data.
 
What would you suggest? I think most performance estimates and benchmarks have already been done. Happy to try something out if it doesn't cost anything.

Edit: I will say that I can hear the GPU a lot better in the fairly open Mac Pro case than in the relatively closed Lian Li O11-XL, i.e., it's a lot louder!
 
I'm sorry, but how do you use the RTX for compute tasks in OS X if OS X doesn't support the RTX GPUs?
 
Sorry for the confusion. You're right, the RTX cards will only work in Linux and Windows. I don't need powerful graphics for macOS.
 
Does anyone know how to disable or hide the RTX GPU's from MacOS? It would be nice when dual booting to not have to keep plugging and unplugging HDMI/DP cables. It's pretty easy in Windows to just disable the MacOS GPU in device manager (right-click and disable). I found something about compiling a custom SSDT, but that was for hackintoshes. Should I try that method?

EDIT: Never mind. As long as I remember to change the inputs on the monitor, it's fine. Even if I forget, all I get is a blank screen at which point changing the input solves my problem.
 
I ran the V-Ray benchmarks but didn't fiddle with any settings. I used the Nvidia 461 drivers. I'll play around with V-Ray some more later on, but this is what I got on a first pass. No errors of any kind. The fans barely came on, so I'm not sure if tweaking could get higher scores/performance.

vray.PNG

vray-cuda.PNG

vray-RTX.PNG
 
Good test.
I'd like to know if the Mac Pro 7,1 power supply is enough to power 2x RTX 3090s for several minutes or hours without shutting down. Could you install and test Blender? Please load one of the test scenes (the Classroom scene is fine), set a very high resolution, for example 8000x4000, and set the render to RTX/OptiX. I use my Mac Pro 7,1 daily for 3D renders with two RTX 2080 Tis, but I need more VRAM and more speed; I'm waiting for the blower editions from Gigabyte, MSI, or Asus.

Thank you.
 
Assuming you don't have other high-power devices, the Mac Pro PSU can easily handle 2x RTX 3090's. I can tell you the two of them ran at near 100% utilization for ~14 hours straight on a deep learning test without a problem. Except the noise, of course!

I'll try the Blender test tomorrow.
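If anyone wants to reproduce that kind of sustained load without the exact deep learning job, a rough sketch (assuming PyTorch under Linux; the matrix size and duration are arbitrary) is just to keep big matrix multiplies running on every visible GPU:

```python
# Rough GPU soak sketch: repeated matrix multiplies on all visible GPUs.
# Not the actual workload from the post; just something that pegs utilization.
import time
import torch

DURATION_S = 60   # raise to hours for a real soak test
SIZE = 8192       # matrix dimension; adjust for VRAM/heat

mats = []
for dev in range(torch.cuda.device_count()):
    a = torch.randn(SIZE, SIZE, device=f"cuda:{dev}")
    b = torch.randn(SIZE, SIZE, device=f"cuda:{dev}")
    mats.append((a, b))

start = time.time()
while time.time() - start < DURATION_S:
    for a, b in mats:
        a @ b                        # queue a large kernel on each GPU

for dev in range(torch.cuda.device_count()):
    torch.cuda.synchronize(dev)      # wait for the queued work to drain
```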
 
Nice!
Don't you use one MPX GPU module for booting? I also need to use macOS. I have the default Radeon Pro 580X installed and no other PCIe devices. Probably the Mac Pro PSU is enough to power 3 GPUs.
 
If you look closely at all the PCIe devices I have in my machine, you can see that there's a GT 710 to drive the display in macOS. It can draw a 4K screen OK, but playing home/YouTube videos is painful. Since there wasn't much space left over after the 2x RTX's and all the NVMe SSD's, I decided on an AMD Radeon Pro WX 5100. It's a single-slot GPU and doesn't require a PCIe power plug (it's powered entirely from the PCIe slot). It's roughly the power of an RX 480, I believe. I thought about the WX 7100, since it's closer to an RX 580 and can easily be powered by the 6-pin PCIe cable in the Mac Pro, but it wasn't worth the extra cost for me.
 
I'd like to replace my 580X because 8GB isn't enough for 3ds Max and large scenes. I also use a 5K LG display, so the main VRAM gets filled rapidly by Windows 10 and the 3D apps. Fortunately I can use the two RTX 2080 Tis exclusively for CUDA or OptiX rendering within V-Ray Next. I have to work out whether the Mac Pro PSU can power one W5700X + two RTX 3090s. I also have one GTX 1080 Ti in a Razer eGPU and it works like a charm. As I posted last year, the Mac Pro can handle without any issues one main MPX GPU for boot and macOS, 2 internal GPUs for Windows, and 4 eGPUs (2 plugged into the MPX module and 2 on the top + back ports), to have full 20 PCIe lanes for each GPU.
That's why I purchased the Mac Pro 7,1; otherwise I could have bought a 4-GPU workstation, spending less money but getting less expandability.
I hope Apple will keep the Xeon Mac Pros, because CUDA and OptiX are the best GPU kernels available today for rendering engines. Also, I don't think Apple is capable of building a GPU more powerful than a theoretical RTX 4000.
We will see.
 
The W5700X has a TDP of 205W or something low like that. The Mac Pro PSU is fairly powerful at 1400W (more like 1280W after the efficiency rating). That easily leaves enough power left over for 2x RTX 3090's (probably).

I'm not familiar with your workflow, but I do know the Xeon CPU's only have 48 PCIe lanes [edit: 64 lanes, actually!]. Slots 1 & 3 each get a direct 16x connection. Every other slot, including the "stock" TB3 ports, goes through a PLX switch, which gives each slot/device some PCIe lanes, but they're all limited somehow (I don't understand exactly how the PCIe pool switching works) because they have to share the PCIe lanes the PLX chip has to the CPU. See page 11 of this document. Apple doesn't really describe how the TB3 ports on their GPU's are connected, other than that each controller gets 4x PCIe lanes from somewhere (i.e., the MPX part).

That said, I do agree with you about the Mac Pro's excellent expansion capabilities, and that Apple is unlikely to surpass Nvidia (or even AMD) in GPU technology or software stack anytime soon.
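For what it's worth, a quick way to confirm the negotiated link width on each card from the OS side is to query nvidia-smi; a small sketch below (the query fields are standard, but worth double-checking against `nvidia-smi --help-query-gpu` for your driver version):

```python
# Query each GPU's current vs. maximum PCIe link width via nvidia-smi.
import subprocess

out = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,pcie.link.width.current,pcie.link.width.max",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
print(out.stdout)
# Expect something like "GeForce RTX 3090, 16, 16" for the cards in slots 1 & 3.
```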
 
Yes, the MPX bus has 8 PCIe lanes + 16 for the GPU, with 4 for each TB3 port. It's not clear how those 8 lanes are handled by the MPX module; probably there is another internal switch. I could connect a second eGPU on this bus because the top and rear TB3 I/O are handled by the PLX switch.
Yesterday I had the chance to buy 2 MSI Aero RTX 3090s, but unfortunately there isn't enough room inside the Mac Pro 7,1: the length of this new MSI board is 300mm and it would not fit, and the PCIe connectors are on the back.
I'm waiting for the Asus or Gigabyte Turbo versions so I can finally move the two RTX 2080 Tis into two separate Razer boxes.
 
Not really Mac related, but since my GPU's are in a Mac Pro... I guess those 300mm fans in the front are good at moving air from front to back, but RTX 3090's can put out some heat! I guess the back end of my machine was too close to the wall, with a mess of cables back there too. Basically, all that heat melted the plastic insulation of a power cord going to my eGPU! It's a pretty obvious thing I should've foreseen: all the heat has got to go somewhere! Just be aware!

IMG-3443.jpg
 
From what I can find, that cable shouldn't start melting until it hits around 175 degrees F (75 degrees C).

If the air coming out of the back of your Mac is 175+ degrees F, you have a major, major problem and need to worry about ALL the components inside your Mac. Think about this: your power supply is INSIDE your Mac Pro case... which is also filled with tons of tiny electrical wires. If the air coming out of your case is that hot, then the air inside your case is hotter, and you are at high risk of failure and fire.

75C is the temperature that an average CPU die reaches under a pretty decent thermal load with a heat sink on it. Most of the exposed components inside your case are not designed to operate at those kinds of ambient temperatures, and a lot of plastics will start melting at a lower temperature than that. I'm actually not sure how the air inside your case could possibly be that hot, so I'd also be suspicious that the cable melted from the inside out due to a power overload rather than hot air from your case. Or, if the cables were touching the case and your 3090 heatsinks are also contacting the case, the heat could be transferring directly from heatsink to case... which is also not good.

Unrelated, but messing with those 3090's like that is not going to be good for their thermals. You need to run a diagnostic tool in Windows and see what the temps are on both of those under load...

Unrelated part 2: If your workflow is such that installing two 3090s into a Mac Pro is necessary, it also means you are working primarily in Windows, and buying a Mac Pro in the first place seems pretty ill advised :-S
 
The W5700X has a TDP of 205W or something low like that. The Mac Pro PSU is fairly powerful at 1400W (more like 1280W after the efficiency rating). That easily leaves enough power left over for 2x RTX 3090's (probably).

I'm not familiar with your workflow, but I do know the Xeon CPU's only have 48 PCIe lanes [edit: 64 lanes, actually!]. Slots 1 & 3 each get a direct 16x connection. Every other slot, including the "stock" TB3 ports, goes through a PLX switch, which gives each slot/device some PCIe lanes, but they're all limited somehow (I don't understand exactly how the PCIe pool switching works) because they have to share the PCIe lanes the PLX chip has to the CPU. See page 11 of this document. Apple doesn't really describe how the TB3 ports on their GPU's are connected, other than that each controller gets 4x PCIe lanes from somewhere (i.e., the MPX part).

That said, I do agree with you about the Mac Pro's excellent expansion capabilities, and that Apple is unlikely to surpass Nvidia (or even AMD) in GPU technology or software stack anytime soon.
Thank you for the information. However, I searched the Internet and found recommendations for a 1600W or higher power supply to run two RTX 3090s.

Have you ever had a problem with your Mac Pro while running two 3090s?
 
Does anyone know how to disable or hide the RTX GPU's from MacOS? It would be nice when dual booting to not have to keep plugging and unplugging HDMI/DP cables. It's pretty easy in Windows to just disable the MacOS GPU in device manager (right-click and disable). I found something about compiling a custom SSDT, but that was for hackintoshes. Should I try that method?

EDIT: Never mind. As long as I remember to change the inputs on the monitor, it's fine. Even if I forget, all I get is a blank screen at which point changing the input solves my problem.
This should be helpful for hiding 3090 when booting under MacOS.
 
Thank you for the information. However, I searched the Internet and found recommendations for a 1600W or higher power supply to run two RTX 3090s.

Have you ever had a problem with your Mac Pro while running two 3090s?
No, no problems. Most RTX 3090's are 300-350W, so 2x of them would be 700W. We could allow for transient spikes and say 800W for the GPU's. So no, you don't need a 1600W PSU. In fact, most home circuits (in the US, anyway) can't reliably handle much more than that anyway. Even with a full load on an overclocked, power-hungry CPU, 1600W is more than enough. My Mac Pro has a 1400W PSU (I think something like 1280W after taking efficiency ratings into account) and I haven't had any problems running the CPU, 2x RTX 3090's, Radeon 580X, 4x NVMe SSD's, and 3x SATA SSD's.
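Rough back-of-the-envelope numbers behind that claim (the non-GPU figure is a guess, not a measurement):

```python
# Back-of-the-envelope power budget; all figures are nominal/guessed, not measured.
gpu_board_power_w = 350    # per RTX 3090, nominal
n_gpus            = 2
spike_margin      = 1.15   # rough allowance for transient spikes
rest_of_system_w  = 350    # guess: Xeon + 580X + drives + fans

psu_usable_w = 1280        # 1400W rating minus the efficiency allowance above

peak_w = gpu_board_power_w * n_gpus * spike_margin + rest_of_system_w
print(f"estimated peak draw: {peak_w:.0f} W of {psu_usable_w} W usable")
# -> roughly 1155 W, comfortably within the 7,1's supply
```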
 
This should be helpful for hiding 3090 when booting under MacOS.
Joevt wrote a kext you can load that allows the Mac Pro sleep/wake to be functional.
 
From what I can find, that cable shouldn't start melting until it hits around 175 degrees F (75 degrees C).

If the air coming out of the back of your Mac is 175+ degrees F, you have a major, major problem and need to worry about ALL the components inside your Mac. Think about this: your power supply is INSIDE your Mac Pro case... which is also filled with tons of tiny electrical wires. If the air coming out of your case is that hot, then the air inside your case is hotter, and you are at high risk of failure and fire.

75C is the temperature that an average CPU die reaches under a pretty decent thermal load with a heat sink on it. Most of the exposed components inside your case are not designed to operate at those kinds of ambient temperatures, and a lot of plastics will start melting at a lower temperature than that. I'm actually not sure how the air inside your case could possibly be that hot, so I'd also be suspicious that the cable melted from the inside out due to a power overload rather than hot air from your case. Or, if the cables were touching the case and your 3090 heatsinks are also contacting the case, the heat could be transferring directly from heatsink to case... which is also not good.

Unrelated, but messing with those 3090's like that is not going to be good for their thermals. You need to run a diagnostic tool in Windows and see what the temps are on both of those under load...

Unrelated part 2: If your workflow is such that installing two 3090s into a Mac Pro is necessary, it also means you are working primarily in Windows, and buying a Mac Pro in the first place seems pretty ill advised :-S
Sorry for the confusion, but the RTX 3090's and everything else inside have been absolutely fine. The problem with the slightly melty cables was that the Mac Pro is a relatively closed system and there's nowhere for the hot air to exhaust except the back. In my case, I stupidly had the Mac Pro too close to the wall, with the power cables wedged there too. With poor ventilation, the heat just kept building up. Under full load (for >24hrs), the bottom RTX never goes above 60C and the upper one never goes past 70C. The Mac Pro really does have good air-moving capabilities, as long as you allow it to expel the hot air somewhere rather than letting it collect and build up.
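If you want to log temps like that yourself during a long run, here's a small sketch using the nvidia-ml-py bindings (assumes `pip install nvidia-ml-py`; works under Linux or Windows):

```python
# Log GPU core temperatures every 30 seconds for ten samples.
import time
import pynvml

pynvml.nvmlInit()
try:
    handles = [pynvml.nvmlDeviceGetHandleByIndex(i)
               for i in range(pynvml.nvmlDeviceGetCount())]
    for _ in range(10):
        temps = [pynvml.nvmlDeviceGetTemperature(h, pynvml.NVML_TEMPERATURE_GPU)
                 for h in handles]
        print(time.strftime("%H:%M:%S"), temps)   # e.g. "13:05:01 [58, 67]"
        time.sleep(30)
finally:
    pynvml.nvmlShutdown()
```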
 
After extended testing and attempts, I haven't been able to get SLI/NVlink working in Windows Boot Camp. I think enabling this would make WSL2 much more powerful, and possibly (maybe!) remove the need to restart into Linux for GPU-accelerated tasks. NVlink is enabled when I boot into Linux, though.
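For anyone checking the same thing: besides `nvidia-smi nvlink -s`, a quick framework-level peer-access check from the Linux side looks something like this (assuming PyTorch; note that peer access can also run over PCIe, so it's a hint rather than proof the bridge is active):

```python
# Check whether each pair of visible GPUs can access each other's memory directly.
import torch

n = torch.cuda.device_count()
for i in range(n):
    for j in range(n):
        if i != j:
            ok = torch.cuda.can_device_access_peer(i, j)
            print(f"GPU {i} -> GPU {j}: peer access {'yes' if ok else 'no'}")
```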
 