With heat sinks on the 4 NVMe drives and on the PCIe switch, what is the difference? I can't add new heat sinks and still fit the enclosure back on.

The previous message I quoted didn't mention anything about adding heat sinks to the NVMe drives; based on that I assumed you weren't covering them. If you are covering them and putting one on the PCI-E switch, then you should be good to go.
 
If you are covering them and putting one on the PCI-E switch, then you should be good to go.

Thanks - I think my posts have gotten a little scattered.

Just waiting for a larger heat sink for the PCIe switch, doing what Lenny did.

Also, what does the 3rd wire on the fan control? Speed sensor? Locked rotor?
Would there be any benefit to getting a quieter fan and mounting it next to the heat sink on the switch?

Screen Shot 2020-01-14 at 9.13.52 AM.png
 
That's the PWM wire for fan speed. If you have a new enough board and are running the HighPoint drivers, then you should be able to kick the fan speed down manually via the web utility.
The board is from a few days ago, via Amazon sold by HighPoint. However, when I open the localhost web interface, it doesn't show me the fan controls.
 
The board is from a few days ago, via Amazon sold by HighPoint. However, when I open the localhost web interface, it doesn't show me the fan controls.

There are a LOT of older boards in circulation still. I bought two from Amazon in December and got two different versions. One with a 2-pin fan, the other with a 3-pin fan. Consistency: not a HighPoint trait.
 
There are a LOT of older boards in circulation still. I bought two from Amazon in December and got two different versions. One with a 2-pin fan, the other with a 3-pin fan. Consistency: not a HighPoint trait.

I also wonder about using a fan/heat sink on that switch. These fan/heat sinks I found from Wakefield-Vette have three wires coming out. The rep at Newark said the third wire is PWM.
Screen Shot 2020-01-14 at 9.41.56 AM.png
 
I also wonder about using a fan/heat sink on that switch.

I think you're over-optimizing for little to no gain. If you put a decent copper heat sink on the PCI switch, that should do the deed. As I've written: I have the fan completely removed from both of my cards, but the shroud is still attached. The difference is that I've removed the backplate from the shroud, to create a tunnel of sorts, which I picked up from someone else here. The shroud is still acting as a giant heat sink for the switch and drives, and there's air flowing over/through the unit.
 
I think you're over-optimizing for little to no gain. If you put a decent copper heat sink on the PCI switch, that should do the deed. As I've written: I have the fan completely removed from both of my cards, but the shroud is still attached. The difference is that I've removed the backplate from the shroud, to create a tunnel of sorts, which I picked up from someone else here. The shroud is still acting as a giant heat sink for the switch and drives, and there's air flowing over/through the unit.
I ordered the card and still haven't received it, and I'm curious what size heat sink I should get for the PCIe switch. Besides that, what is the gap between the M.2 cards and the shroud? I'm afraid the M.2 heat sink I'm going to order will be too thick.

p-0047_eklaunch_ek-m.2_nvme_heatsink_silver_tl.fnl.jpg


1579014299923.jpeg
 
I think you're over-optimizing for little to no gain. If you put a decent copper heat sink on the PCI switch, that should do the deed. As I've written: I have the fan completely removed from both of my cards, but the shroud is still attached. The difference is that I've removed the backplate from the shroud, to create a tunnel of sorts, which I picked up from someone else here. The shroud is still acting as a giant heat sink for the switch and drives, and there's air flowing over/through the unit.


Thoughts on this one? 40x40x11 BNTECHGO
Any recommendations would be great. Lots of aluminum out there, but I'm finding few copper options.

Screen Shot 2020-01-14 at 10.24.10 AM.png
 
I think you're over-optimizing for little to no gain. If you put a decent copper heat sink on the PCI switch, that should do the deed. As I've written: I have the fan completely removed from both of my cards, but the shroud is still attached. The difference is that I've removed the backplate from the shroud, to create a tunnel of sorts, which I picked up from someone else here. The shroud is still acting as a giant heat sink for the switch and drives, and there's air flowing over/through the unit.
The blades still need some form of heat dissipation. I tested with the shroud completely removed and no heat sinks on the blades. They quickly got near the 70C limit with heavy usage. Using your method (shroud on, fan removed, backplate of the shroud removed), the blades have stayed below 60C even with extended write testing. Read-only workloads seem to be easier on the blades - it's when you're doing write-heavy work that they get hot.
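As an aside, for anyone who wants to watch blade temps during that kind of write testing, here's a minimal sketch using smartctl. It assumes smartmontools is installed (e.g. via `brew install smartmontools`), and the /dev/disk4 identifier is just a placeholder for one of your blades:

```python
import subprocess

# Minimal sketch: read an NVMe blade's temperature with smartctl.
# Assumes smartmontools is installed; /dev/disk4 is a placeholder --
# substitute your own identifier from `diskutil list`. May need sudo.
def blade_temp_celsius(device="/dev/disk4"):
    out = subprocess.run(
        ["smartctl", "-a", device],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.splitlines():
        if line.startswith("Temperature:"):
            # e.g. "Temperature:                        41 Celsius"
            return int(line.split()[1])
    return None

print(blade_temp_celsius())
```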
 
I got the 7101A today with the new firmware and v2 board, which enables fan speed control. No problems with installation on a Mac Pro 7,1 (disabling secure boot, etc.).

With the fan at full the SSD temps are 89F - with the fan off, they rise to 105F.

I created a RAID0 using 3x 2TB drives (saving for a 4th!) - speeds below...
 

Attachments
  • Screen Shot 2020-01-14 at 2.03.34 PM.png
I got the 7101A today with the new firmware and v2 board, which enables fan speed control. No problems with installation on a Mac Pro 7,1 (disabling secure boot, etc.).

With the fan at full the SSD temps are 89F - with the fan off, they rise to 105F.

I created a RAID0 using 3x 2TB drives (saving for a 4th!) - speeds below...

Did you try taking the end panel off by the fan to allow the Mac Pro to blow air into the unit? Curious what effect that has when the fan is turned off.
 
Did you try taking the end panel off by the fan to allow the Mac Pro to blow air into the unit? Curious what effect that has when the fan is turned off.
Actually I have the shroud completely removed - the preinstalled heat sinks on the SSDs are too big to fit under it, and I'd rather keep them than install the HighPoint heatsink, so those fan-off temps are with the Mac Pro fans blowing air over the card. The card is in slot 4.
 
Actually I have the shroud completely removed - the preinstalled heat sinks on the SSDs are too big to fit under it, and I'd rather keep them than install the HighPoint heatsink, so those fan-off temps are with the Mac Pro fans blowing air over the card. The card is in slot 4.

My concern about that would be the PLX switch overheating; it does use the shroud as a heatsink. You can't monitor the switch temperature either as it doesn't have a monitor on it AFAIK.
 
My concern about that would be the PLX switch overheating; it does use the shroud as a heatsink. You can't monitor the switch temperature either as it doesn't have a monitor on it AFAIK.
Yes - I've got the fan on medium as a precaution at the moment until I can find a suitable heatsink for the switch. Hopefully then I can just switch the fan off, since the SSD temps are well below 140F.
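Since this thread mixes Fahrenheit and Celsius, here's a quick conversion of the figures being thrown around (the ~70C PLX limit works out to 158F):

```python
def f_to_c(f):
    """Fahrenheit to Celsius."""
    return (f - 32) * 5 / 9

# The figures quoted in this thread:
print(f_to_c(89))   # ~31.7C  (SSD temps, card fan at full)
print(f_to_c(105))  # ~40.6C  (card fan off)
print(f_to_c(140))  # 60.0C   (the ceiling mentioned above)
```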
 
I got the 7101A today with the new firmware and v2 board, which enables fan speed control. No problems with installation on a Mac Pro 7,1 (disabling secure boot, etc.).

With the fan at full the SSD temps are 89F - with the fan off, they rise to 105F.

I created a RAID0 using 3x 2TB drives (saving for a 4th!) - speeds below...

Did you get the card from the HighPoint eStore or somewhere else?
 
Did you get the card from the HighPoint eStore or somewhere else?
Direct from HighPoint to have the best chance of getting the latest model.

I'm seeing slightly faster speeds with the RAID formatted as HFS instead of APFS. Any benefits to APFS?

DiskSpeedTest.png
 
Direct from HighPoint to have the best chance of getting the latest model.

I'm seeing slightly faster speeds with the RAID formatted as HFS instead of APFS. Any benefits to APFS?

Are you formatting the APFS volume with encryption? If so, don't - it slows things down. APFS is newer, and long-term support for HFS will get phased out, so I just personally avoid HFS.

Also, did you set up the RAID array with the HighPoint software?

What NVMe cards are you using?
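(Re: the encryption question above - if you're not sure whether an existing APFS volume was created encrypted, `diskutil apfs list` reports it per volume. A rough sketch; volume names in the output will obviously differ:)

```python
import subprocess

# Sketch: list APFS volumes and whether encryption (FileVault) is on.
# `diskutil apfs list` prints a block per volume with lines such as
# "Name: Scratch (Case-insensitive)" and "FileVault: No".
out = subprocess.run(
    ["diskutil", "apfs", "list"],
    capture_output=True, text=True, check=True,
).stdout
for line in out.splitlines():
    stripped = line.strip(" |")
    if stripped.startswith(("Name:", "FileVault:")):
        print(stripped)
```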
 
My concern about that would be the PLX switch overheating; it does use the shroud as a heatsink. You can't monitor the switch temperature either as it doesn't have a monitor on it AFAIK.

Wouldn't the cooling from the Mac Pro's fans be as effective as, or more so than, the built-in 7101A fan or heat sink? Or do you think it's OK to turn off the fan because the Mac Pro's fans cooling the shroud helps cool the switch/blades? Just curious because I'm not knowledgeable on this stuff and my card arrives in a few days!
 
Are you formatting the APFS volume with encryption? If so, don't - it slows things down. APFS is newer, and long-term support for HFS will get phased out, so I just personally avoid HFS.

Also, did you set up the RAID array with the HighPoint software?

What NVMe cards are you using?
No encryption. The RAID was set up using the HighPoint UI. I did test a simple Apple RAID (with Disk Utility) before I installed the HighPoint drivers and it was about 20% slower.
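(For anyone who wants to reproduce that Disk Utility comparison from Terminal, Apple's software RAID can also be scripted with `diskutil appleRAID`. A sketch below - the set name, filesystem, and disk identifiers are all placeholders, so check `diskutil list` first:)

```python
import subprocess

# Sketch: build a striped (RAID 0) set with Apple's built-in software RAID,
# the approach measured ~20% slower than the HighPoint-managed array above.
# Everything here is a placeholder -- run `diskutil list` and substitute
# your own blades. This erases the member disks, so back up first!
subprocess.run(
    ["diskutil", "appleRAID", "create", "stripe",
     "FastScratch",                # name for the new set (example)
     "JHFS+",                      # filesystem to format it with
     "disk4", "disk5", "disk6"],   # member disks (placeholders)
    check=True,
)
```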

The SSDs are Gigabyte AORUS NVMe Gen4 M.2 2TB PCI-Express 4.0 drives, which I transferred over from a PC. Individually they're capable of up to 5GB/s reads and 4.4GB/s writes, but the PCIe 3.0 interface caps them at around 3GB/s.
 
Wouldn't the cooling from the Mac Pro's fans be as effective as, or more so than, the built-in 7101A fan or heat sink? Or do you think it's OK to turn off the fan because the Mac Pro's fans cooling the shroud helps cool the switch/blades? Just curious because I'm not knowledgeable on this stuff and my card arrives in a few days!

The shroud has heat sinks built into it to cool both the NVMe cards and the PLX switch; the switch is going to run hot and can be damaged above 70C. Without the shroud there's nothing to draw heat from the PLX switch and dissipate it.

If you remove the front of the shroud as I showed in an earlier post, it allows the Mac Pro fans to pull airflow through the card from end to end, similar to how the MPX modules are designed, although not as efficient. That said, it keeps the card cool enough that it should be fine and in limited testing it seems to consistently drop the temperature about 5 C more than leaving the end of the shroud on.
No encryption. The RAID was set up using the HighPoint UI. I did test a simple Apple RAID (with Disk Utility) before I installed the HighPoint drivers and it was about 20% slower.

The SSDs are Gigabyte AORUS NVMe Gen4 M.2 2TB PCI-Express 4.0 drives, which I transferred over from a PC. Individually they're capable of up to 5GB/s reads and 4.4GB/s writes, but the PCIe 3.0 interface caps them at around 3GB/s.

OK, they may be PCIe 4.0, but they appear to be slower than the Samsung EVO Plus drives. It's not just the bus speed - card design and so on matter a lot. With my HighPoint card I'm getting 12GB/s with four Samsung 970 EVO Plus cards in RAID 0, using the HighPoint software and APFS.
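For what it's worth, those numbers line up with back-of-the-envelope PCIe 3.0 math (approximate - this ignores protocol overhead beyond the line encoding):

```python
# Back-of-the-envelope PCIe 3.0 throughput: 8 GT/s per lane with
# 128b/130b encoding, converted to gigabytes per second.
lane_gb_s = 8 * (128 / 130) / 8       # ~0.985 GB/s usable per lane

per_drive   = 4 * lane_gb_s     # each M.2 slot is x4  -> ~3.9 GB/s ceiling
card_uplink = 16 * lane_gb_s    # the card sits in an x16 slot -> ~15.8 GB/s

# ~3.9 GB/s is why Gen4 drives "cap" near 3 GB/s each on this card, and
# ~15.8 GB/s leaves room for four striped drives to reach ~12 GB/s.
print(round(per_drive, 2), round(card_uplink, 2))
```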
 
The shroud has heat sinks built into it to cool both the NVMe cards and the PLX switch; the switch is going to run hot and can be damaged above 70C. Without the shroud there's nothing to draw heat from the PLX switch and dissipate it.

If you remove the front of the shroud as I showed in an earlier post, it allows the Mac Pro fans to pull airflow through the card from end to end, similar to how the MPX modules are designed, although not as efficient. That said, it keeps the card cool enough that it should be fine and in limited testing it seems to consistently drop the temperature about 5 C more than leaving the end of the shroud on.

So just to clarify, you think taking the front of the shroud off is OK/beneficial? That's all I had planned to do personally, and then I was going to reduce the fan to silent or low speed. I'll mostly be playing 4K/8K video files, so I won't be running the cards very hot with high GB/s transfers, etc.
 
So just to clarify, you think taking the front of the shroud off is OK/beneficial? That's all I had planned to do personally, and then I was going to reduce the fan to silent or low speed. I'll mostly be playing 4K/8K video files, so I won't be running the cards very hot with high GB/s transfers, etc.

No, I suggest leaving the shroud ON but simply taking off the front plate near the fan. It's held on with a few screws. That keeps the heat sinks on the cards and the PLX switch, but allows air to flow through the shroud from the Mac Pro's fans. Make sense?

EDIT: See this post I made earlier for a picture of the shroud removed, showing the plate on the end that's held in with some screws. Remove that plate.
 