
Diamond Dave
macrumors member, original poster
Edinburgh, Scotland, UK
I recently got my new Mac Pro up & running (a Mid 2012), and because in the short term I've moved my hard drives over from my previous Mac Pro (a Mid 2009), I currently have the following installed:

2 x 1TB Hard Drives, configured as a single 2TB volume via RAID 0. This volume is the boot volume, and also contains all my other files. It's around 1.1TB full. Current OS is 10.11.6 (El Capitan).

1 x 8TB Hard Drive, split into a 2TB volume & a 6TB volume. The 2TB volume is used as a cloned, bootable backup of the boot volume via Carbon Copy Cloner. The 6TB volume is empty.

1 x 512GB SATA SSD. This came with the new Mac, and is fitted in the lower optical drive bay. Currently unused, but I upgraded the OS on it to 10.14.1 (Mojave) and it boots the Mac without issue.

My plan is to buy an NVMe PCIe card and a compatible 2TB SSD blade (or 2 or more smaller blades that total 2TB in capacity) and to transfer my entire 1.1TB of OS, Apps & data to it, and to boot from this set up. I also plan to upgrade the OS that I actually use from El Capitan to Mojave.
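(For the actual transfer, Carbon Copy Cloner, which is already in use for the existing backup, can clone the RAID volume straight onto the new blade; the built-in asr tool is another option. A rough sketch only, with hypothetical volume names, and it assumes the clone is run while booted from a different volume:)

  diskutil list                                    # find the source RAID volume and the new NVMe volume
  sudo asr restore --source /Volumes/BootRAID \
       --target /Volumes/NVMeBoot --erase          # erases the target, then clones the source onto it
  sudo bless --mount /Volumes/NVMeBoot --setBoot   # make the freshly cloned volume the startup disk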

I don't reckon I'll need more than 2TB of total storage in the short or medium term, and I plan to keep this Mac Pro for very many years. Either until it breaks & can't be repaired, and/or until Mojave is too old to function with current browsers / security threats, and/or until the upcoming modular Mac Pro 7,1 has been out for several years & second hand ones are reasonably priced.

I'll keep the 8TB HD to back up to, but with all my data on the new NVMe system, I can then move the 2 x 1TB HDs back to the older Mac Pro, along with the 512GB SATA SSD and sell that Mac.

As part of upgrading the existing SSD to Mojave, I now have Boot ROM Version 140.0.0.0.0, so my understanding is that I now have native NVMe boot capability.
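(For anyone else following along, the firmware version can be checked in About This Mac > System Report, or from Terminal; the output line below is just an example of what a Mojave-updated 5,1 reports.)

  system_profiler SPHardwareDataType | grep "Boot ROM"
  #   Boot ROM Version: 140.0.0.0.0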

Having read a lot of threads like these:

https://forums.macrumors.com/threads/highpoint-7101a-pcie-3-0-ssd-performance-for-the-cmp.2124253

https://forums.macrumors.com/threads/pcie-m-2-nvme-on-macpro.2030791

as well as others on the subject, the consensus seems to be that the best / fastest NVMe PCIe card that's available, as well as being cMP compatible and bootable, is the HighPoint SSD7101A-1:

http://highpoint-tech.com/USA_new/series-ssd7101a-1-overview.htm

which can be had here in the UK for around £420:

https://www.scan.co.uk/products/hig...-x16-raid0-1-5-10-controller-up-to-25000-mb-s

Am I correct in thinking that if I coupled this with a Samsung 970 EVO SSD:

https://www.samsung.com/semiconductor/minisite/ssd/product/consumer/970evo

which sells for around £508 for the 2TB capacity:

https://www.amazon.co.uk/dp/B07CJ58654

that would give me pretty much the best, fastest setup that money can buy? Short of spending ridiculous amounts on the likes of this, I mean?

https://www.amazon.co.uk/dp/B01M210JMG

I don't have spare money to burn, but I'd rather invest in having all my data accessible in the fastest way possible, and also keep it all on one volume (backed up of course) than faff about with part SSD / part HD storage, or fusion drives, or buy new storage that uses out of date / un-optimal tech, etc.

Also given that the rest of the Mac (graphics card apart) is pretty much upgraded to the maximum it can be (dual 3.46GHz processors & 96GB of RAM) I don't want any of my data to be stored on hardware that's in any way slower than it needs to be and thus cause a bottleneck.

I'm very aware that what I'm proposing will seem extravagant & a waste of money to many, but I just can't abide having my OS, Apps & data on more than one volume.

I have to put up with accessing data on 8 different remote volumes at my work (as well as having GBs of data on the Mac's internal HD), and many years ago when I had an MDD G4 at home, everything was scattered across 3 different drives & that drove me crazy as well.

I've decided to put everything - all 2TB capacity mounted as one bootable volume - on an NVMe system. I just need advice as to what combination of host card & storage module(s) would be best.

Many thanks for any insight anyone can provide.

P.S.

I'm aware that a newer, similar but RAID-capable version of the Highpoint is imminent:

http://highpoint-tech.com/USA_new/series-ssd7102-overview.htm

However according to this post:

https://forums.macrumors.com/thread...nce-for-the-cmp.2124253/page-10#post-26700613

it won't be Mojave compatible, at least not initially.
 
I am somewhat overwhelmed by the various options of SSD upgrades for a Mac Pro 5,1. In my current, limited understanding, I can either purchase a SATA SSD that would go in one of the four SATA-II bays, or one or more mSATA SSDs that would go on a PCIe card, or one or more M.2 SSDs that would go on a PCIe card. I take it for granted that a SATA SSD would be faster than an HDD on my computer, but not as fast as an mSATA or M.2 solution, but I can't really figure out whether the investment in PCIe solutions would actually be worth it in terms of reliability, durability and speed. For instance, does TRIM work on all these solutions?

I also take it for granted that an array of M.2 SSDs on a HighPoint 7101A would be blazingly fast in comparison with a SATA SSD, and even in comparison with a cheaper mSATA solution, but what does that extra speed entail in practice? Would the M.2 solution mean that the computer can boot in, say, twenty seconds, whereas a SATA SSD would take one or two minutes? Once we reach the Desktop and the usual applications have started, is the workflow markedly different between the various solutions?

If I want a 2TB SSD solution that can boot Mojave (my Boot ROM is 140.0.0.0.0), what would a realistic price estimate be for SATA, mSATA and M.2 solutions, including the relevant PCIe cards? Is it foreseeable that M.2 prices will fall in the near future?
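(On the TRIM question: for third-party SATA SSDs macOS leaves TRIM off by default, but it can be switched on with the built-in trimforce tool; how the NVMe/PCIe options behave is worth checking in the threads linked further down.)

  sudo trimforce enable    # enables TRIM for third-party SATA SSDs; asks for confirmation, then reboots the Mac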
 
If you want the best, a 970 Pro + HighPoint 7101A should be the best combination at this moment.

Thanks h9826790, I suspected as much. Good to have reassurance from someone with your expertise though.
I am somewhat overwhelmed by the various options of SSD upgrades for a Mac Pro 5,1. In my current, limited understanding, I can either purchase a SATA SSD that would go in one of the four SATA-II bays, or one or more mSATA SSDs that would go on a PCIe card, or one or more M.2 SSDs that would go on a PCIe card. I take it for granted that a SATA SSD would be faster than an HDD on my computer, but not as fast as an mSATA or M.2 solution, but I can't really figure out whether the investment in PCIe solutions would actually be worth it in terms of reliability, durability and speed. For instance, does TRIM work on all these solutions?

I also take it for granted that an array of M.2 SSDs on a HighPoint 7101A would be blazingly fast in comparison with a SATA SSD, and even in comparison with a cheaper mSATA solution, but what does that extra speed entail in practice? Would the M.2 solution mean that the computer can boot in, say, twenty seconds, whereas a SATA SSD would take one or two minutes? Once we reach the Desktop and the usual applications have started, is the workflow markedly different between the various solutions?

If I want a 2TB SSD solution that can boot Mojave (my Boot ROM is 140.0.0.0.0), what would a realistic price estimate be for SATA, mSATA and M.2 solutions, including the relevant PCIe cards? Is it foreseeable that M.2 prices will fall in the near future?

From what I've read, your understanding is correct. There's a sliding scale from slow HDDs, to relatively slow SATA SSDs connected via the optical drive cables and housed in empty optical drive bays, to SATA SSDs connected via PCIe cards, and finally to M.2 SSDs connected via PCIe cards. I'm no expert though - I'm just going by what I've read in these forums.

One of the threads I mentioned in my original post:

https://forums.macrumors.com/threads/highpoint-7101a-pcie-3-0-ssd-performance-for-the-cmp.2124253

has loads & loads of speed comparison images & links, so you could do worse than to study them all.

I hope that helps.
I've been browsing alternatives to the HighPoint SSD7101A-1 just now on Amazon, and I'm shocked by how much cheaper the alternatives are. Or at least cards that look (to my uneducated eyes) to be very similar.

Taking this one as an example - it's less than a 10th of the price of the Highpoint at £36.99:

https://www.amazon.co.uk/dp/B0757XFMJM

Would it be compatible with the cMP and allow the native booting I'm looking for?

I appreciate it'll surely be much slower, but it'll also surely be much faster than a relatively slow SATA SSD connected via an optical drive cable.

According to the official specifications:

https://www.asus.com/uk/Motherboard-Accessories/HYPER-M-2-X16-CARD/specifications

It has "data transfer rates up to 128 Gbps.", but then that's presumably for some sort of all-4-SSDs-at-once RAID type setup. There's no mention I can find of the per-SSD rate.

Conversely, according to Highpoint's website:

http://highpoint-tech.com/USA_new/series-ssd7101a-1-specification.htm

their card has "Data Transfer Rates" of "8GT/ 8Gbps per lane".

How are you supposed to compare these 2 figures when they're essentially apples vs oranges? I've absolutely no idea!
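(One way to reconcile the two figures, back-of-the-envelope arithmetic rather than anything from either vendor: both describe the same PCIe 3.0 signalling rate, just at different granularities, and the cMP's own PCIe 2.0 slots are slower again.)

  # PCIe 3.0 runs at 8 GT/s per lane; with 128b/130b encoding that is roughly 985 MB/s of payload per lane.
  # ASUS's "128 Gbps" is simply 8 GT/s x 16 lanes for the whole card, i.e. all four M.2 slots combined.
  echo $(( 8 * 16 ))     # 128  -> Gbps of raw signalling across a PCIe 3.0 x16 slot
  echo $(( 985 * 4 ))    # 3940 -> rough MB/s ceiling for one PCIe 3.0 x4 SSD
  # A cMP slot is only PCIe 2.0 (~500 MB/s per lane), so a plain x4 adapter tops out around 2000 MB/s.
  echo $(( 500 * 4 ))    # 2000 -> MB/s theoretical; nearer 1500 MB/s in practice, as noted later in the thread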

Thanks.
 
I've been browsing alternatives to the HighPoint SSD7101A-1 just now on Amazon, and I'm shocked by how much cheaper the alternatives are. Or at least cards that look (to my uneducated eyes) to be very similar.

Taking this one as an example - it's less than a 10th of the price of the Highpoint at £36.99:

https://www.amazon.co.uk/dp/B0757XFMJM

This card is actually expensive if used on a cMP, because it can only accommodate one NVMe drive and maxes out at ~1500MB/s, the same as the DT-120.
 
Cards that take more than one NVMe drive, or go faster than 1500MB/s, need to have a PLX (I think) switch on them to be able to convert from PCIe 2.0 x16 to PCIe 3.0 x4. If you're only doing a single NVMe drive and 1500MB/s is fast enough (I'll probably go this route myself soon), get a cheap (£20ish) Lycom or some such. I think the Angelbird PX1 is cheapish too, and is more nicely put together, with heatsinks and so on.
 
This card is actually expensive if used on a cMP, because it can only accommodate one NVMe drive and maxes out at ~1500MB/s, the same as the DT-120.

Thanks for pointing out something so obvious that I'd missed! It hadn't occurred to me that part of the reason the HighPoint is so pricey is that it takes 4 SSDs. I could save a fortune by buying a card that just takes the one SSD.

Cards that take more than one NVMe drive, or go faster than 1500MB/s, need to have a PLX (I think) switch on them to be able to convert from PCIe 2.0 x16 to PCIe 3.0 x4. If you're only doing a single NVMe drive and 1500MB/s is fast enough (I'll probably go this route myself soon), get a cheap (£20ish) Lycom or some such. I think the Angelbird PX1 is cheapish too, and is more nicely put together, with heatsinks and so on.

The build quality of the Angelbird Wings PX1 looks good:

https://www.angelbird.com/prod/wings-px1-1117

and it's £68 on Amazon here in the UK:

https://www.amazon.co.uk/dp/B01BLGO6N8

as opposed to £372 (the cheapest I've now found the Highpoint for).

I guess my next question is whether it's possible to get an NVMe card that only takes one SSD (to keep the cost down) but has the PLX switch, or whatever it is that allows the conversion from PCIe 2.0 x16 to PCIe 3.0 x4, thereby giving the sort of speed that the HighPoint manages.

Does anyone know if such a thing exists?

Thanks.
 
Unless you're doing very large, continuous writes or very heavy mixed (read+write) I/O, the difference between the 970 Pro and the 970 Evo is only on paper.

The SI-PEX40129 is essentially one half of a 7101A-1: half the price, with two M.2 slots, and it maxes out around 3000MB/s. It will allow full throughput from any single M.2 drive.
 
Unless you're doing very large, continuous writes or very heavy mixed (read+write) I/O, the difference between the 970 Pro and the 970 Evo is only on paper.

Thanks - useful to know. The cheapest I've been able to find the 2TB 970 EVO here in the UK is £505.47 including delivery:

https://www.scan.co.uk/products/2tb...-mlc-v-nand-3500mb-s-read-2500mb-s-write-500k

whereas the 970 PRO doesn't come in a 2TB capacity anyway as far as I know.

Alternatively I could go for the Intel 660p, which is significantly cheaper at £388.47 including delivery:

https://www.scan.co.uk/products/2tb...qlc-3d-nand-1800mb-s-read-1800mb-s-write-220k

but then it looks to have significantly slower performance. You get what you pay for I suppose!

The SI-PEX40129 is essentially one half of a 7101A-1: half the price, with two M.2 slots, and it maxes out around 3000MB/s. It will allow full throughput from any single M.2 drive.

That's ideal - many thanks. As it takes two SSDs rather than just one, I take it this means that I could (in theory) fit two 1TB SSDs and (software) RAID 0 them into a single 2TB volume?

If this is possible, can this be done from Disk Utility or would I need to buy SoftRAID?

Two 1TB 970 EVOs come in at £210.00 each:

https://www.amazon.co.uk/dp/B07CGJNLBB

so the £420.00 total is about £85 less than the single 2TB 970 EVO from scan.co.uk.
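(If the two-blade route is taken, Disk Utility's built-in AppleRAID can do the striping without SoftRAID, and the same thing is scriptable with diskutil; a sketch with hypothetical disk identifiers, and note the bootability caveat in the reply below.)

  diskutil list                                                      # check which identifiers the two blades were assigned
  sudo diskutil appleRAID create stripe FastBoot JHFS+ disk2 disk3   # stripe them into one 2TB volume named "FastBoot"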
 
You could certainly run them in RAID 0.

Downside is it isn’t bootable in RAID, and fewer future expansion options. It’s the equivalent of having all RAM slots full.
 
You could certainly run them in RAID 0.

Downside is it isn’t bootable in RAID, and fewer future expansion options. It’s the equivalent of having all RAM slots full.

Thanks. If it's not bootable in RAID though then it's no good for my purposes.

I guess I'm now inclined to go for the SI-PEX40129 which from newegg.com with VAT & shipping comes to £204.78:

https://www.newegg.com/global/uk-en/Product/Product.aspx?Item=9SIA6ZP8FG1915

(unless anyone can find it cheaper elsewhere?)

along with the 2TB Samsung 970 EVO for £505.47 including delivery:

https://www.scan.co.uk/products/2tb...-mlc-v-nand-3500mb-s-read-2500mb-s-write-500k

again unless anyone's aware of it being cheaper elsewhere.

Many thanks for everyone's very helpful comments so far.
 
I've been trying to keep my eye out for suppliers of most of these cards in the UK, but I'm really not coming across any, unfortunately, which is why I'm still considering the likes of the PX1. It would be nice to go for something that'll do the higher speeds on a single drive. I've got spinning drives RAIDed already, so 1TB would do for boot / current FCPX projects, then I can move them off when done, which makes it a fair bit cheaper than the 2TB ones.
 
I've been trying to keep my eye out for suppliers of most of these cards in the UK, but I'm really not coming across any, unfortunately, which is why I'm still considering the likes of the PX1. It would be nice to go for something that'll do the higher speeds on a single drive. I've got spinning drives RAIDed already, so 1TB would do for boot / current FCPX projects, then I can move them off when done, which makes it a fair bit cheaper than the 2TB ones.


I don't understand why so many people like the PX1. It offers no functionality or performance beyond what the cheap Luxor adapters do. It's a pass-thru adapter for the PCIe power and signals. Aside from LEDs and a pretty heatsink, it doesn't do anything. Buy the cheap Luxor adapter and stick a $2 heatsink on the SSD controller. You have a PX1 with no lights.

*Correction: Lycom, not Luxor. Thanks, Siri...*
 
I don't understand why so many people like the PX1. It offers no functionality or performance beyond what the cheap Luxor adapters do. It's a pass-thru adapter for the PCIe power and signals. Aside from LEDs and a pretty heatsink, it doesn't do anything. Buy the cheap Luxor adapter and stick a $2 heatsink on the SSD controller. You have a PX1 with no lights.

Happy to hear recommendations of other readily available options that I don't have to order in from China, and that have at least some nod to heat control, since we pretty much need to be putting these right by our hot GPUs.
 
Happy to hear recommendations of other readily available options that I don't have to order in from China, and that have at least some nod to heat control, since we pretty much need to be putting these right by our hot GPUs.

PCIe NVMe Adapter M.2 NVMe SSD to PCI-e X16 Converter Card with Heat Sink https://www.amazon.com/dp/B07GFDVXVJ/

EZDIY-FAB PCI Express M.2 NGFF PCI-E SSD to PCIe 3.0 x4 Host Adapter Card with Heatsink Cooler https://www.amazon.com/dp/B078WRG94P/

You’re not dissipating that much heat with the newer drives. Samsung 960/970 Evo/Pro will never throttle even with a 25mm x 10mm x 3mm heatsink.
 
I've been trying to keep my eye out for suppliers of most of these cards in the UK, but I'm really not coming across any, unfortunately, which is why I'm still considering the likes of the PX1. It would be nice to go for something that'll do the higher speeds on a single drive. I've got spinning drives RAIDed already, so 1TB would do for boot / current FCPX projects, then I can move them off when done, which makes it a fair bit cheaper than the 2TB ones.

My experience has been the same. I've not been able to find any UK stockists of any of the PCIe cards we've been discussing so far.

In fact the SI-PEX40129 is now sold out even at Newegg which, unusually for a US stockist, lets potential buyers in the UK see pricing for many items in pounds, including delivery, without even having to add the item in question to the shopping cart.

It's incredibly impressive to me, simply because I've never seen this level of customer service - i.e. ease of immediately seeing accurate pricing from a non UK website for UK buyers - from anyone else. Newegg is to be commended on this!

Realistically it'll probably be into the new year before I'll actually be buying the PCIe card and NVMe storage.

Once I've bought them, installed & tested I'll report back with any significant findings.

Thanks again to all who have contributed.
 
I've been trying to keep my eye out for suppliers of most of these cards in the UK, but I'm really not coming across any, unfortunately, which is why I'm still considering the likes of the PX1. It would be nice to go for something that'll do the higher speeds on a single drive. I've got spinning drives RAIDed already, so 1TB would do for boot / current FCPX projects, then I can move them off when done, which makes it a fair bit cheaper than the 2TB ones.
Hello bazza5938

I just read through your posts from over a year ago. Like you, I'm interested in upgrading my cMP 5,1 I/O to enhance FCPX's ability to edit 4K.

I know it's a BIG ask, but would you please enlighten me (and perhaps others) with responses to the following?

1) Please describe what drives (type and capacity) are inside your cMP and where they're positioned?

2) Which NVMe adaptor did you choose? What slot is it positioned in? What do you like/dislike about it? Any fan noise or thermal issues, etc?

3) Did you create any RAID setups? If so, which of the drives did you RAID? What RAID type are you using? Did you create your setups using software RAID via Disk Utility? Other?

4) What are you using to back up your RAID(s), i.e. Carbon Copy Cloner, SuperDuper, other?
How are your backups configured, i.e. what backs up to what?

5) Did your upgraded I/O improve FCPX performance? If yes, where are the improvements seen?

6) Where do you store your active FCPX libraries? How about your archived libraries?

7) If you were to replicate the I/O upgrade today... would you do anything differently?

Thank You very much, in advance!!

Anyone reading this who has recently upgraded their cMP I/O is welcome to also respond.
Your input would be greatly appreciated as well.
 
Hello bazza5938

I just read through your posts from over a year ago. Like you, I'm interested in upgrading my cMP 5,1 I/O to enhance FCPX's ability to edit 4K.

I know it's a BIG ask, but would you please enlighten me (and perhaps others) with responses to the following?

1) Please describe what drives (type and capacity) are inside your cMP and where they're positioned?

It ended up rather odd. I have three 2TB and two 3TB spinning drives: four in the carriers and slots for them, plus one in the lower optical bay. There's also a 500GB SATA SSD on a PCIe card made for it, and a 1TB Crucial P1 NVMe also on a card in a slot.

2) Which NVMe adaptor did you choose? What slot is it positioned in? What do you like/dislike about it? Any fan noise or thermal issues, etc?

I ended up going with the Aqua Computer kryo; it's in slot 2 and has heatsinks on both sides, which is at least some nod towards cooling. While not active cooling, it is at least in the airflow from the front fans. The heatsink on the rear of the card can get in the way of the fans on the GPU though.

3) Did you create any RAID setups? If so, which of the drives did you RAID? What RAID type are you using? Did you create your setups using software RAID via Disk Utility? Other?

I've got two RAIDs, made just using Disk Utility, both 6TB RAID 0 arrays: the three 2TB drives and the two 3TB drives. One of these is used as unimportant storage, and the other is the archive, which is backed up etc.

4) What are you using to back up your RAID(s), i.e. Carbon Copy Cloner, SuperDuper, other?
How are your backups configured, i.e. what backs up to what?

The NVMe is backed up using Time Machine; the archive RAID is backed up using rsync to a small NAS (it could use a bigger, faster one) and to Backblaze.
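(For anyone wanting to copy that backup arrangement, a minimal sketch; the destination volume, user and NAS paths are hypothetical, not bazza5938's actual setup.)

  sudo tmutil setdestination /Volumes/TM-Backup                           # point Time Machine at a backup volume for the NVMe
  rsync -av --delete /Volumes/Archive/ user@nas.local:/volume1/archive/   # mirror the archive RAID to the NAS over SSH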

5) Did your upgraded I/O improve FCPX performance? If yes, where are the improvements seen?

It certainly removed the waiting for reading/writing: the NVMe does 1600MB/s vs a few hundred from the RAID. Boot times aren't hugely improved, but I've also since upgraded the GPU and done some tweaks from other threads to use the hardware acceleration, which means I can work on 4K60 H264 footage without making proxies or transcoding at all.

6) Where do you store your active FCPX libraries? How about your archived libraries?

Not claiming it's the best approach, but current projects are on the NVMe, then when done they're archived to the backed-up RAID mentioned above.

7) If you were to replicate the I/O upgrade today... would you do anything differently?

I don't think so. I'm not really using the SATA SSD now; it's my Boot Camp drive, so it has a few games etc. on it, but I just don't get to play much. I'd think about putting 10Gb Ethernet in that slot to go for fast network storage when it comes to it, but for a small-time operation like me it's not really a requirement.

 
You are THE MAN, @bazza5938.

Thanks for your generous and thorough sharing about your cMP setup, my friend... not to mention your world-record reply time! Very impressive and appreciated. Thanks much!!
 
Mostly because technology evolves and gets cheaper, and my purchases track this movement, the collection of drives in my 4,1/5,1 is an odd assortment.
PCIe slot 2: 1TB HP EX 920 on an inexpensive adapter with heat sink (1400 MBps R/W)
PCIe slot 4: 256 GB SATA SSD from an MBA with Sintech adapter on Accelsior. Boots 10.13.6, though not in several months. (300-ish R/W)
Disk Bays - can't remember order - all spinners:
- 1 TB Hitachi (about 80-100 MBps R/W)
- 2TB WDC (about 80-100 MBps R/W)
- 4TB WDC, 5400 rpm with 64MB cache (repurposed from failed TC, about 80-100 MBps R/W)
- 4TB Seagate Enterprise, 7200 rpm with 256 MB cache (about 230 MBps R/W)

Optical Bay
- Superdrive DVD
- 480 GB Sandisk SSD (boots 10.14, 400-ish MBps R/W)

USB-3 8TB Seagate (about 80-90 MBps R/W)

Yes, a mess. But my mess.

A couple of interesting observations from this:
- Booting from NVMe on PCIe takes longer than booting from SSD on SATA bus. Perhaps 20 seconds longer. Difference doesn't matter to me, and the tradeoff in FCPX performance is a no-brainer.

- The NVMe can keep many of the other disks busy. I once copied large projects from the NVMe to 3 drives simultaneously. While each drive could only write as fast as it was able, according to Activity Monitor the aggregate write rate was 675 MBps.

For FCPX work, NVMe is a good boost, but activating AMD Hardware Acceleration on my RX580 was a similar boost.
 
Hey Kohlson... another fine cMP accounting there. Thanks for taking the time to prepare it!

"Yes, a mess. But my mess."
Love it! Our cMPs with their various cobbled-together components become as unique as we are!

Several folks here on the forum have touted the benefits of AMD Hardware Acceleration on the RX580. I've had my RX580 Pulse for about a month. The acceleration sure is attractive but I read through the instructions in recent weeks and the modification is outside my comfort/knowledge zone, unfortunately.

Thanks again!
 

If you are not happy with the latest OpenCore method (which will provide HWAccel and a boot screen for your PULSE RX580), then at least use the deprecated Lilu + WhateverGreen method, which is nothing more than installing two kexts and adding a boot argument. No system file modification, no need to touch the EFI partition, and it's almost impossible to go wrong (in the worst case, nothing changes).
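(Very roughly, the deprecated route looks like the sketch below; the kexts come from the instructional thread, and the exact boot argument is specified there too, so the value shown is only a placeholder.)

  sudo cp -R ~/Downloads/Lilu.kext ~/Downloads/WhateverGreen.kext /Library/Extensions/   # install the two kexts
  sudo chown -R root:wheel /Library/Extensions/Lilu.kext /Library/Extensions/WhateverGreen.kext
  sudo kextcache -i /                                   # rebuild the kernel extension caches
  sudo nvram boot-args="<argument-from-the-guide>"      # placeholder: use the boot argument the guide gives, then reboot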

The fact is, “the ability to edit 4K H264 without transcoding / proxies” comes from HWAccel, not from storage speed.

Even at 4K, H264 video is rarely encoded at more than 100Mbps, which only needs ~12.5MB/s of sequential speed to play smoothly. Any modern HDD can do that without any problem. The real reason you can't edit it smoothly on a cMP is the lack of HWAccel: the CPU is simply not fast enough to handle a timeline with the H264 codec.

Therefore, if you want the cMP to handle a 4K H264 / HEVC timeline, there is no choice but to enable HWAccel.

If you don't mind transcoding to ProRes, then CPU performance isn't that important, but storage speed is very important. ProRes is a very low compression ratio codec: it is very easy to decode and places very little demand on the CPU, but due to the low compression it has a very high bitrate, and requires fast storage to play smoothly.
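(Putting rough numbers on that, my own arithmetic with an approximate ProRes figure rather than a measured one:)

  echo $(( 100 / 8 ))   # ~12 MB/s of storage throughput to play 100 Mbps 4K H264
  echo $(( 500 / 8 ))   # ~62 MB/s for UHD ProRes 422 at roughly 500 Mbps, and a multi-stream timeline multiplies that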

This is why cMP users have traditionally kept chasing faster storage for video editing (because we had no choice but to use ProRes).

To sum up, if you want to edit high resolution video smoothly on a cMP, you either enable HWAccel to edit high compression codecs (e.g. H264), or have high speed storage to edit high bitrate codecs (e.g. ProRes).

Pick the route you want, and go for the hardware (or modification) you need.
 
Hello h9826790

I noticed you're the OP of the Hardware Acceleration instructional thread.
Thanks for all the work you've done to help explain the process. 👍

Also, thanks for explaining:
- HWAccel helps most when editing high compression rate codecs (e.g. H264).
- high speed storage helps most when editing high bitrate codecs.


A couple questions, please:

Can Hardware Acceleration cause my Pulse RX580 to overheat?

Is Hardware Acceleration and Overclocking the same thing?

Thanks again!
 
Can Hardware Acceleration cause my Pulse RX580 to overheat?
No, not even close. This video captured my PULSE RX580 while decoding VP9 using HWAccel; it was only at 60°C.

Is Hardware Acceleration and Overclocking the same thing?
No. If you want to OC the card (or undervolt the card), join this thread.
 
Thank you, @h9826790
 