5 is faster than 6, but it's not as redundant: 5 tolerates an n=1 failure and the data is retained, but if a second drive fails while the array is degraded, the data is gone. 6 increases the redundancy to n=2, but there's a performance penalty for the extra parity.
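To make that trade-off concrete, here's a minimal sketch; the 8-drive, 2 TB figures mirror the setup discussed later in the thread, and the arithmetic ignores formatting overhead:

```python
# Rough redundancy/capacity comparison for an N-drive array.
# Illustrative only; real arrays vary with controller and layout.

def summarize(level: str, members: int, drive_tb: float):
    if level == "RAID5":
        tolerance, usable = 1, (members - 1) * drive_tb
    elif level == "RAID6":
        tolerance, usable = 2, (members - 2) * drive_tb
    elif level == "RAID10":
        # Survives one failure per mirror pair, but a second failure
        # in the *same* pair loses the array, so worst case is 1.
        tolerance, usable = 1, (members // 2) * drive_tb
    else:
        raise ValueError(level)
    print(f"{level:7s} drives={members} tolerates={tolerance} "
          f"(worst case) usable={usable:.0f} TB")

for lvl in ("RAID5", "RAID6", "RAID10"):
    summarize(lvl, members=8, drive_tb=2.0)
```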
Is RAID 6 faster?

There's always a trade-off, regardless of the level (the wiki articles on the various levels are a useful source of information on the details).
Rebuild times are certainly valid, but this isn't a critical server either; if the equipment is chosen properly and replaced on a reasonable schedule, then it shouldn't be hitting many rebuilds at all, so it's a reasonable compromise for this specific use.

It also becomes a factor when the rebuild times are extremely long... which is exactly this case, since with a 6+2 setup you need to pull data from 6 drives to reconstruct the 2. At 2 TB apiece, that is on the order of 12 TB of data to grab.
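Back-of-the-envelope on that: assuming the controller reads all six survivors in parallel, and picking a placeholder sustained rate of 100 MB/s per drive (not a measured figure):

```python
# Estimate data volume and wall-clock time for a worst-case
# RAID 6 rebuild on a 6+2 array of 2 TB drives.

SURVIVING_DRIVES = 6      # data pulled from each surviving member
DRIVE_TB = 2.0            # capacity per drive
READ_MB_S = 100.0         # assumed sustained rate per rebuild stream

total_tb = SURVIVING_DRIVES * DRIVE_TB  # ~12 TB read in total
# With all survivors streaming in parallel, the rebuild is gated
# by one drive's worth of data, not the 12 TB total:
hours = (DRIVE_TB * 1e6) / READ_MB_S / 3600

print(f"data touched: {total_tb:.0f} TB")
print(f"best-case rebuild (fully parallel reads): {hours:.1f} h")
```

That comes out to roughly 5.6 hours as an absolute floor; a rebuild running under production load will take far longer.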
This is what bothers me about the generic claim that "10 is always better than any parity level": the specifics must be taken into account (uptime, data throughput requirements, capacity, capacity growth, budget, ...).
Once upon a time, that statement was true, even for throughput rates. But it's no longer the case. Hasn't been since at least 2006.
Now the additional parallelism can be leveraged, making parity arrays with larger member counts faster than 10 given the same stripe values (card settings), member count, and drive models.
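A crude streaming-write model shows why that can happen at a fixed member count; the per-drive rate and the zero-overhead assumption below are mine, not from the thread:

```python
# Idealized full-stripe sequential-write scaling at member count N.
# Ignores parity computation and controller overhead entirely.

def streaming_write_mb_s(level: str, members: int, per_drive: float) -> float:
    if level == "RAID10":
        return (members // 2) * per_drive  # every block written twice
    if level == "RAID5":
        return (members - 1) * per_drive   # one member's worth is parity
    if level == "RAID6":
        return (members - 2) * per_drive   # two members' worth is parity
    raise ValueError(level)

for lvl in ("RAID10", "RAID5", "RAID6"):
    print(lvl, streaming_write_mb_s(lvl, members=8, per_drive=150.0), "MB/s")
```

On 8 members that model gives 10 only half the spindles' worth of unique writes (600 MB/s) versus 1050 for 5 and 900 for 6.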
This is a special case, and not likely to be found at this level. Even if it is done on 8x disks, it's not important enough to forsake the speed in the situation the OP described.

System-wise, the RAID 5/6 also has less redundancy. As I pointed out, you can put the mirrored components of a RAID 10 array on different paths. If the RAID controller supports dual connection paths, that is additional redundancy you don't have if you layer a single parity set across two (or more) paths.
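For what it's worth, the dual-path point can be sketched out. Assuming a hypothetical 8-drive shelf with drives 0-3 on path A and 4-7 on path B, and the RAID 10 mirror pairs split across the two paths:

```python
# Which layouts survive losing an entire connection path?
# Drive numbering and path split are assumptions for illustration.

PATH_A = {0, 1, 2, 3}                            # drives behind path A
MIRROR_PAIRS = [(0, 4), (1, 5), (2, 6), (3, 7)]  # pairs span both paths

def raid10_survives(failed: set) -> bool:
    # Array lives as long as every mirror pair keeps one member.
    return all(any(d not in failed for d in pair) for pair in MIRROR_PAIRS)

def raid6_survives(failed: set) -> bool:
    # A single RAID 6 set across all 8 drives tolerates at most 2 losses.
    return len(failed) <= 2

print("path A dies -> RAID10 ok:", raid10_survives(PATH_A))  # True
print("path A dies -> RAID6  ok:", raid6_survives(PATH_A))   # False
```

Losing a whole path takes out four drives at once: the split RAID 10 keeps one member of every mirror pair, while an 8-wide RAID 6 set is past its two-drive tolerance.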
Real world test (a minimal timing sketch for this follows below):

- Same card
- Same disk models
- Same member count
- Same stripe size for each test
- Change level only

Holding both disk count and stripe size constant, it is going to be rather difficult to compare across the major RAID levels; unless we're talking about using various subsets of the total disk count, that's one of the major trade-offs between the major families.
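For the throughput half of such a test, even a crude sequential-read timer will show the level-to-level differences; the device path below is a placeholder, and you'd want to read well past the page cache (or drop it first):

```python
# Minimal sequential-read throughput check against a block device.
# Requires read permission on the device; the path is a placeholder.
import time

DEVICE = "/dev/md0"   # placeholder: point at the array under test
CHUNK = 1 << 20       # 1 MiB reads
TOTAL = 4 << 30       # read 4 GiB, enough to get past small caches

read = 0
start = time.monotonic()
with open(DEVICE, "rb", buffering=0) as dev:
    while read < TOTAL:
        buf = dev.read(CHUNK)
        if not buf:
            break
        read += len(buf)
elapsed = time.monotonic() - start
print(f"{read / (1 << 20) / elapsed:.0f} MiB/s sequential read")
```

Rebuild the array at each level, rerun the same script, and the comparison stays apples-to-apples per the list above.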
But throughput for the described work is critical, so it needs to take precedence: faster rebuild speeds aren't worth the reduced performance of 10 in this case.
Statistically, 6 has proven acceptable in this particular usage (even 5, so long as the member count is kept in check), and the risk can be further reduced by proper cycle planning (an HDD replacement schedule).

Unless you are on a deadline and have to get the latest dailies out to your customers in the next couple of hours... but the RAID array is thrashing away at a super-wide rebuild.