Wow!? So much backfire...
Areca RAID arrays are just the contrary of a single point of failure: if the RAID card burns out, you just replace the controller...
And NetApp shelves have two hot-swap controllers, and if you want maximum availability you just run them in an active-active configuration. Once again, as long as you don't move the drives to different slots in the drive bay, you are good.
I have even upgraded from an Areca 1680 to an 1880 and then to an 1882; in the meantime I went from 8x4TB to 12x4TB to 16x4TB, and now I have one 24-port Areca in a Supermicro chassis with 4TB drives in RAID 6 for pure archival purposes, and two NetApp DS4346 shelves with 24x2TB each in RAID 60 for speed...
On the twin NetApp volume I get almost 3000 MB/s read/write with very low latency on a 72TB volume, and it has cost me less than €1500...
I would be a hundred times more worried about the single PSU going south on an SSD ZFS array than about having an issue with my four PSUs, twin active-active controllers, and easily replaceable Areca controller...
An Areca 1880xi-24 can be had for $300 used on eBay and an 1880ix for under $200... I have a spare for each...
I paid $500 for my DS4346 loaded with 24x2TB HGST drives.
So I'm curious how the hell you'd manage to build a 72TB RAID 60 array with no single point of failure, 3000 MB/s read/write, the ability to lose up to 6 drives per array without compromising data, and still have access to it all, for $1500 with SSDs...?
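For anyone wondering where the 72TB figure comes from with two 24x2TB shelves, here is the rough math. Six 8-drive RAID 6 groups striped together (RAID 60) is one layout that gives exactly those numbers; the grouping in the sketch below is an illustration, not gospel:

```python
# Rough RAID 60 capacity math for two 24-bay shelves of 2 TB drives.
# The 6 x 8-drive grouping is an assumed example layout; it is simply one
# arrangement that reproduces 72 TB usable and "up to 6 failures per shelf".

drive_tb = 2                                  # per-drive capacity
drives_total = 2 * 24                         # two shelves, 24 drives each
groups = 6                                    # assumed: six RAID 6 groups striped together
drives_per_group = drives_total // groups     # 8 drives per group

usable_per_group = (drives_per_group - 2) * drive_tb   # RAID 6 gives up 2 drives to parity
usable_total = groups * usable_per_group                # 6 * 12 TB = 72 TB

max_failures_total = groups * 2               # each RAID 6 group survives 2 failures
max_failures_per_shelf = (groups // 2) * 2    # 3 groups per shelf -> 6 drives per shelf

print(f"usable capacity: {usable_total} TB")                    # 72 TB
print(f"survivable failures (best case): {max_failures_total}") # 12 total, 6 per shelf
```

Of course those 12 tolerated failures are a best case: RAID 6 still only survives two dead drives inside any single group.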
As I said: I wouldn't trust a ZFS SSD array for anything other than cache and temporary media copies for work.
Over the network my 5.1 accesses the 72TB of data at 1.2 to 1.4 GB/s with the two ATTO NT12s...
If it is fast and reliable enough to work daily on 8TB, 5.5K timelapse video projects, it's probably good enough for 90% of us...
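To put those network numbers in perspective: a single 10GbE link caps out at 1.25 GB/s on the wire, so anything above that means both links are actually pulling their weight. The sketch below assumes 10GbE ports and roughly 10% protocol overhead, both of which are assumptions rather than measurements:

```python
# Sanity check on the 1.2-1.4 GB/s figure (assuming the ATTO NICs are 10 GbE).
line_rate_gbit = 10                         # per-link line rate, gigabits per second
line_rate_gbyte = line_rate_gbit / 8        # 1.25 GB/s raw, before protocol overhead

practical_per_link = line_rate_gbyte * 0.9  # assumed ~10% TCP/SMB overhead -> ~1.1 GB/s

print(f"raw per link:       {line_rate_gbyte:.2f} GB/s")
print(f"practical per link: {practical_per_link:.2f} GB/s")
# Seeing 1.2-1.4 GB/s therefore means traffic is spread across both links
# (link aggregation or SMB multichannel); one link alone can't exceed ~1.25 GB/s.
```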
Anyhow, I am happy with what I have; I'm just waiting for my MP 7.1 to arrive and then I'll be good for the next two years, and by then I'll buy more DS4346 shelves... and eventually go to 40GbE if I can!
I just wanted to pass along my experience to those who are thinking about spending 32 grand on a Promise VTrak, or 5 grand on a Synology...
Areca controllers have been damn reliable for me and have always kept my data safe.
And if anything goes south, my most important data is on a separate shelf at a friend's house... I just need to go pick it up and in no time I am back on track...
If you guys are happy with your ZFS software SSD arrays, I am happy for you!
I just hope you never experience a power outage without a dedicated memory backup battery... like all Areca cards have... or have to rebuild a ZFS pool with larger-than-2TB SATA drives.
Anyhow: ZFS, software RAID, JBOD, whatever you use, if your data isn't in 3 different places, it doesn't exist.
It's not for nothing that the military goes for redundancy...
And by the way, if you hit anything close to 1200 MB/s on an internal ZFS 8x8TB drive array, consider that a miracle. And then when a drive goes bad I hope you have a backup, because rebuilding a ZFS pool onto 7 desktop 8TB SATA drives will never happen... just google ZFS over 2TB SATA...
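To put a rough number on that rebuild: resilvering just one 8TB drive already takes the better part of a day even in the idle, purely sequential best case. The rates below are assumptions for the sake of the estimate, not measurements:

```python
# Back-of-the-envelope resilver time for one 8 TB drive.
# The MB/s figures are assumptions: ~150 MB/s is roughly the sustained
# sequential rate of a desktop SATA drive; a fragmented pool that is still
# serving I/O typically resilvers far slower than that.

drive_bytes = 8e12

for label, rate_mb_s in [("best case (idle, sequential)", 150),
                         ("busy / fragmented pool", 30)]:
    seconds = drive_bytes / (rate_mb_s * 1e6)
    print(f"{label}: {seconds / 3600:.0f} hours ({seconds / 86400:.1f} days)")

# best case: ~15 hours; at 30 MB/s it stretches to roughly 3 days.
```

And while that runs, the remaining drives get hammered with reads end to end, which is exactly when the next marginal desktop drive tends to give up.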
@edgerider, well, in practice, the enormous computations occur *entirely* on the cluster (i.e., we do all the computing on the cluster and the data stays on the cluster). The Mac Pro is just for prototyping, at this scale! Like I mentioned earlier in the thread, a couple years ago, I ran a job on our clusters at our university that rendered 72 petabytes of data, and used 37 years of computing time on the cluster (of course, the computational tasks occurred in parallel).
Way out of my league!!!
But in essence, I would really like Apple to revive XGrid, that was sick!