Is anybody out there using OpenZFS on a Mac Pro with Catalina 10.15.3?
I'm using OpenZFS 1.9.3.1 (https://openzfsonosx.org/) on a Mac Pro with 1.5 TB of RAM, an 8 TB (Apple-installed) SSD, and 96 TB of spinning disks, namely six 16 TB Seagate Exos drives (two in a Pegasus J2i and four in a Pegasus R4i). The spinning disks are set up in one zpool, in a very basic way:
sudo zpool create -f -o ashift=12 tank disk2 disk3 disk4 disk5 disk6 disk7
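For completeness: I haven't tuned anything beyond ashift, so the datasets are still at their defaults. Below is a minimal sketch of the standard commands I could use to double-check the pool and set the usual properties for this kind of data (the property values are only examples, not settings I have actually applied):

sudo zpool status -v tank          # confirm all six disks show ONLINE with no errors
sudo zfs set compression=lz4 tank  # standard property; lz4 is cheap and effective on ASCII text
sudo zfs set recordsize=1M tank    # standard property; larger records suit multi-gigabyte files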
I'm a longtime zfs user, but until now I have only used zfs on Linux servers. I'm running Catalina 10.15.3. I can copy approximately 100 GB, sometimes as much as 200 GB, in one go, but anything larger than that freezes the transfer and forces me to reboot the Mac. I'm trying to move data from the Apple-installed 8 TB SSD (APFS format) to the zfs pool. The transfers are also much slower than I would expect based on the published write speeds for these drives.
It's not only the large writes that fail: the large reads fail too. In other words, I am unable to transfer large amounts of data at once from the zfs pool back to the (APFS-formatted) SSD.
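For concreteness, this is the shape of the copy I'm describing, in both directions; a rough sketch with placeholder paths (I'm assuming the pool mounts at /Volumes/tank, which I believe is the OpenZFS on OS X default, and /Volumes/MacSSD stands in for my actual source directory):

# SSD -> pool; --partial keeps partly copied files so a freeze doesn't restart the transfer from zero
rsync -a --partial --progress /Volumes/MacSSD/data/ /Volumes/tank/data/

# pool -> SSD is the same command with source and destination swapped
rsync -a --partial --progress /Volumes/tank/data/ /Volumes/MacSSD/data/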
I've spent the last week very frustrated with this, after countless freezes and reboots of the Mac Pro.
I tried posting on the OpenZFS forum but haven't gotten a solution yet, and I haven't heard about the experiences of anyone else trying to use OpenZFS with either of the Pegasus enclosures inside the new Mac Pro.
All suggestions are welcome. As I was writing this, I tried to move 533 GB from the APFS SSD to the zpool, and it died after 82.69 GB had been transferred.
P.S. My data consists of (roughly) 10 GB per file, plain ASCII text, nothing strange here. Thanks for listening!
P.P.S. I have tried this many times after shutting the Mac down and starting it up fresh, so that the machine is completely cool when I attempt the transfers, just to make sure I don't have an overheating CPU or overheating drives. The failures occur within the first 60 to 90 seconds of the large writes or reads.
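In case it helps with diagnosis, I can leave the following running in a second terminal while a big transfer is in flight, to see what the pool is doing during those first 60 to 90 seconds (standard zpool commands; the 5-second interval is arbitrary):

sudo zpool iostat -v tank 5   # per-vdev bandwidth and IOPS, refreshed every 5 seconds
sudo zpool status -v tank     # check afterward for read/write/checksum errors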
I'm not a newbie when it comes to large data projects. A couple of years ago I ran a job on our university clusters that generated 72 petabytes of data and used 37 years of computing time (the computational tasks ran in parallel, of course).