Draeconis,
are you going to report on your test any time soon?

I guess my problem is bugs in the kernel, probably in the filesystem.
AppleRAID just can't handle Fusion Drives.
When I close all other apps and set CCC to clone my fusionRaid to a fast eSATA drive, it crashes within 15 minutes.
If I just use the MP casually with two dozen apps open, it takes many hours to crash.

I guess I now have to:
#1
Make one SSD hold the system

#2
Put the home directories on a normal "simple" Fusion Drive

#3
Use the unused HDD to make nightly CCC clones of both #1 & #2

#4
Keep Time Machine doing backups of #1 & #2.

Sigh,
fusionRaid would have been so much easier...
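For step #3, the nightly clone could also be scripted directly. Here's a minimal sketch using rsync; the volume names are assumptions, and CCC does the same job with a proper bootable clone and a nicer scheduler:

```shell
#!/bin/sh
# Minimal nightly-clone sketch. All volume names below are hypothetical;
# adjust to whatever your #1/#2/#3 volumes are actually called.
SYS_SRC="/Volumes/SystemSSD"     # #1: the system SSD
HOME_SRC="/Volumes/FusionHome"   # #2: the Fusion Drive holding home dirs
DEST="/Volumes/SpareHDD"         # #3: the freed HDD

# -a preserves permissions/ownership/times; --delete keeps the clone
# an exact mirror of the source rather than accumulating stale files.
rsync -a --delete "$SYS_SRC/"  "$DEST/system-clone/"
rsync -a --delete "$HOME_SRC/" "$DEST/home-clone/"
```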
 

On a course this week, will test more next week.

Got it all set up, went to configure it, and SIP got in the way :(

Will rebuild with 10.10 and re-test.
 
I had to move on; I installed a normal FD and will take frequent clones to the HDD freed from the fusionRaid.
I decided not to install the system and users to separate disks.
The other SSD is still in the 2nd ODD bay.
It's currently unpartitioned (%nopartition%).

Am I right in assuming that an unpartitioned SSD won't see any wear, since there's no partition?
Is it "asleep" all the time?
Of course it has power, so some components are in an active state, but it's not accumulating any writes that would shorten its lifespan?
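One way to check whether the idle SSD is actually receiving any I/O is to watch it with iostat (the disk identifier below is an assumption; check diskutil list for yours):

```shell
# Sample throughput for the spare SSD every 5 seconds; an unpartitioned,
# unmounted disk should show zero KB/t and tps once the samples settle.
iostat -d -w 5 disk1
```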
 
I'm re-testing this today. I can bless the resulting AppleRAID, but I'm having issues booting from it. Might be something to do with the fact that it's a VM, but it might not be.

--

Scratch that; it looks like the methodology I was using for testing was wrong. I tested at a simpler level, installing the OS from a Recovery Partition. It seems to be working well; I can see both SSDs working away when doing things, and the HDDs doing nothing. After a period of time, I'd expect these to kick in and pull data across, provided I gave it some to move.
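For reference, the kind of recipe being tested here looks something like the following. Device identifiers, UUIDs, and names are all placeholders; check diskutil list on your own machine first:

```shell
# Build two Fusion Drives (CoreStorage LVGs), each from one SSD + one HDD
diskutil coreStorage create FusionA disk0 disk2
diskutil coreStorage create FusionB disk1 disk3

# Create a logical volume on each LVG, using the UUIDs printed above
diskutil coreStorage createVolume <lvgA-UUID> jhfs+ FusionVolA 100%
diskutil coreStorage createVolume <lvgB-UUID> jhfs+ FusionVolB 100%

# Mirror the two resulting logical volumes with AppleRAID
# (disk4/disk5 assumed to be the two new LVs)
diskutil appleRAID create mirror FusionRAID JHFS+ disk4 disk5
```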

Testing now for stability.

--

Tested shutting it off, removing a disk, and powering it up. Machine booted, logged in, no visible warnings. Disk Utility does highlight that the RAID is degraded. Shut down, added the disk, rebooted.

The RAID array has now failed, and there seems to be no option to rebuild. I can add a disk back in to fix it, but it's not immediately clear from Disk Utility which disk is 'new' or was at fault.

From studying how 'diskutil list' is now laid out, it looks like:

Code:
disk0 and disk1 create LVG disk2
disk3 and disk4 create LVG disk5
AppleRAID mountpoint is disk6

The LVG that forms one part of the mirror is disk2, which 'failed', so the disk I removed has to be disk0 (the SSDs and HDDs have different sizes to differentiate them).
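A quicker way to work out which member failed, without guessing from sizes, is to cross-reference the two listings:

```shell
# AppleRAID's view: shows each member's status; the one marked
# Failed/Missing is the one to replace.
diskutil appleRAID list

# CoreStorage's view: shows which physical disks back each LVG,
# so you can map the failed member back to a physical drive.
diskutil coreStorage list
```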

Added this disk back to the system and rebooted. The RAID array is still listed as failed; the 'Repair' option wants to replace the failed disk with another, but this fails with the following:

Code:
The operation couldn't be completed (com.apple.StorageKit error 118).

This isn't entirely unexpected; Disk Utility isn't expecting the AppleRAID members to be LVGs, and besides, Disk Utility is now garbage.

I will see if this is just another GUI bug and attempt a repair in the CLI.

--

To recover after this type of error, with this configuration, you need to first remove the failed Fusion disk from the AppleRAID, then delete the LV from the failed Fusion disk, then re-create the LV, then add it back to the AppleRAID. This process forces both the SSD and the HDD to rebuild, and, perhaps interestingly, the rebuild mirrors the behaviour across both Fusion drives: both SSD volumes light up rebuilding, while both HDD volumes remain basically unused.
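In CLI terms, that recovery sequence would look roughly like this. Every UUID and device name below is a placeholder, and this is a sketch of the steps described above rather than a tested procedure:

```shell
# 1. Remove the failed Fusion member from the mirror
diskutil appleRAID remove <failed-member-UUID> <raid-volume>

# 2. Tear down and rebuild the CoreStorage side of the failed member
diskutil coreStorage delete <failed-lvg-UUID>
diskutil coreStorage create FusionA disk0 disk2
diskutil coreStorage createVolume <new-lvg-UUID> jhfs+ FusionVolA 100%

# 3. Add the fresh logical volume back; the mirror then resyncs in full
diskutil appleRAID add member <new-lv-device> <raid-volume>
```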

So, if you have a single disk failure, the rebuild copies everything across, regardless of which physical disk failed, because AppleRAID only sees the two LVGs (Fusion drives, in this example).

I'm not sure if this is something to do with VMware, but after the rebuild, although the two disks are now marked as online, their sizes externally don't match, whereas previously they did. A bit odd.
 
When I built the fusionRaid, I did it with diskutil and remembered to add automatic rebuild. After taking one disk away, Disk Utility showed "degraded". But checking with
Code:
diskutil appleraid list
or something like that showed it was rebuilding.
 
While it's "rebuilding", it's still "degraded". Perhaps confusing, but not incorrect.

Wait - did it say "rebuilding" while the disk was missing? It shouldn't say that until the disk is replaced and is being re-synched.
 

Nope, I'd connected the missing drive again by that time.
 
When I built the fusionRaid, I did it with diskutil and remembered to add automatic rebuild. After taking one disk away, Disk Utility showed "degraded". But checking with
Code:
diskutil appleraid list
or something like that showed it was rebuilding.

Not entirely sure what you mean by 'automatic rebuild'?
 
Rebuilding the mirror when the missing drive is found, or when you define a new UUID to be the new mirror.
In the GUI, you have to tick the box in Options to enable automatic rebuild.
You can change it with:
Code:
diskutil ar update autoRebuild 1 <raidname>
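For anyone following along, the same command with the verb spelled out, plus a way to confirm it took effect (I'm assuming the set's properties show up in the listing; check your own output):

```shell
# Enable automatic rebuild on an existing AppleRAID set
diskutil appleRAID update autoRebuild 1 <raidname>

# The set's properties in the listing should reflect the change
diskutil appleRAID list
```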
 

Interesting, I didn't know that command. Unfortunately I get the following error:

Code:
Error updating RAID: Couldn't modify RAID (-69848)
 
I never said it didn't work... it works just fine, from what I can tell. Personally, I wouldn't trust HFS+ with any data that's important, though, so it isn't of particular interest to me apart from the technical challenge involved.
 
What didn't work?

I created two Fusion Drives from two 'SSDs' and two 'HDDs' within a VM, then AppleRAID-mirrored the logical volumes together, then blessed the resulting mountpoint, and it all worked fine.
 
This is a fascinating experiment, and I'm intrigued to see if the desired result can be achieved, but should you risk your data on an experiment?

As I understand the problem, the goal is to combine the performance and economic benefits of Fusion with the redundancy of RAID. The idea is to depend upon RAID to keep the Fusion drive running (perhaps for several weeks) while a replacement disk(s) can be obtained.

You're depending on a single OS to maintain both RAID and Fusion - a logical volume made of logical volumes. What about contention/prioritization errors? It's possible that, under certain conditions, the OS would have to prioritize maintaining the integrity of the Fusion drive over the integrity of the RAID, as there's no redundancy in Fusion. How can you be sure that will happen if the OS was not written (and debugged) to anticipate that need?

If you were using hardware RAID, then the OS's task is simplified; for example, two hardware RAIDs, one for the SSDs, the other for the HDDs. The OS would see each RAID as a physical drive and, conceivably, would be able to manage the two-drive system as any other Fusion array. (Now, Apple doesn't support using external drives for Fusion, but there are those who have made it work... it still seems less risky than depending on the OS to manage both RAID and Fusion.) Hardware RAID would be a more costly solution, though, and maybe not cost-justifiable for your need.

Practically? I don't see that full-time uptime is required for your usage. Your problem is not downtime, it's excessive downtime (weeks waiting for a replacement). The old-fashioned, low-tech solution is simple: keep a spare SSD and a spare HDD on the shelf, and replace if/when needed, rather than actively deploying them for RAID. Your out-of-pocket cost is similar. Since they'll be on the shelf, they won't be subjected to wear. (Of course, this is a lot less fun than trying to bring your dream to life.)

There's little evidence that Apple has engineered its OS and file systems to anticipate this usage. On the other hand, ZFS has. https://en.wikipedia.org/wiki/ZFS#ZFS_compared_to_most_other_file_systems
 
I used to use O3X (OpenZFS for macOS) for my storage requirements. It may have been CLI-only, but it was certainly very robust. I used to take snapshots and then use zfs send to ship them to a FreeNAS box as a backup.

I've since moved to Windows 10, and have a Two-Way Mirror configured with Storage Spaces utilising ReFS instead, which also works very well.

---

From what I can tell, AppleRAID and CoreStorage do their own thing, without interrupting one-another. CoreStorage sees the disks assigned, and treats them both as Fusion Drives. AppleRAID treats both CoreStorage LVs as drives in their own right, and mirrors the data.

What was interesting was seeing the behaviour of the mirroring; when one SSD was active, the other SSD was also active, but both HDDs were effectively idle. Since AppleRAID only sees two logical volumes, I wasn't sure how data would effectively be transferred from one FD to another, but CoreStorage seems to mirror how the data is written, even though AppleRAID can't see the physical disks.
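The layering described here can be pictured as follows (disk numbers taken from the diskutil output earlier in the thread):

```
SSD (disk0) ─┐
             ├─ CoreStorage LVG ─ LV (disk2) ─┐
HDD (disk1) ─┘                                ├─ AppleRAID mirror (disk6)
SSD (disk3) ─┐                                │
             ├─ CoreStorage LVG ─ LV (disk5) ─┘
HDD (disk4) ─┘
```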

As stated though, while this is certainly interesting, since Apple have no shipping system with more than two disks, this was more than likely never considered. And since I've installed a GTX 1080 in my system, I won't be using macOS on it any time soon ;)

In theory, if you had an iMac or Mac mini with a FD setup, you could manually create an external FD on two disks (one SSD, one HDD) attached via something like Thunderbolt, and then AppleRAID these together. But just because you can do a thing, does not mean it should be done :)
 
But just because you can do a thing, does not mean it should be done :)
This very phrase was circulating in my mind as I wrote my previous post. I heard it from John Woram at an Audio Engineering Society convention, sometime in the mid-1970s. That's back when he wrote about pro audio, rather than computing. He called it Foobini's Law. "Not everything that can be done should be (done)."
 
What didn't work?
#18 & #23, etc.
I can post more crash logs, if anybody can decipher any useful info out of them.
This is a fascinating experiment, and I'm intrigued to see if the desired result can be achieved, but should you risk your data on an experiment?

As I understand the problem, the goal is to combine the performance and economic benefits of Fusion with the redundancy of RAID. The idea is to depend upon RAID to keep the Fusion drive running (perhaps for several weeks) while a replacement disk(s) can be obtained.

You're depending on a single OS to maintain both RAID and Fusion - a logical volume made of logical volumes. What about contention/prioritization errors? It's possible that, under certain conditions, the OS would have to prioritize maintaining the integrity of the Fusion drive over the integrity of the RAID, as there's no redundancy in Fusion. How can you be sure that will happen if the OS was not written (and debugged) to anticipate that need?

If you were using hardware RAID, then the OS's task is simplified; for example, two hardware RAIDs, one for the SSDs, the other for the HDDs. The OS would see each RAID as a physical drive and, conceivably, would be able to manage the two-drive system as any other Fusion array. (Now, Apple doesn't support using external drives for Fusion, but there are those who have made it work... it still seems less risky than depending on the OS to manage both RAID and Fusion.) Hardware RAID would be a more costly solution, though, and maybe not cost-justifiable for your need.
I'm not risking my data.
I've used a few hardware RAIDs before, but since they didn't give any noticeable speed improvement, I don't want them now. I've also had to hunt down a motherboard or card with the same few-years-old RAID chip when the original fried.
I'm just wondering why such simple tasks are too complex for the OS. Making RAID 10 or 0+1 involves the same kind of "complexity". If the code is clean, it should work. Does Darwin have Fusion support?
Practically? I don't see that full-time uptime is required for your usage. Your problem is not downtime, it's excessive downtime (weeks waiting for a replacement). The old-fashioned, low-tech solution is simple: keep a spare SSD and a spare HDD on the shelf, and replace if/when needed, rather than actively deploying them for RAID. Your out-of-pocket cost is similar. Since they'll be on the shelf, they won't be subjected to wear. (Of course, this is a lot less fun than trying to bring your dream to life.)

There's little evidence that Apple has engineered its OS and file systems to anticipate this usage. On the other hand, ZFS has. https://en.wikipedia.org/wiki/ZFS#ZFS_compared_to_most_other_file_systems
Can you boot OS X from ZFS?
I don't have a mandatory need for a Fusion Drive; it would have just been "nice to have". A little geekiness with a real (if not so big) benefit. I'm not diligent enough to spend hundreds of hours finding out why the Fusion Drive didn't work with my setup. If APFS supports RAID & Fusion, I might try again. The second SSD sits inside a warm cMP unpartitioned, waiting for duty...
 