You took it the wrong way; my comment wasn't a suggestion, it was meant to show the stretch and trade-offs involved in building a Fusion array on par with that guy's custom internal one.
IMO, having to manage files across multiple physical volumes has always been an inevitable part of anyone's growing computing life. Trying to avoid it with a single-drive solution just limits your options.
Much truth here.
Eventually, no matter how much storage you buy, you run into the tension between bulk archival requirements (lots of space, who cares about speed) and current projects (a small subset of your data that you want to be FAST).
In theory, SSD + spinning-disk caching helps, but in reality it carries overhead, and because of that you may not end up running from the SSD cache as much as you'd like. The system also has to shuffle data on and off the cache; I'm not talking specifically about Fusion Drives here, but about any auto-tiering disk system. That shuffling has to run at some point, and while it runs your drives are busy doing it. Hopefully that happens when you're not using them aggressively, but occasionally the system needs to juggle the cache while you work (say, when a large amount of new data arrives after your write cache is full), and then it has to do it while you're actively using the drives.
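To make that overhead concrete, here's a toy back-of-the-envelope model. Every number in it is a made-up assumption (per-IO latencies, the migration cost); it has nothing to do with any real tiering engine, it just shows the shape of the problem:

```python
SSD_MS, HDD_MS = 0.1, 10.0   # assumed per-IO cost in milliseconds (made-up numbers)

def workload_ms(reads, hit_rate, migrations):
    hits = int(reads * hit_rate)
    misses = reads - hits
    user_io = hits * SSD_MS + misses * HDD_MS
    # each promotion/demotion touches BOTH tiers (read one, write the other),
    # and that disk time is stolen from the same spindles serving your reads
    background_io = migrations * (HDD_MS + SSD_MS)
    return user_io + background_io

print(workload_ms(10_000, 0.80, 0))      # ideal cache, no shuffling: 20,800 ms
print(workload_ms(10_000, 0.80, 2_000))  # same hit rate + cache churn: 41,000 ms
```

Same hit rate, but the shuffling nearly doubles the total disk time in this (admittedly crude) model.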
More specifically regarding Fusion Drives, I've seen/heard plenty online suggesting the tiering scheme Apple uses is fairly crude - it just doesn't work as efficiently as it probably should. Part of that is because Apple aren't storage experts (HFS+, and even the new APFS, show this to a degree - APFS is better than HFS+, but it's by no means cutting edge; it's behind where ZFS was about 15 years ago). Part of it may be that caching just doesn't work as well as we'd hope for a single user's data.
Enterprise SANs do the tiered-caching thing (I have a few at work - NetApps with Flash Cache), but the caching there is block-level and driven by hundreds of users hitting it. So if something SHOULD be cached, by the time you hit it, someone else (or plenty of other users) has probably already caused it to be cached by accessing it before you. With many users, the SSD cache also helps serialise thousands of concurrent random IO requests into big block writes to the hard disk (which hard disks don't suck at so badly) by buffering them on the SSD - combining, say, 10,000 small writes into one big write to the hard disk.
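The coalescing idea is roughly this (my own toy sketch, nothing to do with how NetApp actually implements it - `hdd_write` and the threshold are stand-ins):

```python
# Toy sketch of write coalescing: absorb small random writes on the SSD,
# then flush them to the hard disk in one big, address-sorted pass so the
# disk does mostly sequential work instead of thousands of seeks.

def hdd_write(addr, data):
    pass  # stand-in for the actual (slow, seek-heavy) disk write

class CoalescingCache:
    def __init__(self, flush_threshold=10_000):
        self.pending = {}                    # block addr -> data, staged on SSD
        self.flush_threshold = flush_threshold

    def write(self, addr, data):
        self.pending[addr] = data            # fast: lands on the SSD immediately
        if len(self.pending) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # one sorted sweep across the platter instead of 10,000 seeks
        for addr in sorted(self.pending):
            hdd_write(addr, self.pending[addr])
        self.pending.clear()
```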
With a single-user system, YOU are the user taking the cache miss (and falling back to the hard drive) whenever it happens. Fusion will be better than a bare hard drive, sure, but it won't be anywhere near as fast as a pure SSD setup. And on a single-user system you aren't generating thousands of concurrent random IO requests, because there's only one user generating them. A toy simulation of the cache-warming effect is below.
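Here's a quick-and-dirty simulation of the "someone else warmed the cache for you" effect. Everything about it is an assumption for illustration: everyone draws from the same pool of popular files, the cache never evicts, and the skew toward hot files is faked with a `min()` of two random draws:

```python
import random

def your_miss_rate(n_users, accesses_per_user=200, n_files=1_000, seed=1):
    rng = random.Random(seed)
    cache, your_hits, your_total = set(), 0, 0
    for _ in range(accesses_per_user * n_users):
        user = rng.randrange(n_users)
        f = min(rng.randrange(n_files), rng.randrange(n_files))  # skew to hot files
        if user == 0:                       # "you" are user 0
            your_total += 1
            your_hits += f in cache
        cache.add(f)                        # the first toucher takes the miss
    return 1 - your_hits / your_total

print(your_miss_rate(1))     # single-user box: you eat every cold miss yourself
print(your_miss_rate(100))   # shared SAN: 99 other users usually got there first
```

Same per-user access pattern in both runs; only the number of users warming the shared cache changes.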
When dealing with any significant quantity of brand-new data (processing some video you just imported, for example), any Fusion setup is either going to spend half its time shuffling data around to aggressively cache the new stuff and push the old stuff to hard disk, or you'll hit the non-SSD storage several times before it learns to cache it (see the sketch below). If you use a dedicated SSD for current projects, the data goes on the SSD the first time and stays there until you're done with it...
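To illustrate that second case, here's a sketch with a hypothetical "promote after 3 hits" policy. The threshold and latencies are my assumptions, not Apple's actual algorithm:

```python
PROMOTE_AFTER = 3                 # assumed promotion threshold, not Apple's real policy
hits, ssd = {}, set()

def read_ms(block):
    if block in ssd:
        return 0.1                # promoted on an earlier pass: SSD speed
    hits[block] = hits.get(block, 0) + 1
    if hits[block] >= PROMOTE_AFTER:
        ssd.add(block)            # promoted *after* this read is served
    return 10.0                   # until then, every read comes off the HDD

# reading 1,000 blocks of freshly imported footage, four passes over the file
for p in range(4):
    total = sum(read_ms(b) for b in range(1_000))
    print(f"pass {p + 1}: {total:,.0f} ms")   # passes 1-3 are all HDD; only pass 4 is SSD
```

By the time the tiering engine has "learned" the data is hot, you may already be done working with it.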
TLDR:
Caching works a lot better with many users on the same storage; it's less great when you're the only user of it. Also, once your working set exceeds the amount of SSD cache you have, it's all downhill performance-wise from there. It's better than no cache, for sure. Just don't expect it to be the same as an SSD-only drive, because it won't be. It's not a magic bullet, unfortunately.