When one drive fails, the whole Fusion volume fails, I guess; no different from a single hard drive.
Want backup? Get an external drive.
I don't expect any difference in workflow.
This.
Assume all hardware will fail and plan accordingly (keep backups).
I may not be great at maths, but I think the chance of a drive failing is roughly twice what it would be with a single-drive setup.
Self-contained hybrid drives will never get anywhere near the performance of this setup - they don't have enough flash.
No.
The right solution is the one that works transparently, so the user can spend their time doing more productive things.
The failure probability is roughly the chance of the HDD failing plus the chance of the SSD failing. If they had the same likelihood (they don't), that would be 2x, but that also misses the major point. The probability of failure is small. Two times 0.1% is still a small number. Even 1% is still single digits. It is a marginal change versus all of the other factors that could compromise your system.
If the SSD soaks up a large portion of the random disk access, HDD failure likelihood may even go down (due to less 'abuse').
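To put rough numbers on it, here is a quick Python check; the failure rates are made up purely for illustration, not real drive statistics:

# Hypothetical annual failure rates -- illustrative only, not real AFR data
p_hdd = 0.01    # assume a 1% chance the HDD fails within a year
p_ssd = 0.005   # assume a 0.5% chance the SSD fails within a year

# The fused volume is lost if either drive fails
p_volume_fail = 1 - (1 - p_hdd) * (1 - p_ssd)

print(round(p_volume_fail, 5))   # 0.01495, i.e. roughly 1.5%
print(round(p_hdd + p_ssd, 5))   # 0.015 -- the simple sum is a good approximation when rates are small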
----------
A 3.5" Hybrid could. That wouldn't work for the mini, but part of the problem with is the 2.5" Hybrid designs is that they are limited on space for the flash ( in addition to the cost of flash). The other issue is that early hybrids used SLC Flash which is substantially more expensive.
In the 2.5" context, yes. mSATA Flash + the 2.5" disk is likely going to be more effective.
99 percent good after one year for each unit means 0.99 x 0.99 = 0.9801 good after 1 year.
But you are making the assumption of perfect software, which may not be true, so let us say 99 percent for the software as well.
No it isn't. The probabilities of two independent events don't just multiply like that. Multiplying gives the chance of both events occurring together. You only need one of them here for a failure.
My disagreement is as follows:
Both must work for the volume to work, so the success rates multiply, giving a 98.01 percent success rate after 1 year.
So 99 percent success times 99 percent success means that 98.01 percent of the time they will both still be working after a year.
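For what it's worth, the two calculations are consistent: success rates multiply, and the resulting failure rates roughly add. A quick check with the same made-up 99 percent figures (illustrative only):

p_hdd_ok = 0.99   # assumed chance the HDD is still working after a year
p_ssd_ok = 0.99   # assumed chance the SSD is still working after a year
p_sw_ok  = 0.99   # assumed chance the software never trashes the volume

both_drives_ok = p_hdd_ok * p_ssd_ok      # 98.01 percent -- both drives still working
all_ok = both_drives_ok * p_sw_ok         # about 97.03 percent once software is included
either_drive_fails = 1 - both_drives_ok   # about 1.99 percent, close to 1% + 1% = 2%
print(round(both_drives_ok, 4), round(all_ok, 4), round(either_drive_fails, 4))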
You could say that but it is largely a diversion.
All indications so far are that this "move files to faster media on the same volume" is primarily the same mechanism that is already in place. It already gets invoked on HDD and SSD drives (although it is not really necessary much of the time on an SSD, so it runs less frequently). There is different data being fed into the same basic heuristics (it may move more or less now), but failure here means losing data, not a lower level of optimization. That isn't going to be any more prone to failure now than it was before, so it is largely part of the nominal background risk.
You're the one who is asserting "perfect" software. There are software failure modes with or without Fusion.
Similarly with a write-driven cache. There are block caches in the OS now. This is just a variant on fundamentally the same mechanism, which sees extremely few failures today.
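To make the "same mechanism, different data" point concrete, here is a toy sketch in Python of the kind of heuristic involved (my own illustration, not Apple's code): count accesses and keep the hottest blocks on the fast tier.

from collections import Counter

class TieredStore:
    def __init__(self, fast_capacity):
        self.fast_capacity = fast_capacity   # how many blocks fit on the "SSD"
        self.fast = set()                    # blocks currently on the fast tier
        self.hits = Counter()                # access counts per block

    def access(self, block):
        self.hits[block] += 1
        self._rebalance()
        return "fast" if block in self.fast else "slow"

    def _rebalance(self):
        # Keep the most frequently accessed blocks on the fast tier.
        self.fast = {b for b, _ in self.hits.most_common(self.fast_capacity)}

store = TieredStore(fast_capacity=2)
for block in ["a", "a", "b", "c", "a", "b"]:
    store.access(block)
print(store.fast)   # {'a', 'b'} -- the two hottest blocks end up on the fast tier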
Throw on top that Apple so far is really only pointing to this being done with their own drives, which means there isn't some unknown, untested configuration that is going to pop up. Software isn't like hardware. If you run the same config, you get the same results. If a defect is found, it is a "find once, fix many" situation. Over time the probabilities go down, unlike hardware, where you get different component failures due to physical effects.
Software components like file systems, where system-wide data integrity is at risk, are typically tested far more rigorously than "normal PC" application software.
If Apple had thrown out File Vault 2 and Fusion last year on top of a relatively new Core Storage framework, perhaps this "version 1.0 software" boogeyman you are trotting out here would have more legs. They didn't. Similarly, if File Vault 2 had surfaced with several integrity bugs ... again, it didn't. They have actually held back on aggressively using these features. That is highly indicative that this isn't a "ship it even with bugs" kind of software project.
I cancelled my order for the Mini with Fusion Drive yesterday because I'm just not sure about it right now...
Does anyone know for sure if it will be a single drive, or whether there will be 2 drives inside the Mac mini (so that you can't put in a second one yourself), or 1 HDD + 1 flash module (like in the MacBook Air) which sits in that nice space below the HDD?
2 drives;
1 is an SSD
1 is an HDD
No one has them yet as far as I know; when selected at release it gave a 7-10 day shipping time. I think they are great for general-public Apple users (i.e. my mom), but as someone who knows what an SSD is for, what to put on it, and how to manage multiple drives, I feel that I might be giving up too much and would just end up frustrated trying to fight with the Fusion software all day. On the phone with Apple now to change it out.
Regarding tiered storage, hardware abstraction, and what is enterprise and what is not, I personally think consumer gear should be more highly abstracted, in the way that enterprise gear is. Why? Well, enterprises pay people with expertise to manage their gear and everything associated with it. These people deal with abstracted infrastructure because it makes the enterprise job easier and allows better fault tolerance, but that ease of replacement could work just as well in the consumer market.
Just saying ;-)