I may not be great at maths, but I think the chance of one drive failing is roughly twice as high as with a single-drive setup.

The probability is roughly the chance of the HDD failing plus the chance of the SSD failing. If they had the same likelihood (they don't), that would be 2x, but that also misses the major point: the probability of failure is small. Two times 0.1% is still a small number. Even 1% is still single digits. It is a marginal change versus all of the other factors that could compromise your system.

If the SSD soaks up a large portion of the random disk access, then HDD failure likelihood may even go down (due to less 'abuse').
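In symbols (my restatement, treating the two drive failures as independent, which is a simplification):

\[ P(\text{volume fails}) = 1 - (1 - p_{\text{HDD}})(1 - p_{\text{SSD}}) \approx p_{\text{HDD}} + p_{\text{SSD}} \]

The approximation holds when both probabilities are small; with 0.1% each, 1 - (0.999 x 0.999) = 0.001999, just under 0.001 + 0.001.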

----------

Self-contained hybrid drives will never get anywhere near the performance of this setup - they don't have enough flash.

A 3.5" Hybrid could. That wouldn't work for the mini, but part of the problem with is the 2.5" Hybrid designs is that they are limited on space for the flash ( in addition to the cost of flash). The other issue is that early hybrids used SLC Flash which is substantially more expensive.

In the 2.5" context, yes. mSATA Flash + the 2.5" disk is likely going to be more effective.
 
No.

The right solution is the one that works transparently, so the user can spend their time doing more productive things.

Self-contained hybrid drives will never get anywhere near the performance of this setup - they don't have enough flash.

This is correct. I looked the Momentus XT hybrid up on the Seagate site, and the tech specs say it's a 750 GB HDD with 8 GB of flash.

Someone posted a screenshot of the new iMac in another thread, and it has two drive bays: one is an HDD and the other is a blade-style SATA SSD bay. The current rMBP offerings use this, and you can see a photo on the Other World Computing website; look at the storage upgrades available for these machines. I'm not sure the blade from the iMac is the same configuration as the one in the laptops, though. An SSD isn't confined to the form factor of platter drives and can be made to fit the available space.

Dale
 
The probability is roughly the chance of the HDD failing plus the chance of the SSD failing. If they had the same likelihood (they don't), that would be 2x, but that also misses the major point: the probability of failure is small. Two times 0.1% is still a small number. Even 1% is still single digits. It is a marginal change versus all of the other factors that could compromise your system.

If the SSD soaks up a large portion of the random disk access, then HDD failure likelihood may even go down (due to less 'abuse').

----------



A 3.5" Hybrid could. That wouldn't work for the mini, but part of the problem with is the 2.5" Hybrid designs is that they are limited on space for the flash ( in addition to the cost of flash). The other issue is that early hybrids used SLC Flash which is substantially more expensive.

In the 2.5" context, yes. mSATA Flash + the 2.5" disk is likely going to be more effective.



99 percent good after one year for each unit means 0.99 x 0.99 = 0.9801, i.e. 98.01 percent good after 1 year.

But you are making the assumption of perfect software, which may not be true, so let us say 99 percent for the software as well:

0.99 x 0.99 x 0.99 = 0.970299, i.e. 97.0299 percent will be good after 1 year.


I can tell you that 99 percent for an HDD is optimistic.

I can tell you that 99 percent for a Samsung 830 SSD is pessimistic.

I can tell you that 99 percent for perfect software is a guess.

I would be comfortable guessing a 1 to 3% failure rate for Fusion in one year.

It is not whether it fails that worries me; the real question is how you recover.



-----------------------------------------------------------------------------------------------------------

(If you have a backup clone, do you just press the power button, and does the machine know to ignore both internal drives?

On a mini, turning it upside down and lifting the SATA connectors to force it to pick an external drive should work.

Doing that technically voids the warranty, though it should be undetectable. But do you want to do this?)




ALL GUESSWORK ^

-------------------------------------------------------------------------------------------


I should order one with Fusion, and I will, but not until the refurbs come out.
 
99 percent good after one year for each unit means 0.99 x 0.99 = 0.9801, i.e. 98.01 percent good after 1 year.

No, it isn't. The probabilities of two independent events don't multiply like that. Multiplying gives the chance of both events occurring together; you only need one failure here.


But you are making the assumption of perfect software, which may not be true, so let us say 99 percent for the software as well:

You could say that, but it is largely a diversion.

All indications so far are that this "move files to faster media on the same volume" feature is primarily the same mechanism that is already in place. It already gets invoked on HDDs and SSDs (although it is not really necessary much of the time on an SSD, so it runs less frequently). Different data is being fed into the same basic heuristics (it may move more or less now), but failure here means losing data, not a lower optimization level. That isn't going to be any more prone to failure now than it was before, so it is largely part of the nominal background risk.
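For what it's worth, here is a toy sketch in plain Python of the kind of "promote hot data to the faster tier" heuristic being described. This is not Apple's Core Storage code; the class name, threshold, and block granularity are all made up for illustration.

from collections import Counter

class TieredVolume:
    # Toy model: blocks that keep getting read are promoted to a small fast tier.
    def __init__(self, fast_capacity_blocks=4, promote_threshold=4):
        self.fast = set()                        # blocks currently on the fast (SSD) tier
        self.fast_capacity = fast_capacity_blocks
        self.promote_threshold = promote_threshold
        self.access_counts = Counter()           # heuristic input: how often each block is read

    def read(self, block):
        self.access_counts[block] += 1
        hot = self.access_counts[block] >= self.promote_threshold
        if hot and block not in self.fast and len(self.fast) < self.fast_capacity:
            self.fast.add(block)                 # promote: copy this block to the fast tier
        return "fast" if block in self.fast else "slow"

vol = TieredVolume()
print([vol.read(7) for _ in range(6)])           # ['slow', 'slow', 'slow', 'fast', 'fast', 'fast']

The point being that the failure mode that matters is losing data during the copy, not picking the 'wrong' blocks to promote; a bad heuristic only costs speed.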

You're the one who is asserting "perfect" software. There are software failure modes with or without Fusion.

Similarly for a write-driven cache: there are block caches in the OS now, and this is just a variant on fundamentally the same mechanism, which very rarely fails today.

On top of that, Apple so far is only pointing to this being done with their own drives, which means there isn't any unknown, untested configuration that is going to pop up. Software isn't like hardware: if you run the same config, you get the same results. If a defect is found, it is a "find once, fix many" situation. Over time the probabilities go down, unlike hardware, where different components fail due to physical effects.

Software components like file systems, where system-wide data integrity is at risk, are typically tested far more rigorously than "normal PC" application software.

If Apple had thrown FileVault 2 and Fusion out last year on top of a relatively new Core Storage framework, perhaps this "version 1.0 software" boogeyman you are trotting out here would have more legs. They didn't. Similarly, if FileVault 2 had surfaced with several integrity bugs ... again, it didn't. They have actually held back on aggressively using these features. That is highly indicative that this isn't a "ship it even with bugs" kind of software project.
 
No, it isn't. The probabilities of two independent events don't multiply like that. Multiplying gives the chance of both events occurring together; you only need one failure here.

My disagreement is as follows:
both must work for the volume to work, so the success rates multiply to give a 98.01 percent success rate after 1 year.

So 99 percent success multiplied by 99 percent success gives 98.01 percent of the time that they will both still be working after a year.
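For what it's worth, the two statements give the same numbers: survival probabilities multiply, and for small failure rates the combined failure chance is roughly the sum. A quick check in plain Python, with made-up 1% annual rates:

p_hdd_fail = 0.01                      # assumed 1% annual failure rate for the HDD
p_ssd_fail = 0.01                      # assumed 1% annual failure rate for the SSD

# Survival view: both drives must keep working, so survival rates multiply.
p_both_survive = (1 - p_hdd_fail) * (1 - p_ssd_fail)
print(round(p_both_survive, 4))        # 0.9801 -> both still working 98.01% of the time

# Failure view: the volume is in trouble if at least one drive fails.
p_volume_fails = 1 - p_both_survive
print(round(p_volume_fails, 4))        # 0.0199 -> roughly 0.01 + 0.01 = 0.02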




You could say that, but it is largely a diversion.

All indications so far are that this "move files to faster media on the same volume" feature is primarily the same mechanism that is already in place. It already gets invoked on HDDs and SSDs (although it is not really necessary much of the time on an SSD, so it runs less frequently). Different data is being fed into the same basic heuristics (it may move more or less now), but failure here means losing data, not a lower optimization level. That isn't going to be any more prone to failure now than it was before, so it is largely part of the nominal background risk.

You're the one who is asserting "perfect" software. There are software failure modes with or without Fusion.

Similarly for a write-driven cache: there are block caches in the OS now, and this is just a variant on fundamentally the same mechanism, which very rarely fails today.

On top of that, Apple so far is only pointing to this being done with their own drives, which means there isn't any unknown, untested configuration that is going to pop up. Software isn't like hardware: if you run the same config, you get the same results. If a defect is found, it is a "find once, fix many" situation. Over time the probabilities go down, unlike hardware, where different components fail due to physical effects.

Software components like file systems, where system-wide data integrity is at risk, are typically tested far more rigorously than "normal PC" application software.

If Apple had thrown FileVault 2 and Fusion out last year on top of a relatively new Core Storage framework, perhaps this "version 1.0 software" boogeyman you are trotting out here would have more legs. They didn't. Similarly, if FileVault 2 had surfaced with several integrity bugs ... again, it didn't. They have actually held back on aggressively using these features. That is highly indicative that this isn't a "ship it even with bugs" kind of software project.


I also disagree with your software assertion. Right now the 2012 minis are having a driver issue with the integrated GPU. So they tested it, and I would guess they decided that random screen blackouts of under a second were acceptable enough to sell; the "we need to get this to market, we will patch it later" mentality took over.


Just think Apple Maps. Scott Forstall.


"DaringFireball's John Gruber believes that Forstall was forced out of Apple:
Forstall is not walking away; he was pushed. Potential factors that worked against Forstall: his design taste, engineering management, abrasive style, and the whole iOS 6 Maps thing. I also wonder how much Forstall was effectively protected by his close relationship with Steve Jobs — protection which, obviously, no longer exists.
Inside Apple author Adam Lashinsky agrees with that sentiment and also cites the Apple Maps issue as a reason for his demise:
I also heard that Forstall refused to sign the letter apologizing for the mapping fiasco, sealing his fate at Apple.
Lashinsky is referring to a public apology posted by Apple CEO Tim Cook about iOS 6's Maps. The Map app in iOS 6 replaced Google Maps with Apple's own proprietary solution. After a significant amount of criticism after iOS 6's launch, Cook wrote an open letter apologizing to customers about not meeting expectations. "


The quote is from a MacRumors article:


https://www.macrumors.com/2012/10/29/scott-forstall-reportedly-forced-out-of-apple/





Look, the idea of Fusion is a stopgap and already looks dated, as SSDs are just getting cheaper.

What will happen is that RAM plus SSD will be the go-to, not SSD plus HDD,

and real speeds will be achieved. This tech is already being done with PCs.



http://forums.anandtech.com/showthread.php?t=2279821


This can be done with a DIY PC pretty easily, and it makes Fusion look like molasses.
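Not the software from that thread, just a minimal sketch in plain Python of the general idea: keep recently read blocks in RAM so repeat reads never touch the drive. The names, block granularity, and capacity here are made up.

from collections import OrderedDict

class RamReadCache:
    # Toy LRU read cache held in RAM in front of a slower drive.
    def __init__(self, backing_read, capacity_blocks=1024):
        self.backing_read = backing_read    # function: block number -> data
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # block number -> data, kept in LRU order

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)   # recently used: keep it around
            return self.cache[block]        # served from RAM, no drive access
        data = self.backing_read(block)     # miss: go to the SSD/HDD
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data

cache = RamReadCache(lambda block: "data-for-block-%d" % block)
cache.read(1); cache.read(1)                # the second read never touches the drive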
 
I cancelled my order for the Mini with Fusion Drive yesterday because I'm just not sure about it right now...

Does anyone know for sure if it will be a single drive, or will there be 2 drives inside the Mac mini (so that you can't put in a second one by yourself), or 1 HDD + 1 Flash module (like in the MBair) which sits in that nice space below the HDD?
 
I cancelled my order for the Mini with Fusion Drive yesterday because I'm just not sure about it right now...

Does anyone know for sure if it will be a single drive, or will there be 2 drives inside the Mac mini (so that you can't put in a second one by yourself), or 1 HDD + 1 Flash module (like in the MBair) which sits in that nice space below the HDD?

2 drives:
1 is an SSD
1 is an HDD
 
The probability is roughly the chance of the HDD failing plus the chance of the SSD failing. If they had the same likelihood (they don't), that would be 2x, but that also misses the major point: the probability of failure is small. Two times 0.1% is still a small number. Even 1% is still single digits. It is a marginal change versus all of the other factors that could compromise your system.

If the SSD soaks up a large portion of the random disk access, then HDD failure likelihood may even go down (due to less 'abuse').

----------



A 3.5" Hybrid could. That wouldn't work for the mini, but part of the problem with is the 2.5" Hybrid designs is that they are limited on space for the flash ( in addition to the cost of flash). The other issue is that early hybrids used SLC Flash which is substantially more expensive.

In the 2.5" context, yes. mSATA Flash + the 2.5" disk is likely going to be more effective.


The chance that both have failed is the product of the two failure chances (the square, if they are equal), so it is very unlikely to happen.
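With illustrative 1% annual rates for each drive, and again assuming independence:

\[ P(\text{both fail}) = p_{\text{HDD}} \times p_{\text{SSD}} = 0.01 \times 0.01 = 0.0001 = 0.01\% \]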
 
No one has them yet as far as I know; when selected at release, it gave a 7-10 day shipping time. I think they are great for general-public Apple users (i.e. my mom), but as someone who knows what an SSD is for, what to put on it, and how to manage multiple drives, I feel that I might be giving up too much and am just going to end up frustrated trying to fight with the Fusion software all day. On the phone with Apple now to change it out.

I highly doubt that you will end up frustrated trying to fight with the Fusion software all day.
 
No.

The right solution is the one that works transparently, so the user can spend their time doing more productive things.

Self-contained hybrid drives will never get anywhere near the performance of this setup - they don't have enough flash.

You're contradicting yourself. Hybrid drives work transparently; Fusion Drive does not. That's why hybrid drives win; I don't care if a non-transparent solution is faster.
 
Unless I missed something regarding flash drives, a failure in a flash drive is the inability to write data to it once the cycle count is used up for those cells. This would imply that you have not lost data, but the drive is now effectively read-only (although that is still not very useful).

HDD failures are usually mechanical and relate directly to damage to the storage medium: either it is physically damaged or it can no longer be rotated (without transplanting the platters).

Regarding tiered storage, hardware abstraction, and what is enterprise and what is not: I personally think consumer gear should be more highly abstracted, in the way that enterprise gear is. Why? Well, enterprises pay people with expertise to manage their gear and everything associated with it. Those people deal with abstracted infrastructure because it makes the job easier and allows better fault tolerance, but the ease of replacement could work just as well in the consumer market.

Just saying ;-)
 
Regarding tiered storage, hardware abstraction, and what is enterprise and what is not: I personally think consumer gear should be more highly abstracted, in the way that enterprise gear is. Why? Well, enterprises pay people with expertise to manage their gear and everything associated with it. Those people deal with abstracted infrastructure because it makes the job easier and allows better fault tolerance, but the ease of replacement could work just as well in the consumer market.

Just saying ;-)

It's going to go that way; all the new computing tech gets developed in the enterprise first.

RAID, ZFS, tiered storage, caching RAID controllers, processor cache memory, etc.

It starts out expensive, the price comes down, and then it trickles down to consumer-grade gear.
 