Your SSDs are about half as fast as the ones Apple includes. Add another $300 to that price to put in a 256GB PCIe SSD.

----------

256GB of storage for a Mac Pro? Really? That's a MacBook Air config, isn't it? And, of course, everyone's been clamoring for a Quad, right?

Those SSDs cost $500 for 256GB. So do you really want the base model to come with 1TB and add another $1,500 to the cost?
 
Your SSDs are about half as fast as the ones Apple includes. Add another $300 to that price to put in a 256GB PCIe SSD.

Again, 3 x Samsung 840 SSDs actually run faster than Apple's PCIe. This has been discussed multiple times in this thread and there are plenty of sources with real-world benchmarks.

----------

And I ignored the advantages of expandable internal storage, because I actually want a smaller size and compact footprint. The reason I don't haul my current Mac Pro around is that it's difficult to do so.

Fair enough, but let me ask you this: Do you plan to have any external devices and storage? Are you factoring in the cumbersomeness of that kind of setup?

Not everyone is satisfied with 256GB :)
 
Again, 3 x Samsung 840 SSDs actually run faster than Apple's PCIe. This has been discussed multiple times in this thread and there are plenty of sources with real-world benchmarks.

----------


That's a RAID0 solution, not a single disk. Obviously 3 drives in RAID0 run faster, but you are taxing the CPU due to software RAID and your chance of data loss is 3 times higher, so no, it's not the same thing. One drive is better than 3 drives in a RAID. Otherwise why would anyone bother with a PCI-e SSD?

But in any case, even when you add $300 you get to $3K, which is still $1K less than Apple's solution, so you still have a point.
 
That's a RAID0 solution, not a single disk. Obviously 3 drives in RAID0 run faster, but you are taxing the CPU due to software RAID

Really? How much are you taxing it?

your chance of data loss is 3 times higher

Really? Why are the chips on the SATA drives different from the chips on the PCIe? Same chips, same likelihood of failure per chip. I'm not saying the failure rate is the same, but it's not the same multiplier as with platter drives. It's likely not even close to 3 times higher. Besides, are we not backing up here? :)

Think of it this way: would a platter hard drive really be that much more reliable than a 4-drive RAID 0 if that one hard drive had 4 different platters, 4 motors, and 4 different read mechanisms? It would still be more reliable if it had the same circuitry, but it wouldn't be 4 times as reliable, as the most common points of failure have been multiplied. The most common point of failure (90%) on an SSD is the NAND chips themselves, and the PCIe SSDs have a roughly equal number of NAND chips per GB.

If you really wanted to be a bad-***, you could just make a 6 drive SSD RAID-10 for $532 more (4 more drives). This would make it faster and more reliable than Apple's PCIe solution, while still keeping the price under $3,200 :)

Otherwise why would anyone bother with a PCI-e SSD?

Why indeed.

But in any case, even when you add $300 you get to $3K, which is still $1K less than Apple's solution.

Not to nitpick (especially since you're agreeing with me), but adding a 3rd 128GB Samsung 840 drive to this particular setup is another $133, making the total cost $2800.
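Just to make that arithmetic explicit, here's a quick tally based only on the prices quoted in this thread (the two-drive base figure is inferred from the $2800 and $133 numbers above, not from an actual parts list):

```python
# Rough tally of the build prices quoted in this thread.
# base_two_drive is inferred from "$2800 total with a 3rd $133 drive";
# the underlying parts list is whatever the earlier post configured.
drive_price = 133                      # 128GB Samsung 840, per the post above
base_two_drive = 2800 - drive_price    # => $2667 for the 2-drive setup

raid0_three_drive = base_two_drive + drive_price       # 3-drive RAID0
raid10_six_drive = base_two_drive + 4 * drive_price    # 4 more drives, RAID-10

print(f"3-drive RAID0 build:   ${raid0_three_drive}")   # $2800
print(f"6-drive RAID-10 build: ${raid10_six_drive}")    # $3199, still under $3,200
```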
 
Really? How much are you taxing it?



Really? Why are the chips on the SATA drives different from the chips on the PCIe? Same chips, same likelihood of failure per chip. I'm not saying the failure rate is the same, but it's not the same multiplier as with platter drives. It's likely not even close to 3 times higher. Besides, are we not backing up here? :)

Think of it this way: would a platter hard drive really be that much more reliable than a 4-drive RAID 0 if that one hard drive had 4 different platters, 4 motors, and 4 different read mechanisms? It would still be more reliable if it had the same circuitry, but it wouldn't be 4 times as reliable, as the most common points of failure have been multiplied. The most common point of failure (90%) on an SSD is the NAND chips themselves, and the PCIe SSDs have a roughly equal number of NAND chips per GB.

What are you talking about? In a RAID0 with 3 drives, if one of the drives fails, you lose all your data. So your chance of data loss is 3 times higher. If you want the actual math, here's how it goes.

Assume that the failure chance of a single drive is X/100. That means (100-X)/100 is the chance that it does not fail. So with one drive you have a (100-X)/100 chance of never losing your data. With 3 drives, your chance of never losing data is ((100-X)/100)^3.

If X=1, you have a 99% chance of not losing data with a single drive; with 3 drives your chance is 97.0299%, so your chance of losing data goes from 1% to about 3%.

If X=2, you have a 98% chance of not losing data with a single drive; with 3 drives your chance is 94.1192%, so your chance of losing data goes from 2% to about 5.9%, which is approximately 6%.

As X grows larger the ratio shrinks, but I'd hope no drive has a failure rate higher than 5%, so for small values of X, using multiple drives roughly linearly increases your risk of data loss.
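Here's that arithmetic as a quick sketch (the 1% and 2% per-drive figures are just the illustrative X values from above, not real failure rates):

```python
# Sketch of the failure-probability arithmetic above.
# per_drive is the assumed failure chance of one drive (X/100); the values
# below are only the illustrative ones from this post, not real statistics.
def raid0_failure_chance(per_drive: float, drives: int) -> float:
    """Chance that at least one of `drives` independent drives fails,
    which for RAID0 means losing the whole array."""
    return 1 - (1 - per_drive) ** drives

for x in (0.01, 0.02):
    print(f"single drive: {x:.0%}   3-drive RAID0: {raid0_failure_chance(x, 3):.4%}")
# single drive: 1%   3-drive RAID0: 2.9701%
# single drive: 2%   3-drive RAID0: 5.8808%
```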

If you really wanted to be a bad-***, you could just make a 6 drive SSD RAID-10 for $532 more (4 more drives). This would make it faster and more reliable than Apple's PCIe solution, while still keeping the price under $3,200

A 6-drive RAID has 6 times the chance of failing. So no. RAIDs with multiple drives only make sense if you use them with sensible RAID levels, not RAID0, which is the worst type of RAID set there is; it offers no protection. So if you want to actually get better performance and at the same time not increase the risk of data loss, you need to keep adding drives and using different RAID levels, which adds to the price.

But there's of course this: one can always use the RAID0 set for the boot drive and apps and not keep any crucial information on it; then it's irrelevant how big the failure rate is, since you're not losing important data, just being inconvenienced more often.

----------

Why indeed.

The same question can be asked about any type of drive. Why buy a 500MB/sec SSD when you can buy two 250MB/sec drives for less and RAID0 them? Or get four 125MB/sec drives for even less, probably, and RAID0 them?

And the answer is the same: because adding more and more disks to a RAID0 is a very, very bad idea. It's so bad that RAID0 is never used in a professional environment; it's only for enthusiasts who have no important data to lose and just want to test their drive speeds.
 
What are you talking about? In a RAID0 with 3 drives, if one of the drives fails, you lose all your data. So your chance of data loss is 3 times higher. If you want the actual math, here's how it goes.

Here's what I'm talking about: think of a PCIe SSD as just a "RAID" of a bunch of NAND chips. Observe:

[Image: PCIe SSD board populated with multiple NAND chips]


That's not a single chip on there. There are multiple chips functioning together, all multiplying the risk of failure.

Therefore, since the chips themselves are the point of failure 90% of the time, having more chips increases the chance of failure regardless of whether they're divided up into small 2.5" SATA-controlled boxes or soldered onto a single card.
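To put toy numbers on that argument (the per-chip failure chance and chip counts below are made-up illustrative values, not real reliability data):

```python
# Toy model of the "more NAND packages = more failure points" argument.
# The per-chip failure chance and chip counts are made-up illustrative
# numbers; the point is only that the arithmetic is the same whether the
# chips sit in three SATA enclosures or on one PCIe card.
def device_failure(per_chip: float, chips: int) -> float:
    """Chance that at least one of `chips` independent NAND packages fails."""
    return 1 - (1 - per_chip) ** chips

per_chip = 0.001               # assumed failure chance of one NAND package
chips_per_sata_ssd = 8         # assumed packages per 2.5" SATA SSD

three_sata_raid0 = device_failure(per_chip, 3 * chips_per_sata_ssd)
one_pcie_card = device_failure(per_chip, 3 * chips_per_sata_ssd)  # same total flash

print(f"3x SATA SSDs in RAID0:        {three_sata_raid0:.3%}")
print(f"1x PCIe SSD, same total NAND: {one_pcie_card:.3%}")
# If the NAND itself drives ~90% of failures, the two come out essentially
# equal; the extra SATA controllers add only a small additional term.
```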



A 6-drive RAID has 6 times the chance of failing. So no. RAIDs with multiple drives only make sense if you use them with sensible RAID levels, not RAID0, which is the worst type of RAID set there is; it offers no protection. So if you want to actually get better performance and at the same time not increase the risk of data loss, you need to keep adding drives and using different RAID levels, which adds to the price.

My post was talking about a 6 drive RAID-10, not extending a RAID-0.

[Image: 6-drive RAID-10 configuration]
 
That's not a single chip on there. There are multiple chips functioning together, all multiplying the risk of failure.

Therefore, since the chips themselves are the point of failure 90% of the time, having more chips increases the chance of failure regardless of whether they're divided up into small 2.5" SATA-controlled boxes or soldered onto a single card.

That's a good point. Need to check the failure rates of SSD controllers and SSD chips and which contributes how much. If the chips fail most of the time then indeed the difference might be negligible.
 
That's a good point. Need to check the failure rates of SSD controllers and SSD chips and which contributes how much. If the chips fail most of the time then indeed the difference might be negligible.

Exactly. Having more controllers obviously increases the chances of something going wrong, but it's the transistors themselves that are the culprit with SSDs 90% of the time, according to multiple sources. If you find out something different, let me know.
 
Here's what I'm talking about: think of a PCIe SSD as just a "RAID" of a bunch of NAND chips. Observe:

For that particular SSD, yes, but typically it's not a RAID configuration like in that picture. On a striped volume you do increase the risk of failure the more disks you add.
 
What I'm interested in is this: since the Mac Pro comes with 3 Thunderbolt controllers with 2 channels each, can one in theory plug in 3 external SSDs, each rated at 1GB/sec, connected through Thunderbolt chassis to different controllers, and then software RAID0 them to get speeds close to 3GB/sec, like one can with PCIe RAID cards? A single TB controller is limited to 2GB/sec.
 
What I'm interested in is this: since the Mac Pro comes with 3 Thunderbolt controllers with 2 channels each, can one in theory plug in 3 external SSDs, each rated at 1GB/sec, connected through Thunderbolt chassis to different controllers, and then software RAID0 them to get speeds close to 3GB/sec, like one can with PCIe RAID cards? A single TB controller is limited to 2GB/sec.

I believe this has been discussed in other threads, and I think everyone agreed this is probably the case. Software RAID doesn't discriminate based on which controller the drive(s) are plugged into, so it should work--limited to the total 6.0 - 6.3GBps of the nMP's TB channels. It should function regardless of whether the TB2 ports can aggregate with each other (which they almost certainly can't).

I'm not sure why you'd want three hardware RAIDs and not just JBOD all of them and let the software RAID figure it out (is there an advantage to this?).

My gripe would be having another big box full of hard drives with another PSU and a bunch of cords for something that should fit in the main PC--but that's just my opinion.
 
I believe this has been discussed in other threads, and I think everyone agreed this is probably the case. Software RAID doesn't discriminate based on which controller the drive(s) are plugged into, so it should work--limited to the total 6.0 - 6.3GBps of the nMP's TB channels. It should function regardless of whether the TB2 ports can aggregate with each other (which they almost certainly can't).

I'm not sure why you'd want three hardware RAIDs and not just JBOD all of them and let the software RAID figure it out (is there an advantage to this?).

My gripe would be having another big box full of hard drives with another PSU and a bunch of cords for something that should fit in the main PC--but that's just my opinion.

At first I thought of hardware RAIDs for the price, since 1GB/sec SSDs are expensive.

Btw, I think the TB channels are rated much higher than that. Even with TB1, one can get 900MB/sec throughput using the Areca 8-drive RAID, so one TB1 channel is capable of running at that bandwidth in a real-world scenario. A TB2 channel should be able to double that, so 1.8GB/sec should be possible using only a single connection, but I'd like to know if it's possible to get to 3GB/sec, or even 4GB/sec.

About the extra drive boxes, that's my current setup anyway. I have 35 drives, 4 of them sitting inside the MP. It's irrelevant for me if 35 of them are external or 32 of them. :)

And as someone who in the past carried this Mac Pro in my suitcase internationally whenever I had to move, I actually love the small volume. This is like a portable workstation one can carry around.
 
But isn't that a moot point - since the Mac Mini Pro forces you to put the Samsung 840 Pros behind the T-Bolt bottleneck?

To be fair, if it were TB1, that'd be the case. TB1 seems to only be able to go up to 900MBps, which would significantly bottleneck even a 2-drive RAID0 with decent SATA SSDs.

However, TB2 is probably going to get up to 2000MBps at some point, with some controller (TB1 controllers often disappoint). That should not bottleneck a 2- or 3-drive SSD RAID0. Four drives will probably hit the wall, but 1500MBps, as in a 3-drive array, should work okay.

Of course, a 2000MBps RAID-0 would wipe out 1/3 of your total TB2 throughput on a nMP.

----------

I'd like to know if it's possible to get to 3GB/sec, or even 4GB/sec.

3GBps over TB2? The theoretical maximum is 2.5GBps (20Gbps / 8), and by all accounts it's almost certainly 2.1GBps or less.
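For concreteness, here's the back-of-the-envelope arithmetic behind those figures (the 0.84 overhead factor is just an assumption chosen to land on the ~2.1GBps per-link estimate above; none of these numbers are measurements):

```python
# Back-of-the-envelope Thunderbolt throughput arithmetic from this thread.
# The overhead factor is an assumption picked to match the ~2.1GBps
# per-link estimate above; nothing here is measured.
def gbps_to_gbytes_per_sec(gbps: float) -> float:
    return gbps / 8                            # bits per second -> bytes per second

tb2_theoretical = gbps_to_gbytes_per_sec(20)   # 2.5 GB/s per TB2 link
overhead = 0.84                                # assumed protocol/controller overhead
tb2_realistic = tb2_theoretical * overhead     # ~2.1 GB/s

# Striping SSDs that hang off the nMP's three separate TB2 controllers:
# software RAID0 doesn't care which controller a disk is behind, so the
# ceiling is roughly the sum of the three links.
aggregate = 3 * tb2_realistic                  # ~6.3 GB/s total

sata_ssd = 0.5                                 # ~500 MB/s per decent SATA SSD
print(f"One TB2 link:  ~{tb2_realistic:.1f} GB/s "
      f"(a 3-drive SATA RAID0 at ~{3 * sata_ssd:.1f} GB/s fits; TB1 at ~0.9 GB/s would choke it)")
print(f"Three controllers striped: up to ~{aggregate:.1f} GB/s aggregate")
```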

About the extra drive boxes, that's my current setup anyway. I have 35 drives, 4 of them sitting inside the MP. It's irrelevant for me if 35 of them are external or 32 of them. :)

And as someone who in the past carried this Mac Pro in my suitcase internationally whenever I had to move, I actually love the small volume. This is like a portable workstation one can carry around.

This doesn't make any sense: you're talking about the portability of the nMP but saying you're going to need to externalize even more of your storage than you already do? Is that really a more portable machine, or are you just not counting the parts of the machine that aren't in the main box? What good is moving your computer around if you have none (or just a small portion) of your data with you?
 
With all that SSD discussion, please do not forget that a SATA- or PCIe-based RAID 0 (and that's what most PCIe SSD cards actually are) does in fact deliver much better performance in certain benchmarks. But that is in certain benchmarks, and NOT what you get in "real life".

So the actual performance gain in normal usage is far from what you can get in theory. Therefore I think a RAID does not gain that much compared to a single SATA III drive. Also, SATA III has double the bandwidth of SATA II, but NOT double the overall performance in real-life applications.
 
With all that SSD discussion, please do not forget that a SATA- or PCIe-based RAID 0 (and that's what most PCIe SSD cards actually are) does in fact deliver much better performance in certain benchmarks. But that is in certain benchmarks, and NOT what you get in "real life".

So the actual performance gain in normal usage is far from what you can get in theory. Therefore I think a RAID does not gain that much compared to a single SATA III drive. Also, SATA III has double the bandwidth of SATA II, but NOT double the overall performance in real-life applications.

Definitely true; there are bottlenecks that have nothing to do with the hard drives. After a certain point, the speed of the HD stops contributing to the time it takes to perform a task.

Take these benchmarks, for instance:
[Image: drive benchmark results]


This is because the bottleneck with high bandwidth, high IOPS, and high read/write speeds is usually the rest of the computer. The array speed stops being useful because the computer literally isn't ready for the data the array is able to output, due to processing speed, RAM limitations, etc.--it can't run at peak performance.

With video editing off the hard drive, the benchmarks do, in fact, reflect the ability to perform certain tasks well. Some benchmark apps even report this:

[Image: benchmark app results]


(This is a dual SSD RAID-0 over SATA)

So while HD speed isn't everything, benchmarks in many cases do reflect real-world scenarios--specifically those tasks that are based entirely around those speeds.
 
Your SSDs are about half as fast as the ones Apple includes. Add another $300 to that price to put in a 256GB PCIe SSD.


Those SSDs cost $500 for 256GB. So do you really want the base model to come with 1TB and add another $1,500 to the cost?
They were much faster than anything Apple sold at the time, and regardless of speed, 256GB just doesn't cut it anymore for a boot drive, especially for a machine hamstrung by having a single internal drive.

If they would actually give us a price, we could decide for ourselves, couldn't we?
 
How do you figure? Go price out a comparable workstation at any competitor or custom builder. Unless you're buying parts and building it yourself, the price is certainly reasonable. There are questions/concerns about the new Mac Pro, but the pricing really isn't one of them.




Well, they've never really "released" the high-end Mac Pro price. It's always been a build-to-order machine that you have to price out yourself.

No it's not. I did exactly that on Dell's site and it came out over $1,000 cheaper than Apple.

[Image: Dell workstation configuration pricing]
 
No it's not. I did exactly that on Dell's site and it came out over $1,000 cheaper than Apple.


Wrong CPU, that's the old model. Those V7900s you chose are in no way comparable to the default cards on the nMP. No thunderbolt. No ethernet. Actually no wifi as far as I can tell. Slower SSD. Probably loud, large, and ugly. But you do get a DVD drive... enjoy!
 
Wrong CPU, that's the old model. Those V7900s you chose are in no way comparable to the default cards on the nMP. No thunderbolt. No ethernet. Actually no wifi as far as I can tell. Slower SSD. Probably loud, large, and ugly. But you do get a DVD drive... enjoy!

This is the closest I could get on Dell



Here's BOXX

 
Wrong CPU, that's the old model. Those V7900s you chose are in no way comparable to the default cards on the nMP. No thunderbolt. No ethernet. Actually no wifi as far as I can tell. Slower SSD. Probably loud, large, and ugly. But you do get a DVD drive... enjoy!

The V7900s have 1250 stream processors each. That's actually more than the Mac Pro.

You really think Dell would ship a workstation without Ethernet?
 
This is the closest I could get on Dell

No it's not. I did exactly that on Dell's site and it came out over $1,000 cheaper than Apple.

Dell is as big a ripoff as Apple. The point of my OP was that the things you're supposedly paying more for (fast hard drive, GPU, etc.) have cheaper alternatives which are better. Moreover, home-built machines have better warranties and obviously more expandability.
 
Dell is as big a ripoff as Apple. The point of my OP was that the things you're supposedly paying more for (fast hard drive, GPU, etc.) have cheaper alternatives which are better. Moreover, home-built machines have better warranties and obviously more expandability.

Large companies do not deal with individual part warranties, as it's more hassle; they pay more for workstations from reputable dealers to get onsite support and quick turnaround.

If something fails in a custom-built box, you have to deal with RMA procedures, shipping the part to the vendor, and having them test it and then determine whether they'll replace it.
In the meantime you have downtime waiting on that part.

With workstations and servers, if something goes wrong you contact the vendor and they provide a replacement ASAP, or send a tech over ASAP with the replacement parts. Time is money, and you pay good money for good service and little downtime.

It's one of the reasons workstation parts cost so much.

You don't need to deal with the likes of XFX and its horrible warranty support; you deal directly with the OEM/distributor. And if you went the custom route and bought a workstation GPU, you deal directly with AMD's and NVIDIA's business support instead of EVGA, XFX, Sapphire, or PowerColor.

Home-built machines are also only as expandable as the motherboard chosen for them. If you're building a proper workstation, you won't be using the likes of MSI, ASRock, or others; you'll be using proper workstation boards from Intel, AMD, Supermicro, and Tyan.
 