Yes, uncapped... unless you're running a RAID0 or are otherwise using more than one drive at once. Each drive doesn't get its own TB channel; all three share one channel. That channel tops out at 850 MB/s, so your RAID is crippled.

Actually the channel does not top out at 850 MB/sec, since they already got 900 MB/sec using the ARECA TB1 box, and it may go even further depending on future TB1 controllers. And like I said, it's not capped at SATA III; it's capped around 7 Gb/sec with the ARECA box.


Want faster? More drives? Cheaper? Try the HighPoint RocketRAID 2744: four miniSAS ports for $450. This card supports 16 SATA III drives over PCIe 2.0 x16 (8 GB/s).

I don't understand this. I said they used the RocketRAID 2744 and you are saying they should use the RocketRAID 2744? That card is $450, and $450 is not cheaper than $450.
That's more bandwidth than all 6 nMP thunderbolt 2 ports combined (which is really only 6.0 to 6.3GBps).

3 nMP ports combined give you 60 Gb/sec before overhead. Assuming TB2 overhead is close to TB1 overhead, the real-world speed we'll get using channel bonding will be close to 5 GB/sec, which is faster than what you can get out of the RocketRAID 2744 using 8 SSDs, which topped out around 3 GB/sec. Although I'd love to see how high you can go using the RocketRAID 2744 with 16 SSDs. And if channel bonding does not work, then the fastest you'll get out of the Mac Pro will be around 1900 MB/sec, again assuming the overhead is similar to TB1.
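For what it's worth, that estimate can be sketched numerically. The efficiency factor here is an assumption, back-calculated from the ~900 MB/sec TB1 measurement mentioned above, so treat the outputs as rough figures, not benchmarks:

```python
# Rough Thunderbolt 2 bandwidth estimate for the new Mac Pro.
# Assumption: real-world efficiency matches TB1's measured result,
# i.e. ~900 MB/s out of a theoretical 1250 MB/s (10 Gb/s) channel.
TB1_THEORETICAL = 10_000 / 8          # 10 Gb/s -> 1250 MB/s
TB1_MEASURED = 900                    # ARECA TB1 box result cited above
efficiency = TB1_MEASURED / TB1_THEORETICAL   # 0.72

tb2_per_channel = (20_000 / 8) * efficiency   # 20 Gb/s channel -> ~1800 MB/s
bonded = 3 * tb2_per_channel                  # ~5400 MB/s if bonding works

print(round(tb2_per_channel), round(bonded))  # 1800 5400
```

Those numbers bracket the ~1900 MB/sec single-channel and ~5 GB/sec bonded figures in the post.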
 
Given how pretty much anyone doing digital imaging over the past decade has expanded from stills to also include video, and how HD video, which started five years ago, has now reached the point where 1080 (rather than 720) is essentially the default, the minimum bandwidth demands are in the ~250 MB/s ballpark.

As such, when it comes to the "archives" question, it again comes down to the use case of how frequent access is going to be: whether it is worthwhile to save money with slow archives when one's workflow then requires copying the data onto "scratch" media before doing anything with it (potentially even something as small as watching a clip to verify it is indeed the one you're searching for).
-hh

These two paragraphs above address the two types of storage I outlined earlier. I'm going to summarize my views again, here, and then let it rest (you can have the last word if you like)... My apologies for sounding repetitive, but I find I'm getting better at articulating my views as we go along.

Again, everyone needs "some amount" of fast storage... Totally agree. A few years ago, that was done with a couple of HDs in RAID0. Today, that is the job of SSDs. In some rare cases, like Wally's, the amount required is still not practical for SSD, but that day will come; it's just a matter of time. However, Wally is anything but typical, and I believe there are very few workflows today that can't be productive on any given day with 1TB of SSD (or less).

Then, in addition to their immediate project needs, most people need lots of additional storage for backups, archives, and media collections. While faster is always better, this type of storage often doesn't benefit from SSD-like performance. This makes it perfectly suitable for JBOD disks or a NAS, where other benefits might actually trump performance: reliability, availability, lifespan, portability between systems, portability for offsite storage, cloud access, etc. Speed is often not the most important attribute for this storage, and using RAID0 or parity RAID (with larger arrays) to improve performance may actually run counter to some of these more important objectives.

So even if you and a handful of others must have high performance storage for backups, archives, or media libraries, and want the added risk that goes along with it of running large RAID arrays for this type of storage... many people do not. And they will find the new Mac Pro fits into their workflow without a lot of added expense or compromise.
 
Actually the channel does not top out at 850 MB/sec, since they already got 900 MB/sec using the ARECA TB1 box, and it may go even further depending on future TB1 controllers. And like I said, it's not capped at SATA III; it's capped around 7 Gb/sec with the ARECA box.

Again, you're beating around the bush. Regardless, a single TB channel can easily be maxed out by two (yes, TWO) SSDs in a RAID 0. If you have a 3-drive SSD RAID 0 on a single TB channel, your array is severely crippled.



I don't understand this.

There are wonderful cards out there that support arrays much faster than 3,000 MB/s. Solutions using these cards are much cheaper than Thunderbolt solutions.

Moreover, if you wanted to match the speed of your CalDigit enclosure with a 2-4 drive SATA III SSD RAID0, you could adapt an old Mac Pro to store those drives internally for around $75.

Ultimately my point is and always has been this:

1. For internal storage, the old Mac Pro offers excellent cooling, reliability, and quietness as good as or better than the best Thunderbolt arrays. It does so at no additional cost.

2. To match the internal storage reliability (referring to the PSU) and quietness of the Old Mac Pro in the New Mac Pro, you have to spend a boatload of money.

And I'll add a third point, since I brought it up:

3. Even for external solutions, for large arrays and high speed, nothing can beat the old Mac Pro. You get better solutions for much less than Thunderbolt, even when you factor in the cost of the PCIe card to drive the enclosure.

Edit: one final point that several others have pointed out: Even though the old Mac Pro wins hands down for storage capability internally and externally, it does so when it's not even a fair fight! The old Mac Pro is using 3-4 year old technology. Comparing the nMP to any other PC in its weight class makes the numbers even more skewed. This example I put together can run an 8 drive SATA III RAID with no additional parts. How much does an 8 drive Thunderbolt array cost? $1000?
 
Again, you're beating around the bush: Regardless, a single TB channel can be easily maxed out by two (yes TWO) SSDs in a RAID 0. If you have a 3 drive RAID on a single TB channel, your array is severely crippled.

Of course. I never said otherwise. You claimed the enclosure is capped due to SATA, which it is not.

There are wonderful cards out there that support much faster than 3,000MBps arrays. Solutions using these cards are much cheaper than thunderbolt solutions.
And there should be; I just haven't seen any demonstrated. The fastest drive bandwidth test I've ever seen on the old Mac Pro topped out at 3,000 MB/sec, which is extremely fast anyway. Even working on 12-bit 4:4:4 uncompressed non-interlaced 4K editing you don't need 3,000 MB/sec. I'm not trying to diss the card, but I haven't seen anything faster than that anywhere.

About price: to get to the bandwidth limit of the new Mac Pro, which, like I said, will be around 5,000 MB/sec if channel bonding works, you would need 3 TB2 enclosures with 4 SSDs installed in each. The price of the SSDs is irrelevant, since you need the same amount on the old Mac Pro anyway. Right now the cheapest 4-bay TB1 enclosures cost around $500. Assuming the trend continues with TB2, yes, you are looking at $1,500 at least for this kind of bandwidth on the new Mac Pro. Now, which cards are we talking about for the old one, and how much do they cost?

Moreover, if you wanted to match the speed of your CalDigit enclosure with a 2-4 drive SATA III SSD RAID0, you could adapt an old Mac Pro to store those drives internally for around $75.

To match 850 MB/sec on the old Mac Pro, you need 4 SSDs without paying anything. What's the $75 for? And I'm not getting the CalDigit for the bandwidth it offers; I'm getting it because it claims to be dead quiet, which the old Mac Pro isn't. So for me those are not comparable. If I wanted bandwidth, I'd be waiting for TB2 enclosures, not buying a TB1 one.


1. For internal storage, the old Mac Pro offers excellent cooling and quietness as good as or better than the best Thunderbolt arrays. It does so at no additional cost.

48 dB quiet? No. Cooling, indeed, is good. My CPUs and drives never overheated in my Mac Pro, except my VelociRaptors; I actually lost two of them inside this Mac Pro.

2. To match the internal storage reliability (referring to the PSU) and quietness of the Old Mac Pro in the New Mac Pro, you have to spend a boatload of money.

Well, to match the noise levels of the old Mac Pro, I can get any of the enclosures I want, since nothing will be louder than my Mac Pro. Again, my Mac Pro isn't quiet.


3. Even for External solutions, for large arrays and high speed, nothing can beat the old Mac Pro as it can use PCIe cards that are capable of more bandwidth and/or cost much less than thunderbolt solutions.

Yes, on paper. I'm still trying to figure out who needs more than 5 GB/sec right now. They edited The Hobbit on an AVID system with 450 MB/sec of bandwidth, and that's a 4K 3D picture. Surely, for a movie with a $250 million budget, they could have gotten any kind of RAID enclosures they wanted, and they chose one with 450 MB/sec of bandwidth. If 450 MB/sec is enough for editing The Hobbit, then the new Mac Pro will be enough for any 4K video editor.
 
Of course. I never said otherwise. You claimed the enclosure is capped due to SATA, which is not.

I claimed it was capped due to TB, which it is.

Now, which cards are we talking about for the old one and how much do they cost?

HighPoint RocketRAID 2744 - $450. This can do 16 SATAIII drives for up to 8GBps.

The HighPoint RocketRAID 2722 can do 8 SATA III drives up to 4GBps for $280

I can build a MiniSAS -> SATA III enclosure with excellent cooling and PSU reliability better than any TB enclosure for very cheap. I can do a 15-bay for $450, an 8-bay for maybe $300.

So, can you find an 8-bay TB enclosure that does 4 GB/s with top-of-the-line power supplies for $580?

Likewise, can you find anything close to a 15 bay enclosure that does 8GBps running over thunderbolt for ~$900? How much would something like that cost, if it were even possible?

What's the 75$ for?

$50 for a SATA III PCIe 2.0 4x controller, $25 for the mounting bracket - 2GBps of SATA III bandwidth for the old Mac Pro for $75.

The old Mac Pro's controller appears to be capped at 660MBps according to benchmarkers on this forum. For people needing more than that, it can be adapted cheaply and easily (much cheaper than adding a new enclosure).

it claims it's dead quiet, which the old Mac Pro isn't.

That solution uses a (presumably) passively-cooled power brick of dubious quality; it is no match for the PSU of the old Mac Pro. If you want to say this solution is good for you, I can't dispute that. If you want to say it's just as good, you're way off base--PSU reliability is not too much to ask, and it is very important to all users, not just to the pros. These cheapo storage solutions cut corners to save on costs. I've had more than my share of enclosure PSUs die on me.

Therefore, this array is not comparable. Also, I'm still waiting for verification of that 15dBA claim (not on their site that you mentioned).

Yes on paper. I'm still trying to figure out who needs more than 5GB/sec right now.

It's not just the speed, it's the expense and quality of the enclosures. PCIe opens up so many low cost, high-quality options compared to thunderbolt.

They edited Hobbit on an AVID system with 450MB/sec bandwidth. And that's a 4K 3D picture. Surely for a movie of 250 million budget they could have gotten any kind of RAID enclosures they wanted and they chose one with 450MB/sec bandwidth. If 450 MB/sec is enough for editing Hobbit, then the new Mac Pro will be enough for any 4K video editor.

Yep the nMP will do 450MBps just fine. Except you'll pay $500-1000 more to get it.
 
HighPoint RocketRAID 2744 - $450. This can do 16 SATAIII drives for up to 8GBps.

On paper, yes, but I haven't seen a single test showing those speeds. The fastest I've seen is 3 GB/sec with 8 SSDs, and I don't believe this can reach 8 GB/sec with 16 SSDs, since that wouldn't be possible even if the scaling were totally linear.
The HighPoint RocketRAID 2722 can do 8 SATA III drives up to 4GBps for $280

The fastest actual speed I've seen with that one is 2 GB/sec. You are looking at theoretical bandwidths, not real-world results.

I can build a MiniSAS -> SATA III enclosure with excellent cooling and PSU reliability better than any TB enclosure for very cheap. I can do a 15-bay for $450, an 8-bay for maybe $300.

So, can you find an 8 bay TB enclosure that does 4GBps with top-of-the-line Power supplies for $580?
TB cannot do 4 GB/sec; TB is rated around 1 GB/sec at most. Even TB2 alone cannot do that.

Likewise, can you find anything close to a 15 bay enclosure that does 8GBps running over thunderbolt for ~$900? How much would something like that cost, if it were even possible?

There's an 8-bay enclosure which costs $1,500. But of course it's limited to TB1. The same enclosure with a TB2 controller, if it cost the same, would be limited to 2 GB/sec. But that's basically the price.


$50 for a SATA III PCIe 2.0 4x controller, $25 for the mounting bracket - 2GBps of SATA III bandwidth for the old Mac Pro for $75.

The old Mac Pro's controller appears to be capped at 660MBps according to benchmarkers on this forum. For people needing more than that, it can be adapted cheaply and easily (much cheaper than adding a new enclosure).

Ah, OK, I didn't know SATA II was capped at that. I never tried 4 SSDs in my Mac Pro.

That solution uses a (presumably) passively-cooled powerbrick of dubious quality, it is no comparison to the PSU of the old Mac Pro. If you want to say this solution is good for you, I can't dispute that. If you want to say it's just as good, you're way off base--PSU reliability is not too much to ask and it is very important, to all users, not just to the Pros. These cheapo storage solutions cut corners to save on costs. I've had more than my share of enclosure PSUs die on me.

Sorry, but I am on my 3rd PSU with this Mac Pro, so I can't say Mac Pro PSUs are of good quality according to my experience. And no, that enclosure is actively cooled, with a quiet fan. And the CalDigit is not a cheap storage solution, really; it's one of the more expensive ones. And it's indeed interesting that you can call a TB enclosure "cheap" while claiming TB solutions are much more expensive.
Therefore, this array is not comparable. Also, I'm still waiting for verification of that 15dBA claim (not on their site that you mentioned).

It was probably in the product video. I've seen it somewhere on the website, or in the video on the site. And yes, the array is not comparable: it's much better than what I have, like I said.

It's not just the speed, it's the expense and quality of the enclosures. PCIe opens up so many low cost, high-quality options compared to thunderbolt.

I'm not convinced that all TB enclosures are of bad quality (which is only your claim) and that many of the non-TB enclosures are much better.

Yep the nMP will do 450MBps just fine. Except you'll pay $500-1000 more to get it.

Ah, no? To get 450 MB/s you only need USB 3.0. You can get a USB 3.0 enclosure for much cheaper than TB. It'll be cheaper than buying a SATA III card and an enclosure for your old Mac Pro. And even if you get it through TB, for 450 MB/sec with HDDs you need 3 bays. Those enclosures cost $250 or so.
 
On paper, yes, but I haven't seen a single test showing those speeds. The fastest I've seen is 3 GB/sec with 8 SSDs, and I don't believe this can reach 8 GB/sec with 16 SSDs, since that wouldn't be possible even if the scaling were totally linear.

Why not? The technology seems to scale just fine, with drives running upwards of 500 MB/s per drive. Here are 3 drives doing 1,500 MB/s. Here is another article strictly on scalability (note that SSDs scale nearly perfectly).

I'm skeptical about 8GBps too, but 6-7 is definitely within reach. Again though, it's more about principle: This is a cheaper solution that is higher quality and capable of more.
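A toy model of that scaling argument: the ~500 MB/s per-drive figure comes from the tests linked above, while the 85% usable-bus fraction is my own assumption for protocol overhead.

```python
# Toy RAID0 scaling model: throughput grows linearly with drive count
# until the controller/bus ceiling is reached. PCIe 2.0 x16 is 8000 MB/s
# raw; assume ~85% of that is usable after protocol overhead.
def raid0_throughput(n_drives, per_drive=500, bus_raw=8000, bus_usable=0.85):
    return min(n_drives * per_drive, bus_raw * bus_usable)

for n in (3, 8, 16):
    print(n, raid0_throughput(n))   # 3 -> 1500, 8 -> 4000, 16 -> 6800
```

With these assumptions, 16 SSDs land in the 6-7 GB/s range rather than the advertised 8 GB/s, which is consistent with the skepticism above.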


TB cannot do 4 GB/sec; TB is rated around 1 GB/sec at most. Even TB2 alone cannot do that.

I know :D

That's why PCIe is superior.


There's an 8-bay enclosure which costs $1,500. But of course it's limited to TB1. The same enclosure with a TB2 controller, if it cost the same, would be limited to 2 GB/sec. But that's basically the price.

That's what I've found as well. So let's just give it the benefit of the doubt and say TB2 will cost the same. That means you're paying three times as much for something half as fast as a MiniSAS solution over PCIe.

Again: Three times the money for something of inferior quality. Keep in mind the nMP isn't even out yet, and this technology at these prices could've been bought several months ago.


Ah ok, didn't know SATAII was capped at that. Never tried 4 SSD's in my Mac Pro.

It's not the SATA II, it's the controller's interface with the PCIe bus. I assume each SATA II port in the Mac Pro can do 3Gbps, but their total throughput is capped in a RAID situation.



Sorry but I am on my 3rd PSU with this Mac Pro, so I can't say Mac Pro PSU's are of good quality according to my experience.

We've discussed this: the 3,1 had defective PSUs, your experience is not typical. The typical Mac Pro user will enjoy a high quality PSU--much higher than that of most enclosures.

And no, that enclosure is actively cooled, with a quiet fan.

You're probably right. The PSU, however, is not. While I'm sure the setup is quiet, one of the reasons is because it uses an inferior power-brick instead of a decent PSU. Therefore the comparison to the drives in the old Mac Pro is not applicable.

And the CalDigit is not a cheap storage solution, really; it's one of the more expensive ones. And it's indeed interesting that you can call a TB enclosure "cheap" while claiming TB solutions are much more expensive.

Sorry, I'm confusing my definitions of cheap. TB solutions are expensive and/or low quality. This CalDigit product, I will grant you, is likely not going to be inexpensive. However, using powerbricks is definitely a sign of poor reliability.

I'm not convinced that all TB enclosures are of bad quality (which is only your claim) and many of the non-TB enclosures are much better.

If I said that, I didn't mean to. I just meant that to get something of comparable quality, you have to pay a lot more for it. There are plenty of low-quality USB/FW enclosures, probably Mini-SAS too.

Ah no? To get 450 MBps you only need USB 3.0. You can get a USB 3.0 enclosure for much cheaper than TB. It'll be cheaper than buying a SATAIII card and an enclosure for your old Mac Pro. And even if you get it through TB, for 450 MB/sec with HDD you need 3 bays. Those enclosures cost 250$ or so.

I'd like to see a decent quality 4 bay enclosure, even USB 3, for $250 (hint: What's the PSU in there?).

Also, you don't need an enclosure to run SATA III in the old Mac Pro; you just need a $50 PCIe card and a mounting bracket for your optical bay. What's this? The third time I've said it?

The comparison is $50 for old Mac Pro users to have SATAIII RAID with 4 drives and reliable power Vs $250 for an external enclosure for the same capability.

This is why not having internal expansion stinks: you have to pay for another box with another PSU, which adds costs. Though costs will go down as TB matures, they will never be as low, because it's inefficient and wasteful.
 
It's not about software vs. hardware. It's about getting SATA III into the old Mac Pro. You need a decent RAID card to get to 3,000 MB/sec with the old Mac Pro. Check the barefeats tests: they are using a $550 RAID card, not Apple's own card; that one sucks.

Even if we care about that, it's irrelevant to this discussion.
 
I know :D

That's why PCIe is superior.

Again, if you are testing theoretical limits, it is superior. Does anybody use that kind of bandwidth in real-world scenarios today? Probably not.

That's what I've found as well. So let's just give it the benefit of the doubt and say TB2 will cost the same. That means you're paying three times as much for something half as fast as a MiniSAS solution over PCIe.
1.5 times the price, you mean: $900 to $1,500.



We've discussed this: the 3,1 had defective PSUs, your experience is not typical. The typical Mac Pro user will enjoy a high quality PSU--much higher than that of most enclosures.

A Mac Pro 3,1 is still a Mac Pro. If it has a bad PSU, then a Mac Pro has a bad PSU. To get a good PSU I'd have to shell out $4,000 for another old Mac Pro, which is still a $4,000 investment.





The comparison is $50 for old Mac Pro users to have SATAIII RAID with 4 drives and reliable power Vs $250 for an external enclosure for the same capability.

If they are lucky enough to have a model with a decent PSU you mean. :)

This is why not having internal expansion stinks: you have to pay for another box with another PSU, which adds costs. Though costs will go down as TB matures, they will never be as low, because it's inefficient and wasteful.

And the costs of the bigger box don't factor in when you first buy the machine? In general, a bigger case is more expensive than a smaller one. And having another PSU for the expansion is a good idea, since at some point the PSU in your Mac Pro won't be enough: for example, when installing a GPU that demands more power than the Mac Pro can handle, or more drives than your Mac Pro can physically hold.

I think the smaller chassis is amazing; it's much better than the old design because it is small and portable. If I wanted to move to another city for 2 weeks, I could easily bring my Mac Pro with me instead of lugging the old beast in my suitcase, which might damage it (I did this many times).

----------

Even if we care about that, it's irrelevant to this discussion.

It's irrelevant to the discussion of how much drive bandwidth one can get in the old and new Mac Pro? I'm confused. :)
 
Well, I'm looking at AJA DataCalc and I'm seeing a possible need for 5 GB/second storage arrays... if you edit 4K full aperture (4096x3112p) at 24fps in 16-bit uncompressed with an alpha channel. Seriously though, 5 GB/s? I'm pretty sure you could get away with any level of project at under 600 MB/s, which is within USB 3.0's bandwidth, although Thunderbolt is definitely better for building cheap RAIDs (unless you have a ridiculous abundance of USB 3.0 ports to waste). I think video codecs are moving towards visually lossless solutions that are most definitely NOT 1:1 compression ratios, as that's pretty wasteful; I mean, even REDCODE RAW is very efficient. Maybe when 8K workflows start popping up I'll see the light...

Also, I see VirtualRain is still hell-bent on SSDs... maybe he should buy us all 1TB SSDs? :D Just drop me a PM with a Visa gift card so I can stuff a few Promise Pegasus R6's full of 1TB SSDs... :D
 
Also, I see VirtualRain is still hell-bent on SSDs... maybe he should buy us all 1TB SSDs? :D Just drop me a PM with a Visa gift card so I can stuff a few Promise Pegasus R6's full of 1TB SSDs... :D

Isn't everyone hell bent on SSDs?! :D Anyway, your Visa Gift card is in the mail! :p
 
Well, I'm looking at AJA DataCalc and I'm seeing a possible need for 5 GB/second storage arrays... if you edit 4K full aperture (4096x3112p) at 24fps in 16-bit uncompressed with an alpha channel. Seriously though, 5 GB/s? I'm pretty sure you could get away with any level of project at under 600 MB/s, which is within USB 3.0's bandwidth, although Thunderbolt is definitely better for building cheap RAIDs (unless you have a ridiculous abundance of USB 3.0 ports to waste). I think video codecs are moving towards visually lossless solutions that are most definitely NOT 1:1 compression ratios, as that's pretty wasteful; I mean, even REDCODE RAW is very efficient. Maybe when 8K workflows start popping up I'll see the light...

Also, I see VirtualRain is still hell-bent on SSDs... maybe he should buy us all 1TB SSDs? :D Just drop me a PM with a Visa gift card so I can stuff a few Promise Pegasus R6's full of 1TB SSDs... :D

Even at 16-bit with an alpha channel you need about 2.5 GB/sec, I think. But yeah, I'd love a Pegasus running 6 SSDs. The problem with SSD RAIDs is that you have to buy everything at once and spend a boatload of money. One can't build an SSD RAID over the years like we did with HDDs, since SSD technology is constantly changing. The models you buy this year won't be around next year, or won't be worth buying anymore, so you will have pairing issues.
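That ~2.5 GB/sec guess can be checked directly. The frame geometry and rate come from the post above; uncompressed RGBA at 16 bits per channel is my assumption about the pixel layout:

```python
# Uncompressed data rate for 4K full aperture (4096x3112) at 24 fps,
# 16 bits (2 bytes) per channel, RGB plus alpha = 4 channels.
width, height, fps = 4096, 3112, 24
bytes_per_pixel = 4 * 2                      # RGBA, 16-bit channels
bytes_per_second = width * height * bytes_per_pixel * fps
print(bytes_per_second / 1e9)                # ~2.45 GB/s
```

So roughly 2.45 GB/sec, comfortably under the ~5 GB/sec the new Mac Pro could deliver with channel bonding.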
 
...Again, everyone needs "some amount" of fast storage... Totally agree. A few years ago, that was done with a couple of HDs in RAID0. Today, that is the job of SSDs.

That's the technology transition path ... but there's still miles to travel: we aren't there yet.

In some rare cases, like Wally,...

Incorrect.

You simply have no objective data to claim that Wally or anyone else is sufficiently "rare" to infer that everyone else can now be running on SSDs.

About the best that you can do is to infer that anyone who's willing to pay more for performance is willing & able to drop the additional $500 or $5000 per node for the capacity which will satisfy their use case.


... the amount required is still not practical for SSD but that day will come it's just a matter of time.

Which will be just when? 2014?

True, SSDs' price:performance has been improving quite dramatically over the past two years, but we've not yet come all that close, let alone achieved an actual crossover:

In early 2011, SSDs were $2,000/TB, roughly 20x more costly than a conventional HDD. Nevertheless, some products like the MacBook Air paid the freight for SSD, in no small part due also to its power savings... but did so with a tiny 64GB or 128GB size.

In 2012, prices slipped to under $1500/TB, and the marketplace response was that more 'early mainstream' adopters started running their OS/Apps on an SSD because they saw the incremental system cost to do so as favorable vs other means of enhancing performance. For data, the SSD became the faster scratch disk.

Within this past year (early 2013), SSDs broke the $1,000/TB barrier... and today go for as little as $600/TB... but HDD prices have resumed their fall too, so we're still at a ~10x ratio. Adoption continued to broaden because the minimum buy-in of 256GB for the boot drive (OS/Apps) continues to fall (~$300) and be an affordable system performance boost, and fast data finally became practical for some use cases.

Yes, the gap is closing, but it is premature to claim that it is already closed. The use cases with the smallest storage capacity demands, where the "10x more" ratio doesn't add up to many absolute dollars, are the first to transition. Functionally, the gap is closing from the "bottom" up, not the "top" down, because big data still remains unaffordable.
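The ratios quoted above work out like so. The SSD $/TB figures are the ones in the posts; the HDD figures are back-calculated from the stated 20x and ~10x ratios, so treat them as illustrative:

```python
# SSD vs. HDD price-per-TB ratio over time, per the figures above.
# HDD prices are inferred from the stated ratios, not quoted directly.
prices = {           # year: (ssd_usd_per_tb, hdd_usd_per_tb)
    2011: (2000, 100),
    2013: (600, 60),
}
for year, (ssd, hdd) in sorted(prices.items()):
    print(year, f"{ssd / hdd:.0f}x")   # 2011 20x, 2013 10x
```

The SSD price fell ~3x in two years while the ratio only halved, which is the "closing from the bottom up" point: HDDs keep getting cheaper too.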


However, Wally is anything but typical, and I believe there are very few workflows today that can't be productive on any given day with 1TB of SSD (or less).

Mother called ... she wants her Apple Pie back.

Unfortunately, 1TB worth of SSD doesn't yet cost $120, which would make its adoption an easy no-brainer. When we look at the nMP, it comes standard with only 256GB... and that's despite it being 20% more expensive than the 5,1.

And discussions on Lou Borella's page have been analyzing that 256GB default on the nMP to determine if it is adequate for their use cases. Some have been finding that the SSD blade will be completely consumed by just their installation of OS X with their Pro tools... zero capacity left for working data.


Then, in addition to their immediate project needs, most people have a need for lots of additional storage for backups, archives, or media libraries...

Incorrect and disingenuous grouping. Again. The scope of my discussion has always been within the duty cycle of the immediate "Day To Day" use case.


So even if ...

Conclusions which are purposefully based on a false premise have zero credibility.


-hh
 
Again, if you are testing theoretical limits, it is superior. Does anybody use that kind of bandwidth in real world scenarios today? Probably no.


1.5 times the price, you mean: $900 to $1,500.

There's a $900 eight-bay Thunderbolt RAID option? News to me.

Mac Pro 3.1 is still a Mac Pro. If it has a bad PSU

If they are lucky enough to have a model with a decent PSU you mean. :)

Look, most Mac Pros offer good PSUs. I'm sorry yours didn't. Since you're a broken record on this, let me point out the obvious: if your main PSU is bad, it really is irrelevant how reliable your power bricks are. If you're satisfied using a poor-quality PSU to run your motherboard, then by all means continue to power your external devices with low-quality power bricks.

If you're saying Mac Pros have bad PSUs, you're basically telling everyone on this forum to buy a different brand of computer, as this is unacceptable.


And the costs of the bigger box do not factor in when you first buy the machine? In general a bigger case is more expensive than a smaller one.

Apparently not in this case! You're paying an incredible premium to buy this custom form-factor.

This case is $24 and has room for 4x 5.25" bays, 5x 3.5" bays, a full-sized motherboard and GPU, and a standard PSU.


And to have another PSU for the expansion is a good idea since at some point the PSU in your Mac Pro won't be enough, for example installing an even higher power demanding GPU than the Mac Pro can handle or installing more drives than your Mac Pro can physically hold.

The Mac Pro has a 1,000 Watt PSU. Do you have any idea how many hard drives that can power?!

As far as eventually needing more storage: when that day comes, as I pointed out previously the old Mac Pro offers less expensive, more reliable options.

If you're looking for inexpensive and small: Buy an eSATA card and enclosure.

If you're looking for a large array: Use miniSAS.

Both are superior to USB and thunderbolt in price and often performance.

I think the smaller chassis is amazing, it's much better than the old design because it is small and portable. If I wanted to move to another city for 2 weeks, I can easily bring my Mac Pro with me instead of lugging the old beast in my suitcase which might damage it (I did this many times).

What are you going to do once you get there? Browse the web?! All your data is on bulky external storage you left at home!

Also, where's your monitor?

If you're arguing for a laptop, that's a totally different computer than the nMP.
 
What are you going to do once you get there? Browse the web?! All your data is on bulky external storage you left at home!

Also, where's your monitor?

If you're arguing for a laptop, that's a totally different computer than the nMP.

You don't need to bring all your backup storage with you on a 2-week trip. As for the display, you can use your laptop's display if the OS supports it. I used to carry my big display with me as well, which is a pain in the ass, but the Mac Pro was an even bigger pain, since displays aren't that heavy. I have 100 TB at home and I don't need 99% of it at any given time, but I keep it in case I need it in the future.

----------

There's a $900 eight-bay Thunderbolt RAID option?

No. There's a $1,500 one. You said you can get an 8-bay SAS for $900, so the price is increasing from $900 to $1,500, which is not threefold like you claimed.


Look, most Mac Pros offer good PSUs. I'm sorry yours didn't. Since you're a broken record on this, let me point out the obvious:

It's not a case of one bad PSU, which can happen to anyone. I replaced 2 PSUs, so the PSUs were failing one by one. That means there's a design flaw in the PSU itself, unless I was really, really unlucky.

If you're saying Mac Pros have bad PSUs, you're basically telling everyone on this forum to buy a different brand of computer, as this is unacceptable.

Or I was pushing the Mac Pro further than Apple intended. I had high-power-draw GPUs inside, 3 VelociRaptors, and tons of USB peripherals; maybe all that was taxing the PSU more than most people's usage does.



Apparently not in this case! You're paying an incredible premium to buy this custom form-factor.

We don't know that. That's your guess. I'd say that if the new Mac Pro had a larger case and the same internals, it could have cost $200 more simply due to increased shipping and storage costs.




The Mac Pro has a 1,000 Watt PSU. Do you have any idea how many hard drives that can power?!

And you can't even put in one Nvidia Titan.


 
No. There's a $1500 one. You said you can get an 8-bay SAS enclosure for $900, so the price increases from $900 to $1500, which is not 3-fold like you claimed.

Actually, now that I look at prices, a case and PSU (80 PLUS PLATINUM) to hold 8 drives will be $170. Add a $280 MiniSAS controller and you're talking $450 for an 8-drive highly-reliable array capable of 4GBps.

On a New Mac Pro: You can pay $1500 for a 2 port thunderbolt2 solution (which doesn't even exist yet, by the way).

So yes, 3 times the price--and that's conservative.

Yes, I know you can run USB 3 on the nMP, but before you point that out, you should know it's only 5Gbit/s per port, and you have to pay for a bridgeboard. eSATA is cheaper and much faster.
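To put rough numbers on the "5Gbit/s per port" comparison, here's a back-of-envelope sketch. It assumes only that both USB 3.0 (5 Gb/s) and SATA III (6 Gb/s) use 8b/10b line encoding, so each payload byte costs 10 bits on the wire; real-world figures come in lower still once protocol overhead is counted.

```python
# Theoretical payload ceilings for the interfaces discussed above.
# Assumes 8b/10b encoding (10 line bits per data byte) for both links;
# protocol overhead is ignored, so these are upper bounds.

def payload_mb_s(line_rate_gbps, bits_per_byte_on_wire=10):
    """Max payload rate in MB/s for a given line rate in Gb/s."""
    return line_rate_gbps * 1e9 / bits_per_byte_on_wire / 1e6

usb3 = payload_mb_s(5)    # USB 3.0: 500.0 MB/s ceiling per port
sata3 = payload_mb_s(6)   # SATA III: 600.0 MB/s ceiling per link
print(usb3, sata3)
```

So even before overhead, a single eSATA link has about 20% more payload headroom than a USB 3.0 port.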



Or I was pushing the Mac Pro further than Apple intended. I had high-power-draw GPUs inside, 3 VelociRaptors, and tons of USB peripherals; maybe all that was taxing the PSU more than most people's usage does.

1000 watts is 1000 watts. The 3.1 had a bad line of PSUs prone to failure. It has nothing to do with what you had hooked up.

Computing equipment is not supposed to fail like that (unless it's disposable junk, which the MP is not supposed to be).


And you can't even put in one Nvidia Titan.

That has nothing to do with the PSU and everything to do with Apple's case design. The Mac Pro PSU should be more than enough to run a Titan and anything else you put in the case (with the possible exception of 2 more titans :) ).
 
You don't need to bring all your backup storage with you on a 2-week trip. As for the display, you can use your laptop's display if the OS supports it. I used to carry my big display with me as well, which is a pain in the ass, but the Mac Pro was an even bigger pain since displays aren't that heavy. I have 100 TB at home and I don't need 99% of it at any given time, but I keep it in case I need it in the future.


If you're bringing your laptop with you and are going to use its display anyway, why not just leave your big computer at home and connect to it remotely?
 
After reading though this thread, would it make the most sense to KEEP one's existing "Cheese Grater" MP tower to use as a combination file server/storage hub + node for distributive computing tasks, and use a nMP as the primary CPU?

This would only be necessary, one would suppose, if for mission-critical tasks the nMP proved to be a substantial speed upgrade over the previous gen MP tower. That would probably vary with the sort of work being done. Assuming for the type of production done an nMP IS clearly faster and therefore worth the upgrade, does using both a MP tower and nMP make the most sense?
 
If you're bringing your laptop with you and are going to use its display anyway, why not just leave your big computer at home and connect to it remotely?

On a 2-week trip maybe not, but I used to bring my Mac Pro on summer trips if I was changing cities; back then my laptop wasn't enough. These days I can probably make do with my rMBP for a lot longer than before, though.

----------

The thing about the price is that we are comparing offerings from different companies. When you compare the same company's Thunderbolt offerings against its eSATA, USB 3.0, or FireWire offerings, the prices aren't that different. The Areca 8-bay Thunderbolt RAID is $1500, but the Areca 8-bay USB 3.0/eSATA RAID is $1350 anyway. There's only a $150 difference for going to TB, which is perfectly acceptable. The problem isn't that TB is more expensive; the problem is that there are no cheap alternatives if you want TB, like there are with eSATA. But if your company was already buying Areca or Promise boxes, then switching to TB won't set you back much.
 
The thing about the price is that we are comparing offerings from different companies.

You're making up excuses to say it's the same price. If an inexpensive solution doesn't exist, then it doesn't exist, period. It doesn't matter that it's the result of lack of competition.

One of the reasons (other than lack of TB adoption) there aren't a lot of options for Thunderbolt is that Intel is controlling the market like a dictator controlling his subjects. If you can't buy an inexpensive Thunderbolt --> SAS/SATA controller (which is the real limitation), then that needs to factor in.

PCIe is many thousands of times more ubiquitous than thunderbolt. That's why PCIe controllers, even for the tiny but persistent Mac market, are so much cheaper.

TB will never be cheaper than PCIe, as it is basically a proprietary version of PCIe. The most they can hope for is some semblance of parity.

When you buy your ARECA TB array, you're buying a TB SATA controller, a PSU, and a box to put it all in. The problem with TB storage is that you're locked into these expensive packages. If you could purchase the TB controller separately, and it didn't cost an arm and a leg ($900 for ONE SAS port) or lack features (two eSATA II ports for $200), it would make things easier. Things will likely get better for the TB market, but don't forget that competition is strictly limited here, as Intel is suing the crap out of anyone who doesn't bow to them before releasing a product.
 
... a lot of options for thunderbolt is Intel is controlling the market like a dictator controlling his subjects.
.....
That's why PCIe controllers, even for the tiny but persistent Mac market, are so much cheaper.

Every PCIe controller in every Mac is also made by Intel, so Intel is just as big a dictator in that respect too. Intel "being the dictator" isn't the big difference here.

If you're talking about diversity of PCIe switch implementers, perhaps, but not controllers.

TB will never be cheaper than PCIe as it basically is a proprietary version of PCIe.

It is not a proprietary version of PCIe at all. PCIe is pragmatically required on both sides of a Thunderbolt implementation's network. It is neither replacing nor even remotely trying to achieve parity with PCIe.

PCIe has huge volume in part because it is NOT limited to just PCs (I haven't checked lately, but it's probably getting close to the point where more non-PC devices sold per year have PCIe than PCs do), and it is absolutely NOT limited to PCIe cards.

Thunderbolt is going to be pragmatically limited to just PCs (and their peripherals). That is just fine, because they are two different things in two different markets.
 
Every PCIe controller in every Mac is also made by Intel, so Intel is just as big a dictator in that respect too. Intel "being the dictator" isn't the big difference here.

TB Peripherals and cables have to be approved by Intel. This is not the case with PCIe.

It is not a proprietary version of PCIe at all. PCIe is pragmatically required on both sides of a Thunderbolt implementation's network. It is neither replacing or even remotely trying to achive parity with PCIe at all.

Semantics. TB is proprietary, that's the point. Proprietary = higher price and less competition--that's another point. I invite you to agree with me and stop arguing just to argue.

PCIe has huge volume in part because it is NOT limited to just PCs

I'm not really sure why you pointed this out except to make yourself look smart. It doesn't conflict with my point: PCIe has no dictator sitting over it limiting the influx of new accessories and devices. This is a fundamental problem with TB.
 
TB Peripherals and cables have to be approved by Intel. This is not the case with PCIe.

There are NO 3rd-party Intel CPU microarchitectures or chipsets at all (this is where the PCIe controllers reside). They are all approved by Intel because only Intel makes them. Intel is an even bigger dictator in this area.

You can scamper off into a different sub-area, but dictatorship isn't really the primary issue.

TB parts must pass some standards. That is actually a good thing for consumers, not a bad thing. A large aspect of the gatekeeping here is keeping folks from releasing marginal devices.





Semantics. TB is proprietary, that's the point.

It may be the point you'd like to make, but it is not what you said.


Proprietary = higher price and less competition--that's another point.

No. Proprietary has to do with ownership. Not necessarily price or competition. It is not equated at all. Correlated in many contexts? Sure.
What is proprietary matters far more to whether there is limited competition or higher prices.

OPEC has driven higher prices and there is nothing proprietary about oil at all.



I invite you to agree with me and stop arguing just to argue.

Keep saying the sky is honeydew green and you can invite all day long. You'll still be incorrect, and I'm not going there.

I'm not arguing to argue. You are spewing gibberish. The "arguing to argue" bit is just lame misdirection from your arm flapping.


I'm not really sure why you pointed this out except to make yourself look smart.

Not really. It is there for two reasons. First, because you keep flapping on about "being cheaper". Volume drives lower prices. Even "proprietary" stuff is generally cheaper at sustained higher volumes.

Second, because PCIe is being used in a lot more places than you make it out to be. Your arm flapping is really about PCIe cards that fit in slots. That is a relatively small and very quickly shrinking segment of the PCIe market. In that larger and growing subset, Thunderbolt is far more an enabler than a competitor. The point being, you don't even have the competitors mapped out correctly, let alone a grip on the competition.


It doesn't conflict with my point: PCIe has no dictator sitting over it limiting the influx of new accessories and devices. This is a fundamental problem with TB.

Short term, it is a matter of getting the mix correct. Intel has shown absolutely zero indications of wanting to severely restrict this over the long term. The article you quoted from Appleinsider had this:

" ... A four-channel Thunderbolt chip component has a wholesale price of $35, while a two-meter Thunderbolt cable has a recommended retail price of $39. These price points are keeping some smaller manufacturers from entering the Thunderbolt device segment, leaving it largely to established players ... "
http://appleinsider.com/articles/13...-thunderbolt-keeps-accessories-off-the-market

The current reality? Try half to 1/4 that amount! ( $9-13)

http://ark.intel.com/products/series/67021

So the whole "TB controllers cost $9 ... the sky is going to fall because that is sooooo expensive" is a crock of manure. On $1,000+ host systems, an even bigger crock of manure. An impediment to $20 race-to-the-bottom drive enclosures? Sure. The TB market really doesn't need that.

Even more so when put into perspective of 10GbE controllers that cost about 10x as much

http://ark.intel.com/products/family/24586/Intel-10-Gigabit-Ethernet-Network-Connection

Proprietary has a lot less to do with cost than just what is being implemented. 10GbE is open and it isn't cheaper than Thunderbolt. TBv1 is 10Gb/s.

Thunderbolt is not PCIe. So what is being implemented is different. Just flapping your arms about the different costs isn't really all that informative.

Intel did stop an immediate "race to the bottom" on TB devices. Given all the stumbling blocks folks ran into along the way with coordinating on drivers and getting a grip on unfamiliar technology, on the whole that was probably far more "benevolent dictator, like Linus in Linux" than "evil empire". The expensiveness is far more a factor of the technology being new than of unbridled greed.
 
Isn't everyone hell bent on SSDs?! :D Anyway, your Visa Gift card is in the mail! :p
Great! Now I can enjoy my Samsung 840 Evo's :D B&H sent me their catalog and they highlighted the top of the line Samsung SSD, they've been keeping tabs on my history on their website. You don't happen to work at B&H do you? :D :D

But on a serious note, how long do SSDs hold up in terms of read and write speeds compared to HDDs? My rMBP's (I believe they are high-end Samsungs) came out of the box at 550MB/s read and 420MB/s write, and now both those read and write speeds like to hang in the mid-to-upper 400MB/s range... not a major issue, as it is my boot drive and when I work with ultra HD or whatever it's still within the limits... BUT I would be concerned if I had SSDs in a RAID and they started to lose their speed... regardless, I guess it would be easy to say buy the fastest SSDs possible from the start for the RAID and deal with it.

Even at 16-bit with an alpha channel you need 2.5 GB/sec, I think. But yeah, I'd love a Pegasus running 6 SSDs. The problem with SSD RAIDs is that you have to buy everything at once and spend a boatload of money. One can't build an SSD RAID over years like we did with HDDs, since SSD technology is constantly changing. The models you buy this year won't be around next year, or won't be worth buying anymore, so you will have pairing issues.
4096x3112p23.98 with 16-bit RGBA is 2.4GB/s on the nose... but like I said, compression tech is moving on and I don't think anyone is going to edit online with some insane 2.4GB/s uncompressed format... I mean, unless you're trying to get a bigger budget for storage in front of your boss, it doesn't make sense. The SSDs constantly changing is an issue though; I have three HDDs that were built 3 years apart (I call them the twin drives) and I often use them for small RAID 0 setups. I tried pairing two SSDs for a RAID recently and it was near impossible, and RAID really needs the same drives or it's just not worth it.
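The 2.4GB/s figure above checks out with simple arithmetic: 16-bit RGBA is 4 channels x 2 bytes = 8 bytes per pixel, and the rest is just multiplication. A quick sketch:

```python
# Data rate of the uncompressed format discussed above:
# 4096x3112 at 23.98 fps, 16-bit RGBA (4 channels x 2 bytes = 8 bytes/pixel).

def video_rate_gb_s(width, height, fps, bytes_per_pixel):
    """Uncompressed video data rate in GB/s (decimal gigabytes)."""
    return width * height * bytes_per_pixel * fps / 1e9

rate = video_rate_gb_s(4096, 3112, 23.98, 8)
print(round(rate, 1))  # prints 2.4, matching the figure in the post
```

The same function also confirms the ~250MB/s ballpark for more modest uncompressed HD formats if you plug in 1920x1080 dimensions and fewer bytes per pixel.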

[Originally Posted by slughead View Post]
TB Peripherals and cables have to be approved by Intel. This is not the case with PCIe.
...And this is the not-so-mysterious slow pace of TB everyone complains about. I think controllers being tightly approved and only made by Intel is OK, but every other peripheral having to go through approval is insane. I believe a TB device of mine was delayed 6 months because it was ready to go and fully working, but Intel had to give its stamp of faith first... which apparently takes 6 months.... :mad:
 
Great! Now I can enjoy my Samsung 840 Evo's :D B&H sent me their catalog and they highlighted the top of the line Samsung SSD, they've been keeping tabs on my history on their website. You don't happen to work at B&H do you? :D :D

No, but if I did, it wouldn't be good, I'd spend all my earnings on employee purchases! :D

...And this is the not-so-mysterious slow pace of TB everyone complains about. I think controllers being tightly approved and only made by Intel is OK, but every other peripheral having to go through approval is insane. I believe a TB device of mine was delayed 6 months because it was ready to go and fully working, but Intel had to give its stamp of faith first... which apparently takes 6 months.... :mad:

One thing I wonder… Perhaps the spartan selection and high prices for TB enclosures are more a symptom of lack of demand than anything else? Consider that, long before TB came along, people with MacBooks and iMacs had already accepted that internal SSDs were king for project work and USB or NAS storage was the way to go for their high-volume needs, and Mac Pro users never needed much in the way of external storage (and would use eSATA if they did). So perhaps the whole TB drive enclosure issue is a bit of a vicious circle… cheap enclosures won't materialize until demand improves, and demand won't improve until cheap enclosures are available.

Of course, those who've been following my dissertation on storage in this thread know fully that I don't think TB is really necessary for that. ;)
 