The thought of having to fork out £300+ to get just a box, and then maybe another £300+ to essentially get your data back if it dies out of warranty just seems utterly, utterly stupid to me.
This is why you don't rely on RAID as your sole source of protection; after all, it does nothing for file-system corruption, accidental deletion and so on. RAID is great for giving you high capacity beyond what you can manage (or afford) with a single drive, and for keeping your volume running in the event of a drive failure, but it offers no protection at all against failure of the RAID controller or its power supply.

So yeah, I'm moving to RAID-5 for my main storage, because it gives a great mix of speed and redundancy, but I'll still have a local Time Machine backup and a copy of the latest backup on my NAS, just in case ;)

Also, other than software RAID I'm not aware of any truly interchangeable RAID systems; controllers from the same vendor should recognise existing RAID data, but there's no guarantee of that at all, as a card vendor may not use the same controller chip between revisions, for example. You could of course check which chip is used and whether it maintains compatibility, but you really should just have a proper backup. Not least because a backup also gives you some option for file recovery if you accidentally delete something, saves you from possible file-system issues and so on.
 
Also, other than software RAID I'm not aware of any truly interchangeable RAID systems; controllers from the same vendor should recognise existing RAID data, but there's no guarantee of that at all, as a card vendor may not use the same controller chip between revisions, for example.

HP does guarantee portability between the same controllers on ProLiant servers. In fact, they even allow you to move the disks and the GBs of writeback cache to a different system and restart.

Their small SAN arrays (e.g. MSA2040) save the dirty cache data to CompactFlash cards - move the disks and the CF cards to a new array. (They also have dual controllers and dual-ported disks, so the failure of one controller doesn't affect ongoing access to the data.)

In general you are right, but if you really need that capability there are some solutions.
 
Kind of a pointless exercise then really isn't it? ;)

No, not at all. If you're ever in a position where you need to get your data "back out of" a failed DAS or NAS box, you've done something dreadfully wrong. The proprietary RAID format used by Drobo (or other similar products) is a non-issue. In that event you prepare a new storage device and you restore from backups.

Write it 100 times on the chalkboard until it sinks in...

RAID is not a backup.

RAID is not a backup.

RAID is not a backup.
 
HP does guarantee portability between the same controllers on ProLiant servers. In fact, they even allow you to move the disks and the GBs of writeback cache to a different system and restart.

ZFS

And one of the advantages of using the ZFS file system to store your data is that the disks can be read by any ZFS-capable system; your data can be moved from Linux, to Solaris, to FreeBSD, to whatever. In other words, you can rebuild your storage system simply by putting your disks into the new machine. You are not bound to any vendor's hardware.

With most systems there is, at best, proprietary lock-in; if the hardware fails, the data on the disks may be unusable.
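
To make that concrete, here is a minimal sketch of what moving a pool between machines looks like, wrapped in Python purely for illustration; the pool name "tank" is hypothetical, and it assumes both the old and new systems have ZFS installed:

import subprocess

def export_pool(pool: str) -> None:
    # Cleanly detach the pool on the old system before pulling the disks.
    subprocess.run(["zpool", "export", pool], check=True)

def import_pool(pool: str) -> None:
    # Scan the attached disks on the new system and bring the pool back online.
    subprocess.run(["zpool", "import", pool], check=True)

# On the old machine:                                  export_pool("tank")
# On the new machine (Linux, FreeBSD, Solaris, ...):   import_pool("tank")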
 
No, not at all. If you're ever in a position where you need to get your data "back out of" a failed DAS or NAS box, you've done something dreadfully wrong. The proprietary RAID format used by Drobo (or other similar products) is a non-issue. In that event you prepare a new storage device and you restore from backups.

Write it 100 times on the chalkboard until it sinks in...

RAID is not a backup.

RAID is not a backup.

RAID is not a backup.

Where did I suggest it was?

My point is that if you have to rebuild, potentially, 20TB of data from backups just because the hardware, not the disks, has failed, then the system is a poor one.

Just because you have copies of the data is no excuse to shrug and say 'it'll be fine', because Sod's law says something else will go wrong.
 
Here's an update on my Drobo. I've had it now for about 3 weeks. It has four WD 4TB RE drives and one WD 2TB Black drive. It's formatted for dual-disk redundancy, which gives me 9.03 TB of usable storage. I've finished moving all my data from the internal drives on my old Mac Pro; that took almost 2 weeks. Today it started displaying a caution indicator that it is getting full. At present it is 85% full and has 1.33 TB of free space. From earlier experience, I expect performance to suffer somewhere between 90% and 95% full. When it gets down to about 1TB free, I'll probably buy another 4TB RE drive to replace the 2TB.
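
For anyone wondering where a figure like 9.03 TB comes from, here's a rough back-of-the-envelope estimate; it assumes dual-disk redundancy reserves roughly the capacity of the two largest drives, which is only an approximation of Drobo's BeyondRAID accounting:

# Rough usable-capacity estimate for dual-disk redundancy; an approximation,
# not Drobo's exact BeyondRAID accounting.
drives_tb = [4, 4, 4, 4, 2]                   # four 4TB RE drives plus one 2TB Black

raw_tb = sum(drives_tb)                       # 18 TB raw
reserved_tb = sum(sorted(drives_tb)[-2:])     # roughly the two largest drives held for protection = 8 TB
usable_tb = raw_tb - reserved_tb              # ~10 TB (decimal)

usable_tib = usable_tb * 1e12 / 2**40         # ~9.09 in binary terabytes, close to the 9.03 reported
print(round(usable_tib, 2))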
 
When it gets down to about 1TB free, I'll probably buy another 4TB RE drive to replace the 2TB.

What's your guess as to how many days it will take to rebalance after the switch to the larger drive? Will it even work if your free space is less than the size of the drive that you're pulling?

(This isn't a knock against Drobo - moving a handful of TBs around is a slow process regardless of the technology.)
 
It amazes me, with all of the attention to maximizing space on the nMP, that Apple went with 1Gb ethernet. Why not 10Gb? So many leaps in that arena as far as switches and other hardware. The speeds I get on my QNAP are just amazing but then of course, I need a PCIe card to get them. That's a head-scratcher to me.
 
It amazes me, with all of the attention to maximizing space on the nMP, that Apple went with 1Gb ethernet. Why not 10Gb? So many leaps in that arena as far as switches and other hardware. The speeds I get on my QNAP are just amazing but then of course, I need a PCIe card to get them. That's a head-scratcher to me.

Maybe because 10GBE NICs are around $500-$1000 and not everyone can use one?
 
What's your guess as to how many days it will take to rebalance after the switch to the larger drive? Will it even work if your free space is less than the size of the drive that you're pulling?

(This isn't a knock against Drobo - moving a handful of TBs around is a slow process regardless of the technology.)

I don't know. It took about 4 days to rebuild when I added the fourth 4TB and switched to dual redundancy. I suspect it will take about half as much time when I add the new drive since it will only have to generate 2TB of parity data rather than 4TB. I'll post something when that happens, but it may not be for a few months.
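
A crude way to frame that guess, assuming rebuild time scales roughly linearly with the amount of data the unit has to regenerate (real rebuild times also depend on array load and drive speed):

# Crude linear estimate of rebuild time; real times vary with load and drives.
previous_days = 4.0       # observed: ~4 days when 4TB of parity data was generated
previous_tb = 4.0
replacement_tb = 2.0      # the 2TB drive's worth of data to redistribute

estimated_days = previous_days * (replacement_tb / previous_tb)
print(estimated_days)     # ~2 days, i.e. "about half as much time"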
 
Maybe because 10GBE NICs are around $500-$1000 and not everyone can use one?

Well, well, that was snarky. Little condescending there?

So you're saying price is the reason they're keeping a legacy connection? How is that in line with any other aspect of this machine? I was simply saying a 10GbE connector would be more in-line with TB and all of the forward thinking on the nMP, and would play nicely with legacy networks as well as simplify things for some of us.

But you're right, most nMP buyers are trying to save a buck. ;)
 
Well, well, that was snarky. Little condescending there?

So you're saying price is the reason they're keeping a legacy connection? How is that in line with any other aspect of this machine? I was simply saying a 10GbE connector would be more in-line with TB and all of the forward thinking on the nMP, and would play nicely with legacy networks as well as simplify things for some of us.

But you're right, most nMP buyers are trying to save a buck. ;)

Sorry. There's just so much complaining going on and lots of people who think their personal needs apply to the majority. In all seriousness, isn't 10GBE in very limited use? Doesn't it make more sense to offer it as a TB adapter than build it in?
 
Sorry. There's just so much complaining going on and lots of people who think their personal needs apply to the majority. In all seriousness, isn't 10GBE in very limited use? Doesn't it make more sense to offer it as a TB adapter than build it in?

It seems to have taken off like wildfire in the world of editing and post production over the last couple years. My Netgear switch was about $900 but allows me to edit directly off the NAS (I know, I know) at absolutely ridiculous speeds and is just plug and play. I'm also able to move terabytes of data between my MBP, MP and z820 like it's nothing.

If they have a TB adapter, then that's something.
 
It seems to have taken off like wildfire in the world of editing and post production over the last couple years. My Netgear switch was about $900 but allows me to edit directly off the NAS (I know, I know) at absolutely ridiculous speeds and is just plug and play. I'm also able to move terabytes of data between my MBP, MP and z820 like it's nothing.

If they have a TB adapter, then that's something.

Atto sells a TB -> 10 GbE adaptor.
 
price check on aisle 13

Maybe because 10GBE NICs are around $500-$1000 and not everyone can use one?

HP charges $360 list price to upgrade a ProLiant to dual 10 GbE - I believe that works out to $180 per port ....

I think that you are way over-estimating what the cost would be to Apple....
 
Atto sells a TB -> 10 GbE adaptor.

I could put my new bag of nMP adaptors with my old bag of MBP adaptors. That seems pretty pro! :D Better than nothing I suppose. I was actually considering that adaptor last year, hope the prices come down. Holy c#$p!
 
HP charges $360 list price to upgrade a ProLiant to dual 10 GbE - I believe that works out to $180 per port ....

Base-T sockets? Or empty SFP+ sockets that require yet another transceiver?
There are balloon-squeeze solutions in the 10GbE space where the costs are moved to the cables and/or connectors and out of the base infrastructure card. That doesn't necessarily mean the per-port costs are lower.
 
Oh, like T-Bolt... ;)

Yeah... TB controllers+sockets cost $20-40 versus the $300 you are talking about here (and the $100-150 Intel and others are charging for the 10GbE controllers). The cables cost more than basic copper Base-T ones, but the base infrastructure costs a lot less.


Good point. The $360 option is for SFP+. The dual port Base-T Cu option is only $330 list price.

Where? Comes up blank here for something like that?

http://www8.hp.com/us/en/products/iss-adapters/#!view=grid&page=1&facet=|

There are some single-port X540s floating around roughly in that price range, but most of the cards with drivers are priced higher.
 
Yeah... TB controllers+sockets cost $20-40 versus the $300 you are talking about here (and the $100-150 Intel and others are charging for the 10GbE controllers). The cables cost more than basic copper Base-T ones, but the base infrastructure costs a lot less.

A 1 metre 10GBASE-CU SFP+ Cable costs $79 - including the two SFP+ connections.

Where? Comes up blank here for something like that?

It's the 533FLR-T mezzanine card for G8-series servers.

[Image: c03926365.png - the HP 533FLR-T mezzanine card]


(To get an idea of scale, the two ports on the left are standard RJ-45 connectors for copper network cables. It's not very big.)
_________

My point is that while Apple claims "no compromises" in the Mac Mini Pro design -- Apple certainly compromised by putting last generation Ethernet in the can.

Maybe I'm warped, since I just got a new 48 port 10 GbE SFP+ network switch to play with. It's my first switch with over 1 Tbps aggregate capacity....

Gigabit Ethernet is legacy technology today, yet that's what the Mac Mini Pro uses. I can get HP servers with 10 GbE for about $100/port additional. And I do.

To paraphrase Phil - "No compromises, my ample ass".

The main reason that the Mac Mini Pro has legacy Ethernet is that Apple wants you to pay much more for T-Bolt 10GbE dongles. Pure manipulation - make all the tasty bits T-Bolt only....
 
I just bought one of these VelociRaptor Duos off Amazon Warehouse Deals for $400. Maybe I'm crazy, but it seems like a good buy for 2TB of SSD-like performance. While these were ridiculously overpriced at launch, they seem much more reasonably priced now.

My plan is to use the 1TB internal SSD on the nMP for OS/Apps and my current Aperture Library and then archive my other (recent) Aperture Libraries to this TB Duo so that when I occasionally need to access them, it's not too painful.

Thoughts?
 

[Attachment: Screen Shot 2013-12-05 at 11.00.49 AM.png (78 KB)]
That seems overkill for accessing data that you only need occasionally. I like to spend money on cool, shiny computer stuff, but I would have a hard time justifying that purchase to myself.
 
Did you spend $400 or $499? The pic shows $499. That's a great deal at $400.
 
Did you spend $400 or $499? The pic shows $499. That's a great deal at $400.

I got one for $400.

That seems overkill for accessing data that you only need occasionally. I like to spend money on cool, shiny computer stuff, but I would have a hard time justifying that purchase to myself.

Yeah, it's certainly not a no-brainer. I could have gone slower and USB3 for a lot less or SSD and faster for a lot more. This sits somewhere in the middle.
 
I bought 2x of the Seagate Backup Plus 3TB USB3 drives at Sam's Club for $98/each. Then I bought 2x of the Seagate Thunderbolt Desktop adapters for $150/each from Amazon. Plus a couple of Apple Thunderbolt cables.

I set it up as a 3TB (software) mirror in Mavericks. It's not the most cost-effective solution, and it's not any faster than USB3, but it also allows me to drive both an Apple LED Cinema Display and an Apple Thunderbolt Display off my Mini. It also frees up a couple of USB3 ports.
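
In case it's useful to anyone setting up something similar, here's a minimal sketch of creating that kind of software mirror from the command line, wrapped in Python purely for illustration; the disk identifiers (disk2/disk3) and the set name are hypothetical, so check yours with diskutil list first, and note that this erases both drives:

import subprocess

def create_mirror(set_name: str, members: list[str]) -> None:
    # diskutil appleRAID create mirror <setName> <format> <disks...>
    # WARNING: this erases every member disk.
    subprocess.run(
        ["diskutil", "appleRAID", "create", "mirror", set_name, "JHFS+", *members],
        check=True,
    )

# Example (hypothetical identifiers): create_mirror("Mirror3TB", ["disk2", "disk3"])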
 