
Simebaby

macrumors newbie
Original poster
Hi All

About to push the button on the NMP. Coming from a MP 1,1, I have 4 (at minimum) drives that I need to use with the NMP. All I want is a box that I can stick these drives in and access them as discrete drives - no RAID required.

Interface ideally would be Thunderbolt, but I'm thinking such a thing doesn't exist - and to be fair, speed isn't really an issue; these are storage disks first and foremost.

Can anyone recommend a solution? Anyone with an NMP in the same boat?

Edit: forgot to add, I'm in the UK.

Cheers

Si
 
Here are my suggestions... the Oyen Digital Mobius with USB 3.0 if speed is not important (my review), the Thunderbay IV with TB1 if you can afford a bit more and want better performance, and the Thunderbay 4 with TB2 if you want future-proofing for use with SSDs.
 
The OWC Thunderbay IV (empty) sounds like the perfect solution for you. It's marketed as a "Mac Pro migration enclosure," so it's built with that use in mind. Not sure about UK pricing, but the empty enclosure is $449 in the US. I have two of them and they've been excellent so far!
 
Grab a Cat6 cable and plug the nMP into the old MP. Works great; it's what I do. It doesn't need to be on all the time either - I just fire it up when the current job is finished and move it over for archive.
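
If you'd rather script the connection than click through the Finder every time, a rough sketch along these lines should work (Python calling macOS's built-in mount_smbfs; the user, hostname, share name, and mount point below are just placeholders for whatever you set up under File Sharing on the old MP):

import os
import subprocess

# Placeholders only - substitute your own account, hostname, and share name.
SHARE = "//si@oldmacpro.local/Archive"
MOUNT_POINT = os.path.expanduser("~/oldmacpro")

# Create a local mount point, then mount the old Mac Pro's share over SMB.
# mount_smbfs prompts for the password unless it is already in the keychain.
os.makedirs(MOUNT_POINT, exist_ok=True)
subprocess.run(["mount_smbfs", SHARE, MOUNT_POINT], check=True)
print("Old MP drives now reachable under", MOUNT_POINT)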
 
Yes, there is - a 5-bay tray-less SATA 6Gb/s JBOD with Thunderbolt and an eSATA option, the sBOX-TJ.

You can hot-swap drives all day long.
 
I set up many of these for my clients, though I do not have one on hand.

In fact, a lot of my customers upgrade their sBOX-eS with the Thunderbolt module, going from eSATA to Thunderbolt; others simply use the U3eSATA with their existing eSATA port multiplier boxes to see 4/5 drives over USB 3.0 on their new Mac.

It saves a ton of money if you just want a JBOD.
 
I just use a USB 3.0 drive dock for my drives. Works great.


I've already migrated most of my data to SSDs in my Promise Pegasus enclosures.
 
Why do you need four drives? If they are no larger than 1-2TB you could copy all the contents to one USB 3.0 drive. Are these just archive files or scratch disk for video editing or what? We need to know before we can advise what solution is best.
 
Good suggestion - an 8TB USB 3.0 drive is only $300 (http://www.neweggbusiness.com/product/product.aspx?item=9b-22-178-682), cheaper than most empty T-Bolt housings.

It would be great for archiving, but could really suck as a scratch drive.
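
To make the consolidation idea concrete, here's a quick back-of-the-envelope check - a sketch only, with invented volume names (substitute whatever actually appears under /Volumes) - that adds up the space used on the old drives and compares it with the free space on the new one before you start copying:

import shutil

# Invented names - replace with the volumes you actually want to consolidate.
OLD_VOLUMES = ["/Volumes/Media", "/Volumes/Projects",
               "/Volumes/Scratch", "/Volumes/Archive"]
NEW_DRIVE = "/Volumes/Big8TB"

used = sum(shutil.disk_usage(v).used for v in OLD_VOLUMES)
free = shutil.disk_usage(NEW_DRIVE).free

print("Used across old drives: {:.2f} TB".format(used / 1e12))
print("Free on the new drive:  {:.2f} TB".format(free / 1e12))
print("Fits" if used <= free else "Won't fit - keep the separate drives")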
 
Why do you need four drives? If they are no larger than 1-2TB you could copy all the contents to one USB 3.0 drive. Are these just archive files or scratch disk for video editing or what? We need to know before we can advise what solution is best.

Some of us have large data sets that won't actually fit on 1 drive. Especially when one could get 2 & 3 TB drives for under $100 USD for years.

My iTunes library is at 4TB and it is heavily constrained (I had to move my TV shows off, as well as some of my movies). My Data volume is also at 4TB and it is also nearly full. My Backup system is at 8 TB and it is time to expand it also.

I am bumping up from 12 2TB drives to 8 4TB drives (along with another 2 4TBs for spares).

Moving to an nMP, you have to consider the Total Cost of Ownership. When I made the decision to retire my 1,1, the most important reason for moving to a 4,1 instead of a 6,1 was the cost of replacing everything that Sir Idiot Boy (SIB) considered irrelevant - like storage space. In my case, I'd have had to spend an additional $1,500 to replace everything that SIB took out: 2 external enclosures to hold the 6 internal drives, another external enclosure to hold the backup system (there were no eSATA-to-TB adapters at the time), and at least 1 external dock for all of my other peripherals.
 
It's a Seagate though. Probably better spending a little more per GB in either case
"Seagate" is an asset. "WD" is a liability.

YMMV. Both companies make some great drives, and both companies have had lemons. If you look at reliability reports, pay close attention to the model numbers. If a report isn't specific about model numbers, ignore it. (One cloud storage site that is often quoted as saying "Seagate 3TB drives suck" doesn't call out the models, and the model that actually sucked went out of production a couple of years ago.)

None of my enterprise storage systems use WD drives - only Seagate and HGST. (My sample size is small though, only about three quarters of a petabyte. My most common purchase is 72TB drive shelves - they give you a nice 48TB of usable space.)
 
Some of us have large data sets that won't actually fit on 1 drive. Especially when one could get 2 & 3 TB drives for under $100 USD for years.

My iTunes library is at 4TB and it is heavily constrained (I had to move my TV shows off, as well as some of my movies). My Data volume is also at 4TB and it is also nearly full. My Backup system is at 8 TB and it is time to expand it also.
That's why many of us use RAID controllers or volume managers to create volumes larger than any single spinner, so that storage is virtualized and at the user level we simply don't have to worry that our dataset won't fit on a single physical drive.

Not only can a volume (a "volume" is visible to the user and OS essentially as a physical "disk") span multiple physical disks, but most volume managers can dynamically expand volumes while the system is running. (Windows and VMware can do this without rebooting, some Linux systems require a reboot or dismounting/remounting the volume. Most enterprise hardware RAID controllers can reconfigure the volume on the fly, but whether the OS handles a "physical disk" changing size while running might be an issue - not an issue on Windows, it immediately sees the new size and you have the option to add the new unallocated storage to whatever volume you wish.)

My vSphere cluster has been running on the edge of disk space - the 96TB was 90% used, and 140% used if you looked at overcommitted thin-provisioned disks. This week I got a new 72TB (12 * 6TB) 2U storage shelf for my vSphere (VMware) cluster. Connected it to the SAN RAID controller, configured it as a 48TB volume (RAID-60 with two parity groups). When the volume was ready, went into vSphere and saw the new 48TB disk drive and added it to the VMware volume.

(Note that storage virtualization is often multi-layered. The SAN RAID controller sees a bunch of physical disks, and makes a volume from them. (Even spookier, since we're using RAID-60, the controller creates two RAID-6 volumes, then stripes them into a single RAID-60 volume.) The vSphere IO system sees the combined RAID-60 volume as (an abstraction of) a physical disk, and adds that to its abstract "datastore" volume. The VMware admin sees the "datastore" as a disk drive - but one that is suddenly 48TB larger.)

Everyone is happy, and it all happened without any shutdowns.
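
For anyone following the arithmetic, here's a minimal sketch of where the "72TB raw, 48TB usable" figure comes from, assuming (as described above) two six-drive RAID-6 parity groups striped together into RAID-60:

# Each RAID-6 parity group gives up two drives' worth of capacity to parity;
# RAID-60 just stripes (RAID-0) across the parity groups.
DRIVE_TB = 6
DRIVES_PER_GROUP = 6   # 12 drives in the shelf, split into two groups
GROUPS = 2

raw_tb = DRIVE_TB * DRIVES_PER_GROUP * GROUPS            # 6 * 6 * 2 = 72 TB
usable_tb = (DRIVES_PER_GROUP - 2) * DRIVE_TB * GROUPS   # 4 * 6 * 2 = 48 TB

print("Raw capacity:     {} TB".format(raw_tb))
print("Usable (RAID-60): {} TB".format(usable_tb))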
 
"Seagate" is an asset. "WD" is a liability.

YMMV. Both companies make some great drives, both companies have had lemons. If you look at reliability reports, pay close attention to the model numbers. If the report isn't specific to model numbers, ignore it. (One cloud storage site often quoted to say that "Seagate 3TB drives suck" doesn't call out the models, and the model that sucked went out of production a couple of years ago.)

None of my enterprise storage systems use WD drives - only Seagate and HGST. (My sample size is small though, only about three quarters of a petabyte. My most common purchase is 72TB drive shelves - they give you a nice 48TB of usable space.)

Your experience would appear to be quite different from the mean failure rate. And mine.

I agree on HGST though, our SAN at work is all HGST.

 
My vSphere cluster has been running on the edge of disk space - the 96TB was 90% used, and 140% used if you looked at overcommitted thin-provisioned disks. This week I got a new 72TB (12 * 6TB) 2U storage shelf for my vSphere (VMware) cluster. Connected it to the SAN RAID controller, configured it as a 48TB volume (RAID-60 with two parity groups). When the volume was ready, went into vSphere and saw the new 48TB disk drive and added it to the VMware volume.

I wish our experience with vSphere was so smooth. We have had no issues with the ESX setup itself, but we just haven't been given enough hosts to run the workload we are running.

I believe our SAN is a Dell EqualLogic solution. The hot-swappable aspect is definitely helpful.

Thankfully, matters at home are simpler: a four-disk RAID 0 array backed up regularly to a spinner.
 