
blueroom

macrumors 603
Feb 15, 2009
6,381
27
Toronto, Canada
Here's the list:
$3000 Synology DS3612xs NAS, Intel i3 inside, upgradeable DRAM
$5000 Netgear XSM7224 managed switch with 10GbE
$???? NAS drives; WD Reds are nice.
 

Griggi

macrumors newbie
Dec 27, 2012
22
2
Here's the list:
$3000 Synology DS3612xs NAS, Intel i3 inside, upgradeable DRAM
$5000 Netgear XSM7224 managed switch with 10GbE
$???? NAS drives; WD Reds are nice.

Are you sure the Netgear XSM7224 would be suitable? I just googled it and it seems to have 24 SFP ports and only 4 copper ports.

If the OP also wants to connect his devices to this switch, I would take one that has 24 copper ports and 4 10G SFP uplink ports :D
 

blueroom

macrumors 603
Feb 15, 2009
6,381
27
Toronto, Canada
Hmm, not sure; I thought it had 4 10GbE ports for a server or two. But yes, the OP needs 24 copper ports & 2-4 10G ports.

Oh yes, the DS3612xs dual-port 10GbE card adds $500.
 

Aiva

macrumors newbie
Apr 1, 2014
3
0
Mcgizzle,
I'm in exactly the same situation - maintaining a media production company with a growing team, a centralised storage requirement and a macOS-based server. I'm not a network tech - I'm a motion graphics designer - but I've built stuff for myself, so I somehow ended up with the network tech duties.

The only difference in our setup is the server - I built a dual-Xeon Hackintosh tower with 20 drive bays and a mix of JBOD and RAID storage. It also doubles as an FTP/WebDAV/HTTP server with backup and other duties, so other than that it's the same.

After the same thought process, considering 10GbE, Fibre Channel and InfiniBand, I decided to stick with Ethernet and strengthen the server-to-switch connection.

After much trial and error I managed to find a quad-port gigabit Ethernet card (a Supermicro card) whose chips are supported by a stock Apple kext, and aggregated those ports together with the 2 onboard ports.
So: six aggregated ports to the switch, then single gigabit links to the clients.
Great in theory, but we had a lot of instability - one day we had 4 people constantly reading 100 Mbit rushes and the client machines kept getting 'kicked off' (Console was showing Ethernet links dropping and coming back up, so AFP kept dropping out), so it went back to a bottlenecked single link.
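For what it's worth, the per-client ceiling comes down to how the bond hashes flows onto member links. Here's a rough Python sketch of the idea - the hash function, device names and IP addresses are all made up for illustration; the real policy depends on the switch and the Mac's bond settings:

import hashlib

# Rough sketch of how an LACP-style bond picks one member link per flow.
# The real hash depends on the switch/OS policy (MAC, IP or L4 based);
# this just illustrates why a single client never exceeds one link's speed.
BOND_MEMBERS = ["en0", "en1", "en2", "en3", "en4", "en5"]  # 2 onboard + 4-port card

def member_for_flow(src_ip: str, dst_ip: str) -> str:
    """Map a client<->server flow onto one bond member (stand-in for the real hash)."""
    digest = hashlib.md5(f"{src_ip}-{dst_ip}".encode()).hexdigest()
    return BOND_MEMBERS[int(digest, 16) % len(BOND_MEMBERS)]

server = "192.168.1.2"                                    # hypothetical server IP
for client in (f"192.168.1.{i}" for i in range(10, 16)):  # hypothetical client IPs
    print(client, "->", member_for_flow(client, server))

# Each client's traffic rides a single member link (~1 Gbit/s max per client),
# but different clients can land on different links, so the aggregate scales.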

We use an old 24-port 3Com switch with about 40 Gbit/s of switching capacity, so it's well within spec, and it seems to do well.
There are a number of potential gremlins in my setup, so we're still testing it. When it worked for a week or 2 it showed great promise - I had 3 client machines transferring 100 GB projects back and forth, and each client had a saturated link to the switch. On top of that it was sending a backup to a NAS. Then the instabilities started.

I'm going to upgrade the server to Mavericks for SMB, and in case the problem is an outdated Ethernet kext - will let you know how it goes. :apple:

edit:
Another idea I had was to run a single link from the server to the switch for the main studio and all the suites, then give the 4 'suites' exclusive direct ad-hoc links to the server's 4-port card with crossover cables. That way each suite has its own gigabit pipe and is unaffected by the others. But I'm not sure how you would prevent a loop, as AFP would have 2 paths to the same destination and would somehow need to pick the direct route rather than the long route through the switch.
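One way I can think of to dodge the two-paths problem (just a sketch, with made-up addresses): put each crossover link on its own tiny subnet and have each suite mount the server by that direct-link IP, so the switch path on the main LAN subnet is never a candidate route:

import ipaddress

# Hypothetical addressing plan: the studio LAN stays on 192.168.1.0/24 via the
# switch, and each suite<->server crossover link gets its own /30. A suite that
# mounts afp://<server IP on its /30> can only reach it over the crossover
# cable, so there's no second path and nothing to loop.
STUDIO_LAN = ipaddress.ip_network("192.168.1.0/24")
DIRECT_RANGE = ipaddress.ip_network("10.10.10.0/28")   # carved into four /30s

print(f"Switch path for everyone: server address on {STUDIO_LAN}")
suites = ["Suite A", "Suite B", "Suite C", "Suite D"]
for suite, link in zip(suites, DIRECT_RANGE.subnets(new_prefix=30)):
    server_ip, suite_ip = list(link.hosts())
    print(f"{suite}: server {server_ip} <-> suite {suite_ip}  ({link})")

# Everything else (internet, the main studio machines) still uses the server's
# 192.168.1.x address through the switch.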
 
Last edited:

Silencio

macrumors 68040
Jul 18, 2002
3,532
1,663
NYC
Here's the list:
$3000 Synology DS3612xs NAS, Intel i3 inside, upgradeable DRAM
$5000 Netgear XSM7224 managed switch with 10GbE
$???? NAS drives; WD Reds are nice.

For a 12-drive RAID, I would not go for the Red drives. WD themselves do not recommend using them in RAIDs of 8 or more drives, IIRC. Got to spring for the RE drives - they're well worth it.

For 10-20 clients, I think four link-aggregated gigabit ports should be a good start. Stepping up to 10GbE is still quite spendy.
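Rough numbers behind that, assuming roughly even demand and about 80% usable efficiency after protocol overhead (both assumptions, not measurements):

# Back-of-envelope for a 4 x 1 Gbit/s LACP trunk shared by a studio.
GIGABIT_MB_S = 1000 / 8   # 1 Gbit/s expressed in MB/s (125 MB/s)
LINKS = 4
EFFICIENCY = 0.8          # rough allowance for Ethernet/AFP overhead (assumed)

usable = LINKS * GIGABIT_MB_S * EFFICIENCY   # ~400 MB/s aggregate
for clients in (5, 10, 20):
    print(f"{clients} simultaneous clients: ~{usable / clients:.0f} MB/s each")

# Roughly 40 MB/s each with 10 clients active at once; a single client still
# tops out at one link (~100-110 MB/s real-world).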
 

Aiva

macrumors newbie
Apr 1, 2014
3
0
Another point to mention is the negative impact of large RAID 1, 5 and 6 arrays on your access performance.
I'm not an expert, but from what I understand those RAID levels increase the IOPS needed for reads and writes. On RAID 5, for example, a single random write needs 4 back-end IOs, compared to 1 on a single drive.
Having many clients accessing the same big array can soon overwhelm it, and your 800 MB/s will become very laggy indeed. It's a tradeoff.
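To put rough numbers on that write penalty - assuming ~180 random IOPS for a typical 7200 rpm SATA drive and the usual penalty factors, so treat this as a sketch rather than a benchmark:

# Effective random-write IOPS of an array = drives * per-drive IOPS / write penalty.
# Assumed figures: ~180 IOPS per 7200 rpm SATA drive; penalties of 1 (RAID 0),
# 2 (RAID 1/10), 4 (RAID 5) and 6 (RAID 6) back-end IOs per front-end write.
DRIVE_IOPS = 180
WRITE_PENALTY = {"RAID 0": 1, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def write_iops(drives: int, level: str) -> float:
    """Front-end random-write IOPS the array can sustain (rough model)."""
    return drives * DRIVE_IOPS / WRITE_PENALTY[level]

for level in WRITE_PENALTY:
    print(f"12 drives, {level}: ~{write_iops(12, level):.0f} random-write IOPS")

# A 12-drive RAID 6 set ends up with roughly the random-write headroom of two
# bare drives, which is why lots of clients hitting it at once can make that
# big sequential number feel laggy.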

Obviously the redundancy is a good thing, but it's still not really a proper backup solution. So do consider several smaller arrays instead of 1 big one, and split their use as much as possible.
Or even a couple of RAID 0 arrays with regular mirror backups - this is what I'm planning to do eventually.
Just my 2 pence worth. :)
 

unplugme71

macrumors 68030
May 20, 2011
2,827
754
Earth
Get a Synology DiskStation with dual NICs that supports iSCSI, or a Drobo B800i.

I believe both support mSATA cache drives to speed things up. I'd also recommend WD RE drives for reliability/speed vs. WD Green or Black.
 