Does anyone make a good card with 4 external SAS ports?

I see this one...

http://eshop.macsales.com/item/Highpoint+Technologies/RRAID2744/
I'm not a fan of Highpoint products, and it looks like another RoC-based product, which the price supports. That said, 16-port external RAID cards using MiniSAS don't exist anyway (true hardware-based cards, not RoC; implementations with that many ports are done internally).

But Highpoint is a reseller (they design and manufacture nothing) that directs its attention to the consumer market. They've pushed their gear towards independent professionals/SMBs, but it isn't really suited to that use: even if the product itself is sufficient (i.e. a RAID card from Areca with some short-cuts), their support is absolutely horrible. That's why I recommend you avoid them (unless you're fine with dealing with any problem on your own, including not being able to get newer drivers and/or firmware to fix issues).

So there is a risk involved with this particular vendor and product.

But there are non-RAID cards out there, and they're incredibly good.

ATTO Technologies has two (both are 6.0Gb/s):
Unfortunately, they're not cheap either. The standard H6F0 has an MSRP of $795, and the GT version will be worse price-wise (they just have a "Call Us" statement :eek: :p).

Areca makes a 16 port non-RAID HBA (Host Bus Adapter), but not in an external version.

But at this sort of money, you can get a proper RAID card, and use the internal-to-external MiniSAS cables I've linked previously (here they are again for convenience).

Some Arecas even have an external SFF-8088 port on their otherwise internal cards (i.e. when you hit the ARC-...ix12/16/24 models). That would save you from purchasing one of the internal-to-external cables, which at ~$60 each, isn't something to dismiss.

Be aware, however, that they tend to share ports with an internal connector, and in the case of the ARC-188x ix series, they use SAS expanders on the card itself. I know this is probably a bit confusing, but I'll help you if needed (just want you to take a look first), so don't panic.

Performance-wise, this doesn't cause an issue for normal operation, particularly with a modest number of drives (remember, these cards are actually designed to operate up to 128 disks). Where it can matter is with things like initialization and features such as Online Expansion and Online Migration (there are other threads that cover this if you search, and they're not that old; about 4 months or so IIRC).
 
Yeah...some of it can get a bit confusing, but I can follow enough.

I'm setting up online editing storage for my RED. What's so attractive about products like the Pegasus or the Sonnet 8 bay is that I can plug the array into my MBP if need be. Are there any tower solutions that have "proper" RAID hardware in the box?

I'm going to have to get an expansion box for extra NVIDIA GPUs for Davinci Resolve. So I'm trying to figure out my PCI slot situation.

I need a slot each for the ATI GPU, Red Rocket card, and PCIe expansion card. So that leaves one extra slot. I also need to have a main RAID tower, a secondary backup tower, and an LTO5 drive.

I'm hoping someone makes a Thunderbolt LTO drive, or at least a SAS-to-Thunderbolt enclosure. That would solve a lot of problems.

Just so many options, trying to figure out my best route.
 
A lot of people are now using Cubix and Cyclones for the exact same reason:

http://www.cubixgpu.com/Products/Rackmount

http://www.cyclone.com/products/expansion_systems/600-2707.php

Say, for example, you get the 6-bay Cubix (do they make a 3- or 4-bay?)

Add:
2 x NVIDIA 580
Red Rocket
Blackmagic Design card

ATI 5870 in the top PCIe slot
Cubix in the next
Areca or ATTO in the third
? in the 4th

Personally, though, at this point you're probably better off with a PC for Resolve: more PCIe lanes, and cheaper, faster RAM and processors.
 
Personally, though, at this point you're probably better off with a PC for Resolve: more PCIe lanes, and cheaper, faster RAM and processors.

I've been told that before. I just can't fathom going back to a PC. I enjoy the OS X environment too much, and everything else I have is built around it. It would be too much of a hassle to have a separate machine just for Resolve.

I've heard pretty good things about the Cyclone, and will probably go that route. I'm told it's better to have video cards like the Blackmagic in the tower. I've seen Red Rockets in external boxes though.

I'm also considering having the Rocket in a single Thunderbolt enclosure so I can take it in the field if need be.
 
I'm setting up online editing storage for my RED. What's so attractive about products like the Pegasus or the Sonnet 8 bay is that I can plug the array into my MBP if need be. Are there any tower solutions that have "proper" RAID hardware in the box?
Yes. :)

The Areca ARC-8040. It's not cheap though, and it requires a SAS HBA to connect to the system (seeing it for ~$1900 without drives or SAS HBA; example).

You'd be better off going with an internal solution (card), as it's going to be impossible ATM to share such a unit with your laptop (read on). Drives could go either internal or external, as there is the internal-to-external cable I've linked previously.

I'm going to have to get an expansion box for extra NVIDIA GPUs for Davinci Resolve. So I'm trying to figure out my PCI slot situation.
This may be your only solution, and the internal PCIe slot situation is a bit worse than you may realize. Specifically, slots 3 and 4 share the same x4 lanes via a PCIe switch.

I need a slot for , ATI GPU, Red Rocket Card and PCI Expansion card. So that leaves an extra slot. I need to also have a main RAID tower, secondary backup tower and a LTO5 Drive.
The "open" slot will actually be shared, as it would be either slot 3 or 4, as slots 1 & 2 would be best suited to the GPU and PCIe expansion card for the enclosure.

I'm hoping someone makes a Thunderbolt LTO drive or at least a SAS>Thunderbolt enclosure. That would solve a lot of problems.
TB is too new ATM, so such a product doesn't currently exist. It's been postulated (there is interest), but no one has announced anything AFAIK, and worse, I'm not aware of any tape drive makers having any interest in TB.

Assuming a SAS to TB device is actually released however, you'd be able to use that in order to connect to the LTO5 tape drive.

But it's too new and ultimately uncertain ATM, which is why you're going to have difficulties in doing what you want to do (share a RAID enclosure with both the MP and the laptop).

  • BTW, how fast does your camera stream RED footage into the laptop?
I ask, as you may need to use another product in the interim (i.e. a Qx2 with the laptop, then use an eSATA card in the MP to import that data onto the primary array for editing). Not the fastest solution, but inexpensive while you wait, and the Qx2 could find another use (the array is hardware-based, so both systems will "see" it as a single volume).

Software-based RAID wouldn't be advisable for this (you could easily end up with wiped data).

Sorry it's not better news, but there's just too much unknown ATM, as TB is still so new. So some, if not the majority, of the vendors will sit back and wait it out before they move (waiting to see what happens with adoption rates).
 
This might be of interest, even if a little bit old.
RED on Mac Pro, MacBook Pro
They mention needing a RAID with only 500-600MB/sec throughput. My RAID6 is well above that, which was why I built it... I anticipate editing RED soon, but admit that I haven't yet.
 
This might be of interest, even if a little bit old.
RED on Mac Pro, MacBook Pro
They mention needing a RAID with only 500-600MB/sec throughput. My RAID6 is well above that, which was why I built it... I anticipate editing RED soon, but admit that I haven't yet.
That's to keep up with both feeding input to the RED Rocket card, and storing the completed output.

I'm wondering about the streaming throughput of the camera itself when the data is being recorded (i.e. the camera feed is stored to a storage pool connected to a laptop for field use).

Since I didn't have an answer when I began posting this, I searched the web and found an article that lists a couple of figures (source; look under Image Recording), both the author's own measurement and others the author claims to have located from other sources.

So I calculated it using the largest value of 1.5MB per frame as a worst-case scenario. At 60 frames per second, that's only 90MB/s of sustained throughput required. And it may not even be this high (the author claims s/he measured it at 1.2MB per frame, which is only 72MB/s).
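
For reference, here's that arithmetic as a small Python sketch (the per-frame figures are the article's, treated as assumptions rather than measurements of this particular camera):

```python
# Same arithmetic as above: sustained write rate = MB per frame * frames per second.
def sustained_mb_per_s(mb_per_frame, frames_per_s=60):
    return mb_per_frame * frames_per_s

print(sustained_mb_per_s(1.5))  # worst case from the article -> 90.0 MB/s
print(sustained_mb_per_s(1.2))  # author's own measurement    -> 72.0 MB/s
```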

Either way, according to this information, it's not that taxing in terms of bandwidth requirements, and it can be handled by a single mechanical HDD (assuming it's not on its innermost tracks). ;)

Just record to an eSATA box, transfer it to the RAID, then go to work with the editing and transcoding from there, as the data's now on a volume fast enough to keep up with the RED Rocket card. :D
 
Also, those RE-4 disks are cheaper on Amazon at the moment.

Thanks for the heads up. I did purchase them on Amazon when I bought them, but I had that link in my original budget proposal, so that's what I posted; however, I did buy one from Amazon because there is a 3-quantity limit :(

----------

Sorry to tangent off of where this thread discussion has ended up going, but I have one more question: with the 8-drive bay, I will have four empty slots for the time being. Is it possible to put 4 smaller drives in those slots and run two separate RAID arrays?

so 4x2TB RAID5 in the top 4 slots

and

4x500GB RAID5 in bottom 4 slots?

Or will the card only create one array with all the drives in it? To my knowledge, you cannot create an array with multiple drive sizes, with the exception of a Drobo...
 
Thanks for the heads up. I did purchase them on Amazon when I bought them, but I had that link in my original budget proposal, so that's what I posted; however, I did buy one from Amazon because there is a 3-quantity limit :(

----------

Sorry to tangent off of where this thread discussion has ended up going, but I have one more question: with the 8-drive bay, I will have four empty slots for the time being. Is it possible to put 4 smaller drives in those slots and run two separate RAID arrays?

so 4x2TB RAID5 in the top 4 slots

and

4x500GB RAID5 in bottom 4 slots?

Or will the card only create one array with all the drives in it? To my knowledge, you cannot create an array with multiple drive sizes, with the exception of a Drobo...
You can do any of the above, and I think it's wise to use the four empty slots for anything you can. The second RAID doesn't have to be RAID5, either. You could make it a RAID0 or whatever.
 
I have one more question: with the 8-drive bay, I will have four empty slots for the time being. Is it possible to put 4 smaller drives in those slots and run two separate RAID arrays?

so 4x2TB RAID5 in the top 4 slots

and

4x500GB RAID5 in bottom 4 slots?

Or will the card only create one array with all the drives in it? To my knowledge, you cannot create an array with multiple drive sizes, with the exception of a Drobo...
The one limitation you have is that you cannot run a JBOD (concatenation) simultaneously with a RAID level. The firmware setting only allows one or the other.

Other than that, you can do what you're planning to do (and any other combination up to the 128-disk limit for that card). This also means you can run RAID alongside Pass Through disks (single disks). Not hard to do either. :D
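
If it helps with planning, here's a quick back-of-the-envelope on the usable space of those two arrays (a minimal sketch assuming single-parity RAID 5 and ignoring formatting overhead):

```python
# Usable space of a single-parity RAID 5 set: one drive's worth of
# capacity goes to parity, the rest holds data (formatting overhead ignored).
def raid5_usable_tb(drive_count, drive_tb):
    return (drive_count - 1) * drive_tb

print(raid5_usable_tb(4, 2.0))  # 4 x 2TB   -> 6.0 TB usable
print(raid5_usable_tb(4, 0.5))  # 4 x 500GB -> 1.5 TB usable
```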
 
Nanofrog knows way more than I do. I didn't consider the JBOD restriction, because I personally don't like them. :p
 
Wow, that's fantastic!
I wouldn't do JBOD anyway; I don't like them either. I'm tempted to do a 4-drive RAID0 on the extra smaller drives, as that would be fast as hell, but I'll probably stick with RAID5 for the redundancy. When I get everything up and running I will post the speeds I'm getting.
 
According to AnandTech's findings at CES, OCZ is taking the first step in the right direction for drives: PCIe SSDs without SATA. This makes it possible to create external drives like this:

http://www.anandtech.com/show/5321/oczs-portable-thunderbolt-ssd-lightfoot
with performance in the range of the current RevoDrives (sustained reads ~750MB/s).

This is based on the Kilimanjaro platform, which is native PCIe, and that brings many possibilities for future PCIe cards and Thunderbolt drives. As someone said, this finally means you can have a fully fledged OS/software/documents drive that you can bring with you if you know there will be Macs where you arrive :) Wonderful!

Here's some more reading on the Kilimanjaro-platform: http://www.storagereview.com/ocz_zdrive_r5_kilimanjaro_platform_announced
 
I'm looking at this Cyclone box.

http://www.cyclone.com/products/expansion_systems/600-2707.php

Looks like it has room for drives inside as well. Would it be reasonable to just get a RAID card, drives, stick them all in the Cyclone and call it a day? Would there be any bandwidth issues if I have 2 NVIDIA cards in there?
Well, it has a switch inside, so you can only run one PCIe card at a time inside. I don't know what I'd do with that. Maybe fill it with those PCIe SSDs and be able to choose one for each project? And I don't have a rack to mount it in.
 
Well, it has a switch inside, so you can only run one PCIe card at a time inside. I don't know what I'd do with that. Maybe fill it with those PCIe SSDs and be able to choose one for each project? And I don't have a rack to mount it in.

I don't think that's accurate. The box is made to run multiple cards at the same time.
 
I don't think that's accurate. The box is made to run multiple cards at the same time.
But it only has bandwidth for a single x16 PCIe 2.0 link, according to that info you linked. So one GPU would take all the bandwidth, right?
[600-2707 block diagram image]


See how it runs to a single x16 cable after the switch? That makes it look like it can only handle one GPU at a time, and that host card takes up the second x16 slot in the Mac Pro.

Maybe I'm wrong, but that's what it looks like to me.
 
I don't think that's accurate. The box is made to run multiple cards at the same time.
From a technical POV, it may be.

For example, if it's just a MUX (= only operates a single card at a time), it's all or nothing, and is only switched between active slots as the next data request is carried out.

But if it was designed with a QoS scheme in mind (Quality of Service; requires both hardware and software support), it may be able to divide its bandwidth between more than one slot simultaneously.

You'd need to contact the vendor to find out the details.

But it only has bandwidth for a single x16 PCIe 2.0 link, according to that info you linked. So one GPU would take all the bandwidth, right?
A x16 Gen 2.0 card can't saturate all 16 lanes, but it's designed for that many because most cards exceed x8 performance-wise, and the next step up is x16. BTW, the fastest GPUs currently only max out ~10 lanes' worth of bandwidth consumption, so 6 lanes sit idle in order to get that additional performance to the card.
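
To put rough numbers on that (a sketch using the nominal PCIe 2.0 figures; real-world throughput is lower once protocol overhead is accounted for):

```python
# Nominal PCIe 2.0 numbers: 5 GT/s per lane with 8b/10b encoding,
# i.e. about 500 MB/s per lane per direction (protocol overhead ignored).
MB_PER_LANE = 5.0 * (8 / 10) * 1000 / 8  # -> 500.0 MB/s

print(16 * MB_PER_LANE)  # a full x16 link             -> 8000 MB/s
print(10 * MB_PER_LANE)  # ~10 lanes' worth (fast GPU) -> 5000 MB/s
print(4 * MB_PER_LANE)   # an x4 link (MP slots 3/4)   -> 2000 MB/s
```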

As for the PCIe switch (Cyclone), I'd hope it's running a QoS implementation in order to divide down the bandwidth as needed to better utilize it (x16 * 1 or x8 * 2, and hopefully x4 * 4 operation as well), depending on the specific traffic requests at any given time. Particularly given the cost. ;)

But the product vendor needs to be contacted to verify what/how it works (either a simple MUX, or if it's an "intelligent" switch that implements some sort of QoS solution to better utilize the available bandwidth at any given moment, based on the I/O requests).
 
All I could find was:
"The expansion chassis' non-blocking eighty lane PCI Express Gen2 switch enables the coupling of cost-effective enterprise host PCs with high bandwidth, peer-to-peer capable I/O subsystems."

"THEORY OF OPERATION
The basic PCI Express link consists of dual unidirectional differential links, implemented as a transmit pair and a receive pair. The signaling rate for PCI Express Gen2 is 5.0 Gigabits/second/Lane/direction. A link supports at least one lane.
The PCI Express link from the PCIe2-426 over the cable to the PCIe2-427 is a sixteen lane (x16) link. The PCIe2-427 provides three x16 slots (slots J3, J5, J7) and two x8 slots (slots J1 and J2). All slots are populated mechanically with x16 connectors; the upper eight lanes of slots J1 and J2 are not connected. All slots can accommodate either single lane (x1), x4, x8 or x16 add-in cards. Although not expressly permitted by the PCI Express Specification, slots J1 and J2 accommodate “down-shifting” a x16 card into a x8 slot. Plugging a smaller link card into a larger link connector is fully allowed.
Once the PCIe2-426 is installed into the host PC, the cable connected to the PCIe2-427, the chassis plugged into an AC power outlet and any desired add-in cards are installed, the system is ready to be turned on. When the host is turned on, a signal from the PCIe2-426 will turn on the PCIe2-427 chassis. A number of things happen at this point. First, the PCI Express links are initialized. This is a purely hardware initialization where each PCI Express link is set up following a negotiation of lane widths by the two ends of each link. No firmware or operating system software is involved. Once the links are initialized or “trained”, there are LED indicators on each of the Cyclone Microsystems cards that indicate the links are trained. A detailed explanation of the LEDs follows later in this manual.
One essential requirement for system initialization is the ability of the host system’s BIOS to be able to enumerate the many bridges inherent in a complex PCI Express design. The links from the PCIe2-426 to the PCIe2-427 are created with PCI Express Switches. Each link looks like a PCI-to-PCI bridge to the Host’s BIOS. The number of bridges can add up quickly. Older BIOS may not have the resources to enumerate the number of bridges. Make sure that the BIOS on the host computer has the latest updated BIOS. If required, contact the host system’s manufacturer to make sure that the BIOS used can handle the large number of bridges that it will see in the system."
 
All I could find was:
"...[snip]...Once the links are initialized or “trained”, there are LED indicators on each of the Cyclone Microsystems cards that indicate the links are trained. A detailed explanation of the LEDs follows later in this manual....[snip]..."
Is the manual downloadable (I presume it is)?

I'm still not sure what to make of the language in the quote you provided from Cyclone, as it could just imply a simple MUX after each card's lane configuration is set, or the manual may indicate a QoS implementation (snipped down for simplicity). It definitely has the feel of marketing/PR spin though, as if there's a bit of concealment going on...hmmm...

Now the PCIe switch used in the MP itself for slots 3 & 4 (2009/10 systems) is just a MUX, as there's no substantial benefit to adding the complexity for it to do 1x * 2 simultaneously (added cost and not beneficial for most cases).

With a 16-lane PCIe switch, however, it would make a substantial difference for simultaneous high I/O if the hardware for a QoS implementation were added (beneficial in more than a single configuration, so the added complexity, which increases the cost, was at least worth considering when designing the chip). Wish I knew what the P/N was, as that *could* be used to locate a data sheet.
 
It *is* downloadable. I copy/pasted from the PDF.

It's all way more than I would need right now, but I just question if one could actually use it to run, say, a second GPU and a RAID for example.

I agree that all the written documentation reads like "double-speak" and comes across as evasive.
 
I just question if one could actually use it to run, say, a second GPU and a RAID for example.

That would be too convenient now, wouldn't it!

Though I know people run dual NVIDIAs and a Red Rocket card in these things all the time.
 
That would be too convenient now, wouldn't it!

Though I know people run dual NVIDIAs and a Red Rocket card in these things all the time.
I think this is getting caught up in the minutiae, i.e. simple switched vs. true simultaneous access to the installed cards in the enclosure (though it is interesting, and it would be nice to have a clear answer regardless of the implementation). ;)

But from a usage POV, it's not like you have to plug/unplug cards. So install what you want and use it. Even if it's not as good as it could be performance-wise, it does offer a user access to more devices than will fit inside the MP, and it's certainly better than running x16 or x8 cards in slots 3 & 4 in terms of performance (assuming the user is pushing the cards faster than what x4 lanes can provide if they're stuffed in slots 3 & 4).
 
Found this old post:
I have the Cyclone 600-2707 PCIe expander and it is great.
http://www.cyclone.com/products/expansion_systems/600-2707.php

Uses one 16x v2 PCIe slot and provides three 16x v2 slots and two 8x v2 slots. If you set it up right you can fill it with cards and not use more than the 16 v2 lanes of bandwidth it has with the host system.

I haven't used it with Resolve yet, but it seems like it will be perfect.

So far the only problem I have had is with the Atto R60F raid card. Seems like none of the ATTO 6G cards work with it, but the 3G cards do.

I have mine in a different enclosure with 10 drive trays and an LTO4.

For Resolve with the Cyclone, I would consider this config:

Mac Pro
Slot 1: GPU (GTX285)
Slot 2: Cyclone
Slot 3: Blackmagic (it's only a x4 card)
Slot 4:

Cyclone
Slot 1: Atto Raid (x8 v1 card, only 3G cards work right now)
Slot 2: Red Rocket (x8 v1 card)
Slot 3: Secondary Video card for display
Slot 4:
Slot 5:

And that leaves room for three more cards. Plus the enclosure for the Cyclone can be set up to hold several drives to connect internally to your Atto raid card. And all your cards are getting their full bandwidth.

Mahalo,
Dusty

www.sandust.com
Red One #973

This could be a nice way to make good use of the 40 lanes on a Mac Pro. I have my x8 Areca in an x16 slot right now, technically wasting a lot of lanes. I don't immediately have a need for it all, but if I were to need to add an NVIDIA card to do color grading with SpeedGrade or Resolve, for example, I could see needing a solution like this.
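
For what it's worth, the "wasting lanes" bit is just link-width negotiation: the link trains down to the narrower of the card and the slot. A minimal sketch (hypothetical numbers, just to illustrate the point):

```python
# PCIe link width negotiates down to the narrower of card and slot,
# which is why an x8 card in an x16 slot leaves lanes idle.
def negotiated_width(card_lanes, slot_lanes):
    return min(card_lanes, slot_lanes)

print(negotiated_width(8, 16))   # x8 Areca in an x16 slot -> x8 (8 lanes idle)
print(negotiated_width(16, 16))  # x16 card in an x16 slot -> x16
```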

----------

And how much for this thing? $1000 or what?

----------

Whoops, found some pricing... and I'm out. Didn't like the rack-mount idea anyway.

600-2707-1-06-S 650 W Standard $2343.00
600-2707-1-06-L 650 W Low Profile $2343.00
600-2707-3-06-S 650 W Standard $2445.00
600-2707-3-06-L 650 W Low Profile $2445.00
600-2707-1-15-S 1500 W Standard $2679.00
600-2707-1-15-L 1500 W Low Profile $2679.00
600-2707-3-15-S 1500 W Standard $2781.00
600-2707-3-15-L 1500 W Low Profile $2781.00
The 1's and 3's are for cable length in meters.
 