Fiber Optic

Everyone, this problem has been around for years on the enterprise end, and I am surprised no one has mentioned the common solution to this issue.

Fibre Storage Array

Not only will this solve your problem, it will also allow more than one user if you do it right.

What he wants does exist, but most of the time I build these per project, and I advise you to do the same for your client.

This card allows transfers of 40 Gb/s (~5 GB/s, almost twice what you need):
MHQH19B-XTR. A minimum of two cards is needed, one at the sending end and one at the receiving end.

Build a server or buy a very high-end NAS (special order only, so just build it yourself). Linux or Windows is the probable OS (use Linux).

You will also need several LSI00396 (8120-4e) cards (high-end RAID cards; there are many others too). Whatever you choose, make sure you buy at least one extra.

And to top it off, a ton of hard drives.

Now, the problem everyone has been thinking about throughout this thread: how do you add the card to the Mac Pro? As said in a prior post, you can't. But what you could do is a variation: you build the NAS, then pay a programmer to write a four-way Thunderbolt driver (two ports on each device) so they work as a RAID 0.

In short, your host NAS is a RAID 50 or 60 configuration (60 if you can).

This will cost $$,$$$ and may reach $$$,$$$ to get up and running, but what you are asking for is what I do on virtual machines every day; you will just have to apply it here. Do not go cheap, and on all RAID and data-transfer items you need at least 30% above your minimum acceptable spec in case they ask you to benchmark this.
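A minimal sketch of that 30% headroom rule, assuming a 2.6 GB/s minimum requirement (the figures below are illustrative, not measured):

```python
# Rough headroom check for the 30% rule described above (illustrative numbers).
MIN_REQUIRED_GBPS = 2.6          # client's minimum acceptable throughput, GB/s
HEADROOM = 0.30                  # provision at least 30% above the minimum

target = MIN_REQUIRED_GBPS * (1 + HEADROOM)
print(f"Provision for at least {target:.2f} GB/s")   # ~3.38 GB/s

# Check a candidate link against the target (e.g. one 40 Gb/s card ~ 5 GB/s raw).
link_gbits = 40                  # line rate in Gb/s
link_gbytes = link_gbits / 8     # ~5 GB/s before protocol overhead
print(f"A 40 Gb/s link provides ~{link_gbytes:.1f} GB/s raw: "
      f"{'OK' if link_gbytes >= target else 'insufficient'}")
```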
 

Why does it only work with 2K and not 4K, as stated?
Guys, what kind of footage are you actually talking about? Just calling it 4K as a resolution could mean anything... I sure can edit 4K ProRes LT files off my USB 3 SSD :D

I've never had Dragon footage here so far, but regular Epic Mysterium-X footage runs fluidly in FCP X at "optimize for speed" settings from the Mac Pro's internal SSD.
 
The reason it is 2K and not 4K is that we are talking about editing movies uncompressed, for which one needs to read and write at all times. Keep in mind this person does this professionally, so waiting is not acceptable: every time he has to wait, it costs him time and money.

1.3 GB/s only works one way: one could read as needed but not be able to write in real time.

Yes, the person who wants this actually does want this in real time, but what he is asking for is something normally only server farms want.

In short, you need read speeds of 1.1 GB/s and write speeds of 1.2 GB/s, minimum, at the same time. That is why he is asking for 2.6 GB/s (some speed is lost because reads and writes are not a 1:1 transfer).

The only place you get these speeds is enterprise equipment, not Thunderbolt, which does not have enough throughput to handle this even if you could find the right device. In five years this will probably be an easy solution, but at this time what they need to do is build a fibre storage device, and it needs to be mid-to-high end (from an enterprise standpoint, not a normal user's standpoint).

*When I say GB/s I do mean GB/s, not Gb/s as most products show. Since all other measurements in the industry use the byte and not the bit, I show all measurements in bytes to keep the numbers consistent.
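To make the units and the combined requirement concrete, here is a quick sanity check (a sketch; the overhead factor is an assumption, not a measured figure):

```python
# Bits vs. bytes: marketing specs quote Gb/s; sustained transfer needs GB/s.
def gbits_to_gbytes(gbits: float) -> float:
    """Convert a line rate in gigabits/s to gigabytes/s."""
    return gbits / 8

print(gbits_to_gbytes(20))   # Thunderbolt 2: 20 Gb/s -> 2.5 GB/s

# Simultaneous read + write requirement from the post above.
read_need = 1.1    # GB/s
write_need = 1.2   # GB/s
overhead = 1.13    # assumed ~13% loss since reads/writes aren't a 1:1 transfer
print(f"Combined target: ~{(read_need + write_need) * overhead:.1f} GB/s")  # ~2.6
```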
 
Can you provide the name of the enclosure? What disks were in that enclosure when it was tested?

ARECA: ARC-8050T2
http://www.areca.com.tw/products/thunderbolt2.htm

Promise: Pegasus2 R8
http://www.promise.com/promotion_page/promotion_page.aspx?region=en-global&rsn=101

I cannot get them over 1150 MB/s.

Whereas with DATOptic's T12-S6.TB2 with 12x Seagate 4TB 5900rpm drives:
http://www.datoptic.com/ec/thunderbolt2-20gb-twelve-6gb-bay-hardware-raid5-6-quiet-tower.html

I get a bit more than 1300 MB/s in both RAID 6 and RAID 5.
A strange thing: with Hitachi 7200rpm drives I get only about 1200 MB/s, about 100 MB/s less than the Seagate 5900rpm?

I'm waiting for the 2nd box to create a RAID 50/60... and I'm very confident that we can get 2600 MB/s.
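For what it's worth, a back-of-the-envelope estimate of what striping the two boxes might yield (a sketch; the efficiency figure is an assumption, and real numbers depend on the controller and I/O pattern):

```python
# Naive estimate for striping two enclosures into a RAID 50/60 set.
per_box_mbps = 1300          # measured single-enclosure throughput, MB/s
boxes = 2

ideal = per_box_mbps * boxes         # 2600 MB/s if scaling were perfect
realistic = ideal * 0.95             # assumed ~5% striping overhead
print(f"Ideal: {ideal} MB/s; with 5% overhead: ~{realistic:.0f} MB/s")
```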
 
1. Avid MC8 is to get a new playback engine between now and December. It sounds like he needs it now, but it is in the works. This will enable 4K-6K to be played in real time.
2. Go to the Avid video forum and search for 4K. See if there is a workflow that includes AMA, transcoding, and relinking.
3. Also search Avid's resources pertaining to 4K. If finishing will be done in Resolve, that's fine; MC can only output 1080 HD. Resolution independence is also due by December! :confused:

A screenshot of MC6 and R3D raw is attached. Also try changing to yellow/green during the test.
 

Attachment: R3D.png
A strange thing: with Hitachi 7200rpm drives I get only about 1200 MB/s, about 100 MB/s less than the Seagate 5900rpm?

RPM has ZERO to do with read/write throughput; it only affects how long it takes the head to reach that point of the disk, i.e. latency.

For faster reads and writes you need to look at the drive's controller board and firmware.

PS: do not use Seagate; they have failed faster than any other manufacturer's drives on the market for several years now.
WD, Toshiba, and Hitachi are normally the safe bets.
If you use consumer drives, make sure the firmware supports RAID, or I promise you they will fail and the warranty will already have been voided (note: most consumer drives do not support RAID).


If you used two Thunderbolt ports not on the same controller, had the host computer make the stripe, and both client enclosures run RAID 60, you should get the speed. Otherwise, your max bandwidth via one Thunderbolt controller is only 2.5 GB/s, not the needed 2.6 GB/s, but you get 5 GB/s by using two Thunderbolt controllers.

Now, if each of those controllers offers two ports, that gives you four Thunderbolt ports total on the host computer. Use this with four storage devices and make it a RAID 5, 6, or 10, with each device internally a RAID 5, 6, 10, 50, or 60.

This will give you: twice the bandwidth the client wants (5 GB/s+); the ability to lose one whole device with no worries; and the ability on each device to lose one or two hard drives with ZERO loss of performance to the client.

In short, you need one of the following choices.
Thunderbolt 2 is not fast enough even at its maximum theoretical bandwidth: 20,000 Mb/s ÷ 8 = 2,500 MB/s = 2.5 GB/s (100 MB/s slower than the minimum requirement).
That is, two controllers are needed, not one (two ports on the same card share bandwidth on most Thunderbolt devices).

or

go to the tried-and-true, faster fibre cards, where you can find 40 Gb/s to 200 Gb/s cards (5 GB/s to 25 GB/s), and do what I do every day to get the requested speeds several times over.
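Worked through, the Thunderbolt arithmetic above looks like this (a sketch of the reasoning; real-world PCIe/protocol overhead would lower these raw numbers further):

```python
# Thunderbolt 2 ceiling vs. the 2.6 GB/s requirement (theoretical line rates).
TB2_GBITS = 20                       # Thunderbolt 2 line rate per controller, Gb/s
REQUIRED_GBPS = 2.6                  # client requirement, GB/s

one_controller = TB2_GBITS / 8       # 2.5 GB/s -- 0.1 GB/s short
two_controllers = one_controller * 2 # 5.0 GB/s -- roughly twice the requirement

for n, bw in [(1, one_controller), (2, two_controllers)]:
    verdict = "meets" if bw >= REQUIRED_GBPS else "falls short of"
    print(f"{n} controller(s): {bw:.1f} GB/s {verdict} the {REQUIRED_GBPS} GB/s target")
```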
 
Other information in this thread is wrong or misleading. Editing Epic footage doesn't require anything like 2600 MB/s of disk performance. That's something like the data rate for uncompressed 16-bit RGB 6K; Epic footage is raw (starts at 1/3 of the data rate of RGB) and is generally compressed at least 5:1 on top of that, meaning disk bandwidth requirements are something like 7% of what is being described here. Seriously, 6K Epic footage will play back from a dual-drive RAID 0. I work at a busy, successful feature film post production facility. I do this, personally, on a regular basis.
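To put rough numbers on that claim (a sketch only; a 6K full-format frame at 24 fps and 5:1 REDCODE compression are assumed, since the post doesn't pin down exact figures):

```python
# Rough data-rate comparison: uncompressed 16-bit RGB 6K vs. compressed raw.
width, height, fps = 6144, 3160, 24      # assumed 6K frame size and frame rate
bytes_per_sample = 2                     # 16-bit

rgb = width * height * 3 * bytes_per_sample * fps / 1e6   # MB/s, 3 channels
raw = rgb / 3                            # raw sensor data: one sample per photosite
compressed = raw / 5                     # REDCODE at an assumed ~5:1 compression

print(f"Uncompressed RGB 6K: ~{rgb:.0f} MB/s")        # ~2800 MB/s
print(f"Compressed raw:      ~{compressed:.0f} MB/s"  # ~190 MB/s
      f" ({compressed / rgb:.0%} of uncompressed)")   # ~7%
```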

Even if you do need to go uncompressed in some parts of your workflow (when rendering out R3D clips for VFX, for instance), there is little reason to do so at anything above DCI full-container 4K (4096x2160), as there's no deliverable format with more resolution than that or any way to monitor at higher resolution than that. Except for rare specialty cases, you'd never be working in a timeline with higher resolution. The reason they build cameras with more resolution than that is largely because the way camera sensors work means there are significant benefits to oversampling, not because you're actually supposed to be finishing projects at 6K.

And honestly, there will be no visible difference if you use 4K ProRes 4444 rather than an uncompressed format, in which case, again, you're talking about clips you can play back from a dual drive RAID 0.

The only place you really might want more bandwidth than a Mac Pro can provide via Thunderbolt is if you're trying to use a RedRocket-X card for real-time 6K decoding; in that case the uncompressed feed coming back from the card will exceed the available bandwidth of Thunderbolt 2. However, real-time 6K workflows are not any sort of hard requirement. In fact, they're presently very rare in the industry. The Rocket-X is ludicrously expensive ($6750) and not even really widely available yet.

A maxed out 2013 Mac Pro can do a 'half res good' decode on Epic footage in real time, which produces a great looking image for a real-time 2K workflow, which is how most post facilities are still handling this stuff. These kinds of "I must have real-time 6K" notions often come from people relatively new to the industry, who get carried away with tech specs and don't really know enough to perform a sensible cost/benefit analysis. I see this happen all the time; people who think they need a more powerful system for their in-house projects than what major post facilities are using to finish Hollywood movies.

If you really need 2000+ MB/s of disk bandwidth on a new Mac Pro, you should be able to do this by taking two 1000+ MB/s RAID 5/6 arrays, hooking them up to different Thunderbolt controllers, and striping them using software RAID 0 (the overhead of which is pretty trivial on modern hardware). But if you need 2000+ MB/s of disk bandwidth for working with Epic footage, it's probably because you're doing something you don't actually need to be doing with your workflow.
 
ZnU, frankly I agree. While I do work with systems capable of what he is asking, the specs are through the roof and the limitations of the host hardware make it near impossible. On an old Mac Pro that used an ATX mobo this would be simple, but the current setup does not offer the ability to add the needed hardware.

If you really need 2000+ MB/s of disk bandwidth on a new Mac Pro, you should be able to do this by taking two 1000+ MB/s RAID 5/6 arrays, hooking them up to different Thunderbolt controllers, and striping them using software RAID 0...

Thank you for agreeing with my last post's summary.

FireWire2, now it is your turn to try these solutions.
 
By Friday this week I should know whether I can get 2600 MB/s or not with 2x T12-S6.TB2 from DATOptic.

I already figured out that an 8-drive system or Fibre won't cut it...
Eight drives: not enough spindles.
Fibre: limited by the Thunderbolt port, and the cost is a lot more.
 
By Friday this week I should know whether I can get 2600 MB/s or not with 2x T12-S6.TB2 from DATOptic... Eight drives: not enough spindles. Fibre: limited by the Thunderbolt port, and the cost is a lot more.

It is entirely possible to finish Epic projects on a 2013 Mac Pro at a professional level, but the only reason the client could possibly want the disk bandwidth in question is if they're chasing a real-time 6K workflow (i.e. a higher end workflow than what most major name brand post facilities presently have deployed). That workflow, if mixing raw footage in with the uncompressed footage, basically cannot be implemented on the 2013 Mac Pro. The on-board GPUs will not do a real-time 6K decode. A RedRocket-X, which would do such a decode, requires more bandwidth than Thunderbolt 2 provides, and unlike with storage, there's no way to utilize the bandwidth of multiple Thunderbolt controllers to make up for this. In other words, if this is what the client is trying to do, the two options are:

1) Have someone who is familiar with Red workflow explain to them why they don't really need this, and how they can get great results without it,

2) Set them up with a dual processor Windows or Linux machine with a RedRocket-X card and/or with at least two or three GTX Titan cards and a high-end SAS RAID controller.

Between the two options, the first is almost certainly the way to go. It is exceedingly unlikely the client actually needs to do what they are apparently trying to do. (And note that this will easily cost more than the Mac Pro it replaces; possibly a lot more depending on the details.) Are you able to offer any details about what kinds of projects the client is trying to finish and what apps they'll be working with?

That said, I'd be very interested in the results of any testing you do striping across those two enclosures.
 
It is entirely possible to finish Epic projects on a 2013 Mac Pro at a professional level, but the only reason the client could possibly want the disk bandwidth in question is if they're chasing a real-time 6K workflow...

Thank you, man, for this clarification!! I'm really wondering who could possibly have a real need for what is being asked here.

Earlier you stated that a maxed-out Mac Pro could do a real-time "half res good" decode of Epic footage. My maxed 8-core sure can't. Can the 12-core?
 
I’d also like to know what sort of hero client this is… sounds like a case of the wrong tools for the job, or even the wrong purpose to start with.

While an excellent machine for what it is, the nMP is a joke in comparison to other workstations for the processing and speed requirements demanded here, if we are actually going to accept that the client must play back completely uncompressed 6K footage.

If they really had a reason to be this cutting-edge, I would think they would have more sense than to try to use a bunch of standalone RAID bays software-RAIDed together… and without a RED ROCKET-X card. The self-contained simplicity of the nMP doesn’t look so elegant anymore!

How much footage are they planning to handle here? Did they stop to think that at 2600 MB/s you would need about 9.4 TB of space for only an hour's worth of footage?
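The storage math, for the record (a sketch; decimal terabytes assumed):

```python
# Storage consumed per hour at the requested sustained data rate.
rate_mbps = 2600                   # MB/s
seconds_per_hour = 3600

tb_per_hour = rate_mbps * seconds_per_hour / 1e6     # decimal TB
print(f"~{tb_per_hour:.1f} TB per hour of footage")  # ~9.4 TB
```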

No offence to you, FireWire2; I know you are just trying to give the client what they want, and perhaps this is a bit of an experiment for you both, but the client either needs to investigate the realistic advantages of the compressed REDCODE format, or accept that the nMP is not up to the task.
 
I think the client may need to go to Linux, Windows, or a Hackintosh. That way they can use a mobo that allows for the needs here (and if the client really wants this setup, overbuild the mobo, as I do not think this will be the only time the client asks something like this from you).

Thunderbolt: too slow for the spec.
Fibre: will work and is more than fast enough, but can't interface with the host computer.
GPU: not able to handle the load.
CPU: may or may not be able to handle the load.

In short, the client needs a server, not a workstation. I have said it before: what this client wants is something only used in high-end enterprise equipment. Yes, I have this, but trying to use a proprietary Mac will strongly limit how we can help you in this case.

PS: eight drives, unless they are all SSDs, will never get you the speeds you want. You need a minimum of 16, and 24 or more would be wise.
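A rough spindle-count check behind that recommendation (a sketch; the per-drive figure is an assumed sequential rate for spinning disks, and parity overhead varies by RAID level):

```python
# How many spinning drives does 2.6 GB/s of sustained throughput take?
import math

target_mbps = 2600
per_drive_mbps = 160        # assumed sustained sequential rate per HDD, MB/s
stripe_efficiency = 0.75    # assumed loss to parity (RAID 6/60) and scaling

drives = math.ceil(target_mbps / (per_drive_mbps * stripe_efficiency))
print(f"Roughly {drives} drives needed")  # ~22 -- in line with '16 minimum, 24 wise'
```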
 
I'm not familiar with this kind of workflow, and I'm probably missing something obvious, like needing disk space for larger files... so forgive my ignorance. Maybe this could be up to the task?
http://barefeats.com/hard182.html
 
In short, the client needs a server, not a workstation... eight drives, unless they are all SSDs, will never get you the speeds you want.

This could also be done using a 2012 Mac Pro with a PCIe expansion box, as people have been doing with Resolve for many years. The 12 cores in those machines will be acceptable; just add your fibre host card, three or four GTX Titan cards, then a RedRocket, and it can still be a beast of a machine.
 
Originally posted by AidenShaw:
Oops - no SAS on the new Mac Pro.

http://www.areca.us/products/thunderbolt2.htm

There is a fail here..... just not on the Mac Pro side. There is a huge difference between not being able to order directly from Apple's store inventory and there being no SAS solutions.

You will need more than one, but since this thread is an "unrestricted budget" exercise... that isn't an issue.

There's also a huge difference between putting "technology X" in a PCIe slot on the system, vs putting "technology X" on a dongle bottlenecked by T-Bolt.

I should have said "no native SAS on the MP6,1".

On my hex core Dell T3610 (same CPU/chipset as the MP6,1 hex core), I can connect two HP P431 SAS controllers (or similar controllers from several other vendors). Each of these can have
  • two 48 Gbps mini-SAS connectors to disk cabinets
  • RAID 0/1/10/5/6/50/60 support
  • 4 GiB of flash-backed write cache
  • support for up to 200 separate drives in any supported RAID configuration

With 6 Gb/s SAS drives and 12 10K SAS drives per port, I see an easy 2.2 GB/s read and write bandwidth from each port. Using both ports, that should be 4.4 GB/s. As 12 Gb/s SAS drives roll out, that should double (although I'll need more disks per port to hit that bandwidth).

So - an MP6,1 with the Areca controller on the 16 Gbps T-Bolt link hits a max of 1.3 GB/s. On a Dell T3610 or another system with PCIe, SAS will give me 4.4 to 17.2 GB/s per LUN. Or even more with multiple LUNs.

Let's see - 400 disks of 1.2 TB each, on 192 Gbps of bandwidth, or 8 disks on 16 Gbps. (Or maybe 24 disks on 48 Gbps is the right comparison.)

Since it is an "unrestricted budget" thought experiment - do you want the MP6,1 or a system with PCIe slots?

If you need a hardware-defined 200 TB volume, or 8.8 GB/sec to a single volume - only the PCIe system can give you that. T-Bolt bottlenecked SAS can't even play in the same ballpark.
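The per-port arithmetic behind those figures (a sketch using the numbers quoted above; the per-drive rate is an assumption):

```python
# SAS aggregate bandwidth from the configuration described above.
drives_per_port = 12
per_drive_mbps = 185       # assumed sustained rate per 10K SAS drive, MB/s
ports_per_card = 2
cards = 2

per_port = drives_per_port * per_drive_mbps / 1000      # ~2.2 GB/s
total = per_port * ports_per_card * cards               # ~8.9 GB/s across 2 cards
print(f"Per port: ~{per_port:.1f} GB/s; all four ports: ~{total:.1f} GB/s")
```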
 
This could also be done using a 2012 Mac Pro with a PCIe expansion box, as people have been doing with Resolve for many years...

Fire already said this is a new Mac Pro, from after they stopped using ATX mobos, so none of those upgrades are available to the client, which is why I said what I did in my prior post.

Should he have an old one sitting around, it may be an idea to take it out and max its specs, as that would give the needed interfaces.
 
A simple solution that your client does not want to hear:
1. Even though I use it, dump Avid MC/Symphony as the NLE and use Premiere.
2. Hack the line in Premiere's supported-GPU list so it uses those GPUs if they're not listed.
3. With whatever RAID speed you have now, you can test; it is probably enough.
4. With the Mercury engine and CUDA cores, ZERO overpriced R3D cards are needed.
 
Since it is an "unrestricted budget" thought experiment - do you want the MP6,1 or a system with PCIe slots?

I do not have ANY problem creating a RAID under Linux or Windows that can move data at more than 2600 MB/s.

The issue here is a solution for the nMP. I guess "where there's a will, there's a way".

Last Friday I was able to connect TWO T12-S6.TB2 units from DATOptic and create a RAID 50. Here is what I get:

[benchmark screenshot]


Will post more detail later
 
Will post more detail later

Please do, and post the I/O parameters, and whether that's a hardware RAID 50 or two separate hardware RAID 5 LUNs striped into a software RAID 50 LUN.

And, do explain how you have confidence that 2.628 GB/s will comfortably meet the requirement of 2.6 GB/s. It seems to offer no margin of error.
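For reference, the margin arithmetic (a sketch; the 2628 MB/s figure is the benchmark result referenced above):

```python
# Margin between the benchmarked rate and the stated requirement.
measured = 2628   # MB/s, from the screenshot above
required = 2600   # MB/s

margin = (measured - required) / required
print(f"Headroom: {margin:.1%}")   # ~1.1% -- essentially no margin of error
```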
 