Thanks for the reply. Good to know. So JBOD just presents individual disks?
Technically speaking it can, but the acronym JBOD usually refers to concatenation, in which multiple disks are strung together end-to-end and appear as a single volume.

For example, if you have 4x disks (1TB, 1.5TB, 2TB, 2TB) assembled as a concatenated set, they would appear to the computer as a single 6.5TB volume.
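A quick sketch of that arithmetic (illustrative only, not tied to any real RAID tool): a concatenated/spanned set's capacity is simply the sum of its members, whereas a striped set is limited by its smallest member.

```python
# Capacity of a concatenated (spanned) set: just the sum of the member disks.
def concat_capacity_tb(disks_tb):
    """Total capacity of a concatenated/spanned set, in TB."""
    return sum(disks_tb)

# For contrast, a RAID 0 stripe set can only use the smallest disk's
# capacity on every member.
def stripe_capacity_tb(disks_tb):
    """Total capacity of a RAID 0 stripe set, in TB."""
    return min(disks_tb) * len(disks_tb)

disks = [1.0, 1.5, 2.0, 2.0]       # the four disks from the example above
print(concat_capacity_tb(disks))   # 6.5 (TB)
print(stripe_capacity_tb(disks))   # 4.0 (TB): 4 x 1TB, the rest is wasted
```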
 

Thanks. I'll soon find out, but I imagine one just creates a JBOD volume with each installed disk if the enclosure doesn't have a nominal single-drive mode. In that case, I'm hoping that JBOD formatting won't preclude swapping a disk out of the JBOD volume and using it independently of the enclosure. Not that important, but the fewer dependencies one operates with, the better, IMHO.
 
It's created in Disk Utility (the included card is just an eSATA card, not an actual RAID card), as the set is software controlled in this case.

And once a disk is set up in a JBOD set, it would have to be reformatted in order to be used as an independent disk again.
 

RAID0 is concatenation.

JBOD is just disks (Just a Bunch Of Disks); no RAID. It means presenting each disk individually.

----------

The CalDigit FASTA-6GU3 looks like a great option. It's not cheap, but I'm sure it's worth it. I'm considering one... :)

Not cheap? It's $140... on the OSX side of the house, that's an awesome price... usually everything is pro-gear-level, flash/battery-backed RAID at $800 and up.

----------


A concatenated volume like that is actually a RAID0 volume, not JBOD... JBOD means single disks in a case, not concatenated. You can software-RAID0 a JBOD set of disks.

JBOD is generally used in the context of a case for holding disks. When a case is advertised as "JBOD", it means it has no built-in hardware RAID controller (RAID0 = concat, RAID1 = mirror, RAID10 = 1+0, RAID5 = parity).

When a RAID-capable case is advertised as supporting JBOD, it just means that, from a HARDWARE perspective (regardless of the computer it's connected to), the drives in it can either be treated as all separate disks (JBOD) or as a RAID volume (usually RAID0 and/or RAID1).
 
I contacted Caldigit about that card a few months ago. This is what they said in response to my questions about performance:

The FASTA-6GU3's maximum performance is around 250MB/s (regardless of USB 3.0 or eSATA, RAID 0 or single drive).
- Our FASTA-6GU3 has two controller chips (Marvell for eSATA, NEC for USB 3.0), plus a PLX chipset that serves as the 'middle man' (there's no single controller that can deliver both USB 3.0 and SATA 6G yet). While the Marvell and NEC can reach higher performance, the PLX is the limiting factor.
- For example: if you have two 3G or 6G eSATA SSDs, the RAID 0 performance is about the same as one of those SSDs alone (around 250MB/s). However, if you have two standard 3.5" SATA drives (a WD MyBook does about 110MB/s), then a RAID 0 combining the two will deliver its full performance with the FASTA-6GU3.
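CalDigit's bottleneck arithmetic can be sketched like this (a toy model using the numbers quoted above; the 250MB/s bridge ceiling and 110MB/s MyBook figure are theirs, not a measurement):

```python
# Rough model of why the PLX bridge caps this card's throughput
# regardless of interface or RAID level.
BRIDGE_LIMIT_MBPS = 250  # PLX "middle man" ceiling cited by CalDigit

def raid0_throughput(drive_mbps, n_drives, bridge_limit=BRIDGE_LIMIT_MBPS):
    """Aggregate RAID 0 sequential speed, capped by the bridge chip."""
    return min(drive_mbps * n_drives, bridge_limit)

print(raid0_throughput(500, 2))  # two fast SSDs: capped at 250
print(raid0_throughput(110, 2))  # two MyBook-class HDDs: 220, under the cap
```

So the SSD pair gains nothing from striping on this card, while the slower spinning drives still realize their full combined speed.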

So it appears you're not going to realize the full benefits of USB 3.0 or SATA 6G!
 
Concatenation is stringing/chaining multiple items together end-to-end; in this particular case, storage devices. Another term used for this is spanning.

RAID 0, however, uses the technique of striping (aka a stripe set).
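The difference between the two layouts can be made concrete with a toy model (illustrative only) of how a logical block number maps onto physical disks under each technique:

```python
# Toy model of block placement: spanning fills one disk completely before
# moving to the next; striping rotates consecutive blocks across all disks.

def span_map(lba, disk_sizes):
    """Concatenation/spanning: fill disk 0 end-to-end, then disk 1, ..."""
    for disk, size in enumerate(disk_sizes):
        if lba < size:
            return (disk, lba)
        lba -= size
    raise IndexError("LBA past end of set")

def stripe_map(lba, num_disks):
    """RAID 0 striping: consecutive blocks alternate across the disks."""
    return (lba % num_disks, lba // num_disks)

# Four consecutive blocks on a two-disk set (100 blocks per disk):
print([span_map(b, [100, 100]) for b in range(4)])  # all land on disk 0
print([stripe_map(b, 2) for b in range(4)])         # alternate: 0, 1, 0, 1
```

That alternation is why striping can read from all members in parallel (hence the speed benefit), while a span only touches one disk at a time.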

From a technical POV, it can be claimed that JBOD refers only to single-disk operation (this is even what's written on Wikipedia), never concatenated/spanned.

But most of the time I've seen it used to include concatenation in its meaning, and for good reason, which I'll explain shortly. So I just accept this as its meaning, and make sure there's clarification to avoid any confusion.

Now, the reason I say there's validity in JBOD including concatenation/spanning is that when a user sets a true RAID card to RAID mode (a hardware implementation), it can operate at whatever RAID levels it supports, or in single-disk operation (simultaneously as well), but with no concatenation at all in this mode.

Set it to JBOD, however, and it can only do single-disk or concatenated operation. No RAID levels whatsoever in this mode.

So I see concatenation/spanning as the more valid meaning of JBOD, due to its implementation in real-world hardware RAID products, and I suspect this is where the differing interpretations of its meaning originate.
 
True, folks often do span/concatenate volumes with LVMs, as OSX does, but the term JBOD for that still isn't quite accurate... that's a SPAN or concatenated volume -on- a JBOD... the same way you can software RAID0/1/5 (a la Solaris Disk Suite, for example) a set of JBOD drives without a physical hardware RAID controller. If you work with a storage architect, they'll be apt to correct you on it. :) (I'm an enterprise infrastructure architect by trade)

JBOD, meaning "Just a Bunch Of Drives", is used to refer to one distinct concept:
all disks are independently addressed, with no collective properties. Each physical disk, with all the logical partitions each may contain, is mapped to a different logical volume: just a bunch of disks.
The concept of concatenation, where all the physical disks are concatenated and presented as a single disk, is NOT a JBOD, but is properly called BIG or SPAN. The usage of "JBOD" and BIG or SPAN are frequently confused. This can create confusion and frustration, given the significantly different logical arrangement of the various types. Concatenation, referred to by such unambiguous terms as SPAN or BIG, requires software to bond/append drives together, where JBOD does not.
http://en.wikipedia.org/wiki/Non-RAID_drive_architectures#JBOD
 
Actually, I am a storage architect (I began as a hardware engineer and moved into storage architecture as a side business, then full time after being laid off), so I definitely understand what you're saying.

To me, however, the definition splits between hardware and software implementations, and JBOD's technical definition gets confusing as a result of what's allowable (XOR-based microcontrollers cannot span when set to RAID mode, hence the causality between the differing hardware and software implementations; CPUs are constructed from NAND gates and are far more complex in how they can be programmed).

I'm no fan of traditional parity-based arrays via a software implementation, due to the write-hole issue not having a solution there. Instead, I gravitate towards ZFS in such instances, as its architecture was intentionally designed to eliminate the write hole. Unfortunately, this isn't officially possible under OSX, unlike Linux or OpenSolaris, due to licensing issues between Apple and Oracle.

So when OSX users ask about parity-based arrays, the only real way to go is a hardware implementation; hence the definition I use (contextually correct, if not from a textbook POV).
 
Ah, nice to meet another guy in the infrastructure architecture biz. :) It would be fantastic if we could run ZFS with real support; sadly, I don't see that happening. lol.
 