all external enclosures that use thunderbolt bridgeboards produce a
significant amount of latency (just like FW and USB bridgeboards, perhaps just to a lesser extent).
Nope. Nothing there is a demonstration of Thunderbolt infrastructure latency at all. It's a demonstration of different implementations, but not of latency.
The LaCie device is being attached through another box with its own port-multiplier overhead. First, none of that is necessary; Thunderbolt doesn't demand it. So it can't possibly be a "tax" or onerous latency layered on by TB. Thunderbolt isn't going to make any possible choice of equipment work faster.
If we're then relegated to stuff all our drives into a box and run it through a TBird controller, we're getting worse performance and ultimately a larger footprint on our desk at a much higher cost than having internal drives hooked directly to the on-board Sata Controller (thanks Apple!).
If you buy the wrong product or there isn't a "right" product in the TB market that isn't Apple's fault.
There is a
$300 consumer ASUS motherboard with TEN Sata ports,
There are zero elements of Thunderbolt that prohibit a peripheral vendor from doing exactly the same thing inside their external box. Nothing.
For better or worse, no one has built a JBOD Thunderbolt box yet. I'm not sure why it is taking so long. I'd guess it's because vendors are aiming for the higher end of the market, where folks want to export a substantially more expensive SATA controller (e.g., a RAID controller) out to the external box, as opposed to a "motherboard-class" controller like these offerings.
It will cost a bit more, because the six "free" ports from the chipset would have to be replicated by a discrete controller, but that isn't going to incur any latency problems. Perhaps bandwidth problems, if you stack enough SSDs on the resulting SATA network, but not latency.
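The bandwidth-versus-latency distinction is easy to sanity-check with round numbers. This is a hypothetical back-of-envelope sketch, assuming first-gen Thunderbolt carries roughly 10 Gb/s of PCIe data per channel and a SATA 6 Gb/s SSD sustains around 550 MB/s; neither figure comes from a spec sheet here.

```python
# Assumed round numbers, not vendor specs:
TB_CHANNEL_GBPS = 10   # usable PCIe bandwidth per Thunderbolt channel (Gb/s)
SSD_MBPS = 550         # sustained throughput of one SATA 6 Gb/s SSD (MB/s)

def drives_to_saturate(channel_gbps=TB_CHANNEL_GBPS, drive_mbps=SSD_MBPS):
    """How many SSDs running flat-out would fill one Thunderbolt channel?"""
    channel_mbps = channel_gbps * 1000 / 8   # convert Gb/s to MB/s (decimal)
    return channel_mbps / drive_mbps

print(f"~{drives_to_saturate():.1f} SSDs saturate one channel")
```

Under those assumptions, two or three fast SSDs already fill the channel, so a well-populated JBOD box would hit a bandwidth ceiling long before latency becomes noticeable; spinning disks, at a fraction of that throughput each, would take far longer to get there.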
We are now at a year after this article
http://www.anandtech.com/show/5956/qnaps-jtb400-a-byod-4bay-thunderbolt-enclosure
and still no JBOD box. Perhaps folks are waiting for TB pricing to settle down and are chasing higher-priced, lower-volume products first during this initial phase of TB adoption. Eventually they will show up.
all of which are 6gbps hot-swappable
No hot-swappable TB peripherals? Don't think so. Again, this is a matter of vendor implementation, not Thunderbolt.
and can be converted to ESATA with a simple adapter,
Rather dubious when placed inside an external TB peripheral, since you're already outside the primary host box. Also, it is only going to lead to exactly the overhead shown in the benchmarks you used to pooh-pooh TB above.
subtracting no performance at almost no additional cost.
The "almost no additional cost" only applies to the chipset ports, which you have to buy anyway; the chipset has to be bought with the CPU. That has jack squat to do with Thunderbolt latencies, speed, or suitability for transporting SATA data over PCI-e.
If this were a Z77 chipset, it would be stuck with just six SATA ports. It is only ten because the chipset moved up. If it had to deliver ten 6 Gb/s ports on the previous generation, you would have paid substantially more (either in a tradeoff in board features/space or a discrete controller).
All of that is relatively independent of Thunderbolt speeds, costs, and latencies.
This is what professionals want.
As a whole group? Not really. Throwing double-digit numbers of drives into a box that isn't primarily a storage node is questionable. As the drive count goes up, the likelihood of a failure goes up. That hot swap isn't useful unless you can get to the drive, and making >10 drives all easily accessible for hot swap soaks up a lot of room. Like these systems:
http://www.supermicro.com/products/nfo/Xeon_X9_E5.cfm?pg=SS&show=SELECT&type=SSG
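The "more drives, more failures" point is just compounding probability. Here's a sketch assuming a hypothetical 4% annualized failure rate (AFR) per drive and independent failures; the AFR figure is illustrative, not measured.

```python
# Assumed, illustrative per-drive annualized failure rate.
AFR = 0.04

def p_any_failure(n_drives, afr=AFR):
    """Probability that at least one of n independent drives fails in a year."""
    return 1 - (1 - afr) ** n_drives

for n in (1, 4, 10, 24):
    print(f"{n:>2} drives: {p_any_failure(n):.1%} chance of at least one failure per year")
```

Under that assumption, a single drive has a 4% chance of failing in a year, but a ten-drive box is already at roughly a one-in-three chance, which is why serious storage deployments plan around easy drive access and replacement.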
Probably enough to exclude multiple high-TDP cards from a reasonably sized box, and the focus with interactive workstations is going to be weighted more heavily toward the high-TDP cards.
Large-scale storage pros break storage out into separate units specialized just for storage. This whole "everything and the kitchen sink has to fit inside one box" attitude isn't "pro". "If all you have is a hammer, everything looks like a nail" isn't being a professional.
The tweaker/builder/'experiment in the basement' crowd... yes.
Spec porn chasers ... yes to those folks too. ( my box has more xxxx than yours.... it is more powerful. ).
You can't get better performance than that. Oh, by the way, it has 2 thunderbolts too.... For $2500, why the heck not?
Why not? It is capped at 4 x86 cores.
It has substantially less I/O bandwidth than a Xeon E5 solution would have (capped at 16 PCI-e v3.0 lanes versus the 40+ an E5 system would have). All that "mega" SATA capacity is going to get bottlenecked if you actually try to walk and chew gum at the same time. There is a decent layer of PCI-e switches in there to "fake the funk" and hook up all those high-bandwidth PCI-e v2.0 controllers.
Throw one or two PCI-e SSD cards at it, hook up a couple of SuperSpeed USB 3.0 devices and a 10GbE Thunderbolt device, fire up those ten drives, and watch it wheeze. Juggle one or two of those? Sure. Do all of them at the same time? No.
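A rough tally shows where the wheeze comes from. This sketch assumes the chipset-attached SATA and USB traffic funnels through a single DMI 2.0-class uplink of roughly 2 GB/s usable, with ballpark per-device throughputs; all figures are assumptions for illustration, not measurements of any particular board.

```python
# Assumed usable bandwidth of the chipset's uplink to the CPU (GB/s).
DMI2_GBPS = 2.0

# Ballpark concurrent demand from chipset-attached devices (GB/s), assumed:
chipset_loads_gbps = {
    "two USB 3.0 devices": 2 * 0.4,     # ~400 MB/s each
    "ten SATA hard drives": 10 * 0.15,  # ~150 MB/s each
}

demand = sum(chipset_loads_gbps.values())
print(f"chipset demand: {demand:.2f} GB/s vs ~{DMI2_GBPS:.0f} GB/s uplink")
```

Even with those modest numbers, the chipset-side demand alone overruns the uplink before the PCI-e SSDs and the Thunderbolt-attached 10GbE device add their own traffic on the CPU's 16 lanes, which is exactly the walk-and-chew-gum problem.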