With PCIe v3.0 you can, because the above amounts to 2 x8 slots plus 1-2 more x8 slots. In the latter case, if it is one storage card, then a v3.0 16-8-8-4(-4) setup fits exactly. It is not lanes but bandwidth that people need. Even if the E5 did slip back to v2.0 at deployment, four slots with no embedded switches would still be an improvement.
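A rough sketch of the per-lane arithmetic behind "it is not lanes but bandwidth" (nominal figures from the PCIe specs; real-world throughput is lower):

```python
# Per-lane bandwidth for PCIe generations (nominal, per direction).
# v2.0: 5 GT/s with 8b/10b line encoding; v3.0: 8 GT/s with 128b/130b.

def lane_gbytes(gt_per_s, payload_bits, total_bits):
    """Usable GB/s per lane: raw transfer rate scaled by encoding efficiency, 8 bits/byte."""
    return gt_per_s * payload_bits / total_bits / 8

v2 = lane_gbytes(5, 8, 10)      # 0.5 GB/s per lane
v3 = lane_gbytes(8, 128, 130)   # ~0.985 GB/s per lane

# A v3.0 x8 slot delivers roughly what a v2.0 x16 slot does:
print(f"v2.0 x16: {16 * v2:.1f} GB/s, v3.0 x8: {8 * v3:.1f} GB/s")
# -> v2.0 x16: 8.0 GB/s, v3.0 x8: 7.9 GB/s
```

Which is why a v3.0 16-8-8-4(-4) carving can cover cards that would demand x16 electrical on v2.0.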
According to the Tom's Hardware article, Intel is not going to ship PCIe 3.0 in the early chip releases. There is no explanation why, and it is not official either. Maybe they will get 3.0 working in the next couple of months.
In short, v3.0 reduces the need for x16 physical slots.
Disagree completely. AMD is counting on v3.0 to reduce the immediate need for
x32 slots, which they would otherwise need to support the next-gen GPUs. See, e.g.,
http://www.legitreviews.com/news/11105/
Furthermore, there aren't any IB cards now at any speed, so I seriously doubt 40Gb ones will show up for the new Mac Pro during its one-year tour as "top of the line". I suspect it will be a while before any 40GbE cards show up either, since they have to hook to a switch that few will be willing to pay for.
I'm not a big fan of IB, but, lots of people are using it instead of FibreChannel. Just because Apple doesn't support it now doesn't mean they won't need to in the future.
Bottom line is, with PCIe 2.x, 40 lanes is constraining. I agree that with PCIe 3.0, the pressure to exceed 40 lanes would be significantly diminished.
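To put numbers on that bottom line, here is the 40-lane budget sketched with nominal per-lane rates, assuming the 16-8-8-4(-4) slot carving mentioned earlier:

```python
# Sandy Bridge-E class CPUs expose 40 PCIe lanes from the socket.
# The 16-8-8-4(-4) layout carves that budget into physical slots
# with no embedded switches.
slots = [16, 8, 8, 4, 4]
assert sum(slots) == 40  # uses the full lane budget exactly

PER_LANE_GB = {"2.0": 0.5, "3.0": 0.985}  # nominal GB/s per lane, per direction
for gen, per_lane in PER_LANE_GB.items():
    print(f"PCIe {gen}: {40 * per_lane:.1f} GB/s aggregate across the 40 lanes")
# -> PCIe 2.0: 20.0 GB/s aggregate across the 40 lanes
# -> PCIe 3.0: 39.4 GB/s aggregate across the 40 lanes
```

Same lane count, roughly double the aggregate bandwidth, which is why v3.0 takes most of the pressure off exceeding 40 lanes.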
The task more likely to heavily drive an x16 v3.0 set-up is something like a high-end GPGPU card that constantly shuttles new data and new results back and forth at lower latency. Again, none of those are currently supported and widely deployed.
But, could be in the next two years.
Chuckle... if you paid $10,000+ to access some disk over IB, I seriously doubt you are going to screw around with a large number of local SATA drives. Pick one or the other. Doing both gets into the "blowing money just because you can" range in the workstation context for 99% of workloads.
One user wants a standalone local RAID system, another set of users is using an IB or xGbE SAN in a studio environment, perhaps with a special node or two with local scratch disk built out of SSD. It would be nice to have the flexibility to build all three types of systems.
Now, you may disagree, but it seems clear that SATA3, PCIe3, and USB3 are all just around the corner... probably with Ivy Bridge. To buy an SB-E system that's largely based on SATA2, PCIe2, and USB2 right now seems foolish to me.
If you are waiting anyway, then, I agree about SATA3 and PCIe3, but, some folks seem to be betting that TB will be a better option than USB3, since you will have a lot of the flexibility of FireWire and higher speed than USB3.
But the Mac Pro already offers alternative solutions, some of which are even faster than TB. You could, for example, buy a 6Gb/s eSATA card and get 6Gb/s per port. eSATA enclosures go for pennies compared to TB ones.
For some people, TB might be a big deal but with current product offerings and pricing, it really seems like a fancy version of mDP.
I don't see this as either/or. I think of TB as the FW replacement. It might be either/or TB/USB3. Not clear that USB3 can replace FW.
USB 3.0 is "just around the corner"? You should lay off the drugs. USB 3.0 is here. I can walk into common office supply stores and find USB 3.0 devices. Go search for "USB 3.0" at
www.walmart.com. The grossly artificial distinction you are making is whether it is in the core chipset or implemented in a discrete component. That has very
little to do with whether it has significantly arrived in deployed systems and peripherals. The 3rd-generation USB 3.0 controllers are coming soon.
USB 3.0 appears to have the shortcomings of USB 2.0, only faster. You might get throughput similar to FW800, and the latency and jitter will still be there.