is there a limitation to how many pci-lanes you can have on a similar motherboard to what the current mac pro has? Couldn't apple just add 10-20 lanes (pcie 3.0 of course) and use these extra lanes for all expansions?
The number of real PCI-e lanes is set by the core chipset and/or the CPU package. For Sandy Bridge and Ivy Bridge the number of PCI-e v3.0 lanes is set solely by the CPU package.
System vendors can make it look like there are more by adding switches. That is usually how the PC vendor boards with oodles of slots do it. The slots share lanes, so if you put two high-bandwidth cards behind the same switch they actually get 1/2 or 1/3 or some other fraction of what it looks like they get.
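To put rough numbers on that, here is a quick Python sketch of slots sharing one uplink behind a switch. The lane counts and the ~1 GB/s-per-lane figure for PCI-e v3.0 are illustrative, not taken from any specific board:

# Rough sketch: effective bandwidth per slot when several "x16" slots
# hang off one PCI-e switch that has a single x16 uplink to the CPU.
# Numbers are illustrative, not from any specific motherboard.

PCIE3_GB_PER_LANE = 0.985  # ~1 GB/s per PCI-e v3.0 lane

def effective_slot_bandwidth(uplink_lanes, active_slots):
    """Bandwidth each card really gets when all slots are busy at once."""
    uplink_gb = uplink_lanes * PCIE3_GB_PER_LANE
    return uplink_gb / active_slots

# One busy card behind the switch: it gets the full x16 uplink.
print(effective_slot_bandwidth(16, 1))   # ~15.8 GB/s
# Two high-bandwidth cards sharing the same uplink: each gets half.
print(effective_slot_bandwidth(16, 2))   # ~7.9 GB/s
# Three busy cards: a third each, no matter what the slots are labeled.
print(effective_slot_bandwidth(16, 3))   # ~5.3 GB/s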
2 TB-ports on the graphics cards (for displays etc)
TB ports on discrete PCI-e cards don't make much sense.
a. The TB controller's need for x4 PCI-e lanes competes with the GPU's need for bandwidth. Putting the two on the same card just means they directly compete with each other.
b. The TB controller is in part a PCI-e switch. Dual-GPU cards have a PCI-e switch so that work gets sent to the appropriate GPU. You could imitate that by swapping a TB controller in for one of the GPUs, but there are two problems. On a dual-GPU card only half the work is sent to the "other" GPU (which includes splitting the PCI-e bandwidth); here the workload is not being split. Second, you have also introduced a switch immediately behind a switch. That should be a clue that something is amiss with the design.
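A quick arithmetic sketch of the difference, using assumed numbers (a ~16 GB/s x16 v3.0 slot feeding the card, and a TB controller fed by the equivalent of x4 v2.0):

# Why a TB controller on a GPU card differs from a dual-GPU card.
# Assumed: a x16 PCI-e v3.0 slot (~16 GB/s) feeding the whole card.

SLOT_GB = 16.0

# Dual-GPU card: the switch splits the slot, but the workload is also
# split, so each GPU's share of the work matches its share of the link.
per_gpu_bandwidth = SLOT_GB / 2      # ~8 GB/s each
per_gpu_workload  = 0.5              # roughly half the work each

# GPU + TB controller on one card: the TB controller still pulls its
# x4-v2.0-worth (~2 GB/s) out of the same slot, but the GPU's workload
# has not shrunk at all -- it just lost bandwidth to a neighbor.
tb_share = 2.0
gpu_bandwidth_left = SLOT_GB - tb_share   # ~14 GB/s for a full workload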
4 extra ports for expansion replacing all ports (but one of the TB ports is used internally for 3-5 usb-ports on the chassis itself, don't want to be forced to use converters to get usb).
4 more external TB ports??? That will take two more TB controllers and another x8 of PCI-e lanes. At this point you have ripped another slot's worth of lane bandwidth out of a box that only had 4 PCI-e slots in it in the first place. With the new core chipset you may be able to snag x4 PCI-e v2.0 lanes from it (if any are left after Ethernet, FW, audio, etc.), but two controllers would still be impacting slot availability and bandwidth.
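A lane-budget sketch of where that x8 would have to come from. The figures are assumptions for illustration (a 40-lane Sandy Bridge-E/EP class CPU, four slots, x4 per Thunderbolt controller):

# Lane-budget sketch for the hypothetical "4 more TB ports" box.
# Assumed: 40 PCI-e v3.0 lanes from the CPU, four slots (x16/x16/x4/x4),
# and x4 lanes consumed per Thunderbolt controller.

cpu_lanes = 40
slots = [16, 16, 4, 4]
tb_controllers = 2              # two more controllers for four more ports
lanes_per_tb = 4

lanes_for_tb = tb_controllers * lanes_per_tb      # 8 lanes
lanes_left_for_slots = cpu_lanes - lanes_for_tb   # 32 lanes

print(sum(slots), "lanes wanted by the slots,", lanes_left_for_slots, "left")
# 40 lanes wanted by the slots, 32 left -> roughly a slot's worth of
# bandwidth has to come out of the existing slots, or out of the
# chipset's slower v2.0 lanes (if anything is left after Ethernet,
# FW, audio, etc.).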
A TB controller used internally? Loopy. PCI-e lanes are already used internally; that is cheaper and less complicated. Routing TB signals inside the box to another TB controller is gratuitous re-encoding of the data. It just adds complexity, cost, and latency with no performance improvement.
This way you could buy a "TB-hub" if you want firewire, extra usb, extra ethernet etc.
When a Mac Pro needs a docking station.... that again should be a clue that something is amiss with the design.
This way all communications inside the mac pro could be through the PCI interface, beginning the transformation to a truly modular computer
All communication between major components inside the Mac Pro already goes over the PCI-e interface. TB adds nothing new to the situation.
TB is a somewhat dubious interconnect for a modular computer. It is too slow for anything that requires more than 4x PCI-e v2.0 worth of bandwidth.
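Some rough peak numbers to show the gap (illustrative figures for first-generation TB and PCI-e, not benchmarks):

# Why TB is a dubious backbone for a modular tower: quick bandwidth compare.
# Illustrative peak figures only.

tb_per_channel_gbps = 10       # Thunderbolt (v1) per channel, Gb/s
pcie2_x4_gbs        = 2.0      # x4 PCI-e v2.0, GB/s -- what feeds a TB controller
pcie3_x16_gbs       = 15.75    # x16 PCI-e v3.0 slot, GB/s -- what a GPU expects

print(tb_per_channel_gbps / 8)        # ~1.25 GB/s per TB channel
print(pcie3_x16_gbs / pcie2_x4_gbs)   # a x16 v3.0 slot is ~8x a TB controller's feed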
(a bit OT: imagine a "cpu and memory" controller box with lots of PCI-lanes to which you add gpu-TB-boxes and TB-storage boxes.. no more SATA, no more Firewire, no more usb-drives.. yummy)
How many gallons of TB marketing kool-aid did you drink???
SATA is not going away. eSATA may drop substantially over time, but TB is not going to displace SATA. Nor USB.
TB's modular traction is going to come largely from systems that are heavily space constrained (e.g., the ever thinner laptops) and have historically lacked PCI-e slot-like flexibility. For physical system volumes the size of the Mac Pro, that doesn't make much sense. You could shrink the Mac Pro so that it was space constrained and then have to add the space back with modules, but that is an awfully circular rationale: "shrink it so you can grow it". Stripping away the PCI-e slots only to add them back as TB ports isn't likely to be as effective in bandwidth or cost.