OK, I read it, but I'm still confused. That's not Falcon Ridge (Mac Pro); the article is talking about Cactus Ridge (laptop).
http://en.wikipedia.org/wiki/Thunderbolt_(interface)#Controllers
Here's some additional reading on the difference between Cactus and Falcon...
http://www.anandtech.com/show/7049/intel-thunderbolt-2-everything-you-need-to-know
(see the 2nd to last paragraph)...
"Thunderbolt 2/Falcon Ridge still feed off of the same x4 PCIe 2.0 interface as the previous generation designs. Backwards compatibility is also maintained with existing Thunderbolt devices since the underlying architecture doesn't really change."
Here's my attempt to explain/understand it...
The only thing Falcon Ridge does over Cactus Ridge is aggregate the two 10Gbps TB channels on each connector/cable into a single 20Gbps channel so that it can pass 4K DisplayPort signals.
In more detail... With TB1, one 10Gbps channel is reserved for PCIe and the other for DisplayPort. The problem is that a single 10Gbps channel isn't enough bandwidth for a 4K display signal (which is roughly 16Gbps). So in order to support 4K displays, Intel needed to combine the two 10Gbps channels from TB1 into a single 20Gbps channel, and they called that TB2. Instead of PCIe x4 and DisplayPort each having their own 10Gbps channel, they are now muxed together onto a single 20Gbps channel.
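Just to make those numbers concrete, here's a quick back-of-the-envelope in Python. The 4K@60Hz / 24-bit figures and the ~16Gbps-with-overhead number are my assumptions for illustration, not something from the article:

```python
# Rough back-of-the-envelope for why one 10Gbps TB1 channel can't carry 4K.
# Assumes 3840x2160 @ 60Hz, 24-bit color; real DisplayPort adds blanking and
# link overhead on top of this, which is where the ~16Gbps figure comes from.
width, height, refresh, bpp = 3840, 2160, 60, 24

pixel_gbps = width * height * refresh * bpp / 1e9   # ~11.9 Gbps of raw pixel data
dp_stream_gbps = 16.0                                # approx. once blanking/overhead is added

tb1_channel = 10.0   # one TB1 channel (PCIe OR DisplayPort)
tb2_channel = 20.0   # two TB1 channels bonded into one TB2 channel

print(f"4K pixel data:      {pixel_gbps:.1f} Gbps")
print(f"Fits in 10Gbps TB1? {dp_stream_gbps <= tb1_channel}")   # False
print(f"Fits in 20Gbps TB2? {dp_stream_gbps <= tb2_channel}")   # True
```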
Now, even though the controller can drive each connector at 20Gbps, it still appears to have only a single PCIe x4 uplink from the computer to feed both of those connectors. So I assume (I don't know this for sure) that it's switching that PCIe x4 link across both connectors. You could hook an x4 peripheral up to either connector and it would run at full speed, as long as both weren't trying to saturate the bus at the same time. If you hooked an x4 peripheral up to both connectors and both were saturating their buses, they would be fighting over the same x4 connection to the computer and would bottleneck each other. That's the only way I can see it making sense.
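If my guess about the shared x4 uplink is right, the contention would look something like this toy model (the ~2 GB/s figure is the usual rough number for usable PCIe 2.0 x4 bandwidth; nothing here is measured):

```python
# Toy model of the contention described above: two TB2 ports behind one
# controller that only has a single PCIe 2.0 x4 uplink (~2 GB/s usable).
uplink_gbytes = 2.0   # ~500 MB/s per PCIe 2.0 lane x 4 lanes

def per_device_throughput(demands):
    """Split the shared uplink proportionally among devices demanding bandwidth."""
    total = sum(demands)
    if total <= uplink_gbytes:
        return demands                                        # uplink not saturated
    return [d * uplink_gbytes / total for d in demands]       # everyone gets throttled

print(per_device_throughput([1.8]))        # one fast device alone: ~[1.8] GB/s
print(per_device_throughput([1.8, 1.8]))   # two at once: ~[1.0, 1.0] GB/s each
```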
EDIT: With this design, three TB2 controllers in the new Mac Pro would utilize a total of 12 PCIe 2.0 lanes (x4 for each controller). This makes sense from a PCIe lane budget perspective...
Lanes available: 40 PCIe 3.0 lanes on CPU, 8 PCIe 2.0 lanes on PCH
- GPU 1 = 16 lanes (3.0)
- GPU 2 = 16 lanes (3.0)
- TB Controller 1 = 4 lanes (3.0 or 2.0)
- TB Controller 2 = 4 lanes (2.0)
- TB Controller 3 = 4 lanes (2.0)
- PCIe SSD 1 = 2 lanes (3.0)
- PCIe SSD 2 (?) = 2 lanes (3.0)
That's 48 lanes, which is all the system has.
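Quick sanity check on that arithmetic (the device breakdown is my speculation, as noted above):

```python
# Sanity check on the lane budget above (speculative breakdown, not Apple's spec).
available = 40 + 8            # 40 PCIe 3.0 lanes on the CPU + 8 PCIe 2.0 lanes on the PCH

allocation = {
    "GPU 1": 16,
    "GPU 2": 16,
    "TB controller 1": 4,
    "TB controller 2": 4,
    "TB controller 3": 4,
    "PCIe SSD 1": 2,
    "PCIe SSD 2 (?)": 2,
}

used = sum(allocation.values())
print(f"{used} of {available} lanes used")   # 48 of 48 lanes used
```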