Though it's there, it's possible to leave it unconnected (a data-only configuration), connect DP to only a single TB chip (and leave the remaining TB chips' DP inputs unconnected), or share the same DP signal between all the TB chips (a switched DP signal).
There is nothing to indicate that any of those will pass Intel's TB certification tests.
There have been several gimmicky TB "solutions" that showed up as demos at various trade shows and could be de-configured by the user to be "data only". Not a single one of those has passed certification. None. Zip. Nada.
For peripherals, yes; no need to hook it up at all. But to guarantee that there is at least one video source on any TB network, one class of device has to have additional constraints assigned. Personal computers all have GPUs, so they are the logical choice for that constraint. A GPU is mandated to be present anyway, so it is generally not an additional burden.
The last option, of course, allows a video signal all the way around the chain to prevent confusion over what TB ports can and can't do. But in a desktop setting, the primary use would be data transfer with shared peripherals rather than handling a video signal, IMHO.
The desktop is not the primary target of Thunderbolt. For desktops with lots of internal expansion bays it is a solution in search of a problem. That just means it is going to be an oddball implementation fit, not that the certification rules can be changed.
Unless that signal were mirrored, I doubt it would pass the specifics of what certification looks for. A GPU PCI-e card would be funny if the ports didn't all work without user twiddling; it is an obvious stunt. Replicating the signal for mirroring is a bit weird too.
With two TB ports users can hang 12 devices off the back of a machine. That is more devices than most folks are going to pay for, since TB peripherals cost more than average.
The gimmick here of soaking up more internal PCI-e lanes is primarily just that: a gimmick. If that much more bandwidth is needed, an x8 or x16 card with already-standard connectors more than likely solves the problem far more elegantly than a gimmicky addition of DisplayPort switches/replicators.
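The back-of-the-envelope arithmetic makes the point. A rough sketch (the 5 GT/s line rate and 8b/10b encoding figures are the published PCIe 2.0 numbers; the 10 Gbps per-channel Thunderbolt figure is the commonly quoted one):

```python
# Usable bandwidth per PCIe 2.0 lane: 5 GT/s with 8b/10b encoding
# leaves 80% of the line rate for data, i.e. 4.0 Gb/s per lane.
PCIE2_LANE_GBPS = 5.0 * 8 / 10

TB_CHANNEL_GBPS = 10.0  # one Thunderbolt channel (commonly quoted figure)

def slot_gbps(lanes):
    """Aggregate usable bandwidth of a PCIe 2.0 link of the given width."""
    return lanes * PCIE2_LANE_GBPS

print(f"TB controller on x4 v2 link: {slot_gbps(4):.0f} Gb/s")   # 16 Gb/s
print(f"x8  PCIe 2.0 slot:           {slot_gbps(8):.0f} Gb/s")   # 32 Gb/s
print(f"x16 PCIe 2.0 slot:           {slot_gbps(16):.0f} Gb/s")  # 64 Gb/s
```

So an ordinary x8 slot already carries double what the TB controller's x4 uplink can feed it, without any DisplayPort plumbing at all.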
It's just not suited for the type of storage use he needs at this time (proper RAID configuration on enterprise grade media).
There is really nothing about Thunderbolt that inhibits what is done on the other side of the SATA/RAID controller from the PCI-e (and Thunderbolt) side. Besides some driver support for hot plugging, those are decoupled.
Pragmatically, in the short term the implementation R&D costs are high enough that more has to go into the TB controller part of the external box than into the other side of the controller. But as the technology matures, once the costs drop and the software is already in place, vendors could use those savings to make the external box do more, including "real RAID" or different parts.
Thunderbolt could handle a MAID interconnect quite well. The back end for a 100TB data warehouse? No. But it was never designed to be that.
Given how new TB is, I don't see this happening that quickly; vendors will want to maximize profits first.
Well, it is actually more about the availability of PCI-e v3. The standard canonical designs for TB implementations use 4 of the 8 PCI-e v2 lanes of the IOHUB chipset. To move these standard designs forward, the chipset would have to move to v3 lanes. That pragmatically causes choking problems in the bandwidth to the CPU/GPU/memory controller package. And like I said, the rest of the stuff the IOHUB chipset is typically hooked to is stuck at v2 also.
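To put numbers on why v3 lanes would matter for that x4 uplink (a sketch; the line rates and encoding overheads below are the standard published PCIe figures, and the x4 width is the canonical TB design mentioned above):

```python
# Usable bandwidth per lane after line-encoding overhead.
# PCIe 2.0: 5 GT/s with 8b/10b encoding (80% efficient).
# PCIe 3.0: 8 GT/s with 128b/130b encoding (~98.5% efficient).
v2_lane_gbps = 5.0 * 8 / 10      # 4.0 Gb/s per lane
v3_lane_gbps = 8.0 * 128 / 130   # ~7.88 Gb/s per lane

LANES = 4  # canonical TB design: x4 uplink off the IOHUB chipset

print(f"x4 PCIe 2.0 uplink: {LANES * v2_lane_gbps:.1f} Gb/s")  # 16.0 Gb/s
print(f"x4 PCIe 3.0 uplink: {LANES * v3_lane_gbps:.1f} Gb/s")  # ~31.5 Gb/s
```

Roughly double the uplink bandwidth from the same four lanes, which is exactly why the standard designs are stuck until the chipset side grows v3 lanes.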
(Well, maybe ultra-deluxe-super-duper-speed USB 3.0++, but I think they are jockeying to kick the TB controller off its x4 v2 connection as the preferred "data only" 10Gbps solution that doesn't have video entanglements.)