Though it's there, it's possible to leave it unconnected (a data-only configuration), connect DP to only a single TB chip (and leave the remaining TB chips' DP signals unconnected), or share the same DP signal between all TB chips (a switched DP signal).

There is nothing that indicates that would pass Intel's TB certification tests.

There have been several gimmicky TB "solutions" shown as demos at various trade shows that could be de-configured by the user to be "data only". Not a single one of those has passed certification. None. Zip. Nada.

For peripherals, yes; no need to hook it up at all. But to guarantee that there is at least one video source on any TB network, one class of device has to have additional constraints assigned. Personal computers all have GPUs, so they are the logical choice for that constraint. A GPU is mandated to be present anyway, so it is generally not an additional load.

The last option, of course, allows a video signal all the way around, preventing confusion over what TB ports can and can't do. But in a desktop setting, the primary use would be data transfer with shared peripherals rather than handling a video signal, IMHO.

The desktop is not the primary target of Thunderbolt. For desktops with lots of internal expansion bays it is a solution in search of a problem. That just means it is going to be an odd-ball implementation fit, not that the certification rules can be changed.

Unless that signal were mirrored, I doubt it would pass the specifics of what is being looked for. A PCIe GPU card would be funny if the ports didn't all work without user twiddling. It is an obvious stunt. Replicating the signal for mirroring is a bit weird too.

With two TB ports users can hang 12 devices off the back of a machine. That is more than most folks are going to pay for, since TB devices cost more than average.

The gimmick here of soaking up more internal PCIe lanes is primarily just that: a gimmick. If that much more bandwidth is needed, an x8 or x16 card and already-standard connectors more than likely solve the problem far more elegantly than the gimmicky addition of DisplayPort switches/replicators.
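To put rough numbers on that (my back-of-the-envelope figures using the usual rule-of-thumb per-lane rates, not anything out of a spec sheet; the TB controller's x4 v2 uplink is the standard canonical design, and that uplink is the ceiling no matter how many of the up-to-six devices per daisy chain hang off each port):

```python
# Back-of-the-envelope only. Assumed figures: ~500 MB/s usable per PCIe v2
# lane (after 8b/10b overhead), ~985 MB/s per PCIe v3 lane (after 128b/130b),
# and a first-gen TB controller sitting on an x4 PCIe v2 uplink.
PCIE_V2_LANE = 500   # MB/s, per direction
PCIE_V3_LANE = 985   # MB/s, per direction

tb_controller_uplink = 4 * PCIE_V2_LANE    # shared by everything on both ports
x8_v2_card = 8 * PCIE_V2_LANE
x16_v3_card = 16 * PCIE_V3_LANE

print(f"TB controller (x4 v2 uplink): ~{tb_controller_uplink / 1000:.1f} GB/s")
print(f"x8 PCIe v2 card:              ~{x8_v2_card / 1000:.1f} GB/s")
print(f"x16 PCIe v3 card:             ~{x16_v3_card / 1000:.1f} GB/s")
```

Even a plain x8 v2 card has roughly double the ceiling of one TB controller's uplink, and it doesn't drag a DisplayPort switching problem along with it.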

It's just not suited for the type of storage use he needs at this time (proper RAID configuration on enterprise grade media).

There is really nothing Thunderbolt does that inhibits what is done on the other side of the SATA/RAID controller from the PCIe (and Thunderbolt) side. Besides some driver support for hot plugging, those are decoupled.

Pragmatically, in the short term the implementation R&D costs are high enough that more has to go into the TB-controller part of the external box than into the other side of the controller. But as the technology matures, with the costs down and the software already in place, vendors could use those savings to make the external box do more, including "real RAID", or use different parts.

Thunderbolt could do a MAID interconnect quite well. The back end for a 100 TB data warehouse? No. But it never was designed to be that.

Given how new TB is, I don't see this happening that quickly; vendors will want to maximize profits first.

Well, it is actually more about the availability of PCIe v3. The standard, canonical designs for TB implementations use 4 of the 8 PCIe v2 lanes of the IOHUB chipset. To move these standard designs forward, the chipset would have to move to v3 lanes. That pragmatically causes choking problems in the bandwidth to the CPU/GPU/memory-controller package. And like I said, the rest of the stuff the IOHUB chipset is typically hooked to is stuck at v2 also.
(Well, maybe ultra-deluxe super-duper-speed USB 3.0++, but I think they are jockeying to kick the TB controller off its x4 v2 connection as the preferred "data only" 10 Gbps solution that doesn't have video entanglements. :) )
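A rough sketch of where the choke is (the uplink figure is my assumption of roughly an x4 v2 equivalent, about 2 GB/s; the device figures are rule-of-thumb peaks, not measurements):

```python
# Illustrative only: everything hanging off the IOHUB shares one uplink to
# the CPU package. The uplink is assumed to be roughly an x4 PCIe v2
# equivalent (~2 GB/s); downstream figures are rule-of-thumb peaks.
uplink_to_cpu = 2.0   # GB/s

downstream_peaks = {
    "TB controller (x4 PCIe v2)": 2.0,
    "USB 3.0 port, saturated":    0.5,
    "SATA 6Gb/s SSD":             0.55,
    "Gigabit Ethernet":           0.125,
}

total = sum(downstream_peaks.values())
print(f"Peak downstream demand ~{total:.2f} GB/s vs ~{uplink_to_cpu:.1f} GB/s uplink")
# Bumping the TB controller to x4 v3 (~4 GB/s) only widens that gap unless
# the uplink and the rest of the v2 devices move forward with it.
```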
 
There is nothing that indicates that would pass Intel's TB certification tests.
I wasn't focusing on the testing, but it should be possible to create a desktop solution without an incredible amount of effort and additional cost. It's just a bit more difficult for anything that doesn't either use a CPU with a GPU on the die or a separate GPU soldered to the main board (you'd need some sort of header to pass the signal from, say, a PCIe GPU card to either the main board <presumes the TB chip is soldered to this> or a separate TB card).

The desktop is not the primary target of Thunderbolt. For desktops with lots of internal expansion bays it is a solution in search of a problem. That just means it is going to be an odd-ball implementation fit, not that the certification rules can be changed.
I wasn't insinuating that the desktop was TB's primary market. It clearly isn't, but it does have a use for attaching peripherals users want to share with their portable systems.

One area where it would be useful would be shooting digital video in the field, then bringing that data back to the office for editing, for example. Just plug the storage peripheral that was used with the laptop into the desktop, and go.

Unless that signal were mirrored, I doubt it would pass the specifics of what is being looked for. A PCIe GPU card would be funny if the ports didn't all work without user twiddling. It is an obvious stunt. Replicating the signal for mirroring is a bit weird too.
I wasn't indicating this would be an ideal situation, but a realistic, albeit difficult, means of getting a DP signal on all TB ports (more than one TB chip in order to allow for parallelism in data throughput).

With two TB ports users can hang 12 devices off the back of a machine. That is more than most folks are going to pay for, since TB devices cost more than average.
My proposal wasn't about the number of devices, but rather about using the available bandwidth simultaneously (parallelism) to get close to what's already possible in a desktop via PCIe slots. Specifically, getting storage throughput up to what's already possible via PCIe slot-based RAID cards in a professional workstation.
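Roughly what I have in mind by parallelism (rule-of-thumb per-lane figures again, and assuming each TB controller keeps its own x4 PCIe v2 uplink; a sketch, not measured numbers):

```python
# Hypothetical sketch: how many independent TB controllers (each on its own
# x4 PCIe v2 uplink, ~2 GB/s) it would take to aggregate the throughput of
# one slot-based RAID card. Figures are rule-of-thumb, not measured.
import math

TB_CONTROLLER = 4 * 0.5      # GB/s, x4 PCIe v2 behind each controller
raid_card_x8_v3 = 8 * 0.985  # GB/s, one x8 PCIe v3 RAID card

needed = math.ceil(raid_card_x8_v3 / TB_CONTROLLER)
print(f"~{needed} TB controllers to match one x8 v3 RAID card")  # ~4
```

Even at rule-of-thumb numbers it takes several controllers ganged together to get there, which is exactly the parallelism I'm talking about.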

I'm in no way in favor of trying to displace PCIe slots in favor of TB. But I do believe that this is something Apple would strongly consider as a means of creating a new product that can cover their professional workstation users as well as pick up consumers who either don't want an existing product or for whom an existing product isn't viable (purely across Apple's product lines, no competitors).

The gimmick here of soaking up more internal PCIe lanes is primarily just that: a gimmick. If that much more bandwidth is needed, an x8 or x16 card and already-standard connectors more than likely solve the problem far more elegantly than the gimmicky addition of DisplayPort switches/replicators.
As a general rule (and common sense), I absolutely agree here.

Just keep in mind, once you take into account the "Kool-Aid" factor at Apple, things may not follow the sort of logic anyone else would follow, let alone an electronics engineer tasked with a professional workstation design.

That pragmatically causes choking problems in the bandwidth to the CPU/GPU/memory-controller package. And like I said, the rest of the stuff the IOHUB chipset is typically hooked to is stuck at v2 also.
The PCIe controller (in the CPU, not the IOHUB) can step down to v2.0 though. So from a technical standpoint, it will work.

Really, it's no different from using 2.0-spec PCIe devices in PCIe slots running v3.0 lanes.
 
I wasn't focusing on the testing, but it should be possible to create a desktop solution without an incredible amount of effort and additional cost.

You have to pass the Thunderbolt certification tests to be able to use the Thunderbolt trademark/symbols/labeling.

Vendors could say "screw them" (e.g., USB/eSATA combo sockets approved by nobody), except for the fact that Intel also entirely controls distribution of TB controllers. No pass? Good luck trying to purchase 1K or 10K controllers.

To not focus on passing the test is to be miles deep in the swamp in a boat with 5 huge leaks.

One area where it would be useful would be shooting digital video in the field, then bringing that data back to the office for editing, for example. Just plug the storage peripheral that was used with the laptop into the desktop, and go.

The problem is that this is largely done now without Thunderbolt: cameras with 2.5" drives, SATA/eSATA connections. That infrastructure is already bought and deployed.

There are some niceties a TB solution could bring, but it really doesn't push past the need for 2 TB ports.


I'm in no way in favor of trying to displace PCIe slots in favor of TB. But I do believe that this is something Apple would strongly consider as a means of creating a new product that can cover their professional workstation users as well as pick up consumers who either don't want an existing product or for whom an existing product isn't viable (purely across Apple's product lines, no competitors).

I don't think Apple necessarily needs a new product. Take the embedded discrete GPU solution from the iMac and use that as the source for a TB controller. Dump the two analog audio sockets for TB ports and, ta-da, done.
[ Although, AMD has a nice-fit solution for the limited 40-lane budget of a single-package E5 offering.

"... Interestingly, all Mars products only offer a PCIe 8x bus instead of the GPU industry standard 16x. ...."
http://www.anandtech.com/show/6571/amd-releases-full-product-specifications-for-radeon-8000m-series

So x16 (card), x8 (open), x8 (AMD 8790M), x4 (open), x4 (open), and a TB controller on x4 of the chipset, and basically done (lane budget sanity-checked just below).

It is good enough to drive two decent monitors well if someone needed to assign the x16 slot to something else and didn't need high-power 3D. And two ports means up to 12 TB devices, which is good enough for most scenarios. ]
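Quick sanity check on that lane budget (the TB controller rides on chipset lanes, so it stays outside the CPU's 40):

```python
# Sanity check of the slot layout above against the 40 PCIe v3 lanes a
# single-package Xeon E5 provides. The TB controller sits on x4 of the
# chipset, so it is outside this budget.
CPU_LANE_BUDGET = 40

slots = [
    ("x16 slot (card)",       16),
    ("x8 slot (open)",         8),
    ("x8 embedded AMD 8790M",  8),
    ("x4 slot (open)",         4),
    ("x4 slot (open)",         4),
]

used = sum(width for _, width in slots)
print(f"{used} of {CPU_LANE_BUDGET} CPU lanes allocated")   # 40 of 40
```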

None of that significantly disrupts the core target market of the Mac Pro. It can still be 4 slots, going from v2 to v3 PCIe so the net overall bandwidth is up. Going past that with TB is drinking-Kool-Aid mode though. Throwing out more slots doesn't really "buy" anything that is an improvement, because it takes away about as much as, if not more than, it adds.


The PCIe controller (in the CPU, not the IOHUB) can step down to v2.0 though. So from a technical standpoint, it will work.

Technically it can work, but it is a waste of resources to essentially permanently lock an additional 4 or 8 v3.0 lanes into v2.0. It is one thing to put a legacy v2.0 card in a slot; it is another to hardwire it. Generally, Apple's designs don't try to negate new functionality.

The primary problem for the mainstream standard designs, though, is that their x16 lanes are already oversubscribed. Hardwiring a TB controller would make it worse. A Xeon E3 would have an extra 4.

If Apple were willing to add to the line-up, I could see a single-slot, TB-focused, smaller box with a Xeon E3 (and perhaps 10GbE, so more oriented to being on a SAN network) that filled the $2000-2500 price gap. That is really an expansion rather than filling the Mac Pro slot.
 