
mherrick (original poster)
Every other Mac has a fixed graphics card, so Apple can route data and video through Thunderbolt as it pleases. In the Mac Pro, will the Thunderbolt ports be provided by a special Apple custom "Thunderbolt" PCIe graphics card? I think the new Mac Pros will have something like an AMD Radeon HD 6970M as part of the motherboard, so that Apple can have Thunderbolt ports on both the front and back of the machine. This would leave the traditional graphics PCIe slot open for other/additional cards. I don't think Apple would release a machine that could have Thunderbolt disabled by the user swapping out a graphics card. Am I missing something here?
 
mherrick said:
In the Mac Pro, will the Thunderbolt ports be provided by a special Apple custom "Thunderbolt" PCIe graphics card? [...] Am I missing something here?

I would guess they will come up with a way to integrate it in the motherboard?
 
They'd have two primary options:
  1. Include a TB chip on the graphics card itself.
  2. Place the TB chip on the backplane board, and use a cable to get the DP signal from the graphics card to the backplane board (gets the DP output from the GPU to the TB chip).
 
Option 2 seems far more viable than option 1. Mac graphics cards already cost too much; there's no reason to increase their cost further.
From Apple's POV, it wouldn't differ much, if at all (they still need a TB chip, regardless of whether it's on the backplane board or the graphics card).

But for the consumer, having TB on the backplane board is cheaper in the long run, as new GPU cards won't need to include a TB chip (otherwise they'd be paying for a new TB chip every time they upgrade graphics cards). It also relieves additional pressure on GPU card makers, who already have to deal with PCB real-estate issues; the TB chip would need board space and add heat as well as cost.

I've thought about this a lot, and all that's really needed to keep costs low (industry-wide, as Apple isn't the only vendor) is an open standard for such a cable (it could be an edge connector similar to the SLI or CrossFire bridge connectors), as that would allow economies of scale to reduce costs.

BTW, last I saw, the currently available TB chips go for ~$90 each in quantity, so that would increase the cost of graphics cards by quite a bit once margin is added. The devices I've seen so far run ~$300 (specialty products), so even if economies of scale kick in thanks to an open standard, I'd figure ~$150 over the cost of an equivalent card with no TB chip. Not exactly cheap, and it would make users upset IMO (presumably future versions will increase performance, such as being PCIe Gen 3.0 compliant along with improvements to TB itself, and will be cheaper than current parts, which is usually the case).
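As a rough sketch of that cost arithmetic (the chip price is the ~$90 figure above; the markup multiplier is purely hypothetical, just to illustrate how margin stacks up):

```python
# Back-of-the-envelope sketch of how a TB chip on the graphics card could
# show up in its retail price. The ~$90 chip cost is the figure quoted above;
# the 1.7x markup is a purely hypothetical margin/overhead multiplier.

tb_chip_cost = 90            # ~$90 per chip in quantity (figure quoted above)
hypothetical_markup = 1.7    # assumed board-level margin, not a real figure

added_retail_cost = tb_chip_cost * hypothetical_markup
print(f"Added retail cost per card: ~${added_retail_cost:.0f}")
# -> roughly $150 over an otherwise identical card with no TB chip
```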
 
They'd have two primary options: (1) include a TB chip on the graphics card itself, or (2) place the TB chip on the backplane board and cable the GPU's DP output over to it.

I agree with option 2 from a simple marketing standpoint -- SELL more Mac Pros! But option 1 does seem possible, as you mention, putting the chip on the GPU card. It could be done, but who the hell would do it? Not Apple, for sure.
 
It could be done, but who the hell would do it? Not Apple, for sure.

It could be done the same way it's done on the MacBook Pro: lighter stuff like the desktop uses the integrated graphics, and everything else uses the discrete card.
 
They'd have two primary options: (1) include a TB chip on the graphics card itself, or (2) place the TB chip on the backplane board and cable the GPU's DP output over to it.

That's what I would expect.
 
They'd have two primary options: (1) include a TB chip on the graphics card itself, or (2) place the TB chip on the backplane board and cable the GPU's DP output over to it.

3. Provide TB on the backplane, but with no integrated DP, and an external breakout dongle which merges TB (from the backplane) with DP (from the video card). Advantage: it will work with any video card which has DP.
4. Don't provide TB on the Mac Pro because it doesn't make sense there, given the availability of eSATA and SAS cards for those who need screamingly fast disk access, and the need to support existing video cards. Also note that a principal driver behind TB is the ability to get fast interfaces to a bunch of different services into a single small connector which is friendly to laptops and other small devices. This is not a factor for the Mac Pro.

I think 4 is most likely.

Spidey!!!
 
They could also have the TB on integrated graphics, and leave the MDP alone for the GPUs.
 
either there will be a chip on the motherboard that communicates with the GPU, or Intel has another, DP-less TB version.
 
They could also have the TB on integrated graphics, and leave the MDP alone for the GPUs.

Integrated graphics means giving up core and/or cache space for the GPU. At least so far, Intel isn't doing that for mid-range Xeon chips. You could do this with an E3, but that is not an E5.

TB solves a problem that Mac Pros really don't have for the vast majority of Mac Pro users: PCIe expandability.

The mythical xMac would have TB because it is just a headless iMac. Surprise: the iMac has TB, so if you just removed the monitor it would still be there. However, that isn't a Mac Pro. It is an iMac.
 
TB solves a problem that Mac Pros really don't have for the vast majority of Mac Pro users: PCIe expandability.

Of course, a lot of this comes down to whether Apple's new displays will work with DP. If they don't, Apple has to integrate TB.
 
3. Provide TB on the backplane, with an external breakout dongle which merges TB (from the backplane) with DP (from the video card).
4. Don't provide TB on the Mac Pro because it doesn't make sense there.

I think 4 is most likely.
Though there is the possibility of using an external cable, I don't expect this is how Apple would do it.

I could see it, however, with a PCIe TB card and older graphics cards (those that wouldn't include a yet-unpublished specification for a cable/connector to get DP data over to the TB chip).

As for your option #4, it seems more likely that Apple will drop the MDP-based monitors in favor of the newer TB-based models that have already been hinted at, so even the MP would need a TB connector in order to drive the newer monitors (and would likely offer an MDP port as well for the currently offered displays).

They could also have the TB on integrated graphics, and leave the MDP alone for the GPUs.
The SB E5's don't have an active IGP on them, so such an implementation would require another GPU embedded on the backplane board, which would increase costs and reduce the PCIe lanes available for slot configurations.

So I don't see this as a realistic option in a PCIe-equipped desktop.

AIOs that don't use PCIe slots of any kind are another matter (an IGP could be deactivated via firmware in favor of an embedded GPU that offers higher performance, if such CPUIDs are even offered in such a system). But connecting up the PCB traces to get a DP signal to the TB chip is easy on such a system and on laptops (both being AIOs; just one format is portable while the other isn't).

either there will be a chip on the motherboard that communicates with the GPU, or Intel has another, DP-less TB version.
It's certainly possible to do a TB port without graphics, but it seems Intel is trying very hard to avoid this scenario. The reasoning, it seems, is to reduce confusion over mixed support, which has a direct bearing on initial adoption rates.

Makes good sense from their POV, as its primary target is AIOs (laptops and slot-less desktops), according to their published literature/product pages.

Sorry, are you referring to DisplayPort or Mini DisplayPort when you say DP?
Same electrical specifications, so the difference between DP and MDP is the connector itself.
 
4. Don't provide TB on the Mac Pro because it doesn't make sense there.

I think 4 is most likely.

I highly doubt they would leave TB off of their premier line, primarily used by pro users. TB storage for video projects alone would be worth it.
 
Whatever Apple does, I hope it doesn't take away from the available PCIe lanes. I'm not sure how many lanes Sandy Bridge-E supports, but Nehalem supports 40 lanes, so Thunderbolt's 4 lanes is 10% of what we have. :/
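A quick sketch of that lane math, taking the 40-lane figure in the post at face value (actual lane counts vary by platform):

```python
# Lane-budget sketch using the figures from the post above.
# The 40-lane total is the number quoted there; real platforms differ.

TOTAL_LANES = 40   # figure quoted above (platform-dependent)
TB_LANES = 4       # a Thunderbolt controller sits on a PCIe x4 link

share = TB_LANES / TOTAL_LANES
print(f"Thunderbolt takes {TB_LANES} of {TOTAL_LANES} lanes ({share:.0%}), "
      f"leaving {TOTAL_LANES - TB_LANES} for slots and other devices.")
```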
 
Whatever Apple does, I hope it doesn't take away from the available PCIe lanes.

It will. There isn't really any other way to do it.
 
Whatever Apple does, I hope it doesn't take away from the available PCIe lanes.

What do you want 40 lanes for? You don't own a Mac Pro anyway.

With a high-speed external expansion bus, I think (and hope) the next Mac Pro will be a smaller form factor anyway, with fewer card slots.
 
With a high-speed external expansion bus, I think (and hope) the next Mac Pro will be a smaller form factor anyway, with fewer card slots.

Maybe. Thunderbolt is still slower than the two x4 slots, and it is not nearly as useful in low-latency applications, making it a poor candidate for latency-sensitive work.

Honestly, I see them sacrificing a USB or FireWire controller instead of PCIe slots. The 2U Mac Pro rumors are... interesting, but I see a lot of issues with that besides the PCIe slots, even though a 2U Mac Pro would make a lot of enterprise people happy.
 
I expect we will have this for free on a Thunderbolt Mac Pro.

[attached image]
 
Well, since the DP signal is packetized, they will need some IC to mix the signal.
 
BTW, last I saw, the currently available TB chips go for ~$90 each in quantity

VR-Zone is claiming $10-15 for the host chip.

Whatever Apple does, I hope it doesn't take away from the available PCIe lanes.

PCIe is the only possibility; there are no other interfaces that could be used. When Thunderbolt gets integrated into the chipset (which looks like 2013 at the earliest), it will no longer steal your PCIe lanes. However, it will take a good share of the connection (DMI or whatever it will be by then) between the CPU and the PCH.
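To put "a good share" in rough numbers, here's a sketch assuming DMI 2.0 is electrically similar to a PCIe 2.0 x4 link (my assumption, not something stated above):

```python
# Ballpark sketch of how much of a DMI link a Thunderbolt controller's uplink
# would represent if TB hung off the PCH.
# Assumption (mine, not from the post): DMI 2.0 behaves like a PCIe 2.0 x4
# link, i.e. 4 lanes at 5 GT/s with 8b/10b encoding.

def usable_gbps(lanes, gt_per_s, efficiency=8 / 10):
    """Approximate usable bandwidth per direction for a PCIe-style link."""
    return lanes * gt_per_s * efficiency

dmi2 = usable_gbps(4, 5.0)       # ~16 Gb/s per direction
tb_uplink = usable_gbps(4, 5.0)  # the TB controller also uses a PCIe 2.0 x4 uplink

print(f"DMI 2.0 budget      : ~{dmi2:.0f} Gb/s per direction")
print(f"TB controller uplink: ~{tb_uplink:.0f} Gb/s per direction")
# A single TB controller running flat out could use essentially the whole link.
```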

goMac said:
Thunderbolt is still slower than the two x4 slots.

PCIe 2.x x4 = 16 Gb/s
Thunderbolt = 2 x 10 Gb/s per port

Light Ridge provides up to four channels, which means it's good for up to two ports and a total of 40 Gb/s of bandwidth. That is more than two x4 slots, or one x8.
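A minimal sketch of the arithmetic behind those figures, assuming PCIe 2.0's 5 GT/s per lane with 8b/10b encoding and 10 Gb/s per Thunderbolt channel:

```python
# Bandwidth arithmetic behind the comparison above.
# Assumptions: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding
# (so ~4 Gb/s of usable data per lane); each Thunderbolt channel is 10 Gb/s.

PCIE2_GT_PER_LANE = 5.0          # GT/s, raw signalling rate
ENCODING_EFFICIENCY = 8 / 10     # 8b/10b line code
pcie2_lane = PCIE2_GT_PER_LANE * ENCODING_EFFICIENCY   # 4 Gb/s per lane

pcie2_x4 = 4 * pcie2_lane        # 16 Gb/s, one x4 slot
tb_port = 2 * 10                 # 20 Gb/s, two 10 Gb/s channels per port
light_ridge = 4 * 10             # 40 Gb/s, four channels across two ports

print(f"PCIe 2.0 x4 slot : {pcie2_x4:.0f} Gb/s")
print(f"Thunderbolt port : {tb_port} Gb/s")
print(f"Light Ridge total: {light_ridge} Gb/s "
      f"(vs. two x4 slots = {2 * pcie2_x4:.0f} Gb/s, or one x8 = {8 * pcie2_lane:.0f} Gb/s)")
```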
 
VR-Zone is claiming $10-15 for the host chip.
Interesting.

I tried ark.intel.com both when TB was announced and again just before posting, looking for official quantity pricing. Unfortunately, they still don't have a listing (they did for LightPeak, but it comes up as an error, as it seems they've removed it). Just the product announcement and Tech Briefs. :rolleyes:

I've seen other articles putting TB chips at the ~$90 mark (I'm under the impression the figure was from an inside source, as that aspect wasn't elaborated on). Looking at this logically, it makes sense to me that such a figure is more accurate, as they went over budget (the optical variant, LightPeak, was supposed to hit the $50 mark, and missed). Figure in the "lost" R&D, and ~$90 seems realistic.

So it makes me think that the figure VR-Zone is quoting is either wrong, or perhaps for one of the chips in the device end, or even the cable (remember, there are modulator chips at each end of TB cables). This is just a guess of course, but I think it's possible.
 