And you know that because you have the hardware and have tried it, right? :p

Probably by looking at the specs.

Thunderbolt 2 spec will give you 2.5 GB/s throughput.

A single 16-lane PCI-E 3.0 slot will give you 15.75 GB/s.

Modern PC workstations offer around 80 total lanes of PCI-E 3.0 bandwidth or roughly 78 GB/s.

So relying on Thunderbolt 2.0 for, say, graphics cards would cap your performance at only slightly faster than the maximum speed of the AGP port, a long-obsolete interface introduced in 1996.
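For anyone who wants to sanity-check those numbers, here's a quick back-of-the-envelope sketch in Python (the figures are the peak rates quoted above, not benchmarks):

```python
# Peak-bandwidth comparison using the spec figures from this thread.
# Real-world throughput will be lower due to protocol overhead.
tb2_GBps = 20 / 8                        # TB2: 20 Gbit/s per direction -> 2.5 GB/s
pcie3_lane_GBps = 0.985                  # PCIe 3.0: ~985 MB/s effective per lane
pcie3_x16_GBps = 16 * pcie3_lane_GBps    # one x16 slot -> ~15.76 GB/s
workstation_GBps = 80 * pcie3_lane_GBps  # 80 total lanes -> ~78.8 GB/s
agp8x_GBps = 2.1                         # AGP 8x peak

for name, rate in [("Thunderbolt 2", tb2_GBps),
                   ("PCIe 3.0 x16", pcie3_x16_GBps),
                   ("80-lane workstation", workstation_GBps),
                   ("AGP 8x", agp8x_GBps)]:
    print(f"{name}: {rate:.2f} GB/s")
```

The takeaway is the gap: a TB2 link carries roughly a sixth of one PCIe 3.0 x16 slot.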
 
Right, sure, but I'd contend that there is almost no time when a GPU actually uses that kind of bandwidth. Bus frequency is far, far more important. Really, there's no way to tell how it would behave unless you hook it up and try it. From the demos I've seen, I'd guess no one could tell the difference between TB 1.0 and the PCI bus in the MP5,1, given two GPUs and an HDD or two. And probably double that with TB 2.0 - again, though, you'd need to try it to know.
 
Probably by looking at the specs.

Thunderbolt 2 spec will give you 2.5 GB/s throughput.

A single 16-lane PCI-E 3.0 slot will give you 15.75 GB/s.

Modern PC workstations offer around 80 total lanes of PCI-E 3.0 bandwidth or roughly 78 GB/s.

So relying on Thunderbolt 2.0 for, say, graphics cards would cap your performance at only slightly faster than the maximum speed of the AGP port, a long-obsolete interface introduced in 1996.

"Thunderbolt (codenamed Light Peak)[1] is a hardware interface that allows for the connection of external peripherals to a computer. It has a transfer speed of 10 Gbit/s per channel over copper wire and 20 Gbit/s per channel using optical cabling.[5] It uses the same connector as Mini DisplayPort (MDP). It was released in its finished state on February 24, 2011.[2]
Thunderbolt combines PCI Express (PCIe) and DisplayPort (DP) into one serial signal alongside a DC connection for electric power, transmitted over one cable. Up to seven peripherals may be supported by one connector through various topologies."
http://en.wikipedia.org/wiki/Thunderbolt_(interface)
"Snyder says Thunderbolt 2 will enable 4K video file transfer and display simultaneously by combining two previously independent 10Gbs channels into one 20Gbs bi-directional channel that supports data and/or display."

http://www.tomshardware.com/news/Thunderbolt-2-Falcon-Ridge-Official-Products-Speeds,22932.html

"PCI Express Base 2.0 specification doubles the interconnect bit rate from 2.5 GT/s to 5 GT/s in a seamless and compatible manner. The performance boost to 5 GT/s is by far the most important feature of the PCI Express 2.0 specifications. It effectively increases the aggregate bandwidth of a 16-lane link to approximately 16 GB/s."

PCI 2.0 Spec



PCI 3.0 Spec

TB 2.0 has 4 channels at 10 Gbit/s each, two in each direction, which = 20 Gbit/s in each direction. Using your math, that's 40 Gbit/s total.

It looks as if you are comparing the aggregate bandwidth of multiple PCIe 3.0 slots to TB 2.0.
I was suggesting one (1) TB 1.0 port to one (1) PCIe 2.0 slot.
When you step up to TB 2.0 and PCIe 3.0, each effectively doubles, so it is the same comparison.
You can see from the image they are quite close. Close enough that modules with more PCIe 3.0 slots would see no performance degradation.
BTW, AGP 8x peaks at about 2.1 GB/s, still short of a PCIe 1.0 x16 slot's 4 GB/s. Not to mention it got there by multi-pumping a parallel bus (AGP 8x makes eight transfers per 66 MHz clock), a very different design from PCIe's serial lanes.
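To make the lane arithmetic concrete, here's a rough Python sketch of the per-lane encoding math and the TB 2.0 channel bonding (spec-sheet numbers only, assuming 8b/10b encoding for PCIe 2.0 and 128b/130b for PCIe 3.0):

```python
# Effective per-lane PCIe throughput after line-encoding overhead.
def pcie_lane_MBps(gigatransfers_per_s, payload_bits, line_bits):
    """MB/s per lane: GT/s scaled by encoding efficiency, divided by 8 bits/byte."""
    return gigatransfers_per_s * 1000 * payload_bits / line_bits / 8

pcie2_lane = pcie_lane_MBps(5, 8, 10)     # PCIe 2.0: 8b/10b  -> 500 MB/s
pcie3_lane = pcie_lane_MBps(8, 128, 130)  # PCIe 3.0: 128b/130b -> ~985 MB/s

# TB 2.0 bonds the two 10 Gbit/s channels per direction into one
# 20 Gbit/s bi-directional link: 20 Gbit/s EACH way, not 40 one way.
tb2_per_direction_MBps = 20 * 1000 / 8    # 2500 MB/s

print(f"PCIe 2.0 lane:     {pcie2_lane:.0f} MB/s")
print(f"PCIe 3.0 lane:     {pcie3_lane:.0f} MB/s")
print(f"TB2 per direction: {tb2_per_direction_MBps:.0f} MB/s")
```

So a single TB 2.0 direction lands between a x4 and x8 PCIe 2.0 link on paper.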
 

Attachments

  • Screen Shot 2013-06-07 at 4.05.25 AM.png
"Snyder says Thunderbolt 2 will enable 4K video file transfer and display simultaneously by combining two previously independent 10Gbs channels into one 20Gbs bi-directional channel that supports data and/or display."

That's a somewhat deceptive description of where most of the Thunderbolt 2.0 increase is going. The file is being transferred, but probably no faster than before. Most of the increased bandwidth is being consumed by the real-time transfer of the 4K video signal.

It looks as if you are comparing multiple PCIe 3.0 slots aggregate bandwidth to TB 2.0.
I was suggesting one (1) TB 1.0 port to one (1) PCIe 2.0 slot.

Not sure why you would suggest that. Pragmatically, data on a Thunderbolt network has to get off at some point. The Thunderbolt controllers have a x4 PCIe 2.0 interface on them. That is what actually puts the cap on real system bandwidth for PCIe data traffic.

The Thunderbolt network has to deal with more than just PCIe data traffic. The higher bandwidth/throughput has to do with the other traffic and/or with lowering latency to more distant locations. "File transfers" will get capped by the x4.

It doesn't appear that Intel is going to change Thunderbolt's x4 PCIe 2.0 limitation any time soon. On most laptops the TB controller is hooked to 4 of the IOHub chip's 8 PCIe 2.0 connections. It is somewhat doubtful that IOHubs are going to move to PCIe 3.0 any time soon (eventually, perhaps, but not soon).

Thunderbolt 2.0 is really about more video traffic. They shuffled the deck chairs, but there is little "data-only" throughput increase here.
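A rough sketch of why the x4 host interface, not the 20 Gbit/s link, is the cap (spec-sheet numbers, not measurements):

```python
# The TB controller hangs off a PCIe 2.0 x4 host interface, so PCIe
# data traffic is capped well below the 20 Gbit/s Thunderbolt link rate.
pcie2_lane_MBps = 500                    # 5 GT/s with 8b/10b encoding
tb_host_cap_MBps = 4 * pcie2_lane_MBps   # x4 interface -> 2000 MB/s
tb2_link_MBps = 20 * 1000 / 8            # 20 Gbit/s link -> 2500 MB/s

# File transfers hit the host-interface cap before the link rate:
print(f"x4 PCIe 2.0 host cap: {tb_host_cap_MBps} MB/s")
print(f"TB2 link rate:        {tb2_link_MBps:.0f} MB/s")
```

The ~500 MB/s of headroom between the two is what's left for DisplayPort and other non-PCIe traffic.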



When you step up to TB 2.0 and PCIe 3.0 each effectively doubles, then it is the same comparison.

Not necessarily. The increased Thunderbolt traffic is far closer to making TB a "fat tree" kind of network, where the box-to-box interconnect is faster than the local on-ramp/off-ramp traffic. Similar to how the Internet backbone is 10+ Gb Ethernet while what goes into the average home is closer to 10 Mb/s. The backbone is much faster because it aggregates traffic into larger workloads.




You can see from the image they are quite close. Close enough, so that modules that included more PCIe 3.0 slots would have no performance degradation.

Not going to be anywhere near close, largely because TB 2.0 isn't really moving forward. What they are doing is taking what was already there and multiplexing the traffic more. It is increased multiplexing, not increased throughput. PCIe 3.0 is double the speed of 2.0; a true increase in throughput.

Not that increased multiplexing isn't a significant accomplishment. I'll be pleasantly surprised if they can keep their isochronous guarantees while mixing bursty PCIe data traffic with 4K video traffic. Sending data and video over largely independent channels made it significantly easier to keep the timing of each from getting disrupted.

They can probably throw a much bigger transistor budget at the problem for Falcon Ridge (at least one process shrink) and add more "hardware" without increasing power, cost, and size demands. But it is not keeping pace with PCIe 3.0 at all in the PCIe data-traffic space.
 
You want InfiniBand.

It works much better as a system interconnect. The downside is that it costs almost 10x as much. People already scoff at Thunderbolt prices; 10x more isn't going to be a viable solution in a market this close to "entry level".

Most of the 'spin' on these excessive modular solutions is about how they are going to deliver better pricing... which is usually where the smoke and hand-waving starts.
 
It works much better as a system interconnect. The downside is that it costs almost 10x as much. People already scoff at Thunderbolt prices; 10x more isn't going to be a viable solution in a market this close to "entry level".

Most of the 'spin' on these excessive modular solutions is about how they are going to deliver better pricing... which is usually where the smoke and hand-waving starts.

It's not a mass-market technology; the price should drop if it were implemented in a Mac Pro.
 
Uh no.

Imagine how many wires that would be. It'd be like a Dell, but a million times worse: a separate power cable for each piece, a separate lock for each piece...

Why not just make a Mac mini with an ATI graphics card for prosumers, and another tower for professionals?
 
Modular makes sense in so many ways, but it is different and it is difficult to do. This is probably why it took so long to come out.

Apple does not want to get into traditional tower wars and a modular design sets them apart from that.

This could be quite exciting and a big seller, much more so than yet another tower design.

In what ways? It seems to make sense in the imaginations of people who have little grasp of logistics. One person earlier in the thread basically suggested a concept that turned the Mac Pro into a mainframe with a notebook or iMac as its terminal, thinking they would somehow behave as one machine.

Computing typically involves integration. CPUs absorbed SATA and PCI Express controllers recently in computing history. GPUs will probably be one of the next items absorbed, for a very large portion of the market; given the stratification in performance levels, and depending on pricing, discrete GPUs might hold out in parallel to APU solutions. Storage has remained external for markets that need more than what can be contained internally, and for local backups. Backups alone will sustain those solutions for a number of years, given sensitive data and just the amounts of data some people have to back up on a nightly basis.

The desire for modularity implies we don't already have it. You can plug things in. It won't allow you to make CPUs in different boxes act like one machine, as Intel wouldn't cannibalize sales of some of their expensive CPUs. I'm skeptical of the eGPU concept, as I think too many people will just opt for computers with embedded graphics that hit the point of "good enough" rather than high-markup solutions. You seem to see this mainly as a way to backport things designed for notebooks.

I think that with Thunderbolt 2 coming up, this design is the only way to go. The ultimate in expandability is modular: a base enclosure with two CPUs + RAM + maybe one GPU and some storage for the OS. With Thunderbolt 2, a modular system makes a great choice for everyone.
If you try to put all of it in one enclosure, you're going to need a lot more cooling, which creates more noise. Those GPUs create just as much heat as the CPUs; putting them in a separate enclosure makes more sense.
And what about the people who need it for a server? They don't need a whole bunch of GPUs or HD space - that's all done externally.
I for one would welcome this.

Thunderbolt 2 doesn't change much of anything. Two CPUs still require a more expensive board and CPU options. The realistic reason for using many GPUs would be computation, in which case you'll still be bandwidth-constrained by Thunderbolt. HDs, in my opinion, still need both solutions. Backup will be external anyway. Your way means you need more than one external box for smaller storage setups, as you lose the internal bays. The internal bays themselves add very little to the cost of the machine, so axing them would be kind of neurotic. There are many smaller machines that still contain at least 4 bays, and the SATA controller is built into the CPU. The iMac went up in internal storage a couple of cycles back. It used to house only one drive; without the space for two (it used to be a 3.5" + 2.5"), they wouldn't be able to offer the Fusion Drive solution.
 
It's not a mass-market technology; the price should drop if it were implemented in a Mac Pro.

Mac Pros by themselves can't move the overall InfiniBand market. Never mind the fact that no one has ever actually delivered InfiniBand drivers for OS X. (At one point, years ago, there was talk of doing some, but they never materialized. I would be surprised if any showed up now.) I have a suspicion that the Mach microkernel may not be so friendly to the implicit InfiniBand driver model (i.e., RDMA).

If the vendors don't want to go there (compete at the low end of the market), it isn't going to go there. Almost all of the vendors now have an Ethernet/InfiniBand mix. 10GbE Ethernet looks to be ramping up on the "for the masses" push, but InfiniBand is still primarily concentrating on putting distance between itself and its major competitors, Fibre Channel and lower-speed Ethernet. Faster, rather than cheaper, is where the major investment is going.

As long as fiber cabling is relatively high-priced, that is the right move. Thunderbolt has done nothing to push the cost into the affordable range (even after all of the Light Peak and "real soon now" Thunderbolt promises). Where you can find TB fiber cable, it is quite expensive.
 
There are many smaller machines that still contain at least 4 bays, and the SATA controller is built into the CPU.

Not yet, technically. You have to buy an IOHub/Southbridge from the CPU vendor to match your CPU. That chipset these days will probably have at least 6 SATA ports coming out of it. It isn't inside the CPU package, but you have to buy it with the CPU. [There are no 3rd-party chipsets anymore.]

There are some new Haswell ultramobile packages that have the IOHub bundled into the CPU package (two dies: one "CPU" (and other stuff) and one IOHub die). The IOHub is down to just another 5-7 W, which isn't all that much. You can put them much closer together now without causing a thermal-management headache. Things are going in that general direction; it just isn't mainstream implementation practice right now.


It is extremely dubious system design, though, to pay for 6+ ports and use exactly zero of them. Even using just one (unless there are extremely tight space constraints) is rather dumb.

It is even more dubious to ignore them just so you can inject Thunderbolt into the mix. So instead of the SATA controller you have to buy anyway, you add this additional bill of materials to the system:

1. TB controller for host box
2. TB controller for module adding.
3. TB cable
4. Yet another SATA controller (not particularly faster or better than the one you already have).

Just to provide the 2nd or 3rd disk that the host already has infrastructure for. That isn't cost-effective. This is not great, inspired system design. It is straight Rube Goldberg hackery.

The iMac went up in internal storage a couple of cycles back. It used to house only one drive; without the space for two (it used to be a 3.5" + 2.5"), they wouldn't be able to offer the Fusion Drive solution.

Again, technically the ODD went SATA a while back too, so the iMac had two drives for several years. The internal count went to 3 before the great 2012 change. That is in the reasonable range (6 available, use 3). The move back to two (like the Mini) is a bit dubious, but far better than an extremely oddball 0-1 for a desktop. (Given the low access rates, trying to minimize the HDDs present is probably a good trade-off in context.)
 
The info I posted was quotes from SIG and others.

Personally it does not matter to me, my next computer will be a hackintosh, no question about it.

Apple never has kept up; they have always tried to make a pretty box that works with little intervention.

The direction they are going in is of little interest to me anymore.

I love my pro, but I will not buy another. When it can no longer do what I ask of it, then and only then I will replace it with something that can.
 
Not yet, technically. You have to buy an IOHub/Southbridge from the CPU vendor to match your CPU. That chipset these days will probably have at least 6 SATA ports coming out of it. It isn't inside the CPU package, but you have to buy it with the CPU. [There are no 3rd-party chipsets anymore.]

There are some new Haswell ultramobile packages that have the IOHub bundled into the CPU package (two dies: one "CPU" (and other stuff) and one IOHub die). The IOHub is down to just another 5-7 W, which isn't all that much. You can put them much closer together now without causing a thermal-management headache. Things are going in that general direction; it just isn't mainstream implementation practice right now.

Thanks for the correction. I thought it was bundled for some reason as of Sandy Bridge. As you point out, you still have to buy it as part of the chipset. The typical argument on here is that you're paying for the bays when you buy the machine, and removing them would alleviate that cost for others. It costs something to implement them, but I've found Apple's pricing to be highly contrived in terms of its starting points; I've gone over that before. I also suspect that a high percentage of Mac Pro users populate more than one bay. Pushing storage out to another box means at least 2 external boxes, given the need for backups. That just increases price and footprint overall.

I think all of this just morphed from users who want to plug in peripherals that were initially designed with notebooks in mind. It skews off into weirdness when it becomes an attempt to turn the Mac Pro into an Xgrid solution. Before anyone says it, I'm aware Xgrid support was deprecated.

Again, technically the ODD went SATA a while back too, so the iMac had two drives for several years. The internal count went to 3 before the great 2012 change. That is in the reasonable range (6 available, use 3). The move back to two (like the Mini) is a bit dubious, but far better than an extremely oddball 0-1 for a desktop. (Given the low access rates, trying to minimize the HDDs present is probably a good trade-off in context.)

Yeah, I wasn't thinking of the optical drive, but my point was more that they didn't limit a recent design to a single drive. People on here keep trying to base the design of the Mac Pro on the rMBP rather than a closer parallel, due to nonsensical requirements.
 
Mac Pros by themselves can't move the overall InfiniBand market. Never mind the fact that no one has ever actually delivered InfiniBand drivers for OS X. (At one point, years ago, there was talk of doing some, but they never materialized. I would be surprised if any showed up now.) I have a suspicion that the Mach microkernel may not be so friendly to the implicit InfiniBand driver model (i.e., RDMA).

If the vendors don't want to go there (compete at the low end of the market), it isn't going to go there. Almost all of the vendors now have an Ethernet/InfiniBand mix. 10GbE Ethernet looks to be ramping up on the "for the masses" push, but InfiniBand is still primarily concentrating on putting distance between itself and its major competitors, Fibre Channel and lower-speed Ethernet. Faster, rather than cheaper, is where the major investment is going.

As long as fiber cabling is relatively high-priced, that is the right move. Thunderbolt has done nothing to push the cost into the affordable range (even after all of the Light Peak and "real soon now" Thunderbolt promises). Where you can find TB fiber cable, it is quite expensive.

If Apple wants to use a technology, they make a way.
 
I am sorry, but is this really your counterpoint?

Yes indeed... but not so much a counterpoint... more like a common-sense rebuttal.


"Thunderbolt (codenamed Light Peak)[1] is a hardware interface that allows for the connection of external peripherals to a computer. It has a transfer speed of 10 Gbit/s per channel...

http://www.tomshardware.com/news/Thunderbolt-2-Falcon-Ridge-Official-Products-Speeds,22932.html

"PCI Express Base 2.0 specification doubles the interconnect bit rate from 2.5 GT/s to 5 GT/s in a seamless and compatible manner.

PCI 2.0 Spec

PCI 3.0 Spec

TB 2.0 has 4 channels at 10 Gbit/s each, two in each direction, which = 20 Gbit/s in each direction. Using your math, that's 40 Gbit/s total.

It looks as if you are comparing the aggregate bandwidth of multiple PCIe 3.0 slots to TB 2.0.
I was suggesting one (1) TB 1.0 port to one (1) PCIe 2.0 slot.
When you step up to TB 2.0 and PCIe 3.0, each effectively doubles, so it is the same comparison.
You can see from the image they are quite close. Close enough that modules with more PCIe 3.0 slots would see no performance degradation.
BTW, AGP 8x peaks at about 2.1 GB/s, still short of a PCIe 1.0 x16 slot's 4 GB/s. Not to mention it got there by multi-pumping a parallel bus (AGP 8x makes eight transfers per 66 MHz clock), a very different design from PCIe's serial lanes.

And that's the crucial issue as I see it. I mean, we're talking about throughput bandwidths of over 1 GB (gigabyte) per second. That's like 15 min. of 4:4:4 uncompressed 1080p video - IN ONE SECOND. What card uses that or does that? Sure, some load up data and such and then use it for whatever, but it has little to do with overall performance and just about nothing to do with frame rates for either video or CG.

I haven't tested any of this - it's just my opinion - but it makes sense to me. It shows, too, when someone places a fast card in an upper slot and reports here and elsewhere that there was absolutely no performance hit - as so many already have.
 
And that's the crucial issue as I see it. I mean, we're talking about throughput bandwidths of over 1 GB (gigabyte) per second. That's like 15 min. of 4:4:4 uncompressed 1080p video - IN ONE SECOND. What card uses that or does that? Sure, some load up data and such and then use it for whatever, but it has little to do with overall performance and just about nothing to do with frame rates for either video or CG.

Your math is a little off. 4:4:4 uncompressed 16-bit 1080p has a rate of approximately 300 MB/sec. Make it a few streams, or make it 4K or 5K, and you can hit that 1 GB bottleneck easily.
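The arithmetic behind that ~300 MB/sec figure, sketched in Python (assuming 24 fps and 2 bytes per sample; other frame rates scale linearly):

```python
# Uncompressed 4:4:4 (RGB) video data rate:
# width * height * channels * bytes-per-sample * frames-per-second.
def video_rate_MBps(width, height, channels, bytes_per_sample, fps):
    return width * height * channels * bytes_per_sample * fps / 1e6

rate_1080p = video_rate_MBps(1920, 1080, 3, 2, 24)  # 16-bit 1080p 4:4:4
rate_4k    = video_rate_MBps(4096, 2160, 3, 2, 24)  # same format at 4K

print(f"1080p 16-bit 4:4:4 @ 24 fps: {rate_1080p:.0f} MB/s")  # ~299
print(f"4K    16-bit 4:4:4 @ 24 fps: {rate_4k:.0f} MB/s")     # ~1274
```

So 1080p sits near 300 MB/s, and a single 4K stream already blows past the 1 GB/s mark.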
 
Your math is a little off. 4:4:4 uncompressed 16-bit 1080p has a rate of approximately 300 MB/sec. Make it a few streams, or make it 4K or 5K, and you can hit that 1 GB bottleneck easily.

Ya, I was being general. Not only is most video NOT 4:4:4 (NOT uncompressed), but the limits are considerably higher than 1 GB/s. Can it be maxed out? Sure, but pretty much only someone working in 2K on a Hollywood-type film might see it.

It just almost never happens - even with 4 monitors. Something like PCIe v3 x16 is so over-specced it's just dumb. Bus frequency, on the other hand, could still stand to be improved, even considering the very fastest hardware.

HDD I/O is a similar case. The 1.5 GB/s of a 2-SSD RAID 0 is almost never reached by any software, and when it is, there's almost never any difference (to the user) between 0.8 GB/s and 3 GB/s - it all feels about the same, usually. Frequency and latency, OTOH, could be GREATLY improved, as you well know.
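As a rough illustration of how much headroom that is, using the 1.5 GB/s RAID 0 figure above and the ~300 MB/s per-stream rate quoted earlier in the thread:

```python
# How many uncompressed 16-bit 4:4:4 1080p streams (~300 MB/s each)
# fit inside a 1.5 GB/s 2-SSD RAID 0? (Illustrative figures only.)
raid0_MBps = 1500
stream_MBps = 300  # per the ~300 MB/sec correction earlier in the thread
streams = raid0_MBps // stream_MBps

print(f"{streams} concurrent uncompressed 1080p streams")
```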
 