No, and I never said that it did. But each individual port has some bandwidth, which means that n ports have n times that bandwidth, in total.

This is a switch. You are double counting the bandwidth.

A <-----> B < ------> C

If devices A and C are communicating with each other at 20Gb/s bidirectionally, that means one port of B is getting 20Gb/s and another port is getting 20Gb/s bidirectionally. It is effectively the same bandwidth. There is not 40Gb/s of data going back and forth; it is 20Gb/s. That 20 makes multiple stops along the way, but it isn't "bigger" if you just jump in the middle and look both ways.

And no, you cannot just count the ports on a switch and declare the aggregate throughput of the switch solely based on the number of ports. That is deeply flawed and incorrect methodology. What you would be looking for is the aggregate/crossbar/bisection bandwidth from the specs, not merely a count of the ports.
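
To put numbers on the double-counting point, here's a tiny sketch using the 20Gb/s figure from the example above (illustrative arithmetic, not from any spec):

Code:
# Rough sketch (illustrative numbers): why counting ports double-counts
# the traffic crossing switch B in  A <-----> B <-----> C.
link_rate_gbps = 20  # one bidirectional TB2 link, per the example above

# The same 20Gb/s stream enters B on one port and leaves on another.
per_port_seen = {"B_port_to_A": link_rate_gbps, "B_port_to_C": link_rate_gbps}

naive_aggregate = sum(per_port_seen.values())  # 40 -- counts the stream twice
actual_payload = link_rate_gbps                # 20 -- one stream, two hops

print(naive_aggregate, actual_payload)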

In Thunderbolt 2, two lanes are combined to give 20Gb/s instead of 2 x 10Gb/s. If a controller only has x4 PCIe lanes (two in each direction), then how can it support two ports per chip?

None of that is particularly correct at all. First, Thunderbolt calls them channels, and the Thunderbolt channels were always bidirectional.

" ... It is achieved by combining the two previously independent 10Gbs channels into one 20Gbs bi-directional channel that supports data and/or display. ..."
http://blogs.intel.com/technology/2...ndwidth-enabling-4k-video-transfer-display-2/

I think you are muddling this with the 4 lanes down in the Mini DisplayPort standard interface. Thunderbolt always coupled those into pairs to form a channel. The TB controllers provision channels; for example, the 4-, 2-, and 1-channel controllers here.

http://arstechnica.com/gadgets/2011...rbolt-controller-could-broaden-reach-of-spec/

1 or 2 channels is a single port. 4 channels means the controller can support two physical ports (or just one port, with 2 channels not hooked up to anything).
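
Roughly how that provisioning shakes out, going by the Ars article above (controller names and channel/port counts as reported there; treat this as a sketch, not a spec sheet):

Code:
# Sketch of TB v1 controller provisioning (channel/port counts as reported
# in the Ars Technica article linked above; illustrative only).
controllers = {
    "Light Ridge": {"channels": 4, "ports": 2},
    "Eagle Ridge": {"channels": 2, "ports": 1},
    "Port Ridge":  {"channels": 1, "ports": 1},
}
for name, c in controllers.items():
    print(f"{name}: {c['channels']} channel(s) -> up to {c['ports']} physical port(s)")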


Thunderbolt v2 takes two 10Gb/s TB channels and logically, not physically, merges them into one 20Gb/s channel. The number of physical channels is still EXACTLY the same as it was in TB v1. It is the exact same infrastructure in the cables.

PCI-e lanes are also bidirectional. I have no clue where this "two in each direction" comes from at all. A bundle of 4 PCI-e lanes, x4, is bidirectional across the whole bundle.

The Thunderbolt controller has been and still is a PCI-e switch that consumes/produces PCI-e data to be encoded/decoded from the Thunderbolt network. x4 worth of data can go into the network and likewise x4 worth of data can come out. It is a switch. Directing that data to whichever port is necessary is what it did before and what it will do in the future.

The only difference between TB v1 and TB v2 is that in v1 PCI-e data was segregated. The internal switch didn't have much choice as to which TB channel to put the data onto. If destined for a TB controller off Port 1, then one of the channels of that port was the only choice. If destined for a TB controller off Port 2, then it was the same simple choice.

In v2 the switch now has a choice between channel 1 or channel 2 off of whichever port is heading in the correct direction. It is going to have to figure out which of the two is the best balance between that and the video data it is also transporting, plus any backbone network transmissions it has to do (TB data coming in that is not for this TB controller but for another one on the network).

In terms of putting data onto and taking it off the TB backbone network, there are no changes with respect to PCI-e data. It is still an x4 v2 connection. The theoretical top-end bandwidth is exactly the same.

The change in TB v2 is actually primarily to allow more video data onto the backbone network. There is a small side effect of being able to get closer to the x4 PCIe v2 limit of 16Gb/s (2GB/s), but it won't quite get there on average networks.
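
Back-of-the-envelope math on that x4 PCIe v2 ceiling versus a bonded TB2 channel (my arithmetic, standard published rates):

Code:
# PCIe v2: 5 GT/s per lane with 8b/10b encoding -> 4 Gb/s of data per lane.
lanes = 4
pcie_v2_data_gbps = lanes * 5.0 * (8 / 10)  # 16 Gb/s, i.e. ~2 GB/s
tb2_channel_gbps = 20.0                     # one bonded TB2 channel

print(pcie_v2_data_gbps)  # 16.0 -- the most PCI-e data a TB controller can feed
print(tb2_channel_gbps)   # 20.0 -- so the wire is not the PCI-e bottleneck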


Is USB additive on a protocol level? No, but that doesn't mean that you cannot combine several sources in software to get the combined bandwidth of many sources.

But if those sources combine to something bigger than USB, you are stuck with a max of USB. And frankly USB is a horrible example because it is so chatty if not using the new SuperSpeed bus. The actual overhead of the protocol goes up the more sources you add. The real data throughput actually goes down with more sources.

However, again, the theoretical 5Gb/s of USB 3.0 does not mean much if the USB 3.0 controller is hooked to an x1 PCI-e v2 lane with a limit of 4Gb/s (500MB/s). Not that you were ever going to see the whole 5Gb/s anyway.
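
Quick arithmetic on that bottleneck (standard line rates; illustrative only):

Code:
# USB 3.0 SuperSpeed: 5 GT/s with 8b/10b -> ~4 Gb/s of data, and an x1
# PCIe v2 lane behind the controller also tops out around 4 Gb/s (500 MB/s).
usb3_data_gbps = 5.0 * (8 / 10)        # ~4 Gb/s
pcie_v2_x1_data_gbps = 5.0 * (8 / 10)  # ~4 Gb/s (500 MB/s)

bottleneck_gbps = min(usb3_data_gbps, pcie_v2_x1_data_gbps)
print(bottleneck_gbps)  # ~4 -- nowhere near the headline 5 Gb/s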


It has nothing to do with Intel, protocols, or their press release. I was referring to the new Falcon Ridge controllers.

Horse hooey. Falcon Ridge has the same physical channels as v1. There is nothing new in terms of physical channels.




OK, so in Thunderbolt v1 a two-port controller had x4 lanes, meaning each port got 1 lane in each direction.

No. You are twisting up lots of terminology. All Thunderbolt controllers, whether 1 or 2 physical ports (1, 2, or 4 TB channels), have an x4 PCI-e bundle. The PCI-e "side" of the Thunderbolt controller is not particularly material at all to what is on the other side (facing the Thunderbolt network).

That the DisplayPort cable has four pairs of wires in it also doesn't particularly make it an x4 bundle electrically. Obviously it is a bundle of 4 bidirectional pairs, but that has no connection at all to PCI-e's notion of x1, x4, x8, x16.



Now Thunderbolt v2 combines these lanes to reach 20Gb/s, so how is that going to work out if that needs to be shared between two ports?

Primarily the same way it gets done now. Thunderbolt controllers are primarily switches. It is a switch's general job to move data from one set of wires to another set of wires. In v2 it is more complicated because there are more on-the-fly dynamic decisions to be made, but it is the same stuff as before: encoding/decoding traffic while at the same time routing it to the correct destination.


Of course not, which is why your talk about the number of lanes to the Falcon Ridge controllers is not from Intel.

Misdirection.

The low-level pairs of lanes in Falcon Ridge are exactly the same as they are now. That's why the current cables are 100% compatible with the new system: physically there is no difference. Falcon Ridge controllers (v2) do a different, purely logical/virtual bundling and a larger degree of multiplexing than v1. The overall aggregate bandwidth is exactly the same. The interface bandwidth to/from the native PCI-e data is exactly the same (x4 PCI-e v2). There are only minor changes in overhead and latency on the way to maximizing that.
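
To put numbers on "the overall aggregate bandwidth is exactly the same" (standard TB figures; the single-stream line is the part the channel bonding actually changes):

Code:
# Sketch: why the aggregate is unchanged from v1 to v2 (illustrative figures).
tb1_channels_per_port, tb1_per_channel_gbps = 2, 10.0  # two independent channels
tb2_channels_per_port, tb2_per_channel_gbps = 1, 20.0  # one bonded channel

tb1_aggregate = tb1_channels_per_port * tb1_per_channel_gbps  # 20 Gb/s
tb2_aggregate = tb2_channels_per_port * tb2_per_channel_gbps  # 20 Gb/s

# What changes is the ceiling for a *single* stream (e.g. a 4K display feed):
print(tb1_aggregate, tb2_aggregate)                # 20.0 20.0
print(tb1_per_channel_gbps, tb2_per_channel_gbps)  # 10.0 20.0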

----------

deconstruct60:

Let's make an example: I add two PCIe disks to the computer, each on an individual TB2 port. To limit the discussion to whether it's possible to use bandwidth from two ports, let's say we make sure that each port is on a separate controller.

Adding the additional controller is what gives the additional bandwidth to/from the host computer, not the physical ports. The ports are provisioned from the controller's switch. Ultimately they are bounded by the switch's caps.
 
So I'm going out on a limb and project that Tyan's total potential market would be at least .02% of workstation or desktop operators...
Sure, I'd buy that. :)

You do know there are LGA2011 boards with 6 PCIe 3.0 slots (four 16x, one 8x, and one 1x), available off newegg of all places right now, right?
Well, if I didn't before, I do now. :) I'm mostly just saying that the overwhelming majority of machines I see or hear about on the net have only 4 full-length slots, and they're usually situated such that two double-wide cards eat up all the space.

If you're saying we need to put all our drives externally to reduce heat, I don't buy it. That's what fans are for. With low-power drives, I'm not sure how much the extra heat could really contribute.
That's not what I'm saying, no. Just the obvious fact that the more one adds internally the faster the fans need to go and thus the more noise they make.

I'm clearly just talking theoretically here (aren't we all?), but even the Mac Pro with six TB2 ports will not have as much drive throughput as two or three 4-port SAS cards and dual GPUs--a config that, again, I can buy off Newegg this afternoon. That may be overkill for anything 99% of people are doing, but the same was said about a lot of the technology we use commonly today.
Of the things said long ago, most are still true today. TB/TB2 all the more so, of course.

Again, TB is clearly a great technology, but as far as TB taking the place of PCIe or even becoming a standard on PC, I'm not seeing the need (except maybe on laptops?).
Well I dunno what to say to this. There are USB3 and TB2 ports on the MP6,1, so if you don't see a need for the TB2 ports then I guess you plan to use the USB3 ones? We're stuck with the choice between those two - or maybe Bluetooth 4.0 or Ethernet... so you now have to pick one of those. Or dump Apple, maybe...

So, I'm grooving to this as well. IF Apple had been as smart as they pretend to be, they would have designed the new Mac Pro as a box-shaped module to retrofit the legacy towers (and interface with internal storage/cards to fill up the remaining space in the old Mac Pro cases). It would therefore rack mount (pro studios use racks, right?) and Apple could pat itself on the back for offering an ingenious & environmentally sustainable computing upgrade path.
As is, the cylindrical Mac Pro is just a cup-holder-friendly CPU for Hipsters to edit 4K in their cars with. :D It doesn't play nice with edit suite space or peripherals, unless Apple plans to release a donut-shaped external storage unit for the towers to 'stick into'.
Sounds phallic. :p I love your choice of words tho: "cup-holder friendly CPU for Hipsters" :D As always we could play the shoulda-coulda game endlessly. But we've already been given the basic spec - so now it's just about figuring out how it might be better, where it might be lacking, and how best to use what's been served up to our greatest respective advantages.

Where these seem to shake out in my estimation:
Better - TB2 for enclosure storage units (up to 12 rotational drives or 4 SSDs per TB2 port).
Better - TB2 for PCIe card attached storage (up to 12 rotational drives or 4 SSDs per TB2 port).
Better - TB2 for additional GPU compute nodes (up to 12 additional over the two included).
Better - TB2 for additional CPU compute nodes (up to 12 additional over the included CPU).
Better - USB3 for stuff like audio interfaces, MIDI, single and dual drive enclosures, card readers, etc..
Better - Bluetooth 4.0 with 24Mb/s and all kinds of extra security and protocol options.
Better - Smaller footprint where between 6 and 8 MP6,1 machines occupy the same space as one MP5,1

Lacking - Only two internal drives can be added and only proprietary (Apple?) types.
Lacking - WWDC unit shows only 4 RAM slots. Not a huge deal but still a down-spec.
Lacking - GPU user selection (limited and proprietary) - I guess only important to gamers.

Some of the "Better" storage solution listings might actually be the same speed-wise, but are better because there are 6 to 12 connections available instead of only the 3 found on the MP5,1 and previous models. The above shake-out offers many, many more configuration options over previous MP models.
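
A rough sanity check on those per-port drive counts (the per-drive speeds and the ~2GB/s per-port budget are just my assumptions):

Code:
# Rough check of the "up to 12 rotational drives or 4 SSDs per TB2 port"
# figures above. Per-drive speeds are assumptions, not measurements.
port_budget_mbps = 2000        # ~2 GB/s of PCI-e data per TB2 port, as discussed
hdd_mbps, ssd_mbps = 160, 500  # assumed sustained rate per drive

print(port_budget_mbps // hdd_mbps)  # 12 -> ~12 rotational drives fill a port
print(port_budget_mbps // ssd_mbps)  # 4  -> ~4 SATA-class SSDs fill a port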
 
I disagree..

Call it as you see it, just like you did.

Please explain how I'm a shill.

"A shill, also called a plant or a stooge, is a person who publicly helps a person or organization without disclosing that they have a close relationship with the person or organization.

"Shill" typically refers to someone who purposely gives onlookers the impression that they are an enthusiastic independent customer of a seller (or marketer of ideas) for whom they are secretly working. The person or group who hires the shill is using crowd psychology to encourage other onlookers or audience members to purchase the goods or services (or accept the ideas being marketed). Shills are often employed by professional marketing campaigns. "Plant" and "stooge" more commonly refer to any person who is secretly in league with another person or organization while pretending to be neutral or actually a part of the organization he is planted in, such as a magician's audience, a political party, or an intelligence organization "


So Everyone who happens to like this machine is a dishonest lying scumbag?
How very mature.
 
Adding the additional controller is what gives the additional bandwidth to/from the host computer, not the physical ports. The ports are provisioned from the controller's switch. Ultimately they are bounded by the switch's caps.

Can two Thunderbolt ports carry more data than one?
 
Can two Thunderbolt ports carry more data than one?

For drives and RAID arrays and suchlike, it can, in exactly the same way RAID would otherwise work. For example, maybe there is a 3-SSD RAID0 on TB-Port1 called Speedy01 and another identical one on TB-Port2 called Speedy02. You could combine Speedy01 and Speedy02 into a RAID0 array and call it SuperSpeedy. :D

Alternatively you could just add the 3 individual SSDs connected to each of TB2 ports one and two, all into a 6-drive SSD RAID0.

Either way you could get about 4GB/s out of it, and that's twice the 2GB/s that one port alone delivers. Do the same with another 3 SSDs on another TB2 port and you're looking at 6GB/s... and so on...
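
To put rough numbers on that (the per-SSD speed and the ~2GB/s per-port cap are assumptions for illustration, not measurements):

Code:
# Sketch of the RAID0 math above (drive speed and per-port cap are assumptions).
ssd_mbps = 700        # assumed per-SSD throughput
port_cap_mbps = 2000  # ~2 GB/s usable per TB2 port, per the discussion

def stripe_throughput(ssds_per_port, ports):
    # Each port delivers min(sum of its drives, port cap); RAID0 adds ports up.
    per_port = min(ssds_per_port * ssd_mbps, port_cap_mbps)
    return per_port * ports

print(stripe_throughput(3, 1))  # ~2000 MB/s  (Speedy01 alone)
print(stripe_throughput(3, 2))  # ~4000 MB/s  (SuperSpeedy across two ports)
print(stripe_throughput(3, 3))  # ~6000 MB/s  (three ports)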
 
For drives and RAID arrays and suchlike, it can, in exactly the same way RAID would otherwise work. For example, maybe there is a 3-SSD RAID0 on TB-Port1 called Speedy01 and another identical one on TB-Port2 called Speedy02. You could combine Speedy01 and Speedy02 into a RAID0 array and call it SuperSpeedy. :D

Alternatively you could just add the 3 individual SSDs connected to each of TB2 ports one and two, all into a 6-drive SSD RAID0.

Either way you could get about 4GB/s out of it, and that's twice the 2GB/s that one port alone delivers. Do the same with another 3 SSDs on another TB2 port and you're looking at 6GB/s... and so on...

Thank you! This is what I have tried to hammer home. :)


This is a switch. You are double counting the bandwidth.

A <-----> B < ------> C

If devices A and C are communicating with each other at 20Gb/s bidirectionally, that means one port of B is getting 20Gb/s and another port is getting 20Gb/s bidirectionally. It is effectively the same bandwidth. There is not 40Gb/s of data going back and forth; it is 20Gb/s. That 20 makes multiple stops along the way, but it isn't "bigger" if you just jump in the middle and look both ways.

And no, you cannot just count the ports on a switch and declare the aggregate throughput of the switch solely based on the number of ports. That is deeply flawed and incorrect methodology. What you would be looking for is the aggregate/crossbar/bisection bandwidth from the specs, not merely a count of the ports.

This is a straw man argument. The scenario presented is not one in which A communicates with C, nor one where devices are daisy-chained, but one where 6 devices are connected directly to the Mac Pro, each using its own port exclusively.

Lemme draw it for you:


Code:
.---------.
|         |
| Mac Pro | <-----> A
|         | <-----> B
|         | <-----> C
|         | <-----> D
|         | <-----> E
|         | <-----> F
|         |
'---------'
 
Lemme draw it for you:


Code:
.---------.
|         |
| Mac Pro | <-----> A
|         | <-----> B
|         | <-----> C
|         | <-----> D
|         | <-----> E
|         | <-----> F
|         |
'---------'
Ooh, since we're drawing pictures, let me draw what *I* understand the TB2 situation to be...
Code:
.---------.
|         |
| Mac Pro |<-> x4 lane TB2 controller   \<-----> A
|         |                             /<-----> B
|         |
|         |<-> x4 lane TB2 controller   \<-----> C
|         |                             /<-----> D
|         |
|         |<-> x4 lane TB2 controller   \<-----> E
|         |                             /<-----> F
|         |
'---------'

That is, there are three TB2 controllers, each of which has x4 lanes of PCIe v2 bandwidth, with each controller's two TB2 ports sharing that single x4 link.
Thus, x4 times three = x12 lanes total bandwidth.

Is this not currently true?
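
To put rough numbers on my picture (standard PCIe v2 per-lane rate; my arithmetic):

Code:
# Sketch of the x4-per-controller picture above (standard PCIe v2 rates).
controllers = 3
lanes_per_ctrl = 4
gbps_per_lane = 5.0 * (8 / 10)  # PCIe v2 lane after 8b/10b -> 4 Gb/s

per_ctrl_gbps = lanes_per_ctrl * gbps_per_lane  # 16 Gb/s shared by two ports
total_lanes = controllers * lanes_per_ctrl      # 12 lanes
total_gbps = controllers * per_ctrl_gbps        # 48 Gb/s of PCI-e data, total
print(total_lanes, per_ctrl_gbps, total_gbps)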
 
That is, there are three TB2 controllers, each of which has x4 lanes of PCIe v2 bandwidth, with each controller's two TB2 ports sharing that single x4 link.
Thus, x4 times three = x12 lanes total bandwidth.

Is this not currently true?

Yes, currently.

Thunderbolt v2 is pretty much this:

[Image: TBT-pic.png]


What's the configuration of the Falcon Ridge controllers? Let's see:

http://en.wikipedia.org/wiki/Thunderbolt_(interface)#Controllers

Oh, no information yet.
 
Yes, currently.

Thunderbolt v2 is pretty much this:

Image

What's the configuration of the Falcon Ridge controllers? Let's see:

http://en.wikipedia.org/wiki/Thunderbolt_(interface)#Controllers

Oh, no information yet.

So if it's true that:
"Thunderbolt 2/Falcon Ridge still feed off of the same x4 PCIe 2.0 interface as the previous generation designs. Backwards compatibility is also maintained with existing Thunderbolt devices since the underlying architecture doesn't really change."
...then my picture is also true, and there are only three PCIe v2 x4 lanes of bandwidth passing through six TB2 ports.

Until they double the bandwidth of the controllers, TB2 = TB1 = no change = only x12 lanes of v2 PCIe bandwidth = less than the current Mac Pro's x20 lanes via PCIe slots after using one x16 slot for a GPU.

At best, that's a sidestep for this future generation. I hope for an improvement in bandwidth in the Mac Pro 7,1 or beyond.

That clears this debate up nicely!
 
So if it's true that:
"Thunderbolt 2/Falcon Ridge still feed off of the same x4 PCIe 2.0 interface as the previous generation designs. Backwards compatibility is also maintained with existing Thunderbolt devices since the underlying architecture doesn't really change."

...then my picture is also true, and there are only three PCIe v2 x4 lanes of bandwidth passing through six TB2 ports.

But we don't know how many ports the controllers have. How can x4 lanes be used to feed two ports with 20Gb/s bandwidth?

Until they double the bandwidth of the controllers, TB2 = TB1 = no change = only x12 lanes of v2 PCIe bandwidth = less than the current Mac Pro's x20 lanes via PCIe slots after using one x16 slot for a GPU.

At best, that's a sidestep for this future generation. I hope for an improvement in bandwidth in the Mac Pro 7,1 or beyond.

That clears this debate up nicely!

Changing the signaling rate to what PCIe v3.0 offers would increase the bandwidth, yes.
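
Rough per-lane math on that (published PCIe signaling rates; my arithmetic):

Code:
# Per-lane data rate, PCIe v2 vs v3 (published signaling rates; my arithmetic).
v2_per_lane_gbps = 5.0 * (8 / 10)     # 5 GT/s, 8b/10b    -> 4.0 Gb/s
v3_per_lane_gbps = 8.0 * (128 / 130)  # 8 GT/s, 128b/130b -> ~7.9 Gb/s

x4_v2 = 4 * v2_per_lane_gbps  # 16 Gb/s  (what feeds a TB controller today)
x4_v3 = 4 * v3_per_lane_gbps  # ~31.5 Gb/s if a controller were fed with v3
print(x4_v2, x4_v3)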
 
Please explain how I'm a shill.

"A shill, also called a plant or a stooge, is a person who publicly helps a person or organization without disclosing that they have a close relationship with the person or organization.

"Shill" typically refers to someone who purposely gives onlookers the impression that they are an enthusiastic independent customer of a seller (or marketer of ideas) for whom they are secretly working. The person or group who hires the shill is using crowd psychology to encourage other onlookers or audience members to purchase the goods or services (or accept the ideas being marketed). Shills are often employed by professional marketing campaigns. "Plant" and "stooge" more commonly refer to any person who is secretly in league with another person or organization while pretending to be neutral or actually a part of the organization he is planted in, such as a magician's audience, a political party, or an intelligence organization "


So Everyone who happens to like this machine is a dishonest lying scumbag?
How very mature.

Are they?

He just called it as he saw it..
 
Ooh, since we're drawing pictures, let me draw what *I* understand the TB2 situation to be...
Code:
.---------.
|         |
| Mac Pro |<-> x4 lane TB2 controller   \<-----> A
|         |                             /<-----> B
|         |
|         |<-> x4 lane TB2 controller   \<-----> C
|         |                             /<-----> D
|         |
|         |<-> x4 lane TB2 controller   \<-----> E
|         |                             /<-----> F
|         |
'---------'

That is, there are three TB2 controllers, each of which has x4 lanes of PCIe v2 bandwidth, with each controller's two TB2 ports sharing that single x4 link.
Thus, x4 times three = x12 lanes total bandwidth.

Is this not currently true?

I think this is wrong. As I understand it, in this case, each controller supplies 2 complete and independent TB2 connections. So the illustration from subsonix is right:

Code:
.---------.
|         |
| Mac Pro | <-----> A
|         | <-----> B
|         | <-----> C
|         | <-----> D
|         | <-----> E
|         | <-----> F
|         |
'---------'
 
I think this is wrong. As I understand it, in this case, each controller supplies 2 complete and independent TB2 connections. So the illustration from subsonix is right:

Code:
.---------.
|         |
| Mac Pro | <-----> A
|         | <-----> B
|         | <-----> C
|         | <-----> D
|         | <-----> E
|         | <-----> F
|         |
'---------'
So you're saying you think there are x24 lanes dedicated to Thunderbolt.
How many are left over for dual GPUs?
How many for the internal SSD and everything else?
It's a single CPU machine, so how many lanes have they managed to pull from that single CPU?
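
For the sake of argument, here's the demand side of that question if every port really did get its own x4 link (my tally, using figures from this thread, not a spec):

Code:
# Demand-side tally *if* each of the six TB2 ports had a dedicated x4 link
# (the reading I'm questioning). Figures are from this thread, not a spec.
lanes_needed = {
    "6 TB2 ports @ x4 each": 6 * 4,  # 24
    "dual GPUs @ x16 each": 2 * 16,  # 32
    "internal PCIe SSD": 4,          # assumed x4
}
print(sum(lanes_needed.values()))    # 60 lanes -- which is why I'm asking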
 
How many are left over for dual GPUs?
How many for the internal SSD and everything else?
It's a single CPU machine, so how many lanes have they managed to pull from that single CPU?

The Ivy Bridge-EP chips have 80 PCIe 3.0 lanes per chip.
 
The Ivy Bridge-EP chips have 80 PCIe 3.0 lanes per chip.
Not sure what my problem is, but I can't find any Intel chipset diagrams that show 80 lanes on a single 12-core CPU. Best I can find is this dual-CPU one:

[Image: Romley-Small.gif]


----------

Anyway, if they put a single 12-core CPU in with PCIe v3 80 lanes like you say they are, then I'll be happy with it. :)

I'm marking your words, and holding you responsible for seeing this through. :p
 
Not sure what my problem is, but I can't find any Intel chipset diagrams that show 80 lanes on a single 12-core CPU. Best I can find is this dual-CPU one:

This is what I found:

http://techreport.com/news/24638/idf-keynote-reveals-new-server-processors-rack-architecture

Romley is the current generation, i.e. Sandy Bridge.

Edit: I saw this reported by Slashdot as well; curiously, both miss the 12-core and 30MB cache detail, so who knows.



----------

Anyway, if they put a single 12-core CPU in with PCIe v3 80 lanes like you say they are, then I'll be happy with it. :)

I'm marking your words, and holding you responsible for seeing this through. :p

He he, it's just some pre-release info that I found on the internet, I'm not responsible. :D
 
Not sure what my problem is, but I can't find any Intel chipset diagrams that show 80 lanes on a single 12-core CPU. Best I can find is this dual-CPU one:

Image

----------

Anyway, if they put a single 12-core CPU in with PCIe v3 80 lanes like you say they are, then I'll be happy with it. :)

I'm marking your words, and holding you responsible for seeing this through. :p

So if Apple is only doing a single CPU, is there any reason they couldn't hang another PCIe hub on the QPI connection?

After all, that is how the previous generation worked until they moved the PCIe hub onto the chip. That would give them buckets of lanes to work with.
 
Since the Mac Pro uses the exact same chipset, and the Xeon E5 (v1 and v2) has the same PCIe lane bandwidth, it is not far more. It is the same collective bandwidth since it is the same implementation.

Except that TB is only capable of 2GB/s and there are only 6 of them on the new Mac Pro. That means any LGA2011 board with a few PCIe 3.0 slots technically has more expandability than the new Mac Pro.

It is also a bit of a fraud to be quoting physical slot sizes as opposed to electrical slot sizes when in the middle of a bandwidth discussion. Card pins that are connected to nothing don't have any bandwidth. So crotch grabbing over seating four x16 cards and only hooking up 8 lanes is silly in the context of then turning around and poo-pooing Thunderbolt because it throttles the bandwidth.

We are comparing Thunderbolt to PCIe, since it was expressed to me (by you, among others) that it was an adequate substitution. It's reasonable to say that a board whose 5 slots can each run at 4 times TB2 without being throttled is superior in terms of bandwidth. I never said that, combined, they could exceed 40 lanes. Even without bandwidth sharing between cards (having all 5 slots "throttled" to a mere 4 times TB2 [8GB/s]), a board like that allows for more total bandwidth than all of the new Mac Pro's TB ports combined.

The fact that TB's bandwidth isn't additive between ports makes it even more inferior.

As far as PCIe 3.0 16x goes, my opinion is that it is currently overkill for everything up to and including GPUs--8x (8GB/s) seems adequate for the cards on the market. In the future, that will change. It really is "crotch-grabbing", as you say--the benchmarks of the 7970 (the first PCIe 3.0 16x card) running at PCIe 2.0 16x (8GB/s) prove that.
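
Putting the numbers side by side (my arithmetic, using the figures from this exchange):

Code:
# The comparison I'm making (figures as used in this exchange).
slot_gbytes_per_s = 8  # each of 5 slots "throttled" to PCIe 3.0 x8
num_slots = 5
tb2_gbytes_per_s = 2   # ~2 GB/s usable per TB2 port, as discussed
num_tb2_ports = 6

print(num_slots * slot_gbytes_per_s)     # 40 GB/s across the slots
print(num_tb2_ports * tb2_gbytes_per_s)  # 12 GB/s across all six TB2 ports
# (if pairs of ports share a controller, the TB total is lower still)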


Pragmatically, yes, there is something different. As typically implemented in most workstations, the cards are not hot-plug capable. Typically that is only implemented and supported on big-iron 24/7/365 servers. Therefore, card vendors do not write the hot-plug additions. If the hot-plug support is not commonly in the drivers... it is missing, hence different. Primarily what TB brings to the table is a hot-plug requirement. So yes, it is a driver of new software features.

Good point. So apart from the implicit guarantee of hot-swap in TB (which I grant is significant), there's not going to be much difference in the way drivers are handled. Therefore Tessalator's comment about driver problems between OS versions with PCIe SSD could persist with TB, as that's a separate issue.

Funny how most of those were x16 slots before. Frankly, TB speeds are plenty for most situations.

Plenty unless you want more than one SAS port on your new Mac Pro--then you have to buy another $900 SAS controller instead of a single PCIe card with 2-4 ports.

To me, this is about choice. A board with a bunch of PCIe slots allows me to put my one or two 16x GPUs in any configuration, or run them at 8x PCIe to divert bandwidth to the other lanes, which can host multi-port SAS and other things TB just can't do. Yes, it allows me to be a jackass and make the motherboard share lanes, but only if I start utilizing more than 40 lanes. I contend that this is a better option than having two non-replaceable proprietary video cards and six Thunderbolt ports with no PCIe expansion--that's been my fundamental point this whole time.

Also, I like how you were just saying 8GB/s PCIe would "throttle" a four-port SAS card... I guess we now agree that's not the case?

Then why did you claim that TB replaced PCI-e? It doesn't replace it at all. Thunderbolt's job is to transport PCI-e data; "can coexist" isn't even a question. If there is no PCI-e data, there is no purpose for Thunderbolt. A system with a purely DisplayPort data stream doesn't need Thunderbolt at all.

I think we agree; I'm not sure where either of us got the idea we didn't. There are many on this board (goMac included, IIRC) who stated that Apple had no choice but to remove PCIe due to the addition of Thunderbolt. My points were that 1) TB is not a replacement for PCIe and in fact has many disadvantages, and 2) it wasn't even necessary for Apple to drop PCIe, so it's a false choice in the first place.
 
Well, if I didn't before, I do now. :) I'm mostly just saying that the overwhelming majority of machines I see or hear about on the net have only 4 full-length slots, and they're usually situated such that two double-wide cards eat up all the space.

I'd say that's probably because most people don't require more than internal SATA for their drive storage--motherboard manufacturers aren't going to add more ports if consumers don't want/need them. That may change as PCIe storage becomes more popular. I was just pointing out that if you want four PCIe SSDs, you can have them :)

That's not what I'm saying, no. Just the obvious fact that the more one adds internally the faster the fans need to go and thus the more noise they make.

I'll buy that... but how much added heat / how much increased fan speed are a couple of PCIe SSDs going to cause? Especially sitting next to a modern GPU :) Is this a serious argument in favor of TB SSDs over PCIe? Is it really? Come on :)

Of the things said long ago, most are still true today. TB/TB2 all the more so, of course.

Fair enough. For the record, if there were some decent low-cost TB2 -> SATA III controllers, I probably wouldn't require more than 2 or 3 TB2 ports and a 16x PCIe slot for my GPU. I was merely pointing out the silliness of external replacements for perfectly adequate and relatively inexpensive internal solutions.

Well I dunno what to say to this. There are USB3 and TB2 ports on the MP6,1, so if you don't see a need for the TB2 ports then I guess you plan to use the USB3 ones? We're stuck with the choice between those two - or maybe Bluetooth 4.0 or Ethernet... so you now have to pick one of those. Or dump Apple, maybe...

I basically already have. I don't use my Mac Pro professionally much anymore, I'll be doing the KVM thing for a while with my 5,1 for my video/audio work and web development. In 4-5 years when Apple forces me to upgrade to use OS 11 (or whatever), I'll just have to see what's available.
 
I think this is wrong. As I understand it, in this case, each controller supplies 2 complete and independent TB2 connections. So the illustration from subsonix is right:

No. Subsonix's hand-waving is deeply and fundamentally flawed.


[Image: lightridge_thunderbold_inside_600px.png]

[ from an article on Tom's covering basics on Thunderbolt http://www.tomshardware.com/reviews/thunderbolt-performance-z77a-gd80,3205-4.html ]


A controller's ports are in no way completely independent. The Thunderbolt controller is largely a switch. A switch's ports can't possibly be independent of each other, because the primary purpose of a switch is to connect the ports.
 
Thank you! This is what I have tried to hammer home. :)

Yeah... more smoke with a clearly flawed understanding of how Thunderbolt works.


This is a straw man argument. The scenario presented is not one in which A communicates with C,

Pure and utter misdirection. Hilarious that this gets two upvotes, because it is pure smoke. The straw man is your bogus scenario, which purposely suppresses the inherent nature of the Thunderbolt controller being grounded in a switch. It is a switch, so my example puts it into a switch-based context.
 