In fairness, Thunderbolt isn't that much worse than PCIe; there aren't that many use-cases that really need more than 10Gbps, and even fewer that need more than 20Gbps. I mean, if we ever get some good, reasonably priced Thunderbolt 2 RAID arrays, they'll be better than the common 2x Mini-SAS options we had before; it'll still suck that you can't just keep using those without a huge premium, but the potential is there for Thunderbolt to provide superior options.

I disagree that TB will uproot MiniSAS (outside of New Mac Pro owners, where TB SAS controllers are $900 for a single port). MiniSAS can actually do a theoretical 24Gbit/s - 6Gbps x 4 channels per port. That is significantly better than TB 2. That's not to mention the fact that a 4-port MiniSAS PCIe controller with a total of ~8GBps of throughput (or 80Gbit/s) costs $450 -- 33% more bandwidth than all the TB ports on the nMP combined. To get even close on the nMP, you would need 3 separate controllers, and the possibility of anything but a software RAID isn't there. Even on the old MP it has a huge advantage: after you plug that PCIe card in, you still have 1 slot for your GPU and 2 more x4 PCIe 2.0 slots. If you use your expandability primarily for storage, this is huge.
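
If anyone wants to sanity-check those numbers, here's the rough arithmetic as I see it (a back-of-the-envelope sketch in Python; raw line rates with a simple 8b/10b factor for SAS-2, so the exact figures will vary with controller and protocol overhead):

[CODE]
# Rough comparison of the MiniSAS vs. Thunderbolt 2 figures discussed above.
SAS2_LANE_RAW_GBIT = 6.0        # SAS-2 line rate per lane
ENCODING_8B10B = 0.8            # 8b/10b encoding leaves ~80% of the line rate for payload
LANES_PER_MINISAS_PORT = 4      # one SFF-8087/8088 MiniSAS port bundles 4 lanes
PORTS_ON_CARD = 4               # the 4-port controller mentioned above

TB2_PER_CONTROLLER_GBIT = 20.0  # Thunderbolt 2 bonds two 10 Gbit/s channels
NMP_TB_CONTROLLERS = 3          # the new Mac Pro's 6 ports are fed by 3 controllers

port_raw = SAS2_LANE_RAW_GBIT * LANES_PER_MINISAS_PORT       # 24 Gbit/s raw per port
card_usable = port_raw * PORTS_ON_CARD * ENCODING_8B10B      # ~77 Gbit/s for the whole card
nmp_tb_total = TB2_PER_CONTROLLER_GBIT * NMP_TB_CONTROLLERS  # 60 Gbit/s for all nMP ports

print(f"Single MiniSAS port (raw):        {port_raw:.0f} Gbit/s")
print(f"4-port card (after 8b/10b):       {card_usable:.0f} Gbit/s")
print(f"All nMP TB2 controllers combined: {nmp_tb_total:.0f} Gbit/s")
[/CODE]

That lands in the same ballpark as the ~8GBps / 80Gbit/s figure above, and comfortably ahead of the nMP's combined Thunderbolt bandwidth.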

Even if TB2 accessories become ubiquitous, it is simply not efficient to have multiple controllers on multiple cards performing the function of a single controller.

I guess if you don't care about bandwidth, you can run your 8-drive array over a single TB2 port. At that point the comparison is more akin to using a PCIe eSATA controller with port multipliers -- a much, much cheaper solution with similar speeds.

MiniSAS cables are cheaper and are capable of 30 feet, while copper Thunderbolt cables can only do 10 feet. Sure, eventually you can get optical Thunderbolt (theoretically at least... they've been "about to come out" for 10 months now), but I believe when it was discussed before, rumors were it'll start at $800. At that price point, you might as well compare it to FiberChannel, which can do 30 miles.

Besides that, for the more common use-cases for which Thunderbolt is plenty, it's actually better overall as you can use plug-and-play devices compatible with any recent Mac, and you can daisy chain a load of devices together, which are both capabilities that PCIe lacks. The latter is a fairly important capability too, as the previous Mac Pro was severely limited in how many expansion cards you could fit, while the new one takes the same number of Thunderbolt devices connected directly, and potentially a lot more using daisy chaining.

All that is definitely true. Most devices don't require more than 2GB/s. Also comparing it to the old Mac Pro is kind of depressing -- PCIe 2.0 and only 4 slots is extremely anemic. I think it's probably more reasonable to compare the nMP to other workstations: Sure, it can do 36 devices over TB, but who needs 36 devices? That drives you far outside the realm of "common use-cases" and into silliness territory. Moreover, a fairly inexpensive workstation motherboard will often have 7 or 8 PCIe slots with extremely high bandwidth. I guess it comes down to what you really need: 36 little boxes sharing a measly 6GB/s of bandwidth, or 8+ PCIe slots sharing 40-80GB/s (for dual-processor motherboards).

The only issue really is price for what you're getting; as you've pointed out there are some grossly over-priced Thunderbolt peripherals right now, using less than premium parts, which is just unacceptable. But without serious competition that isn't likely to change, so I really do hope Thunderbolt will finally start to take off, but that may struggle to happen until Intel realises their tight grip is strangling the market.

The fundamental point when talking about TB as a replacement for PCIe is that TB will always be more expensive and prone to failure, simply due to the fact that you're basically externalizing a PCIe card, which requires separate housing and separate PSU. The marginal cost of adding a slot, some space, and some extra power to a Tower/Mainboard is almost nothing, the marginal cost of adding all those things outside the case is going to be much more.

This is why, I think, TB will never be a replacement for PCIe on desktop PCs/Workstations: It is always going to be less efficient. At best, it'll be a supplement to PCIe (once TB2 goes optical, perhaps loop networks will be good for small businesses) and a useful addition for laptops.

I actually just bought a new MacBook Pro with TB2 and am imagining that someday I might even buy a TB2 product and take advantage of this amazing expandability found in a laptop! By contrast, I have no regrets whatsoever purchasing my PC without thunderbolt as it has 7 PCIe slots. Likewise, if it were even possible to get a TB expansion card for my Mac Pro (it isn't, and likely never will be), I wouldn't take it if they were giving it away.
 
This is why, I think, TB will never be a replacement for PCIe on desktop PCs/Workstations: It is always going to be less efficient.

Yes I have thought similarly. Oh how wonderful it would have been for the 6,1 to be a tower with PCIe 3.0!

At best, it'll be a supplement to PCIe (once TB2 goes optical, perhaps loop networks will be good for small businesses) and a useful addition for laptops.

Interesting.
 
TB was designed to be able to carry 100 Gbit. Of course they didn't release the first configuration at that bandwidth; like they always do, they throttle it and then increase steadily to sell more. So whenever Intel chooses to do so, they will release PCI-e equivalent TB controllers and then TB will be a true competitor to PCI-e's bandwidth. But even right now TB offers advantages over PCI-e like daisy chaining, using the same hardware with multiple computers, even laptops, etc.
 
TB was designed to be able to carry 100 Gbit. Of course they didn't release the first configuration at that bandwidth; like they always do, they throttle it and then increase steadily to sell more. So whenever Intel chooses to do so, they will release PCI-e equivalent TB controllers and then TB will be a true competitor to PCI-e's bandwidth.

Interesting info, I had not known that until now.
 
TB was designed to be able to carry 100 Gbit. Of course they didn't release the first configuration at that bandwidth; like they always do, they throttle it and then increase steadily to sell more. So whenever Intel chooses to do so, they will release PCI-e equivalent TB controllers and then TB will be a true competitor to PCI-e's bandwidth.

While 100Gbit/s, if delivered within the next 2 years or so, will certainly help TB's position, we aren't even close to that now. Even the company working on optical TB only promises 10Gbit/s, and it's been "about to come out" for 10 months now. The price? Best-guess rumors set it at $800.

Even if TB hits 100Gbit/s, a single PCIe 3.0 x16 slot is already 128Gbit/s, and PCIe 4.0 will start rolling out offering 256Gbit/s within the next 2 years.

Technology isn't stuck in a vacuum; it's always getting faster. Even at the yet-to-be-released theoretical maximum of 100Gbit/s (which may never actually come to exist), TB still won't be as fast as x16 PCIe 3.0, which is ubiquitous today. Intel has not even released any specific plans for future TB controllers capable of more than 20Gbit/s.
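
To put the raw numbers side by side (a quick sketch; these are per-slot and per-port signalling rates, and real-world throughput is lower once you subtract encoding and protocol overhead):

[CODE]
# Raw per-slot / per-port bandwidth, as I understand the published specs.
PCIE_LANE_RAW_GBIT = {"2.0": 5, "3.0": 8, "4.0 (planned)": 16}  # GT/s per lane
TB_PORT_GBIT = {"Thunderbolt 1": 10, "Thunderbolt 2": 20}

for gen, rate in PCIE_LANE_RAW_GBIT.items():
    print(f"PCIe {gen} x16 slot: {rate * 16} Gbit/s raw")
for name, rate in TB_PORT_GBIT.items():
    print(f"{name} port: {rate} Gbit/s")
[/CODE]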

You're offering up "in the future, we'll have X! Therefore the precursors to X must be great!" Even if TB technology really starts to ramp up and live up to its potential, that will have no impact on nMP buyers whatsoever, as their controllers are still stuck at 20Gbit/s. Meanwhile, the PCIe 3.0 x16 slots in my PCs, which I purchased a few months ago, will be able to outperform any Thunderbolt that will ever be released--ever. I fully admit that devices in the future will fly past my measly 128Gbit/s, but I'd rather have that than 20Gbit/s.

It's like when people compare the nMP (which is not even here yet) to the current Mac Pro which hasn't had a real feature update in 3 years. You're just picking a future product with no ETA (or even hint at an upcoming announcement) like 100Gbit TB and comparing it to present or past products. I could say that my TI-83 is a revolution in technology, all I have to do is forget the past 50 years of computing and compare it to the Apollo mission computer.

When TB gets to 8GBps (64Gbit/s), let me know and we'll compare them again--but we'll do it to whatever technology is already on the market at that time. Think of it as a race, it doesn't matter how much faster you'll be driving in lap 4 when you're still in lap 2 and your opponent is in lap 5, is driving faster than you are now, and is accelerating at a faster pace.

Moreover, even if TB starts to get as fast as PCIe and can support 90% of your devices, there's still the matter of power supply for the device itself and a box to put it in. Why pay more for something that's more prone to failure, occupies more space, and offers no speed advantage? With the move to optical TB, there may not be any bus-powered devices at all! That means power bricks, power bricks everywhere. It makes no sense in a desktop computer.

But even right now TB offers advantages over PCI-e like daisy chaining

You don't need to daisy-chain PCIe if you have 7 or 8 PCIe slots... Unless you see people using more than 8 internal devices in the future... PCIe is much more environmentally friendly and space-efficient than having the same device in an external box.

Obviously you need external connectivity for cameras, scanners, printers, storage etc. Most of that can be accomplished through WiFi (I love my WiFi printer/scanner), MiniSAS (24Gbit/s, soon 48?), USB, or a half-dozen other mediums that are available as inexpensive, reliable PCIe controllers. It's even possible TB will slip into common usage--it just won't replace PCIe because it doesn't make sense for a lot of devices to be external.

using the same hardware with multiple computers, even laptops, etc.

Laptops are an excellent place for TB in the market. I myself just purchased a laptop with TB and may even use it someday. If you have thunderbolt installed, you need that kind of speed, and you have no access to PCIe, clearly Thunderbolt is your only/best option. People with PCIe slots, however, will have more options which are cheaper and equally if not more powerful.
 
TB was designed to be able to carry 100 Gbit.

I think it's closer to: Light Peak was designed to eventually carry 100 Gb/s of bandwidth. Without a transition to optical, that isn't likely for what is commonly shipped as Thunderbolt now. So this will in part depend upon how quickly optical prices come down over time.

The other major short-to-intermediate term problem is that the vast majority of Intel and AMD systems sold have nowhere near this kind of "extra" bandwidth just lying around. So how quickly is the PCIe v3 lane budget going to grow substantially larger over time?

So whenever Intel chooses to do so, they will release PCI-e equivalent TB controllers and then TB will be a true competitor to PCI-e's bandwidth.

No. One, it isn't a competitor. Transport a larger fraction of the bandwidth to another box? Sure, but that is complementary more so than competitive to overall PCIe lane bandwidth. Second, consume all or practically all of the bandwidth? Probably not. Discrete GPUs, other very high-speed network interconnects, and increasingly SSDs are going to still compete for the same fixed set of PCIe lanes as the Thunderbolt controller will. More than likely "not enough to go around" will still be the case several years from now in the vast majority of PC system infrastructure.
 
Even if TB hits 100Gbit/s, a single PCIe 3.0 x16 slot is already 128Gbit/s, and PCIe 4.0 will start rolling out offering 256Gbit/s within the next 2 years.

Except not even the fastest GPUs on the market use the theoretical maximum of PCI-e 2.0 right now, let alone 3.0. So 128 Gbit/s is, for now, just a number nobody needs.


When TB gets to 8GBps (64Gbit/s), let me know and we'll compare them again--but we'll do it to whatever technology is already on the market at that time. Think of it as a race, it doesn't matter how much faster you'll be driving in lap 4 when you're still in lap 2 and your opponent is in lap 5, is driving faster than you are now, and is accelerating at a faster pace.

That's your mistake, to think of it as a race. It's not. That's like comparing the desktops to laptops. Both advance, but desktops are always faster. That does not make the laptops obsolete.

Moreover, even if TB starts to get as fast as PCIe and can support 90% of your devices, there's still the matter of power supply for the device itself and a box to put it in. Why pay more for something that's more prone to failure, occupies more space, and offers no speed advantage? With the move to optical TB, there may not be any bus-powered devices at all! That means power bricks, power bricks everywhere. It makes no sense in a desktop computer.

I don't think it occupies more space. A new Mac Pro, together with a 4 bay RAID enclosure occupies less than half the space as the old Mac Pro. About the prone to failure part, we'll have to wait and see. And you are still stuck at the "speed advantage" and not thinking about the disadvantages of PCI-e.

You don't need to daisy-chain PCIe if you have 7 or 8 PCIe slots...
Mac Pro never had more than 3 PCI-e slots available. So if you need at least 7-8 you already shouldn't have been using a Mac Pro anyway. Compare one x16 and two x4 with the new one. Not some 3rd party workstation which always offered more. So what you are losing right now is a single x16 and gaining one more x4, both at v2.0.

it just won't replace PCIe because it doesn't make sense for a lot of devices to be external.
I also don't think it'll replace PCI-e, nor does it have to replace it. Exactly like laptops did not really replace desktops, since desktops are still around. But it'd be much better if the kinds of peripherals that have been exclusive to desktops until today were available for laptops. And that's what TB will achieve if it gains traction. And this'll eventually help everyone who doesn't really need a desktop for their work anymore but needs some 3rd party PCI-e hardware. Every year the size of that group grows.


People with PCIe slots, however, will have more options which are cheaper and equally if not more powerful.

I think it'll always be more powerful. After all, PCI-e does have one technological advantage: it transports information through silicon, not cabling.
 
MiniSAS can actually do a theoretical 24Gbit/s - 6Gbps x 4 channels per port. That is significantly better than TB 2.
I'm not sure I'd call 4Gbps "significantly better", better sure but that's only a 20% advantage.

That said, are there any Mini-SAS devices out there doing this? The main RAID solutions I've seen using Mini-SAS have been using two ports for 2x 750MB/sec or so, which is better than what we have right now for Thunderbolt 1, but well within Thunderbolt 2's capability to compete with.

I may just not have seen any but it just hasn't really seemed to me like there's been any particular demand for speeds as high as Mini-SAS is capable of. Sure you can get Mini-SAS controllers for PCIe that give you more than two ports, but are there any single storage devices that use all of them, or is it just for hooking up more than one to a single card? If the latter then this is something you can still do on the new Mac Pro thanks to having six Thunderbolt ports.

comparing it to the old Mac Pro is kind of depressing -- PCIe 2.0 and only 4 slots is extremely anemic. I think it's probably more reasonable to compare the nMP to other workstations
Sure there are better workstations out there, but at the same time there were always better workstations than the Mac Pro. Don't get me wrong, I've always liked the Mac Pros, but just like XServes they were never really all that competitive even when they were being regularly updated, despite having some great design features.

So to be completely honest about it, I'd say the Mac Pro never really was a true workstation, but rather it was a professional desktop with workstation features and components. The new Mac Pro is really just an extension of this reality, as I don't think it would have been realistic for Apple to try to compete; even though I would have loved a new tower with tons of PCIe 3 slots, that would have only really kept the Mac Pro workstation-like. I do think Apple's direction with the new Mac Pro is the right one overall, as it shows a clear focus on it being a professional desktop, rather than a full-blown workstation.

So ehm… my point being that comparing to proper workstation computers isn't really right either.

Sure, it can do 36 devices over TB, but who needs 36 devices?
Who really needs the full bandwidth of 8 PCIe slots? I'm not saying people will attach 36 devices, the point is that Thunderbolt has a lot of freedom in how you expand, even if it does cost a fortune right now. Plus you only take up as much space as you need external devices to fill it, so a Mac Pro with only a single 4-bay storage enclosure should take up a lot less space than the current Mac Pro in total.

The marginal cost of adding a slot, some space, and some extra power to a Tower/Mainboard is almost nothing
Not if, as Apple clearly has, you put a value on reduced size and power consumption.

the marginal cost of adding all those things outside the case is going to be much more.
Again, not if you don't need them.

I don't disagree that external devices are worrying from a reliability perspective, and if you do need a lot of them then yes, your setup will be more cluttered if you can't move those external devices somewhere. Power bricks on every device are also a huge pet peeve of mine, even though I haven't had any issues with most of them (except for disaster-prone HDD docks).

But if you can consolidate what you need into only a couple of Thunderbolt 2 devices, particularly ones that can happily live under the desk, in a cupboard or whatever, then things are pretty good overall. Maybe not strictly better, but as I say, it depends a lot on your use-case.

As I mentioned earlier, I haven't really used the expansion slots of my Mac Pro, so even though I do make use of all the drive bays (plus the spare SATA ports) inside, I still have a lot of external cruft hanging off an already space-consuming machine. I could probably have tidied it up some with a better external drive enclosure, but it would still be a lot of space being used. When I replace it though, while I'll be moving all those internal drives into an external enclosure, along with my already external drives, I'll be saving a lot of space overall. I mean, the external enclosure I'm working on, plus a new Mac Pro, will occupy at least a third less volume than my current Mac Pro on its own.
 
I'm not sure I'd call 4Gbps "significantly better", better sure but that's only a 20% advantage.

20% is significant :)

The point mainly was that TB2 -- which there are not currently any drive controllers for, btw (they're all TB1) -- is not faster than MiniSAS and costs a lot more.

That said, are there any Mini-SAS devices out there doing this?

Getting close! This is off a single MiniSAS port - 4 drives.

http://www.barefeats.com/tbolt01.html
(That's TB1 being compared, so ignore it I guess)
I may just not have seen any but it just hasn't really seemed to me like there's been any particular demand for speeds as high as Mini-SAS is capable of.

Fair enough, but eSATA and MiniSAS are a lot cheaper than thunderbolt solutions, in addition to offering more bandwidth.

TB2 devices will probably get cheaper, it will never be faster. nMP users are stuck forever at 2GBps.

So to be completely honest about it, I'd say the Mac Pro never really was a true workstation, but rather it was a professional desktop with workstation features and components.

If that's the case, that does change the perspective somewhat. However, when looking at the broader market and whether or not TB2 will become ubiquitous in the PC desktop world, the future looks rather bleak for TB2 simply because PCIe does it all and then some, plus PCIe has the advantage of already being ubiquitous.

Why does it matter? A lot of the speculation on availability of new TB2 devices (necessary if the nMP is to become a usable machine) is dependent on adoption in the broader market. If it's just the nMP and rMBP holding TB2, that's not a lot of demand I'm sad to say (as a current owner of a rMBP).

TB2 just doesn't make a lot of sense in the desktop world that already uses PCIe.

so a Mac Pro with only a single 4-bay storage enclosure should take up a lot less space than the current Mac Pro in total.

Probably true, and if that were the end of it, that's a great point. However, you're also saying the nMP with TB2 has freedom and flexibility to add 36 devices. What's that volume going to look like?



Not if, as Apple clearly has, you put a value on reduced size and power consumption.

True, and if you're no longer calling it a workstation, just a neat looking prosumer machine, the low-noise, low-power compromises may be appropriate. Hopefully it occupies a large niche, but Apple still sees fit to create a workstation-ish machine again.

As far as PCIe slots taking up more space: we're talking about what? Maybe 8" x 1" x 4" - 32 cubic inches per slot? I'd rather have a bunch of those go unused and deal with the added 3D footprint than do without them entirely. Not only that, but you don't have to get a motherboard with 8 slots, you can get one with 4--even a Mini-ATX form-factor.

Keep in mind the 4 PCIe slots and 4 drive bays in the old Mac Pro were the 2nd smallest of the 3 compartments, and a lot of that space was wasted. Apple's decision to create a spatially inefficient case design was their own :)

 
No. One, it isn't a competitor. Transport a larger fraction of the bandwidth to another box? Sure, but that is complementary more so than competitive to overall PCIe lane bandwidth. Second, consume all or practically all of the bandwidth? Probably not. Discrete GPUs, other very high-speed network interconnects, and increasingly SSDs are going to still compete for the same fixed set of PCIe lanes as the Thunderbolt controller will. More than likely "not enough to go around" will still be the case several years from now in the vast majority of PC system infrastructure.

Ok, I haven't thought of that. But what is the maximum amount of PCI-e bandwidth you can get on a single motherboard right now?

----------

TB2 devices will probably get cheaper, it will never be faster. nMP users are stuck forever at 2GBps.

That is if channel bonding doesn't work.


Why does it matter? A lot of the speculation on availability of new TB2 devices (necessary if the nMP is to become a usable machine) is dependent on adoption in the broader market. If it's just the nMP and rMBP holding TB2, that's not a lot of demand I'm sad to say (as a current owner of a rMBP).

TB2 just doesn't make a lot of sense in the desktop world that already uses PCIe.

Yeah, but any TB2 device works exactly the same way, and at the same speed, on your rMBP as on your nMP and eventually your iMac. And that probably multiplies the number of potential users by 100 or more. So if TB2 gains traction, I'd suppose it'll be because it can be used by more than the very small portion of professionals that use workstation-class machines.
 
Ok, I haven't thought of that. But what is the maximum amount of PCI-e bandwidth you can get on a single motherboard right now?

Unidirectional: 78.8 GB/s, or 630.4 Gbit/s after encoding overhead is subtracted (the raw transfer rate is 640 Gbit/s). This is PCIe 3.0 only - the 8 additional PCIe 2.0 lanes from the chipset aren't included. (Rough math after the spec table below.)

Expansion Slots - Primary Riser (Standard):
  Slot 1: PCIe 3.0 x16 (x16 connector), bus 7, full length / full height, Proc 1
  Slot 2: PCIe 3.0 x8 (x8 connector), bus 10, half length / full height, Proc 1
  Slot 3: PCIe 2.0 x4 (x8 connector), bus 13, half length / full height, Chipset

Expansion Slots - PCIe Riser (Optional, 3-slot):
  Slot 4: PCIe 3.0 x16 (x16 connector), bus 16, full length / full height, Proc 2
  Slot 5: PCIe 3.0 x8 (x8 connector), bus 20, half length / full height, Proc 2
  Slot 6: PCIe 3.0 x8 (x8 connector), bus 23, half length / full height, Proc 2

Notes: Bus numbers are the default assignments (in decimal); inserting cards with PCI bridges may alter the actual bus assignment. The lane count (x16/x8/x4) indicates the number of physical electrical lanes running to the connector; the connector width is the physical slot size. Populating the optional riser requires the second processor to be installed. All slots support up to 150W PCIe cards, but an additional Power Cord Option is required (PN 669777-B21); see the Option section for the offering.

http://h18000.www1.hp.com/products/quickspecs/14212_na/14212_na.html
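
For anyone wondering where the 78.8 figure comes from, this is my reading of the math (the tiny difference from 630.4 is just rounding): two E5-2600 packages with 40 PCIe 3.0 lanes each, at 8 GT/s per lane with 128b/130b encoding.

[CODE]
# Deriving the unidirectional PCIe 3.0 figure quoted above.
LANES = 2 * 40               # two Xeon E5-2600 packages, 40 PCIe 3.0 lanes each
RAW_GBIT_PER_LANE = 8.0      # 8 GT/s line rate per lane
ENCODING_128B130B = 128 / 130

raw_total = LANES * RAW_GBIT_PER_LANE            # 640 Gbit/s raw
usable_total = raw_total * ENCODING_128B130B     # ~630 Gbit/s after encoding
print(f"Raw:    {raw_total:.0f} Gbit/s")
print(f"Usable: {usable_total:.1f} Gbit/s (~{usable_total / 8:.1f} GB/s unidirectional)")
[/CODE]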
 
The point mainly was that TB2
I know, and you're right that Thunderbolt 2 is still slower than Mini-SAS is capable of; my point was just that it's faster than what Mini-SAS is currently used for, that I know of at least. But the fastest devices I've found so far are still just 2x Mini-SAS running at 750MB/sec or so per port; definitely fast, but well within what Thunderbolt 2 can replace, eventually.

Sure if someone does (or has) come out with something that can truly saturate Mini-SAS then Thunderbolt isn't replacing that any time soon, I just can't find anything off hand, which probably means it isn't in huge demand.

I completely agree that it sucks that Thunderbolt isn't faster, as I struggle to see why it isn't when external PCIe seems to work just fine; Thunderbolt does add daisy chaining and hot-plugging but should that really limit it so much? I mean when it was first announced I was expecting it to just be external PCIe but with these extensions to make it more consumer friendly. So yeah, it's disappointing that even with a 2nd version it's not really what the technology promised to be initially.

But all that said, while it's not capable of the same maximum speeds that other technologies can potentially achieve, it does seem to be faster than most devices anyone is actually going to want it for. Well, except people that were excited about the idea of external high-end GPUs of course, but even then, Thunderbolt-attached GPUs intended for number crunching (rather than streaming back 60fps video) are actually pretty viable.

Fair enough, but eSATA and MiniSAS are a lot cheaper than thunderbolt solutions, in addition to offering more bandwidth.
I do wish they'd given us eSATA, or at the very least provided a Thunderbolt to eSATA cable (as Thunderbolt to eSATA shouldn't have to cost $200), but then I suppose it's not surprising really as Apple's shown no interest in eSATA or Mini-SAS in the past on any of their line-up. Plus I suppose neither eSATA nor Mini-SAS is hot-pluggable, so they're not really the friendliest external interfaces in that regard.

If that's the case, that does change the perspective somewhat. However, when looking at the broader market and whether or not TB2 will become ubiquitous in the PC desktop world, the future looks rather bleak for TB2 simply because PCIe does it all and then some, plus PCIe has the advantage of already being ubiquitous.
I'm not sure that's really the case, Thunderbolt is still hot-pluggable, and for many uses it's more than fast enough.

IMO the main issue is the cost and Intel's tight control, but I think there's room even in the proper workstation market to compete on size. Apple has shown just how compact a machine can be, and maybe they've gone to extremes with it, but that doesn't mean other companies couldn't take a traditional workstation, drop the number of PCIe slots to three or four high speed ones, lose the optical bay, use fewer internal 3.5" bays and come out with a pretty sweet, smaller form-factor workstation. Even workstations with loads of drive bays don't offer enough space to professionals with really high-end storage requirements, and for internal workstation storage four 2.5" bays could be plenty for what you can gain in a much reduced size.

This is actually the kind of thing I had in mind for the Mac Pro update; I just hadn't expected them to go quite so far with shrinking it (or come up with something that I want so badly).

Sorry I keep rambling; point being that Thunderbolt doesn't have to be a full replacement to be attractive to PC makers, as they can use it to externalise just some of their components but make big savings on space in the process. Apple may not have done this, but I do think there's an attractive mid-point between traditional workstation and the new Mac Pro that could be a good fit for Thunderbolt in the wider professional market.

Machines that are already small, such as all-in-ones, mini desktops and of course laptops, are the other main application areas, as Apple is already trying to show. And I do believe PC makers would be jumping at it too if Intel were actually doing more to encourage adoption, so I hope Apple is thinking the same thing and pushing them towards it, as there's no point in Thunderbolt being largely Mac exclusive if it means no-one can affordably use it!

Probably true, and if that were the end of it, that's a great point. However, you're also saying the nMP with TB2 has freedom and flexibility to add 36 devices. What's that volume going to look like?
Heh, sure I wouldn't want to do it either, but the point is that you use as much or as little space as you need, so for people with only a small number of devices attached it can be better overall. Most of my current clutter is from always foolishly buying cheap, rather than looking for devices that can function from USB power, or that do more with a single device etc. :)

Power bricks and bulky enclosures do make it a bit less of an appealing prospect unless you've got a really great setup for hiding that stuff away, or maybe if someone finally just releases a single really good power brick that can power multiple devices at once. But with PCIe, if you run out of slots what can you do? Usually means trying to find stuff to hang off USB, and while that's affordable personally I'd rather do it with Thunderbolt if I could.

Also, one thing I haven't seen mentioned is the devices that you can't internalise anyway. For example, 4k displays using Thunderbolt/DisplayPort is a nice way to do things IMO; I know HDMI can handle it too now, but then you can't also incorporate convenient hubs over the same cable.
There are also things like 4k video cameras, which could do very well using Thunderbolt too. Sure that's more hypotheticals while people continue mostly using monitors with DVI, and there's not much point using Thunderbolt on a 4k camera if not enough people have ports to plug it into, but the potential for a wide range of uses is a big part of what makes Thunderbolt that bit more than just a PCIe… alternative (I won't say replacement ;))
 
Ok, I haven't thought of that. But what is the maximum amount of PCI-e bandwidth you can get on a single motherboard right now?

No limits on motherboard size? Just find one to which you can add the most Xeon E5 (v1 or v2) or E7 v2 CPU packages. PCIe v3 lanes are provisioned by the CPU package. Designs which add more packages will add more lanes. For example, 4x E5-4650 would be 32 cores, around 520W, and 160 PCIe lanes. Enough that if you threw 44 lanes at modern embedded Thunderbolt infrastructure (2 x16, 3 x4), there would still be buckets left. Even if that was 80 (5 x16, with each TB controller getting an x16 link), there would still be buckets left over.
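
Spelling that example out with nominal spec numbers (a sketch; the E5-4650 is an 8-core, 130W part with 40 PCIe 3.0 lanes per package):

[CODE]
# Lane budget for a hypothetical 4 x Xeon E5-4650 board.
SOCKETS = 4
CORES_PER_SOCKET = 8
TDP_PER_SOCKET_W = 130
PCIE3_LANES_PER_SOCKET = 40

total_lanes = SOCKETS * PCIE3_LANES_PER_SOCKET   # 160 lanes
nmp_style_budget = 2 * 16 + 3 * 4                # 2 x16 GPUs + 3 x4 TB controllers = 44 lanes
generous_budget = 5 * 16                         # 5 x16 links (TB controllers on x16 too) = 80 lanes

print(f"{SOCKETS * CORES_PER_SOCKET} cores, ~{SOCKETS * TDP_PER_SOCKET_W} W, {total_lanes} PCIe 3.0 lanes")
print(f"Left over after {nmp_style_budget} lanes: {total_lanes - nmp_style_budget}")
print(f"Left over even after {generous_budget} lanes: {total_lanes - generous_budget}")
[/CODE]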

Pragmatically this is typically capped at a reasonable number of sockets and embedded connections, because cross-package PCIe data traverses the same QPI network as off-package memory reads/writes, cache coherency traffic, etc. Too much and at some point you choke QPI, and the board becomes too large and too complex.


For the single-CPU workstation, it is probably far more useful in terms of cost effectiveness to take a look at the IOHub and repurpose some non-PCIe lanes there into PCIe v2, or to start looking at perhaps x4 (or x8) v3 TB controllers.

That is if channel bonding doesn't work.

Thunderbolt doesn't replace PCIe. Each TB controller is coupled to an independent x4 PCIe bundle. PCIe doesn't do "channel bonding" on two x4 PCIe bundles. That will make it pretty difficult for Thunderbolt to transparently do it.

Can folks create virtual devices that split up the work? Sure, stuff like CrossFire and SLI exists for GPU cards. Likewise you can put a RAID layer on top of devices that are each downstream of separate TB controllers. Is there one standard way of doing this? Nope. A "software" TB layer has as many overhead downsides as upsides. One of the major upsides with TB is that there is practically no software layer (outside of boot configuration and interrupt signalling for network change events: plug/unplug).

One reason why TB will scale so high in bandwidth at relatively low prices is because it is not a "do everything for everybody" protocol.
 
Thunderbolt doesn't replace PCIe. Each TB controller is coupled to an independent x4 PCIe bundle. PCIe doesn't do "channel bonding" on two x4 PCIe bundles. That will make it pretty difficult for Thunderbolt to transparently do it.

Can folks create virtual devices that split up the work? Sure, stuff like CrossFire and SLI exists for GPU cards. Likewise you can put a RAID layer on top of devices that are each downstream of separate TB controllers.

I meant only for RAID, which should go up to 60Gb/sec through channel bonding in the new Mac Pro.
 
MiniSAS cables are cheaper and are capable of 30 feet, while copper Thunderbolt cables can only do 10 feet. Sure, eventually you can get optical Thunderbolt (theoretically at least... they've been "about to come out" for 10 months now), but I believe when it was discussed before, rumors were it'll start at $800. At that price point, you might as well compare it to FiberChannel, which can do 30 miles.

30-mile FiberChannel is not particularly about speed or cost control (it typically goes to disaster-recovery failover sites that cost megabucks). Thunderbolt is dirt cheap compared to those kinds of setups.

Optical has gone through several delays (one of which is probably TB v2 related**). While a vendor in Japan was the only one in the world making them, they were charging ridiculous prices. Now that Corning is going to jump in directly, prices are far more reasonable. Here is someone who got a $24/meter quote. So 10 meters (30 ft.) would be around $250. Not particularly close at all to your $800 FUD number.

http://forums.appleinsider.com/t/15...r-all-optical-thunderbolt-cables#post_2411123

** At the point where cable vendors knew that Intel was going to beat the drums about TBv2, it really wouldn't make a lot of sense to drop a TBv1 cable that costs substantive money but couldn't scale to the next level. You're going to need TB v2 devices to get a TBv2 certification. So those controllers existing, even as engineering samples, becomes a gating factor.

At the channel level the cables operate at, it is the same 4x 10Gb/s channels either way. The controller is doing the bonding, not the transceivers in the cable.


Moreover, a fairly inexpensive workstation motherboard will often have 7 or 8 PCIe slots with extremely high bandwidth.

Actually, no they don't. PCIe slots are not bandwidth. The vast majority of PC system designs that have more than 4 physical slots use PCIe switches to provision more slots. The bandwidth is diluted, just like how the PCIe switches embedded into TB controllers dilute it as you cascade more controllers onto a network.

A subset of the newer E5 2600 dual-package motherboards with 6 slots or so don't engage in dilution, but typically the PCI (not PCIe) or x1 PCIe slot (for WiFi or something like that).... those are commonly diluted also.


The marginal cost of adding a slot, some space, and some extra power to a Tower/Mainboard is almost nothing, the marginal cost of adding all those things outside the case is going to be much more.

This is why, I think, TB will never be a replacement for PCIe on desktop PCs/Workstations: It is always going to be less efficient.

The marginal cost of not adding a slot and just embedding a PCIe controller is even lower. The "scary" external power supply on the monitor is a threat too. Follow the cheaper, cheaper, cheaper mantra and end up with something that looks more like an iMac than a classic Mac Pro.
 
20% is significant :)
....
Getting close! This is off a single MiniSAS port - 4 drives.
http://www.barefeats.com/tbolt01.html
(That's TB1 being compared, so ignore it I guess)

But one PCIe SSD in the Mac Pro is pulling down 1200MB/s. A $300-400 card, cables, and 4x 240GB SSDs versus just one 1TB SSD. It is a bit slower, but $/performance is going to work for lots of folks. You keep running away from the question of broadly supported use cases. "Fastest" is important in crotch-grabbing, benchmark smack-talking exercises.




However, when looking at the broader market and whether or not TB2 will become ubiquitous in the PC desktop world, the future looks rather bleak for TB2 simply because PCIe does it all and then some, plus PCIe has the advantage of already being ubiquitous.

Again you conflate PCIe slots with PCIe to promote your FUD. They aren't the same thing.

This pretty much ignores the direction the desktop world is going in. You can be a polar bear on a shrinking iceberg, but it is still a shrinking iceberg.

Thunderbolt is not limited to the desktop world. Most of the PC world has PCIe but does not have PCIe slots.
 
While a vendor in Japan was the only one in the world making them, they were charging ridiculous prices. Now that Corning is going to jump in directly, prices are far more reasonable. Here is someone who got a $24/meter quote. So 10 meters (30 ft.) would be around $250. Not particularly close at all to your $800 FUD number.

Actually, it looks like one just came out - $330

Not $800!


Actually, no they don't. PCIe slots are not bandwidth. The vast majority of PC system designs that have more than 4 physical slots use PCIe switches to provision more slots. The bandwidth is diluted, just like how the PCIe switches embedded into TB controllers dilute it as you cascade more controllers onto a network.

I'll not play the nitpick game with you, all I'll say is "Duh." and move on.

The marginal cost of not adding a slot and just embedding a PCIe controller is even lower.

That must be why TB is so darn popular over PCIe slots! Oh wait...

We're talking about cost to the end user. If you don't think it's more expensive to add another box and another PSU to run a device that could be powered and housed just fine in a PCIe slot, you're either out of your mind or just can't stand to be wrong.
 
But one PCIe SSD in the Mac Pro is pulling down 1200MB/s. A $300-400 card, cables, and 4x 240GB SSDs versus just one 1TB SSD. It is a bit slower, but $/performance is going to work for lots of folks. You keep running away from the question of broadly supported use cases. "Fastest" is important in crotch-grabbing, benchmark smack-talking exercises.

I wasn't comparing the 1TB SSD in the nMP to MiniSAS, I was comparing MiniSAS to TB2 in terms of price and performance. It wins at both.

If you want to compare the 1TB SSD in the nMP to other solutions, you should probably first look into your crystal ball and find out how many hundreds (thousands?) of dollars it'll cost over the standard 256GB on the nMP.

I can buy an $80 SATA III controller for my Mac Pro and throw four 256GB SSDs in the optical bay for $1000. The speed of this array will beat the 1500MBps of the nMP's drive and approach 2GBps. With the money I save, I can do a Time Machine backup on a WD Black platter drive (which I can also fit in my current Mac Pro)--that should sew up any concerns about the reliability of RAID0 vs. a single drive... not that you should be running without backup in either case.
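
The rough math behind that, assuming each SATA III SSD sustains somewhere around 500MB/s sequentially (typical for current drives, though the exact number varies by model):

[CODE]
# Back-of-the-envelope for a 4-drive RAID 0 on a SATA III controller.
SSD_SEQ_MBPS = 500       # assumed sustained sequential throughput per drive
DRIVES = 4
NMP_SSD_MBPS = 1500      # the nMP figure quoted above

raid0_total = SSD_SEQ_MBPS * DRIVES   # striping scales roughly linearly for sequential I/O
print(f"4-drive RAID 0: ~{raid0_total} MB/s vs nMP PCIe SSD: ~{NMP_SSD_MBPS} MB/s")
[/CODE]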

Alternatively, I could just buy a 3,000MBps PCIe SSD (yes I know they're ridiculously expensive), which is something the nMP can't even run over TB2.
 
Alternatively, I could just buy a 3,000MBps PCIe SSD (yes I know they're ridiculously expensive), which is something the nMP can't even run over TB2.

Again, how do you know channel bonding for RAID won't work?
Do you have some prototype at home?
 
Again, how do you know channel bonding for RAID won't work?
Do you have some prototype at home?

I actually do think you'll be able to soft-raid multiple TB->[SATA ->]SSD controllers together to get more than 2GBps.

It will almost certainly not be able to run a single PCIe SSD though, as that level of aggregation isn't possible with that technology.

You also probably won't be able to do this with GPUs or other PCIe cards--really just storage, and really just soft-RAID.
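
To make the soft-RAID idea concrete, here's the sort of arithmetic I have in mind (all assumed numbers; nobody has benchmarked this setup here):

[CODE]
# Striping across enclosures on separate TB2 controllers, so each stripe
# only has to fit within a single controller's ~20 Gbit/s (~2 GB/s) ceiling.
TB2_CONTROLLER_MBPS = 2000   # optimistic usable ceiling per controller
CONTROLLERS_USED = 3         # the nMP exposes three independent TB2 controllers
ENCLOSURE_MBPS = 1500        # assumed throughput of a fast TB2 RAID enclosure

per_controller = min(TB2_CONTROLLER_MBPS, ENCLOSURE_MBPS)
aggregate = per_controller * CONTROLLERS_USED
print(f"Theoretical soft-RAID 0 aggregate: ~{aggregate} MB/s across {CONTROLLERS_USED} controllers")
[/CODE]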

Personally, for SSDs I don't see a problem with RAID0, as failure is far more dependent on the amount of space than on the number of drives, but others see things differently.
 
Actually, it looks like one just came out - $330

$250 * 1.3 = 325. Shocker!!! B&H doesn't sell the cable with Apple's canonical 30% mark-up piled on top. Probably one reason they don't openly list the price on their site.

That must be why TB is so darn popular over PCIe slots! Oh wait...

PCIe slots or external PCIe ? There already is a standard for external PCIe. What is its adoption rate relative to Thunderbolt deployments?


We're talking about cost to the end user.

Actually, no. Your real core criterion is to swing this back into "erector set with classic box with slots".

The "one laptop per child" initiate to find a $100 PC.... box with slots? Nope. Netbooks ....PCIe standard format slots? Nope. Smartphones? Nope. Tablets? Nope.

If you don't think it's more expensive to add another box and another PSU to run a device that could be powered and housed just fine in a PCIe slot, you're either out of your mind or just can't stand to be wrong.

This is simply a strawman presumption that you make up to shoot down.

The overall industry trend line to more integrated personal computers is clear where system cost is the single primary factor. Juggling multiple factors sometimes will lead to PCIe standard slots and sometimes it won't.
 