The idea of an external GPU translates into wallet carnage for me. The cable is expensive, the dock is expensive, and the GPU will probably be expensive if it has to be specially made. Beastly GPUs would also need an external power supply - more expense.

It's wallet carnage, you're right. The cable won't be cheap, the dock, like anything Apple, will be very expensive, and the GPU will be expensive and perform terribly.

And those beast GPUs will be throttled by TB big time; it's like putting a Radeon 6000 series card into an AGP slot. Pointless.

Though I suppose some monitors could have built-in GPUs, I don't see the practical side of this. I suppose chaining HDDs could be useful.

Chaining HDDs would be about it (which you can already do with USB and FireWire, both of which are fast enough for spinning-platter drives), but I'm not sure why you'd want a ton of SSDs.

Monitors with GPUs? Lolz.

I just see TB as another silly attempt to create a standard. Except for a couple of niche applications, it's not that great. USB 3.0 is fast enough for 95% of devices, and it's backwards compatible with everything, and vice versa.

Apple guys love to tell me that "oh, PCs just have too many wires all over your desk."

But now you can have your GPU, most of your hard drives, and whatever else all outside of your case in a big cluttered mess. Kinda ironic, lol.
 
Chaining HDDs would be about it (which you can already do with USB and FireWire, both of which are fast enough for spinning-platter drives), but I'm not sure why you'd want a ton of SSDs.

How is FireWire fast enough? USB 3.0 is; USB 2.0 is not. And that's just with a single spinning-platter hard drive, so we're not even considering RAID setups.

[image: LaCie Thunderbolt eSATA hub]


I can get around 76 MB/s on the same drive with FW800. That means FW800 is only good for about half of what the drive is capable of. Fast enough? If it were up to people like you, humanity would never progress.
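For anyone who wants to sanity-check that "half the drive" claim, here is a rough Python sketch. The ~150 MB/s sequential figure for the drive and the per-bus numbers other than the 76 MB/s quoted above are assumptions for illustration, not measurements:

# Back-of-the-envelope: how much of a drive's sequential speed each bus can carry.
DRIVE_SEQ_MBPS = 150          # assumed sequential read of a 3.5" spinner, MB/s

bus_real_world = {            # rough sustained throughput after overhead, MB/s
    "USB 2.0": 35,
    "FireWire 800": 76,       # the figure quoted above
    "USB 3.0": 400,
}

for bus, mbps in bus_real_world.items():
    pct = min(mbps / DRIVE_SEQ_MBPS, 1.0) * 100
    print(f"{bus:>13}: {mbps:>5} MB/s -> delivers {pct:.0f}% of the drive")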
 
Well, I apologize; I didn't look up how fast FW800 was, I just went off the top of my head from what I thought I remembered. And yes, USB 3.0 is fast enough, so why not just stick with USB 3.0 instead of adopting an expensive interface that, I'm willing to put money on it, will not become the standard?

No need to get personal. I don't stoop to that level.
 
Because Thunderbolt is faster. That's the point of progress. I didn't think I was getting personal; I was simply stating an opinion on your thoughts, but I apologise.
 
Understood, no hard feelings.

Yes, Thunderbolt is faster than USB 3.0, but my beef with it is that it's not fast enough to do things like external GPUs, which some people seem to think.

Thunderbolt just doesn't seem worth it to me, as fast as it is. It's not an industry standard, and I just see it as being WAY too expensive.

Yes, USB 3.0 is slower, but USB is the standard and is far more compatible and widespread. So I don't get the fuss about Thunderbolt; to me it looks like Apple introducing FireWire all over again.
 
Well, the good news is that Intel is already hard at work making it twice as fast within the next 12 months, if I recall correctly. With chipzilla (Intel) behind it, I don't think it will go the way of FireWire.
 
Though I suppose some monitors could have built-in GPUs, I don't see the practical side of this. I suppose chaining HDDs could be useful.

People keep saying this, yet it makes no sense given the increased latency. If it's an issue of performance, moving the GPU that far from the CPU is going to be a performance hit, and you'll increase the cost of the display on several fronts if you're going for power. Some displays already have a lot of embedded hardware to drive internal LUTs or hardware-based rotation, but this is just a fundamentally silly idea, as it creates more problems than it solves.

Because Thunderbolt is faster. That's the point of progress. I didn't think I was getting personal; I was simply stating an opinion on your thoughts, but I apologise.

Intel's own marketing has stated that in the longer term, it will come down to how much people are willing to pay for increased bandwidth. They've actually avoided the speculation that it's a valid USB replacement. Beyond that, the Thunderbolt Display at this point has USB ports rather than just an array of Thunderbolt connectors. They do make cheaper controllers appropriate for ending a chain, but I don't see any desire to use them in low-end electronics. It may just not be worth the effort.

Well, the good news is that Intel is already hard at work making it twice as fast within the next 12 months, if I recall correctly. With chipzilla (Intel) behind it, I don't think it will go the way of FireWire.

That sounds excellent, but I haven't read it anywhere.
 
It's wallet carnage, you're right. The cable won't be cheap, the dock, like anything Apple, will be very expensive, and the GPU will be expensive and perform terribly.


The cable is expensive because of the way Thunderbolt works: the transceiver is built into the cable. This is why they can upgrade to fibre down the track with the same plug.


If you look at any other 10-gigabit technology it is similar: SFP+ ports on switches, X2 ports on switches, etc.

And yes, it should be fast enough for a mobile-class GPU. You're talking about an upgrade from a mobile HD3000/HD4000 while plugged into a desktop monitor.

If your GPU hits system memory it is already massively slowed down, even over a PCI-e 16x slot. Put enough VRAM on the GPU to stop it talking constantly over Thunderbolt and it will be fine. My point is this: the fact that PCI-e video works means Thunderbolt video will work almost as well. There's a reason video cards have gigs of VRAM - it's because PCI-e 16x is not fast enough either. Load everything into VRAM and run from there.
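A rough sketch of why the "load it into VRAM" strategy blunts the bus difference. The 1.5 GB texture-set size and the nominal bus rates are assumptions for illustration:

# One-time texture upload cost, internal x16 PCIe vs Thunderbolt (nominal rates).
# Assumption: a game front-loads ~1.5 GB of textures into VRAM and then mostly
# runs out of VRAM, so the bus speed matters most at load time.
textures_gb = 1.5
pcie2_x16_gbps = 8.0      # PCIe 2.0 x16: 16 lanes * ~500 MB/s = ~8 GB/s nominal
tb_gbps = 10 / 8          # one 10 Gbit/s TB channel -> 1.25 GB/s

print(textures_gb / pcie2_x16_gbps)  # ~0.19 s over the internal slot
print(textures_gb / tb_gbps)         # ~1.2 s over TB: slower load, same play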

----------

why not just stick with USB 3.0 instead of adopting an expensive interface that, I'm willing to put money on it, will not become the standard?

Because they are solving two different problems. USB is designed to be a cheap, CPU-driven serial bus for inexpensive, relatively slow peripherals. Yes, USB is "5 gigabit" now (actually the spec says 4 Gbit/s after encoding, and that 3.2 Gbit/s is enough to be certified USB 3.0 SuperSpeed), but it is still CPU-driven and still has horrible latency. It is not designed for low-latency system devices.
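A minimal sketch of the encoding arithmetic behind that "5 gigabit is really 4" remark, using the nominal USB 3.0 figures:

# USB 3.0 SuperSpeed line-rate math: 5 Gbit/s raw, minus 8b/10b encoding.
raw_gbit = 5.0                      # signalling rate on the wire
encoded_gbit = raw_gbit * 8 / 10    # 8b/10b: 10 line bits carry 8 data bits
print(encoded_gbit)                 # 4.0 Gbit/s of data, the spec figure above
print(encoded_gbit * 1000 / 8)      # = 500 MB/s ceiling, before protocol overhead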

Thunderbolt is essentially a PCI-e slot on a cable.


The 10 Gbit/s vs 5 Gbit/s comparison doesn't tell the whole story; one does not replace the other.

Thunderbolt is intended to be a universal replacement for things like SCSI, eSATA, Fibre Channel, docking-station connectors, etc. Not USB.

You can run USB over Thunderbolt, not vice versa.

This is why I guarantee all new Macs from Ivy Bridge onwards will include USB 3.0 ports (as well as Thunderbolt). That isn't heralding the death of Thunderbolt; that is just keeping up with new Intel chipset technology. They serve two different purposes.
 
I couldn't have put it better myself.
 
I really don't think people appreciate just how hard getting 10 gigabit down a cable is.

I mean, if you think Thunderbolt is expensive... 5 m Cisco twinax cables for 10-gigabit switch uplinks are AU$200 each, retail (I just ordered some for my new UCS cluster).

10 gigabit over copper cable (hell, even over fibre - yes, you can get 40 gigabit, etc., but that is generally done by bonding multiple ports) is still reasonably high-end in the datacenter.



To have that on your laptop is just awesome.
 
I guess someone should change the title of this thread to USB3 vs Thunderbolt.
 
And yes, it should be fast enough for a mobile-class GPU. You're talking about an upgrade from a mobile HD3000/HD4000 while plugged into a desktop monitor.

If your GPU hits system memory it is already massively slowed down, even over a PCI-e 16x slot. Put enough VRAM on the GPU to stop it talking constantly over Thunderbolt and it will be fine. My point is this: the fact that PCI-e video works means Thunderbolt video will work almost as well. There's a reason video cards have gigs of VRAM - it's because PCI-e 16x is not fast enough either. Load everything into VRAM and run from there.


Yes, it would likely be an improvement over integrated graphics.

Unfortunately, that's not what this thread is about. It is not at all fast enough to replace discrete graphics cards in the Mac Pro in some hypothesized modular future format.

Thunderbolt is great. I love it. Good for an external RAID. Good for monitors.

But it has to double, twice, from its current speed just to equal the raw bandwidth of a 2007-era discrete graphics card's slot. By the time that doubling happens, twice, where will the state of the art in discrete graphics be?

And we haven't even mentioned the latency issue.
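A quick check of the "double, twice" claim above, assuming nominal rates: one 10 Gbit/s Thunderbolt channel versus the ~4 GB/s of a 2007-era PCIe 1.x x16 slot:

# How many doublings before Thunderbolt matches a 2007-era x16 slot?
tb_gbytes = 10 / 8                 # 10 Gbit/s channel -> 1.25 GB/s
pcie1_x16_gbytes = 16 * 0.25       # PCIe 1.x: ~250 MB/s per lane, 16 lanes
doublings = 0
while tb_gbytes < pcie1_x16_gbytes:
    tb_gbytes *= 2
    doublings += 1
print(doublings, tb_gbytes)        # 2 doublings -> 5 GB/s, finally past 4 GB/s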
 
Yes, it would likely be an improvement over integrated graphics.

It would be, except for the fact that, as with anything remotely related to Apple, it'll be very expensive. As you said, mid-2007 video card performance is the best you'd hope to get out of TB - what's that, a high-end GeForce 6000 series? Sure, it's better than integrated, but is GeForce 6000 performance worth the $300+ I bet this setup would cost? I bet it isn't.

The fact is, TB cannot compete with PCIe x16 for GPUs; it doesn't have anywhere near enough bandwidth.


PCIe 3.0 x16 can do about 16 gigabytes per second. As far as I know, Thunderbolt can't come close to anything like that.

Yeah... and while 16 GB/s is a lot of bandwidth, some higher-end video cards are already running into walls at 8 GB/s. TB just isn't good for any decent GPU.
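For reference, the per-lane arithmetic behind that 16 GB/s figure (nominal spec numbers, not sustained throughput):

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
lane_gbytes = 8 * (128 / 130) / 8      # ~0.985 GB/s of data per lane
print(lane_gbytes * 16)                # ~15.75 GB/s for an x16 slot
print(10 / 8)                          # vs Thunderbolt's 1.25 GB/s per channel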
Thunderbolt is great. I love it. Good for an external RAID. Good for monitors.

Yeah it is. I could definitely see doing that with my iMac at some point.
 
Also, it's TB-power only, so you're extremely limited on power draw.

That is not TB-power only. You can see the black power cable coming out of the box. The "PCI-e power only" refers to the card being powered only by the PCI-e slot; the slot gets its power from the case's power supply, which is plugged into the wall.

TB could barely power a mobile GPU, let alone an entry-level desktop GPU.

It is a matter of relative power/"horsepower". An "entry-level" ($60-100) desktop GPU card may be relatively low-power compared to some $350 GPU card; however, in comparison to a mobile GPU embedded inside a Mac, it could easily be relatively more powerful (and consume more power, with less noise, than an equivalent part trapped inside the Mac's case with its volume constraints).

For example, Anandtech also did a review of the 7750 ($100):
http://www.anandtech.com/show/5541/amd-radeon-hd-7750-radeon-hd-7770-ghz-edition-review/2

Single slot, no "extra" power connections, and fan cooling attached to the card. It weighs in at about 75 W.

A TB connection has substantially less bandwidth than a 16x PCI-e connection, but this card probably isn't pushing much more than 9-10 PCI-e lanes' worth of data. For example, the memory transfer rate on the card is 4.5 Gbps (http://www.amd.com/us/products/desktop/graphics/7000/7750/Pages/radeon-7750.aspx#2). Yes, the max memory bandwidth is much higher, but that is under perfect conditions.

The gap right now is relatively small compared to a "Cape Verde"-based 7770M, which is clocked about 200 MHz slower on shaders and capped at 4.0 Gbps memory transfers to fit the "mobile" profile (http://www.amd.com/us/products/notebook/graphics/7000m/7700m/Pages/radeon-7700m-series.aspx#2).
The losses you'd take over TB from the throttled bandwidth may be as high as 20% - perhaps as big as the boost you get by going to an external-over-TB GPU. So for now it would be a wash... at least on Macs with "top-end" mobile GPUs and memory. For an Intel HD4000 box it is probably a net gain.


However, the real value difference would be cross-generational: how the next-gen desktop card compares against the "fixed" mobile GPU in the Mac.
Relative to the AMD 6750M in the first-generation TB MBP (http://www.amd.com/us/products/note...00m-6600m/pages/amd-radeon-6700m-6600m.aspx#3), with its 3.2-3.6 Gbps card memory transfers and the substantially lower clock speeds of the previous architecture, the gap widens significantly. So even with the performance hit across TB, it can still "go faster than what is built in", especially once most of the textures have been cached onto the card.


The value proposition isn't, and never was, that folks will be able to play the latest bleeding-edge game at bleeding-edge frame rates over TB. If the new card is 60% faster than the internal GPU, then even if you take a 30% hit over TB in some contexts, you still come out ahead. That's the value proposition.
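Treating the TB hit as multiplicative, a quick check of that arithmetic (the 60% and 30% figures are the hypothetical ones from the paragraph above):

# The value-proposition arithmetic, with the TB penalty applied multiplicatively.
internal = 1.00               # baseline: the built-in mobile GPU
external = 1.60               # a card 60% faster than the internal GPU
tb_hit = 0.30                 # worst-case bandwidth penalty over TB
effective = external * (1 - tb_hit)
print(effective)              # 1.12 -> still a ~12% net win over the internal GPU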


Note also that adding a bigger power supply to the external box would solve the power problem for even more power-hungry cards. This MSI box passes on that, likely to hit a much lower price point and to match the much more numerous Macs that are limited to an iGPU only.

MSI's box is likely doomed in the Mac market if Apple doesn't put a "slot-power only" PCI-e card in the upcoming revised Mac Pro configs. [As noted in the Anandtech article, the box only worked when running Windows at the time.]


It would also saturate one of your TB channels so you couldn't run much else on it.

That's true. TB can't solve multiple 4x (or larger) PCI-e bandwidth problems at one time.
 
Oh the irony

There is no irony if there is a net decrease in cables.

The iMac is an "all-in-one", so the cables went to zero (OK, essentially one if you get the USB keyboard instead of the wireless one).

If the PC config starts off with 8 cables while the iMac "expands" to 2-4 with two TB cables, then that is still a 50% decrease. Even if the PC has 2-4 and the iMac has 2-4, it is a tie. Again, no irony.

Folks yelping for towers can't really be complaining about cables: separating out the display creates at least one cable back to the display (often two, since the display is usually the more convenient place to hook up the keyboard).

If they follow Apple's "all-in-one" approach, TB devices will themselves subsume multiple abilities (handling multiple things that are single cables in that gratuitous cable tangle on the right-hand side). Relatively few folks are going to have more than two TB cables attached at any one time. Future TB devices are going to be more like Apple's TB Display than single-HDD enclosures.
 
You are aware that most games/software don't thrash the PCIe bus when using the video card, and instead cache everything in video RAM, because even PCIe x16 is far too slow?

That doesn't quite make sense, although I think I know what you are getting at. It's one or the other: if the data is so large that it would thrash the bus even after compression, then it is a problem; if the bulk of the data is cached on the card, then it isn't as big of a problem.


One issue is that lane counts go in powers of 2: 1x, 2x, 4x, 8x, 16x. So even if you just need 9x or 10x worth of bandwidth, you need a 16x connection; 8x is just a bit too small.
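A tiny sketch of that rounding rule (the function name is made up for illustration):

# Lanes come in powers of two, so a 9x-10x need rounds up to x16.
def slot_for(demand_lanes: float) -> int:
    # Smallest standard lane count that covers the demand.
    for lanes in (1, 2, 4, 8, 16):
        if lanes >= demand_lanes:
            return lanes
    raise ValueError("wider than any single x16 slot")

print(slot_for(10))   # -> 16: even a 9x-10x need lands on a 16x connection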

However, the bigger reason games compress and expand on the GPU side is that most PCs are PCI-e oversubscribed. There may be two 16x slots, but only 16 real lanes are provided, so in an SLI/CrossFire setup each card is only getting 8x. Therefore, optimizations are done to package up the data shipped off to the cards so that it fits in 8x, since a large, broad spectrum of hardware caps its users there. If you use one card, you get 2 × 8x worth of data, which is 16x.

Similarly, games can have gimmicks where a video or some interlude consumes time between room or scene traversals that require significant changes to the texture cache loaded onto the card's VRAM. However, that gimmick works almost as well when you chop the bandwidth down below 16x. The latencies before the game gets going again may be longer, but once everything is loaded the two cards are roughly back at parity. If 97+% of all the textures for the entire game are loaded into the card's VRAM, then having a slower bus actually matters less, not more, during that primary phase of play. The only "bad outcome" is the start-up time... which is hardly a major issue, since most people spend far more time playing a game than watching it start.

Games and many apps already have mechanisms to get around "16x is not enough" issues, and these are useful in a "4x is not enough" context too. Perhaps they will not be quite as effective (frame rates drop somewhat), but they are not useless.

It is only the games/apps that continuously flush the VRAM cache at single-digit Gb/s rates that would present a major problem.


P.S. While this would be a "works well enough" solution for many, there are always going to be a few who are "bent out of shape" because they are not getting maximum performance out of the hardware. For those folks, TB is never going to work "well enough", because there is performance they are "leaving on the table" somehow. Similarly, the fact that games/apps are being hamstrung by being written to the lowest common denominator (sub-16x bandwidth, smaller caches, etc.) will have them twitching as well.
 
You are aware that most games/software don't thrash the PCIe bus when using the video card, and instead cache everything in video RAM, because even PCIe x16 is far too slow?

So if PCIe x16 is already far too slow, we should just go slower?

This makes running pro-level stuff on a PCIe x4 connection sound even worse. Not to mention, pro-level apps don't use VRAM caching as heavily; they're more stream-oriented.
 
iCloud brings in crazy profit - numbers that are completely and totally unreal.

The Mac Pro does not fit into the iCloud model very well. When I had my Mac Pro, I had 12 TB of storage in it plus a 600 GB Intel SSD.

My need and/or desire for iCloud was, and still is, zero. My feeling is that Apple sees Mac Pros as a roadblock of sorts to further iCloud expansion.

I moved on from Mac Pros as I really need a decent CPU and fast/large storage. I never needed a 6-core, although my 6-core upgrade thread has helped many 2009 and 2010 users:

https://forums.macrumors.com/threads/1122551/.

I am meeting my needs with a Mac mini with T-Bolt, a DIY PC, and a NAS. To be honest, I suspect a lot of lesser Mac Pro users have done the same. My Mac mini runs VMware Fusion with Windows 7 and plays well with the DIY PC.

It all points to Apple wanting iCloud, Apple TV, actual TVs, and the iPad/iPhone as the money makers for the company.


Are you still in the Mac Pro CPU modding business? Inquiring minds would like to know.
 
Why in the hell would someone want a bunch of tiny little boxes all over the place, when they could just have everything in one simple tower with a monitor?
In 2010 I had an early 2009 Mac mini driving a 30" display. It worked fine as long as you didn't do any graphics-intensive work. Showing pictures fullscreen in iPhoto was something the mini considered graphics-intensive work, since it didn't perform well at all. I also disliked having to disconnect the external disk to prevent problems when I wanted to sleep the machine, so I wanted an additional internal disk. What to choose?

The Mac mini obviously wasn't an option, as it would have the same problems I was trying to solve. The iMac looked fine: it had the computing power as well as the graphics, at a good price, but it had a glossy display and I already had a good display anyway. Then what? Well, the only option left was the Mac Pro, so I bought one even though it was the most expensive option and a bit of overkill. It did solve all my problems, but it created new ones as well. It is a huge and heavy machine taking up more space on my desk than the mini did. It is also noisier than the mini, although you won't hear the machine once you turn on some music.

When I look at the three options again today, I see that the mini has resolved half the problems. It is now a powerful machine that can take two disks (alternatively, I can use a network share), but it still lacks in the GPU department. If I could use an external GPU via something like Thunderbolt, that would solve all my problems: a quiet, still-compact solution that packs quite a punch when it comes to performance. In other words, I don't really need something like a Mac Pro, but I have to use one because Apple does not offer something tailored to my needs. Thunderbolt changes this for me, and I don't think I'm the only one out there, because I've seen others post similar things.

One other advantage of this setup is that it is more flexible in the future. I can upgrade the GPU, and I can upgrade the mini to something like a MacBook Air later without losing much, if any, performance.
 
If the Mac Mini was a Mac Pro, it would be great. But then it wouldn't be a Mac Mini. It would be a Mac Pro.

You can't even get a Mini with both a discrete GPU and a quad-core CPU.
 