Status
Not open for further replies.
:) Yes, they do support it. I didn't have time for more, but please check https://support.apple.com/en-us/HT201189
Apple's words:
Why doesn't my external FireWire, USB, or Thunderbolt disk appear in the Microsoft Windows Startup Disk control panel?
External FireWire, USB, and Thunderbolt disks are not recognized by the Startup Disk control panel in Microsoft Windows. To start up from a bootable external drive, press and hold the Option (Alt) key while the computer starts up, then select the external disk.

Apple doesn't support you running your normal OS X install externally. You can do it of course, just like you can plug in a TB2 GPU and startup with it. But they tell you not to for the exact same reason they don't support TB2 GPUs.

That's probably why it's not a big deal to them that Startup Disk under Windows doesn't see external drives.

Yes, I agree with you that Apple is not going to support officially eGPUs now for the reasons you have mentioned.

Then I don't know what we're arguing about.
 
Then I don't know what we're arguing about.

:) My point was that not every single component of a system is the same; different components require different handling. Some, like the CPU, RAM, GPU, and chipset, are crucial for normal operation, and you cannot hot-swap them, as you of course already know. So it's a million times easier to just shut down and disconnect (and, if it makes things easier, a brand-name special edition or otherwise officially supported eGPU) than to find a way to eject it like an external drive; it is not a printer or an iPhone.
I know that Apple and current tech achievements have made a lot of things easier for the masses, but I don't think some components can be ejected so easily; it's far, far more complicated. (removed, it was my mistake: and this is the reason you have to reboot to change the GPU in a MBP.)
Perhaps I wasn't clear enough ...

Always imho, of course... and friendly :).
 
Last edited:
it's a million times easier to just shut down and disconnect [...] than to find a way to eject it like an external drive; it is not a printer or an iPhone.
[...] I don't think some components can be ejected so easily; it's far, far more complicated, and this is the reason you have to reboot to change the GPU in a MBP.
I wasn't aware you would still have to reboot to change the active GPU in a MBP. AFAIK this problem was solved several MBP generations ago.

You can even grab a common business notebook from its docking station without first telling Windows to prepare for undocking, and the graphics will simply switch back to whatever is present (problems relate more to network connections and caches). As long as an eGPU is not the _only_ GPU in the system, I don't see big problems with ejecting/disconnecting it.
 
I wasn't aware you would still have to reboot to change the active GPU in a MBP. AFAIK this problem was solved several MBP generations ago.

You can even grab a common business notebook from its docking station without first telling Windows to prepare for undocking, and the graphics will simply switch back to whatever is present (problems relate more to network connections and caches). As long as an eGPU is not the _only_ GPU in the system, I don't see big problems with ejecting/disconnecting it.

Very interesting. I must have missed something.
:oops: Sorry for unintentionally spreading misinformation.

Thank you very much.:)
 
You can even grab a common business notebook from its docking station without first telling Windows to prepare for undocking, and the graphics will simply switch back to whatever is present (problems relate more to network connections and caches). As long as an eGPU is not the _only_ GPU in the system, I don't see big problems with ejecting/disconnecting it.

And this is changing my point of view.

Do the docking stations you mentioned have a separate GPU installed, or do they just pass through the output of the laptop's internal one?
 
And this is changing my point of view.

Do the docking stations you mentioned have a separate GPU installed, or do they just pass through the output of the laptop's internal one?
Sorry for being imprecise! I was actually thinking of what is called a "port replicator" and mixed that up a bit with older docking stations, which sported individual internal expansion slots.

Still, I don't see a general problem, as USB video cards are fairly common these days (and there have been external video cards with other interfaces before, e.g. PCMCIA/ExpressCard).

The problem with all these "external" GPU solutions was mainly the limited bandwidth, so they were rarely useful for more than 2D work. Thunderbolt would solve the bandwidth issue to a large degree, and the other problems have basically been solved for quite some time.
 
Do the docking stations you mentioned have a separate GPU installed, or do they just pass through the output of the laptop's internal one?
Still, I don't see a general problem, as USB video cards are fairly common these days (and there have been external video cards with other interfaces before, e.g. PCMCIA/ExpressCard).
All of those USB solutions are using the computer's GPU and then sending the output through the USB port or an existing video port. They're not eGPUs - they're essentially port replicators with some display management software.
 
AFAIK this problem has been solved several MBP generations ago.

This is correct. As of Late 2008, Apple supported multiple GPUs and added the concept of GPUs going offline and online to OS X. The MBP uses this mechanism to switch between the integrated and discrete GPU. *IF* Apple did an eGPU, they'd leverage the same software support. Apple would undoubtedly have to do a bunch of work to make it work flawlessly, because it really does have to support hot-pluggability if they're going to put it on laptops. However, it does not appear that they'd have to create any new APIs that would be visible to developers (although they might). But none of this is going to look anything like the GPU-in-an-Akitio-box stuff people are doing.
 
DGLee has set up an SFF system with a Nano and a 2699 v3 Xeon that flies at almost 10 TFlops. Insane, all that power inside a little box. Imagine such an nMP!!! And with "only" a 600W PSU. With 2 Nanos we should be able to get almost 15 TFlops, although we won't be seeing the 2699 v4 in there for sure. But even if it tops out at 14 cores, with the Nano underclocked to fit 145W or so, 12-13 TFlops should be possible.
 
DGLee has set up an SFF system with a Nano and a 2699 v3 Xeon that flies at almost 10 TFlops. Insane, all that power inside a little box. Imagine such an nMP!!! And with "only" a 600W PSU. With 2 Nanos we should be able to get almost 15 TFlops, although we won't be seeing the 2699 v4 in there for sure. But even if it tops out at 14 cores, with the Nano underclocked to fit 145W or so, 12-13 TFlops should be possible.
Nope. The basic Nano with a 175W TDP has 8.2 TFLOPs of compute power, so two of them: 16.4. At 850 MHz you get 7 TFLOPs of compute power from the GPU, so two: 14 TFLOPs. Possibly 7 TFLOPs at 125W of TDP. 14 TFLOPs from 250W of TDP is a pretty astonishing value. The most interesting thing about the Nano GPU is the power management.
But that is only theory. I'm really curious to see what the next Mac Pro will be capable of, especially for 4K editing and gaming.
 
That was my math. I dialed it down a bit just in case you couldn't actually get 7 TFlops with the power-limited Nano. Also, the Xeon should add close to another 1 TFlop, depending on the top model used, maybe 14 cores.
So, 15 TFlops of total compute power achievable, possibly.
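The back-of-the-envelope numbers in the last few posts can be checked directly. This is a rough sketch under my own assumptions (not stated in the thread): the Nano uses a Fiji GPU with 4096 stream processors at a 1000 MHz boost clock, and peak FP32 throughput is 2 FLOPs per shader per clock.

```python
# Back-of-the-envelope check of the TFLOPS figures discussed above.
# Assumptions (mine, not from the thread): Fiji has 4096 stream processors,
# each doing 2 FLOPs per clock (fused multiply-add).

def gpu_tflops(stream_processors, clock_mhz):
    """Peak FP32 TFLOPS = 2 FLOPs/shader/clock * shaders * clock."""
    return 2 * stream_processors * clock_mhz * 1e6 / 1e12

nano_stock = gpu_tflops(4096, 1000)  # Nano's rated 1000 MHz boost clock
nano_slow = gpu_tflops(4096, 850)    # the underclocked case discussed above

print(f"Nano @ 1000 MHz: {nano_stock:.1f} TFLOPS (two: {2 * nano_stock:.1f})")
print(f"Nano @  850 MHz: {nano_slow:.1f} TFLOPS (two: {2 * nano_slow:.1f})")

# CPU side: a hypothetical 14-core Broadwell-EP at 2.6 GHz with two AVX2 FMA
# units does 32 FP32 FLOPs per core per clock, i.e. roughly 1.2 TFLOPS peak.
# Real AVX clocks run lower, which is why "close to 1 TFlop" is a fair estimate.
xeon = 14 * 2.6e9 * 32 / 1e12
print(f"Xeon estimate: {xeon:.2f} TFLOPS peak")
```

This reproduces the thread's figures: ~8.2 TFLOPS per Nano at stock, ~7 TFLOPS at 850 MHz, so two power-limited Nanos plus the Xeon land in the 14-15 TFLOPS range.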
 
Nope. The basic Nano with a 175W TDP has 8.2 TFLOPs of compute power, so two of them: 16.4. At 850 MHz you get 7 TFLOPs of compute power from the GPU, so two: 14 TFLOPs. Possibly 7 TFLOPs at 125W of TDP. 14 TFLOPs from 250W of TDP is a pretty astonishing value. The most interesting thing about the Nano GPU is the power management.
But that is only theory. I'm really curious to see what the next Mac Pro will be capable of, especially for 4K editing and gaming.

According to www.computerbase.de, some Broadwell-EP processors with DDR4-2400 support will come out Q1/16, but others are predicting a Q4/15 release.
http://www.computerbase.de/2015-08/broadwell-ep-xeon-e5-2600-v4-mit-ddr4-2400-zum-jahreswechsel/
http://news.softpedia.com/news/inte...-e5-v4-broadwell-ep-in-late-2015-489715.shtml

And then there are stories that Radeon R9 Nanos are in short supply... that could mean a problem in AMD's product line, or... something is eating up the whole production.
http://www.fool.com/investing/gener...ced-micro-devices-incs-radeon-r9-nano-be.aspx
http://www.techpowerup.com/215776/amd-radeon-r9-nano-review-by-tpu-not.html

It's possible that Apple will introduce the nMP v2 at the end of October (Q1/16 at the latest), with both goodies, and new external displays.

Of course, speculation, speculation... most likely, both could be signs of something else.

UPDATE: changed some text after publishing
 
Last edited:
However, it does not appear that they'd have to create any new APIs that would be visible to developers (although they might). But none of this is going to look anything like the GPU-in-an-Akitio-box stuff people are doing.

Mmmmm this isn't how this works.

There is no "API" because the GPU simply isn't allowed to go offline until the app is done with it. It's soldered into the machine, so it's not going anywhere. There's no developer-visible API besides the hint that you want your application pinned to the dGPU. It's all or nothing, not dynamic. The app gets to decide when the GPU can power down. That's a very, very different situation from the GPU being powered down without the app having a choice.

That's still a totally different can of worms from letting the GPU be unplugged. Like I said, in the current setup it makes no sense. If your app is using 2 GB of VRAM and it gets moved to a GPU with 512 MB of VRAM, what is it supposed to do? Or if it's using OpenGL 4 and it gets moved to an OpenGL 3.x GPU?

Those sorts of things require driver and OEM support, which Thunderbolt 3 has. Thunderbolt 2 could as well, but there is absolutely no existing API that deals with this sort of thing. And really it's something you want to tackle at the driver level, not the app level.

On Windows this isn't guaranteed to work either, but DirectX is a little more tolerant of driver interruptions, mostly because GPU drivers were so unstable that workarounds had to be built for them crashing. Still, if you fire up an app that uses the GPU, it's not going to respond well if you disconnect the GPU.

There's a lot of stuff that happens to work, but a lot of stuff that doesn't. Like I said, if you've got a GPU with 3 or 4 GB of VRAM holding data critical to an app, and the GPU disconnects, how does that data get restored?

Also, dynamic switching wasn't supported on the MBP until 2010. The 2008 and 2009 models had to shut down all GUI processes to do any switch. On the 2010 models, an application could launch and ask to be pinned to the faster GPU without all the applications having to be killed. It's a step up, but still not what's needed to make Thunderbolt hot-plugging work.

It's odd to me that this is such a discussion point when AMD and Intel have both already said eGPU hot-plugging does not work properly under the current implementation. At the very least, it doesn't work because the people who make Thunderbolt and GPUs have already said it doesn't. I'm not just making this up; the people who wrote the specs agree. They've already said they had to make new driver changes for it to work properly.
 
Last edited:
Mmmmm this isn't how this works.

Really? My understanding, and I could be wrong, is that under OSX your application, assuming it is performing operations that actually depend on GPU capabilities, can register for notifications if the underlying hardware renderer changes. You can then re-query for information about the newly active GPU.
 
Really? My understanding, and I could be wrong, is that under OSX your application, assuming it is performing operations that actually depend on GPU capabilities, can register for notifications if the underlying hardware renderer changes. You can then re-query for information about the newly active GPU.

In OpenGL there are some sorts of notifications, but they're independent of the stuff Apple did for the MBP. And in practice, most apps don't implement them. I'm looking up the spec on it; I was tinkering with it the other day... Realistically, it's a really hard thing to cope with: you've got to figure out what work was lost and try to restore your state. There are some cases where apps have to deal with GPU changes, like if an app changes screens and has to move GPUs. But realistically, a lot of apps, especially full-screen apps or compute apps, don't care about that.

The stuff Apple did for the Macbook Pro is mostly here:
https://developer.apple.com/library/mac/qa/qa1734/_index.html

You'll notice Apple makes this guarantee:
"MacBook Pro automatically switches to the higher-end discrete GPU for performance concerns and won't switch back until the application quits."

That's the issue. The API Apple designed is based around the GPU not going anywhere until your application quits.

It's also not a required thing to opt into. Apps like games don't because the default behavior is to force the app onto the dGPU, which is exactly what games want.
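For reference, the opt-in that QA1734 describes is just an Info.plist declaration; a sketch of what an app that tolerates renderer changes would ship (the key name is from Apple's documentation, not from this thread):

```xml
<!-- Info.plist fragment: opt in to automatic graphics switching,
     so launching the app does not force the discrete GPU online. -->
<key>NSSupportsAutomaticGraphicsSwitching</key>
<true/>
```

Apps that omit the key get the default behavior described above: they pin the machine to the dGPU for as long as they run.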
 
This document here has some info on renderer changes, but it's not quite on topic. It's mostly around adding a Crossfire/SLI setup to an OS X app:
https://developer.apple.com/library/mac/technotes/tn2229/_index.html

But again, this is totally optional. If you don't need to render across multiple GPUs at once (and as we all know, almost no application on OS X does), you don't need to follow these practices. These specs are also built around display disconnects, not GPU disconnects, and they make certain assumptions around that.

Edit: Actually, on reading this document again, it's worse. This describes how a developer can maintain a context on the same GPU through different changes. The doc actually encourages developers to assume a GPU isn't going anywhere.
 
Last edited:
In OpenGL there are some sorts of notifications, but they're independent of the stuff Apple did for the MBP. And in practice, most apps don't implement them. I'm looking up the spec on it; I was tinkering with it the other day... Realistically, it's a really hard thing to cope with: you've got to figure out what work was lost and try to restore your state. There are some cases where apps have to deal with GPU changes, like if an app changes screens and has to move GPUs. But realistically, a lot of apps, especially full-screen apps or compute apps, don't care about that.

The stuff Apple did for the Macbook Pro is mostly here:
https://developer.apple.com/library/mac/qa/qa1734/_index.html

You'll notice Apple makes this guarantee:
"MacBook Pro automatically switches to the higher-end discrete GPU for performance concerns and won't switch back until the application quits."

That's the issue. The API Apple designed is based around the GPU not going anywhere until your application quits.

Yeah, I'm somewhat familiar with the documentation you're referring to :) You're not really making the case, but I'm going to assume you're debating this in good faith. First of all, just to make sure we're working with the same assumptions, I'm talking about a hypothetical Apple eGPU where they've actually built and tested drivers coupled with specific Apple eGPU products. I really have no idea or opinion on the viability of hobbyist eGPU chassis with user-upgradeable video cards in them; that seems like a long shot. I'm mostly imagining a scenario where Apple ships a future monitor with an eGPU in it, and the reason I think Apple can do this more easily than the other vendors you mention is that they have enough end-to-end control, whereas in the Windows ecosystem it would be much more complicated.

For that scenario, my argument is that the existing support for offline GPUs in Quartz Display Services is the likely model for how this would work. Looking at this support, it's pretty clear that existing applications that don't access OpenGL directly shouldn't have any issues if they can already handle display changes, and that OpenGL-based apps coded to Apple's multi-GPU guidelines should be fine as well.

As to how hard this is for the application developer: as I said, most applications don't have to be aware at all, and for the ones that do, if they're written well, then 99% of the time all GPU operations and state are considered transient, and when they get a notification that the GPU has changed, they will be able to simply replay their operations.
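The "replay their operations" idea can be sketched abstractly. This toy model uses made-up names (it is not any real OS X API): the app records its GPU work as a command log, so a renderer-change notification just means re-querying the new device and replaying the log onto it.

```python
# Toy model of the notify-and-replay pattern described above.
# All classes and names are hypothetical; the point is only that GPU state
# is treated as transient and rebuilt after a renderer change.

class FakeGPU:
    def __init__(self, name, vram_mb):
        self.name = name
        self.vram_mb = vram_mb
        self.uploaded = {}          # resources currently resident in "VRAM"

class App:
    def __init__(self, gpu):
        self.gpu = gpu
        self.command_log = []       # every operation is recorded for replay

    def upload_texture(self, name, size_mb):
        self.command_log.append(("upload", name, size_mb))
        self.gpu.uploaded[name] = size_mb

    def on_renderer_change(self, new_gpu):
        """Notification handler: adopt the new device, then replay the log."""
        self.gpu = new_gpu
        for op, name, size_mb in self.command_log:
            if op == "upload":
                new_gpu.uploaded[name] = size_mb

igpu = FakeGPU("integrated", 1536)
dgpu = FakeGPU("discrete", 4096)

app = App(dgpu)
app.upload_texture("scene", 512)
app.on_renderer_change(igpu)        # the dGPU went away: rebuild on the iGPU
print(app.gpu.name, sorted(app.gpu.uploaded))
```

Of course, this glosses over exactly the hard parts raised earlier in the thread: work in flight when the device vanishes, and capability mismatches between the old and new GPU.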
 
I'm talking about a hypothetical Apple eGPU where they've actually built and tested drivers coupled with specific Apple eGPU products.

Right... that's exactly what I'm saying, and exactly what AMD and Intel have done on the Windows side. You can't do this without driver changes. That work wasn't done at all for the MBPs: nothing on the MBP was done around an application changing hardware mid-execution. All the work was about being allowed to keep using the same GPU through changes. The document I linked to was specifically about staying pinned to a GPU through changes. Apple's guidance is the exact opposite of what you'd want to do to deal with hardware changes.

I'm not sure how, without driver changes, this would work at all with the current implementation.

"In these configurations you may wish to take advantage of the hardware that is not connected to a display, or to be able to start rendering on this hardware should a display be connected at a future date without having to reconfigure and reupload all of your OpenGL content."

It's specifically about supporting offline renderers so you don't have to follow the online GPU.

The display change notifications can be useful, unless what you're doing is GPU compute; then they're totally useless, because you're not necessarily rendering to a display. The same is true if you're rendering to an offline GPU that isn't connected to any display.
 
Last edited:
I'm not sure how, without driver changes, this would work at all with the current implementation.[...]
I think no one expects the current implementation to already support a hypothetical eGPU to the fullest. But the mechanisms to switch GPUs are known and tested, so Apple engineers could build on that foundation to implement eGPU support without having to reinvent the wheel.

The display change notifications can be useful, unless what you're doing is GPU compute, then they're totally useless because you're not necessarily rendering to a display. Same is true if you're rendering to an offline GPU which isn't connected to any display.
Couldn't a hypothetical eGPU mechanism use those display change notifications and simulate a connected display on a compute GPU?

Besides, any system (and user, for that matter) needs to be able to cope with unexpected events. Whether a non-critical part of the system (like a GPU) is disconnected or suddenly fails makes no difference to the result: a potential application crash and probable loss of data.
Note: I consider a GPU a 'non-critical' part because without it a computer can still run fine (unlike, e.g., a missing/dead CPU), even though its usability is limited in that state.

Eventually, gamers probably wouldn't care too much if their cat tripped over the wire and disconnected the eGPU, while professional users making heavy use of GPU compute functionality would probably take precautions against accidental disconnects anyway.
 
Except now, when those kids have machines that won't boot anymore, where are they going to go? The Genius Bar at their local Apple Store, where Apple is going to have to pay people to clean up the resulting mess, even if it was the user's own fault.

The same place they go currently when they have data loss from pulling a hard drive or USB stick without unmounting it? (No sir, you've hosed the partition map. No sir, we don't do data recovery. I'm sorry sir, did you have it backed up with Time Machine? Oh, this WAS your Time Machine drive? I'm afraid there's nothing we can do, sir.)

How is an eGPU any different?
 
The same place they go currently when they have data loss from pulling a hard drive or USB stick without unmounting it? (No sir, you've hosed the partition map. No sir, we don't do data recovery. I'm sorry sir, did you have it backed up with Time Machine? Oh, this WAS your Time Machine drive? I'm afraid there's nothing we can do, sir.)

How is an eGPU any different?

Because an eGPU will bring down the whole machine and possibly leave the boot drive corrupted. A lost Time Machine drive is sad, but it doesn't mean an OS reinstall. (And how many people actually know how to do an OS reinstall?)

A Time Machine drive failure just means sending someone away. A boot drive failure means doing a lot more support work to reinstall the OS. It's no coincidence Apple has been doing a lot of work to prevent boot-drive corruption, including rootless in 10.11.

Couldn't a hypothetical eGPU mechanism use those display change notifications and simulate a connected display on a compute GPU?

The applications have already been written; it's too late to change the spec. Offline GPUs by definition have no display. Changing that would muck up so much stuff, and already-written applications wouldn't respect the change.

Again, the easiest way is to deal with this in the drivers like they did for TB3. It doesn't require anyone to change but the drivers. I'm not even sure why people are looking for a different workaround when the drivers can take care of it. If the drivers talk, they can work around interruptions, back up state outside of VRAM, and restore state onto a different GPU.

If you don't involve the drivers, the other problem is that GPU disconnects can happen in the middle of a GPU operation. Under a normal shutdown, if you've sent work to a GPU, you're at least allowed to finish it before you move GPUs. With a disconnect, that's forced. The work is completely lost and you don't get a chance to cleanly finish up. So even a display change notification isn't really adequate to deal with this.

But like I said, if you deal with this in the drivers like they are in Windows, this doesn't end up being an issue. I'm sure Apple could do the same thing, and then no one has to change or rewrite anything in their applications.
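The driver-level approach described here can also be sketched as a toy model, with entirely made-up names (this is not how any real driver is structured): the driver mirrors VRAM allocations in system memory, so a surprise disconnect can be survived and state restored onto another GPU with no application involvement.

```python
# Toy model of driver-level eGPU state mirroring, as discussed above.
# Hypothetical names throughout; VRAM is modeled as a plain dict.

class Driver:
    def __init__(self):
        self.shadow = {}            # system-RAM mirror of every VRAM allocation

    def alloc(self, gpu_vram, name, data):
        gpu_vram[name] = data       # resident copy in "VRAM"
        self.shadow[name] = data    # backup kept outside VRAM

    def surprise_disconnect(self, gpu_vram):
        gpu_vram.clear()            # VRAM contents vanish with the device

    def restore(self, new_gpu_vram):
        new_gpu_vram.update(self.shadow)   # rebuild state on the remaining GPU

drv = Driver()
egpu_vram, igpu_vram = {}, {}
drv.alloc(egpu_vram, "framebuffer", b"\x00" * 16)
drv.surprise_disconnect(egpu_vram)  # cable pulled mid-operation
drv.restore(igpu_vram)              # applications never see the loss
print(sorted(igpu_vram))
```

The cost of this design is that every allocation is double-buffered in system RAM, which is presumably part of the driver work AMD and Intel said was needed.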

Note: I consider a GPU as 'non-critical' part, because without it a computer can still run fine (unlike e.g. a missing/dead CPU), even though its usability is limited in that state.

GPUs are a lot more critical these days. Apple is rewriting OS X so every application will sit on top of Metal. That means every application is running tasks on the GPU. We're not talking about high-end applications or games being the only issues here; we're talking about every application on the system being linked to and using the eGPU, and having it suddenly go away.
 
According to www.computerbase.de, some Broadwell-EP processors with DDR4-2400 support will come out Q1/16, but others are predicting a Q4/15 release.
http://www.computerbase.de/2015-08/broadwell-ep-xeon-e5-2600-v4-mit-ddr4-2400-zum-jahreswechsel/
http://news.softpedia.com/news/inte...-e5-v4-broadwell-ep-in-late-2015-489715.shtml

And then there are stories that Radeon R9 Nanos are in short supply... that could mean a problem in AMD's product line, or... something is eating up the whole production.
http://www.fool.com/investing/gener...ced-micro-devices-incs-radeon-r9-nano-be.aspx
http://www.techpowerup.com/215776/amd-radeon-r9-nano-review-by-tpu-not.html

It's possible that Apple will introduce the nMP v2 at the end of October (Q1/16 at the latest), with both goodies, and new external displays.

Of course, speculation, speculation... most likely, both could be signs of something else.

UPDATE: changed some text after publishing
It looks like Broadwell-EP in Q4 and Broadwell-E in Q1 2016, the latter being something that is not interesting for the Mac Pro ;).

Everything so far falls in line with a late-2015 release of the new MP.
 