flat, that is so true, but controlling the accessories should give you some assurance as to the quality of the product. Usually, if that control isn't happening, all sorts of bad-quality gear pops up, and for those who only look at the price tag it usually goes south: stuff breaks easily and you end up buying a better one anyway. I don't like spending more money on anything, of course, but I'd rather buy decent stuff from the get-go than waste my time replacing bad parts all the time.

Maybe it's not so much about quality control worries as safety control..
"Girl electrocuted by iPhone" is a PR nightmare compared to "my iPhone cable is frayed".

Shady 3rd-party parts can lead to "iPhone killed person".. it has happened, but Apple can prevent it now (or prevent much of it).

(Not saying this is THE reason for proprietary connectors.. just one of the more positive reasons.)
 
Haswell-EP can use Thunderbolt 3/USB-C. However, bandwidth limitations from the processor may limit how many Thunderbolt controllers and ports can be used. The same limitation exists for Haswell-EP's successor, Broadwell-EP, which is rumored to be released in Q1 2016.

But the answer to the question is yes, TB3 CAN be supported, just not at its full capacity? I guess I don't see Apple going that way, which means no new Mac Pros for a while.
 
But the answer to the question is yes, TB3 CAN be supported, just not at its full capacity?

TB v3 would run at its full capacity. The real question is whether the user needs six TB sockets or not. What is needed? The bandwidth or the physical sockets?

Apple could put four ports on a Xeon E5 v3 or v4 (Haswell or Broadwell) system: two TB v3 and two TB v2, with no new bandwidth issues or constraints, while at the same time switching to a x4 PCIe v3 SSD. It is just a matter of juggling what is present.

A Mac Pro with 4 TB sockets would still have at least twice as many as any other Mac (and likely more than any other system).

What you can't do now with Xeon E5 v3/v4 is have dual GPU cards and six Thunderbolt v3 sockets. Not sure why anyone, except for a limited "lunatic fringe", needs six Thunderbolt v3 sockets. If you need six directly coupled displays, then 4 TB and 2 HDMI sockets would probably work just fine for a wide variety of monitors.
There is no drop in total TB bandwidth coming off the Mac Pro's back panel. v3 is double the bandwidth, so dropping four older, slower ports for two faster ones is a wash.

Folks with investments in TB v2 equipment probably will like having some sockets around that don't require a dongle.


I guess I don't see Apple going that way, which means no new Mac Pros for a while.

Only if Apple wants to put the Mac Pro on a suicidal path. Waiting until 2017 is doom.

The problem with bandwidth is that folks with wish lists want everything: multiple internal x4 PCIe SSD drives, three TB v3 controllers (for six ports). In that context, yeah, the Xeon E5 v3/v4 runs out of bandwidth. Xeon E5 v4, some TB v3, and some incrementally better GPUs would be a good thing. And far more of a good thing than shipping an even older Mac Pro for at least another year.
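
To make the lane math concrete, here is a rough back-of-the-envelope sketch of that wish-list configuration (the per-device lane counts are my illustrative assumptions, not a known board layout):

```swift
// Back-of-the-envelope PCIe 3.0 lane budget for a single Xeon E5 v3/v4,
// which exposes 40 lanes from the CPU. The allocations below are assumed
// values for the "wish list" build, not a real Mac Pro layout.
let cpuLanes = 40

let dualGpuLanes = 2 * 16  // two GPU cards at x16 each
let ssdLanes     = 2 * 4   // two internal x4 PCIe SSDs
let tb3Lanes     = 3 * 4   // three TB v3 controllers (six ports) at x4 each

let wishList = dualGpuLanes + ssdLanes + tb3Lanes
print("Wish list needs \(wishList) lanes, CPU provides \(cpuLanes)")
// -> "Wish list needs 52 lanes, CPU provides 40": something has to give,
//    hence the juggling between TB v3, TB v2, SSD and GPU links.
```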
 
I am worried that the market for these eGPUs is too small to justify a company building it.....

The eGPU is a software thing at least as much as a hardware thing. The primary blocker isn't the hardware, but the software to support a GPU that may disappear or reappear dynamically at runtime. The whole "well, don't unplug it and it will appear to work" is a kludge, not a solution.

There is not much specialty new hardware that needs to be done outside of the normal requirements of the TB certification process. TB v3 has a longer/deeper certification which has some parts (how optional remains to be seen) with coverage for eGPU.
 
TB v3 would run at its full capacity. The real question is whether the user needs six TB sockets or not. Apple could put four ports on a Xeon E5 v3 or v4 (Haswell or Broadwell) system: two TB v3 and two TB v2, with no new bandwidth issues or constraints. …

Only if Apple wants to put the Mac Pro on a suicidal path. Waiting until 2017 is doom. … Xeon E5 v4, some TB v3, and some incrementally better GPUs would be a good thing. And far more of a good thing than shipping an even older Mac Pro for at least another year.

I'd be pretty happy if they did 2xT3 and 2xT2. Not like the Mac Pro would power three 5K monitors anyway, and I'm guessing the TB3 ports will double as USB-C. So it sounds like it's possible, doable, and will probably happen before 2017. Just when, I guess, is the question.

I'm on the edge of buying either a new Mac Pro or a Dell Precision 5810. I love Mac, but don't want to buy the current Mac Pro. But if there are no announcements until January, I can't wait that long and will need to drop cash on one or the other.

Oh and thanks for the explanation.
 
dec, you missed my point, or I explained it wrong. I took flat's example with accessories: if they're not somehow controlled, you could get low-quality products.

I didn't see part of the point (I blurred Apple contractor and 3rd party... Apple doesn't make much of anything directly), but the control is still overstated here. Folks pass a certification and still sell junk. The demos aren't the same as the production runs.

The certifications are in part to keep the "race to the bottom" folks limited. There are still going to be folks who commit fraud and tip-toe around the certification. Policing them is a job.

What I said (or tried to say, unsuccessfully it seems) is that Apple makes money on things they don't make (cables, connectors, accessories), but at least you can expect a good product, even from another manufacturer, since Apple controls the whole process. At least we should expect it to be so.

But Apple doesn't completely control the process, even with their own products, because they don't make them. Apple probably isn't making tons of money off of this. If Apple chases down all the cheats and frauds who claim to be certified or use 'fake' authentication chips, etc., all of that adds to the costs of running the program. Apple probably makes some profit, but in the scheme of their operations it doesn't do anything for their bottom line.
 
I'd be pretty happy if they did 2xT3 and 2xT2. Not like the Mac Pro would power three 5K monitors anyway, and I'm guessing the TB3 ports will double as USB-C.

All TB v3 controllers have a USB 3.1 gen 2 controller inside. So yes, they can fully play a USB role if you plug in USB-only stuff. TB v3 is pragmatically a superset of what the mainstream "USB-C" is. USB got subsumed by TB in a similar fashion to the way DisplayPort got subsumed (only now the USB data 'source' is inside the TB controller).

So it sounds like it's possible, doable, and will probably happen before 2017. Just when, I guess, is the question.

At this point in time it is somewhat pointless to introduce a Xeon E5 v3 (Haswell-EP) product. v4 is coming in early 2016 (if Intel sticks to their previous roadmap timelines). v4 isn't going to be radically better, but the low end of the product line (1620-1650) should be more competitive than the v3 versions (which are slightly kneecapped for some reason). If Apple is taking another round of binned and clock-limited GPUs from AMD, those too will likely have better availability in early 2016 than in the rest of 2015. TB v3 controllers are also in limited supply for 2015. (Apple will get most, but not sure they'll get enough to flip the whole product line in a massive "big bang" conversion.)


But if there are no announcements until January, I can't wait that long and will need to drop cash on one or the other.

End of October should mostly clear up whether there is any chance for something to move in 2015. I doubt there will be anything in 2015. However, that will be even clearer if the Fall Mac product dog-and-pony show comes and goes and there is still no whiff of a Mac Pro.
 
I am worried that the market for these eGPUs is too small to justify a company building it.
Gamers with the budget to afford a TB3 eGPU could probably afford an Alienware/home PC.
The market for such a product is the MacBook, MB Air & 13" MB Pro: get an ultraportable for going to classes/clients but keep a proper GPU at home. Game or Photoshop performance would also end up being limited by the CPU (and possibly the PCIe latency/bandwidth as well). Would an eGPU really offer enough benefit over the new Skylake iGPU to be worth the extra cost (probably in the $700-1k range) and hassle to enough people to make it a viable product for a company?

Very good bunch of questions.

Let me add this: I just saw the BEST OS X OpenGL Unigine Valley score I have ever seen on one of my machines. It was on a nMP, with a 980Ti via eGPU. With a 12-core 5,1 I have seen 55 fps as the final score (Extreme HD preset).

With a 980Ti in the nMP I got 63 or 64 fps.

Mind you, this isn't taking advantage of PCIe 3.0 at all. The eGPU is running at 8 lanes of PCIe 2.0, but then squeezed through TB2 down to 4 lanes. So, newer CPU? Don't know what else to think. It is running at 3.7 GHz or faster, as it is a 4-core.
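
For a rough sense of that squeeze, here is a quick back-of-the-envelope comparison using nominal link rates (my numbers, ignoring protocol overhead):

```swift
// Nominal, overhead-free figures just to size the bottleneck (assumed values).
let pcie2LaneGBps = 0.5                 // PCIe 2.0: ~500 MB/s per lane after 8b/10b
let cardLinkGBps  = 8 * pcie2LaneGBps   // x8 slot in the eGPU enclosure: ~4 GB/s
let tb2LinkGBps   = 20.0 / 8.0          // Thunderbolt 2: 20 Gbit/s ~= 2.5 GB/s

print("Card link ~\(cardLinkGBps) GB/s, TB2 pipe ~\(tb2LinkGBps) GB/s")
// The TB2 pipe is the limiter: roughly the bandwidth of four or five PCIe 2.0
// lanes, which is why the eGPU effectively behaves like an x4 PCIe 2.0 device.
```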

And I will tell another story: for the first time ever, I had both my brothers visit LA. One is a doctor of medicine and the other a salesman from Indiana (just moved to LA, anyone looking for a salesman?). Anyhow, the reason I mention their backgrounds is that they are not Mac folks; they aren't even really computer folks (both have PC laptops). I showed them both a semi-pro gamer running Batman: Arkham Knight in 4K on a 65" screen. Not fully turned up, but running most settings at middle or higher. Smooth, fluid gameplay that had them both drop their jaws. (I could see envy; both have PS3s on 1080p TVs.)

They both were stunned by the 4K, the game, and the smooth play. Even the guy playing the game was impressed. And what makes this unique is that the game was being played on a 2014 Base Mac Mini.

1.4 GHz

4 GB of RAM
 
From what I see, even Skylake won't bring that much of a punch in terms of performance, even with AVX-512, which will be great for some, but most of us won't use it anyway, I guess.
The best thing about it really is the PCIe 3 in the PCH and DMI 3.
Broadwell-EP will most likely be in the nMP. I'd settle for 2 TB3 ports, 2 TB2 ports, 1 SSD and the 4/6 USB3 ports in the chipset. TB3 and SSD hanging from the CPU lanes, TB2 on the PCH. Spare lanes for the GbE and WiFi/BT.
Good to go...
 
Interesting concept, if only the upcoming Mac Pro (?) could take some pointers.
Interchangeable, stackable components - no need to worry about the dreaded eGPU unplug while powered up.
http://www.pcauthority.com.au/News/...letter&utm_campaign=daily_newsletter&nl=daily
However, I still believe that powerful portability (read: laptop and eGPU) is what the market is crying out for.

I have some questions.
I understand the eGPU concept, but some projects also demand CPU power, and if you need the eGPUs for extended projects like rendering, UHD video editing etc., do you think that....

a. it's possible for the usually underpowered components of the laptops to do the job?
b. are 2 - 4 CPU cores enough?
c. how are the thermal dissipation needs of the laptops going to be managed? Is CPU throttling acceptable?
d. is a laptop's max RAM capacity enough for the market you mentioned?
e. if you use your expensive laptop daily at full power, and it breaks sooner than expected,

is this a wise investment in general, or are you better off with a workstation?

Or is it only about gaming?
 
65°C under load on a Fury Nano. Not bad at all ;).

Filmak, there is a low-level API on OS X. I wrote many, many months ago on this forum that Mantle can be used for professional use, that it will be like CUDA, but for AMD. Nobody believed me. Metal is based on the Mantle driver, and can be used for pro apps. AMD themselves made an API for ray tracing in OpenCL on the Mantle driver. And that is only the beginning.
What it will allow is using ALL GPUs that are connected to the computer. The application will talk to the API, and the API will manage all the rest - the hardware scheduler and asynchronous compute will come in here to help.

Times have changed. From now on the CPU will only be ordering and scheduling the jobs that are handled by the GPU. GPUs will take on the work, regardless of what it is. We are talking here about compute, etc. GCN is an out-of-order architecture, with context switching in the pipeline.

You may say that in Metal there is no asynchronous compute. Yes, that is correct. But it HAS to be added. Apple went the route with two GPUs - one for compute, one for graphics - for good reason. Nvidia GPUs are not able to handle graphics and compute at the same time. GCN is capable of doing this. Imagine: you work with an application. 2% of a GPU is needed to handle the UI, and the rest is needed for compute. In this situation you have 98% of one GPU in the Mac Pro idling, while the second one is doing compute. With the API you have 98% of that GPU doing the compute task, and 2% handling the UI. The second GPU is full-on compute. As are any other eGPUs connected via TB3.

That is a huge game changer, especially for the HSA architecture that Apple is really pushing.
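
To make that less abstract, here is a minimal sketch (my own, in Swift/Metal, with a hypothetical UI-vs-compute split) of what "the application talks to the API and the API manages the rest" could look like; Metal already exposes every GPU it can see through the same device list:

```swift
import Metal

// Enumerate every GPU Metal can see on this machine (on a Mac Pro that is
// both internal GPUs; an eGPU reachable over Thunderbolt would show up in
// the same list once the OS supports it).
let devices = MTLCopyAllDevices()

// Hypothetical policy: drive the UI from the first device and throw every
// remaining device at compute work.
let uiDevice = devices.first
let computeDevices = devices.dropFirst()

for device in computeDevices {
    // Each compute device gets its own command queue; the app (or a
    // higher-level API) decides how to split the workload between them.
    guard let queue = device.makeCommandQueue() else { continue }
    print("Dispatching compute on \(device.name), UI stays on \(uiDevice?.name ?? "n/a")")
    _ = queue  // compute kernels would be encoded and committed here
}
```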
 
… Mind you, this isn't taking advantage of PCIe 3.0 at all. The eGPU is running at 8 lanes of PCIe 2.0, but then squeezed through TB2 down to 4 lanes. So, newer CPU? Don't know what else to think. It is running at 3.7 GHz or faster, as it is a 4-core. …

I would like to add that your remarks are true for graphics. On the compute side of things, there are some interesting changes which might have to be kept in mind.

There are some workloads, especially computer games, which just work on their local data (i.e. textures and geometry data). Basically, in these scenarios huge amounts of data are pushed initially into the GPU's RAM. After that, only smaller updates are pushed from the system RAM to the GPU memory. This is a scenario where 8 lanes of PCIe 2.0 show similar performance to 16 lanes of PCIe 3.0.

But have a look at this WWDC video about Metal. At about the 19:22 minute mark he talks about the Metal memory model. Metal has the concept of discrete memory (GPU RAM) and system memory (normal RAM). There are three modes for managing the memory:
  • Shared storage mode
    Both CPU and GPU access the same memory. Notice the limitation: "you have to be done with the GPU before accessing it with the CPU".
  • Private storage mode
    Only the GPU has access to memory. This is more or less the classical approach used in gaming.
  • Managed storage mode
    Data is stored in both discrete memory and system memory. Here Metal makes a copy of the discrete memory in the system memory. Again, notice the hints about "App must synchronize resource before CPU read".
The last mode is the technique which will most likely be used in professional applications. And once you depend on copying data in order to synchronize portions of the GPU RAM with the system RAM, the speed of the PCIe interface gets much more important.
If you have a one-way copy process (as in games), you can hide latencies and slow transfers by loading data ahead of time. But once you move to compute tasks, you can transfer the result data back to the system only after the GPU has finished. Thus the transfer speed should have a larger effect on the overall speed in these kinds of workloads.
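
Here is a minimal sketch of that managed-mode round trip (my own illustration; the buffer size and the empty compute pass are placeholders), showing the explicit synchronization copy that has to ride back over the PCIe/Thunderbolt link before the CPU can read the results:

```swift
import Metal

// Managed storage: Metal keeps a GPU-side and a CPU-side copy of the buffer,
// and the app has to ask for them to be synchronized explicitly.
guard let device = MTLCreateSystemDefaultDevice(),
      let queue  = device.makeCommandQueue(),
      let buffer = device.makeBuffer(length: 1 << 20, options: .storageModeManaged),
      let cmd    = queue.makeCommandBuffer(),
      let blit   = cmd.makeBlitCommandEncoder() else { fatalError("Metal unavailable") }

// ... a compute pass writing results into `buffer` would be encoded here ...

// Pull the GPU-side copy back into system memory. On a discrete or external
// GPU this copy travels over PCIe / Thunderbolt, which is where link speed
// starts to matter for compute workloads.
blit.synchronize(resource: buffer)
blit.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

// Only now is it safe for the CPU to read the results.
let results = buffer.contents()
_ = results
```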

So, perhaps eGPUs may be an option for playing games on your notebook. But they are not a panacea.
 
65°C under load on a Fury Nano. Not bad at all ;).

Filmak, there is a low-level API on OS X. I wrote many, many months ago on this forum that Mantle can be used for professional use, that it will be like CUDA, but for AMD. Nobody believed me. Metal is based on the Mantle driver, and can be used for pro apps. ........
That is a huge game changer, especially for the HSA architecture that Apple is really pushing.

This is really good news. :)
Thank you very much for taking the time, and of course for the useful info.
 
The eGPU is a software thing at least as much as a hardware thing. The primary blocker isn't the hardware, but the software to support a GPU that may disappear or reappear dynamically at runtime. The whole "well, don't unplug it and it will appear to work" is a kludge, not a solution.

There is not much specialty new hardware that needs to be done outside of the normal requirements of the TB certification process. TB v3 has a longer/deeper certification which has some parts (how optional remains to be seen) with coverage for eGPU.

They could manage the eGPU if they see it as a display and not as a GPU. Your system, at least on Windows, handles losing a display connection quite well. This would also mean that Apple or any other company could put out a display that includes its own built-in GPU tailored to said display. The driver could implement a dummy internal GPU for the sole purpose of fooling the OS into believing it is talking to an internal one that is always present. The driver would be the one telling the OS that the GPU is there or not, just like it can tell the OS that a display is active or not.
 
Tuxon, TB3 is 2 channels in one, and as it is, it can be used as input and output for data to and from an eGPU.

eGPUs don't have to be used only for powering external displays.
 
Tuxon, TB3 is 2 channels in one, and as it is, it can be used as input and output for data to and from an eGPU.

eGPUs don't have to be used only for powering external displays.

You misunderstood my post. The problem with eGPUs is that they can be disconnected at any time. If it's treated as a normal GPU, this will crash the OS. But if you treat it as a display and access it through a virtual driver, then it can be removed without crashing the OS. It has nothing to do with how it can be used.
 
They could manage the eGPU if they see it as a display and not as a GPU.

Manage it as something it is not? That is probably pretty difficult. Displays don't take OpenGL/Metal calls. When the OS goes to make those calls, where do they go in this "virtual GPU that is not a GPU"? The GPU's 'job' is substantially more than merely pushing finished bits to a display.


Your system, at least on Windows, handles losing a display connection quite well.

OS X does this too. There is nothing 'new' to be added to the OS and GPU drivers there. It has very little to do with a GPU. Removing a DisplayPort/VGA/HDMI output connection from the back of the Mac is substantially different from removing hardware that has assumptions built up around its permanence while the OS is up and running.

This would also mean that Apple or any other company could put out a display that includes its own built-in GPU tailored to said display.

Making the DisplayPort connection permanent between the GPU source output and the panel's inputs does nothing to solve the real issue at hand here. As pointed out above, all the OSes out there can already handle a display being disconnected. Making it permanent solves nothing.




The driver would be the one telling the OS that the GPU is there or not, just like it can tell the OS that a display is active or not.

If the driver tells the OS it is there and the OS hands this "virtural" GPU a ton of work .... what gets that work done? If it tells the OS it is present, it will get assigned work. What does the work? The CPU?
 
If the driver tells the OS it is there and the OS hands this "virtural" GPU a ton of work .... what gets that work done? If it tells the OS it is present, it will get assigned work. What does the work? The CPU?

Now you're being silly playing typo cop... I bet you understand it was virtual...

The role of this virtual driver is to manage the plugging and unplugging of the eGPU. It is the one that, when it detects the loss of connection, will do what is necessary to keep the system going. This "could" mean redirecting the work to the other GPU present in the machine. When the eGPU is connected, the work would be sent to it.
 
Now you're being silly playing typo cop... I bet you understand it was virtual...

The role of this virtual driver is to manage the plugging and unplugging of the eGPU. It is the one that, when it detects the loss of connection, will do what is necessary to keep the system going. This "could" mean redirecting the work to the other GPU present in the machine. When the eGPU is connected, the work would be sent to it.
The point of deconstruct's post still stands; you are very much oversimplifying the problem. In reality, it's very difficult, and Intel has been working on a solution since they introduced Thunderbolt. Intel suggested they had solved this with Thunderbolt 3, but nothing besides a couple of tech demos has demonstrated it.
 
The point of deconstruct's post still stands; you are very much oversimplifying the problem. In reality, it's very difficult, and Intel has been working on a solution since they introduced Thunderbolt. Intel suggested they had solved this with Thunderbolt 3, but nothing besides a couple of tech demos has demonstrated it.

No, I don't see it as oversimplified, since I haven't talked about how it would internally manage to do this. You and Dec are the ones overthinking it. eGPUs have been done already. What you don't have is the hot-swap capability, and that could be taken care of via software.

The reason why you don't have it from a major player like Intel is that there is only a small market for the thing. People with real workstations can add more GPUs internally. If they need even more, then it is way more efficient to use a rendering or GPGPU cluster of independent machines. The only ones who would really benefit from these eGPUs are TB-equipped laptops, which for the most part means Apple currently.
 
There is a small market. At this moment. Only Apple has hardware for this solution, and software is slowly coming to it also. But it's not only Apple laptops. It's also their desktops. You are oversimplifying the whole situation we're in.

Imagine a Mac Pro with a single CPU, two internal GPUs, and a rack made of 50 SoC GPUs. Add to that the GPU virtualization technology AMD has demoed lately, which fits exactly into this idea. You can now see the bigger picture of where everything is going.
 
I think I remember, when I posted the slides from Intel where they were saying eGPUs are an official thing with TB3, there was something about them being hot-pluggable. I don't think this is a problem anymore when Intel themselves say to put a GPU in an external PCIe enclosure.
 
There is a small market. At this moment. Only Apple has hardware for this solution, and software is slowly coming to it also. But it's not only Apple laptops. It's also their desktops. You are oversimplifying the whole situation we're in.

Imagine a Mac Pro with a single CPU, two internal GPUs, and a rack made of 50 SoC GPUs. Add to that the GPU virtualization technology AMD has demoed lately, which fits exactly into this idea. You can now see the bigger picture of where everything is going.

You are putting too much faith in AMD, my friend. AMD is great at demoing stuff. It's at producing it that they suck.
Rendering clusters aren't anything new either.
 
I'm sorry, but that shows me that you don't even understand what AMD has shown, and it invalidates your point of view.

http://ir.amd.com/phoenix.zhtml?c=74093&p=RssLanding&cat=news&id=2083146
http://forums.anandtech.com/showpost.php?p=37669975&postcount=14
http://www.amd.com/Documents/Multiuser-GPU-Datasheet.pdf

Asynchronous compute and the hardware scheduler. That's where the secret is.

Believe me guys, I'm not an AMD fan. But I am a fan of GPU engineering. And the more I dig into the GCN architecture, the more I am amazed by its capabilities.
 