Apple doesn't even use real workstation video cards. The GPUs don't use ECC VRAM, and they are underclocked to try to deal with the thermal problems.

The question had to do with workstation graphics cards in general. You stated that an advantage of computers with PCIe slots is getting updated cards every year or two, but workstation graphics cards are on a longer update cycle and can take longer than that.

Apple not having real workstation graphics cards is quite debatable, as even PC workstation graphics cards are based on consumer cards.

There are myriad reports of thermal issues, and you dismiss them as opinion. Don't expect hard data, because the only ones who have it are at Apple HQ.

Ok, so no real proof.
 
I think the big failure with the Mac Pro is perhaps that Apple expected someone else to build an external Thunderbolt GPU box. They should have released one themselves.

If you could plug six external GPUs into your shiny new Mac Pro (for a total of eight, all usable for compute), that would certainly change things.

It's technically possible today; you just need to hack hardware to do it.

I don't know of any other machine on the market that can do that, and it's clearly what Thunderbolt was aimed at. Even before Apple announced it on the Mac Pro, external compute over "Light Peak" (Intel's development code name) was touted as a big use case.

The good news is that Thunderbolt 3 eGPU boxes are coming. The bad news is they're five years late.
 
I started out in industrial design before switching majors and even I can tell that at the very least the nMP needs bigger vents for proper ventilation.

And while we're at it, someone needs to point out to Apple that heat rises and that it was an equally stupid decision to remove the vents from the top of the iMac 5K, which, not surprisingly, also has thermal problems. But hey, it's thin and pretty.


 
But 850-1000+ W power supplies are likely not coming.

In servers, with blade or 1U rackmount servers, they often have 1 kW PSUs smaller than the one in the nMP. E.g.: http://www.excelsys.com/news/new-excelsys-xsolo-psu-provides-1000w-in-ultra-compact-1u-package/ (238mm x 128mm).

Seems you didn't account for the fact that PSU silicon also evolves.

It isn't the thermal transfer speed that is the limitation; it is the surface area. If the core needs to dissipate more heat, it just needs to be bigger. Aluminum vs. copper is immaterial if you can't get the heat transferred into the air.
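The surface-area point can be sketched numerically: for a fixed fan and temperature delta, the heat a finned core can dump into the air scales roughly linearly with its area (Newton's law of cooling). The heat-transfer coefficient and the areas below are illustrative guesses, not Mac Pro measurements.

```python
# Sketch of why surface area is the limit: convective heat removal follows
# Newton's law of cooling, Q = h * A * dT. The coefficient (~50 W/m^2.K for
# forced air) and the areas are illustrative assumptions only.

H_FORCED_AIR = 50.0  # W/(m^2*K), typical order of magnitude for forced air

def convective_power_w(area_m2: float, delta_t_k: float,
                       h: float = H_FORCED_AIR) -> float:
    """Heat carried from a surface into the air stream."""
    return h * area_m2 * delta_t_k

DT = 40.0  # fin surface runs 40 K above ambient

small_core = convective_power_w(0.15, DT)  # ~0.15 m^2 of finned area
big_core = convective_power_w(0.45, DT)    # triple the finned area

print(f"small core: {small_core:.0f} W")  # 300 W
print(f"big core:   {big_core:.0f} W")    # 900 W
```

Tripling the finned area triples the dissipated power at the same delta, which is exactly the "it just needs to be bigger" argument.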

That's why I proposed using heat pipes. In a heat pipe, heat in the hot zone is transferred to a liquid by phase change; the vapor then moves to the coolest area, where it condenses back into liquid. Those compact setups can easily manage 3x the TDP.

Heat pipes: 5,000 W/m·K to 200,000 W/m·K effective thermal conductivity.
Solid aluminum and copper range from roughly 200 to 400 W/m·K, so even in the worst comparison a heat-pipe design can handle several times the heat of a comparable aluminum or copper heat sink. A 1200W thermal core built on heat pipes is entirely feasible.
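As a rough sanity check on those conductivity figures, Fourier's law gives the power conducted through a bar of given geometry. The dimensions and the 10,000 W/m·K effective heat-pipe conductivity below are illustrative assumptions, not specs of any actual cooler.

```python
# Back-of-envelope sketch of 1-D conductive heat flow, Q = k * A * dT / L
# (Fourier's law). Geometry (1 cm^2 cross-section, 10 cm path, 50 K drop)
# and the heat-pipe effective conductivity are illustrative assumptions.

def conductive_power_w(k_w_per_mk: float, area_m2: float,
                       delta_t_k: float, length_m: float) -> float:
    """Steady-state conducted power through a uniform bar."""
    return k_w_per_mk * area_m2 * delta_t_k / length_m

AREA = 1e-4    # 1 cm^2 cross-section
DT = 50.0      # 50 K hot-to-cold difference
LENGTH = 0.1   # 10 cm conduction path

copper = conductive_power_w(400.0, AREA, DT, LENGTH)        # bulk copper
heat_pipe = conductive_power_w(10_000.0, AREA, DT, LENGTH)  # modest heat pipe

print(f"copper:    {copper:.0f} W")     # 20 W
print(f"heat pipe: {heat_pipe:.0f} W")  # 500 W
```

Even a mid-range heat pipe moves an order of magnitude more heat through the same cross-section, which is why compact high-TDP coolers are built around them.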

http://www.thermacore.com/thermal-basics/heat-pipe-technology.aspx


The only way they ever get 300W GPUs in a Mac Pro is to trash the tube and start over. Like you said, there is no way a higher-wattage PSU will work in the tube.

I'll give you that point: Apple should have designed the MP with a much higher TDP from the beginning. But while it's possible to push the tube to a 1200W TDP, it would never be as cheap as with a passive aluminum thermal core; a 1200W heat-pipe cooler easily costs $200 or more, while the aluminum thermal core costs less than $10.
 
The question had to do with workstation graphics cards in general. You stated that an advantage of computers with PCIe slots is getting updated cards every year or two, but workstation graphics cards are on a longer update cycle and can take longer than that.
I spoke of graphics cards in general, not workstation cards. Most professionals do fine without workstation video cards; they are aimed at a rather small niche market and need to be optional on workstations, just as they were on the cMP.

Apple not having real workstation graphics cards is quite debatable, as even PC workstation graphics cards are based on consumer cards.
Thanks for your opinion, Factman.

Ok, so no real proof.
I see your armor of belief is impervious to links detailing user experiences with Tube thermal issues.

Just curious: what is your theory as to why Apple would downclock the tube's "workstation" GPUs, if not as a last-minute band-aid for thermal issues? In a product touted as a GPGPU powerhouse, no less.
 
The only way they ever get 300W GPUs in a Mac Pro is to trash the tube and start over. Like you said, there is no way a higher-wattage PSU will work in the tube.
So you just cannot put a 300W compute-heavy GPU in a Thunderbolt box, outside of the tube-design Mac Pro?

Why do people constantly stick with this idea, when they would be able to have more GPUs connected to the Mac Pro outside the computer?
 
I'll give you that point: Apple should have designed the MP with a much higher TDP from the beginning. But while it's possible to push the tube to a 1200W TDP, it would never be as cheap as with a passive aluminum thermal core; a 1200W heat-pipe cooler easily costs $200 or more, while the aluminum thermal core costs less than $10.

The real expense is in the processors and chipsets; everything else barely drives retail price at all. Just look at the cMP: it was cheaper, yet its materials cost far more, with the most beautiful massive heat-piped heatsinks. Oddly, even a dual-CPU + single-GPU cMP was cheaper than the equivalent 12-core nMP with dual GPUs.

Interesting how the cMP, with its yuuge heatsinks and multiple fans, never suffered thermal issues. The predecessor to the aluminum tower, the MDD Power Mac, was a thermal nightmare, and it seemed like Apple learned their lesson. Too bad they didn't write it down someplace.
So you just cannot put a 300W compute-heavy GPU in a Thunderbolt box, outside of the tube-design Mac Pro?

Why do people constantly stick with this idea, when they would be able to have more GPUs connected to the Mac Pro outside the computer?

Dual GPUs over PCIe x4? Sure, that will work. And the PCIe standard changes so rapidly that the PCIe box will likely be obsolete as fast as the computer.

Save the TB for massive storage. A Mac Pro with only a couple internal SSD blade slots makes sense. A multi-HDD TB enclosure can more easily share data between computers and it doesn't grow obsolete as fast as computers so no sense in combining the two. I guess my main problem with Apple's way is that they remove the internal HDD storage but use all the savings to pad their margins, passing none of it to the buyer.
 
So you just cannot put a 300W compute-heavy GPU in a Thunderbolt box, outside of the tube-design Mac Pro?

Why do people constantly stick with this idea, when they would be able to have more GPUs connected to the Mac Pro outside the computer?

So the solution is to put another noisy box with another PSU on your desk beside the ones for your storage, just so you won't have to deal with a nice rectangular box on your floor with a single PSU? Please tell me how this is even acceptable as a solution. Before, you had one box that included all your drives, GPU, other expansion cards, and all the ports you needed; now you have one cylinder with a couple of dongles for the connectors that you miss, one box for your GPU (maybe more), and a couple for your hard drives/SSDs. Explain to me how this is progress?
 
Because then you will still have the tube, with a many-core CPU, lots of RAM, a fast SSD, and dual GPUs, to take elsewhere, wherever your work is needed.

And there you can connect to a similar GPU cluster that does the compute job.
 
So the solution is to put another noisy box with another PSU on your desk beside the ones for your storage, just so you won't have to deal with a nice rectangular box on your floor with a single PSU? Please tell me how this is even acceptable as a solution. Before, you had one box that included all your drives, GPU, other expansion cards, and all the ports you needed; now you have one cylinder with a couple of dongles for the connectors that you miss, one box for your GPU (maybe more), and a couple for your hard drives/SSDs. Explain to me how this is progress?


Bingo.

So, now the answer is a nMP, with stacks of external drives and an external eGPU cage? At a minimum that's three boxes on a desk (nMP, RAID, eGPU, plus additional external drives) plus all the cables, plus power supplies needing multiple outlets. That's ridiculous and a joke compared to putting everything in one neat box with one power cable. Plus when you start to push that system all the fans are going to kick in and it's going to sound like you're sitting in an airplane.

All of this also adds up to greater expense. External drives are more expensive than bare drives. The eGPU box is also not free, so be prepared to add a few hundred dollars to the cost of your system. TB3 is fast at a theoretical 40 Gb/s (about 5 GB/s), but it pales in comparison to the throughput of a PCIe x16 slot. I wonder how big a pipe is needed to support 2-4 GPU cards in that cage, and whether one CPU has enough bandwidth to handle that plus running your software.
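For scale, here is a back-of-envelope comparison of Thunderbolt 3's link rate against PCIe 3.0 slots. The per-lane figure is PCIe 3.0's rate after 128b/130b encoding; real-world throughput, especially through a Thunderbolt controller with protocol overhead, is lower still.

```python
# Rough bandwidth comparison: Thunderbolt 3's 40 Gb/s link vs. PCIe 3.0
# slots. 985 MB/s per lane is PCIe 3.0's post-encoding rate (8 GT/s with
# 128b/130b); actual application throughput is lower on both sides.

PCIE3_GB_S_PER_LANE = 0.985  # GB/s per PCIe 3.0 lane after encoding

def pcie3_bandwidth_gb_s(lanes: int) -> float:
    """Theoretical post-encoding bandwidth of a PCIe 3.0 link."""
    return lanes * PCIE3_GB_S_PER_LANE

tb3_gb_s = 40 / 8  # 40 Gb/s link -> 5 GB/s, before protocol overhead

print(f"TB3:       {tb3_gb_s:.1f} GB/s")
print(f"PCIe3 x4:  {pcie3_bandwidth_gb_s(4):.1f} GB/s")
print(f"PCIe3 x16: {pcie3_bandwidth_gb_s(16):.1f} GB/s")
```

So even taking the full 40 Gb/s at face value, a TB3 eGPU link sits near PCIe 3.0 x4 territory, roughly a quarter of what an x16 slot offers.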
 
Because then you will still have the tube, with a many-core CPU, lots of RAM, a fast SSD, and dual GPUs, to take elsewhere, wherever your work is needed.

And there you can connect to a similar GPU cluster that does the compute job.

Presently you have a tube with not that many cores, not that much RAM, a not-that-fast SSD, and 4+ year old GPUs compared to what the rest of the market offers, and you still have to lug around your multiple dongles and drive boxes. Plus now you also have to carry your expensive external GPU, because it would be asinine and a waste of money for someone to double or triple up on GPU clusters that gather dust when no nMP is connected to them...
Bingo.

So, now the answer is a nMP, with stacks of external drives and an external eGPU cage? At a minimum that's three boxes on a desk (nMP, RAID, eGPU, plus additional external drives) plus all the cables, plus power supplies needing multiple outlets. That's ridiculous and a joke compared to putting everything in one neat box with one power cable. Plus when you start to push that system all the fans are going to kick in and it's going to sound like you're sitting in an airplane.

All of this also adds up to greater expense. External drives are more expensive than bare drives. The eGPU box is also not free, so be prepared to add a few hundred dollars to the cost of your system. TB3 is fast at a theoretical 40 Gb/s (about 5 GB/s), but it pales in comparison to the throughput of a PCIe x16 slot. I wonder how big a pipe is needed to support 2-4 GPU cards in that cage, and whether one CPU has enough bandwidth to handle that plus running your software.

Wait for the "those GPUs are for compute only, so bandwidth doesn't matter" excuse to fly in... Again, it would be Apple dictating how you should do your work instead of letting you use the tool in the most efficient manner.
 
Wait for the "those GPUs are for compute only, so bandwidth doesn't matter" excuse to fly in... Again, it would be Apple dictating how you should do your work instead of letting you use the tool in the most efficient manner.

Agreed.

It may be a usable solution if you're only rendering in Octane etc. But it could be a problem for programs like Resolve, Flame, etc. that rely on the GPU to deliver real-time interactivity...
 
I see your armor of belief is impervious to links detailing user experiences with Tube thermal issues.

Just curious: what is your theory as to why Apple would downclock the tube's "workstation" GPUs, if not as a last-minute band-aid for thermal issues? In a product touted as a GPGPU powerhouse, no less.

I don't discount that some do indeed burn up. But I'm sure there are a lot of reasons for failures other than the GPUs overheating. What fraction of all nMP users actually have this problem, we don't know and probably never will.

A few user experiences don't prove the whole GPU-burning-up theory. When you start to look into these individual stories in detail, they fall apart: conjecture, no data on temperatures or duration, no repair or replacement information. Someone comes into MacRumors with a few unproven stories, and soon everyone is passing them around as fact. I for one am glad this is MacRumors and not MacFacts, or we would all have wild stories to tell.
 
In servers, with blade or 1U rackmount servers, they often have 1 kW PSUs smaller than the one in the nMP.

And meet the same noise levels as the MP? No. You can cherry-pick the design criteria all you want, but if you're trying to find a solution in this reality, you need to pull in all of the constraints the Mac Pro design incorporates. For an actual desktop machine (on top of a work desk, not hidden behind a locked door in the server room), these don't fly.


That's why I proposed using heat pipes. In a heat pipe, heat in the hot zone is transferred to a liquid by phase change; the vapor then moves to the coolest area, where it condenses back into liquid. Those compact setups can easily manage 3x the TDP.

No. That's a Rube Goldberg contraption that consumes far more internal space than the current thermal core does. "Moving the heat" is not the problem; the main heat sources are directly coupled to the thermal core. What is needed is to pull air over the thermal core. Moving the heat somewhere else so that air can be pulled over it there just consumes more space. As long as the fan is aligned with the thermal core, all of those moving mechanisms are a waste of space.


I'll give you that point: Apple should have designed the MP with a much higher TDP from the beginning. But while it's possible to push the tube to a 1200W TDP, it would never be as cheap as with a passive aluminum thermal core; a 1200W heat-pipe cooler easily costs $200 or more, while the aluminum thermal core costs less than $10.

That aluminum block in the Mac Pro is carefully machined. It's highly doubtful it is a $10 piece as a ready-to-use component.

Fact is, the design criteria don't try to keep up with the max-power Joneses. Dual 300W cards aren't the objective; there is a very large space of workloads that don't need that kind of budget.
Apple doesn't even use real workstation video cards. The GPUs don't use ECC VRAM, and they are underclocked to try to deal with the thermal problems.

There are no GPUs, Apple's or otherwise, with dedicated ECC VRAM chips. All of the pro solutions offered by Nvidia and AMD use hardware in the GPU to do ECC checks on RAM without the "extra bits". The mechanism is disabled in Apple's cards (and with it off, less VRAM is consumed, since there's no need to store checksums), but it isn't a RAM-component issue.
 
Top-end gaming: nope. GPUs burn up with professional use, not gaming, which is less demanding than the overnight renders that burn up GPUs.

CrossFire is an easier path. But yes, if you can find something to light up both GPUs at the top end, then a pair of D700s will probably stretch the limits.

But even cranking fan speed doesn't solve the thermal issues with the tube, so it is likely the heatsink is not properly matched to the TDP of three high wattage processors.

Cranking fan speed isn't going to help if you don't provide the right feedback to the GPUs so they can manage themselves better. The design is premised on not all three major heat sources being at max at the same time, which fits the "race to sleep" / "race to completion" pattern of most workloads they tested with. The Mac Pro has an "interactive" program bias; users don't do what headless renders do.

IMHO, the work that went into getting CrossFire to work would have been better spent on better stress-test tools. I don't think Apple rigorously tested the 12-core + D700 + D700 configurations.

My guess is that Apple is waiting on more efficient AMD GPUs so they don't have to bother with re-engineering of their own.

The GPUs aren't going to be magically better. They can get more performance for a given wattage level, but they'd still need a bigger reservoir for the max conditions. It is only the top-end edge case that is the problem. The D700s either need better dynamic management or should be clocked down a bit more (preferably the former; I don't think Apple had a good solution for that).



The only way they ever get 300W GPUs in a Mac Pro is to trash the tube and start over. Like you said, there is no way a higher-wattage PSU will work in the tube.

At some point, one fan handling the whole load just gets awkward when chasing maximum wattage. Once you go to two (or more) fans, a single chimney isn't the natural fit. I don't think Apple is trying to chase the bleeding-edge, highest-wattage cards. The range of workloads that can be covered by moderate-wattage cards only grows with time, and the basic Mac Pro design will cover more as the components get better. (But yes, with static components they aren't doing themselves any favors.)
 
Apple not having real workstation graphics cards is quite debatable, as even PC workstation graphics cards are based on consumer cards.

Sure, they don't reinvent the wheel with their workstation GPUs, but:
  • Real FirePros have ECC memory; Apple's don't
  • Real FirePros have heavily optimized OpenGL drivers for professional CAD apps; there's no sign of those drivers or apps in OS X
  • Real FirePros have a different device ID than their cheap consumer brothers; Apple didn't even change that: D700 = 0x6798, which equals the HD 7970!
So there's virtually no reason they couldn't take any consumer GPU off AMD's shelf and sell it as a FirePro, since that's exactly what they did last time.
 
Sure, they don't reinvent the wheel with their workstation GPUs, but:
  • Real FirePros have ECC memory; Apple's don't
  • Real FirePros have heavily optimized OpenGL drivers for professional CAD apps; there's no sign of those drivers or apps in OS X
  • Real FirePros have a different device ID than their cheap consumer brothers; Apple didn't even change that: D700 = 0x6798, which equals the HD 7970!
So there's virtually no reason they couldn't take any consumer GPU off AMD's shelf and sell it as a FirePro, since that's exactly what they did last time.


The AMD FirePro S9300 X2 doesn't use ECC RAM. Are you going to tell me it's not a real FirePro?!?

It takes more than a label to differentiate workstation graphics cards. It's not surprising to find similar device IDs, considering they are based on their consumer counterparts.
 
I'm pretty sure about 80 to 90 percent of the people here in the Mac Pro subforum are pissed off due to the lack of hardware updates. It appears people are moving away to Hackintoshes or Windows until the next Mac Pro arrives. So much speculation is going on around here, and yet no one knows what the deal is. I hope the next Mac Pro comes around, but I don't know if they will announce it within another 3 years. I don't believe that's Tim's focus.

You are absolutely correct.
 
Well, it's a simple difference: that dual-Fiji FirePro doesn't have ECC memory because ECC HBM1 memory just doesn't exist. All the other (= GDDR5-based) FirePros have it.

Apple's FirePros don't have ECC because Apple is a cheapskate.

Btw, I'm not saying that the nMP necessarily needs ECC VRAM or workstation GPUs in general. It's just unfair to take cheap consumer parts and rebadge them with a "glorious" workstation brand. You don't take an ordinary i7, repaint the model identifier, and sell it as a "Xeon", do you? :D

And I didn't want to start a debate over "real" or "fake" workstation GPUs. My point was just that Apple doesn't have to wait for the longer product life cycle of workstation GPUs.
 
You are correct. Tim has his focus on his pet project, the Apple Watch. He is also trying to revive the iPad lineup by falsely claiming the iPad Pro is a PC replacement. If I remember correctly, those sales were not that great. As long as the iPad Pro uses iOS as its operating system, it will not be a PC replacement: it does not have any ports for external hard drives, which would make this "tablet" a bit more of a PC replacement, and the RAM needs to increase so desktop programs, not apps, can run on it. But I am rambling. Sorry, I am not interested in any of these iOS devices; the Mac Pro is my machine.

I am really disappointed with how Apple is handling the Mac Pro. They redesigned it without any real feedback from its users, and then Mr. Schiller made his great quote: "...can't innovate anymore, my ass." What MP users really wanted was a "mini" tower, not a tiny cylinder. But once again I am rambling. Apple then ignored updating the MP for almost 3 years and still sells this version of the MP, with old technology, for the same price as when it was introduced. Apple will claim MP sales are bad, but it is their fault. If they gave the MP a quarter of the attention they give their iOS devices, I believe sales would be different.
You are correct. Tim's focus is not where it should be.
 
Well, it's a simple difference: that dual-Fiji FirePro doesn't have ECC memory because ECC HBM1 memory just doesn't exist. All the other (= GDDR5-based) FirePros have it.

Apple's FirePros don't have ECC because Apple is a cheapskate.

Btw, I'm not saying that the nMP necessarily needs ECC VRAM or workstation GPUs in general. It's just unfair to take cheap consumer parts and rebadge them with a "glorious" workstation brand. You don't take an ordinary i7, repaint the model identifier, and sell it as a "Xeon", do you? :D

And I didn't want to start a debate over "real" or "fake" workstation GPUs. My point was just that Apple doesn't have to wait for the longer product life cycle of workstation GPUs.

The Pitcairn-based W7000 and W5000 have no such ECC support, despite using GDDR5 memory.
 