I am not sure why we are talking about engines. Apple chose to use two GPUs down-clocked from their AIB counterparts. The downside is that each GPU individually is somewhat slower, but the upside is that the combined compute performance is much greater. The roughly 7 TFLOPS of the dual D700s is still competitive with high-end single GPUs today: Nvidia's GTX 1080 and Titan X(P) and AMD's Fury are the only single GPUs that can top that.
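As a rough sanity check on those numbers, here is a sketch using the published FirePro D700 specs (2048 stream processors at up to 850 MHz), counting 2 floating-point ops per shader per clock:

```python
# Peak single-precision TFLOPS: 2 FMA ops per shader per clock.
# Rough sketch; D700 figures from AMD's published specs.
def peak_tflops(shaders, clock_ghz):
    return 2 * shaders * clock_ghz / 1000.0

single_d700 = peak_tflops(2048, 0.85)   # ~3.5 TFLOPS per GPU
dual_d700 = 2 * single_d700             # ~7 TFLOPS combined
print(round(single_d700, 2), round(dual_d700, 2))
```

Which is where the ~7 TFLOPS combined figure comes from.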

That's a good tradeoff if you are doing compute tasks in macOS, such as Final Cut Pro, that are well implemented using OpenCL and Metal. It's a bad one if you care about OpenGL performance or gaming, since those cannot utilize dual GPUs.

My guess is that when Apple started designing the new Mac Pro, AMD came to them and said, "It's really hard to make big GPU dies at small nodes; you should design the Mac Pro around two mid-sized dies, and we can keep delivering those for a long time and optimize frameworks for multi-GPU." The GPU world turned out a bit differently. It turns out it's not that hard to make big GPUs (at least at 14/16 nm), and the frameworks for multi-GPU aren't quite there yet.
 
Are you sure about that? Zotac ZBOX, with a 65 W CPU and a GTX 1060. https://www.computerbase.de/2016-10/zotac-zbox-magnus-en1060-test/3/

Look at how this performance compares to desktop GPUs. The GTX 1060 in this small form factor, because of thermal throttling, is only slightly faster than a GTX 960 or R9 380.

You cannot have power efficiency, performance, and a small form factor at the same time without compromising something, and the first thing compromised is cooling efficiency.

The Mac Pro was a great achievement from an engineering perspective, because it did not throttle except under a power virus that consumed way too much power.

Also, the Razer Blade that packs a GTX 1080 is the Razer Blade Pro. It has a 17.3-inch screen and weighs 3.7 kg. The Razer Blade is a 14-inch computer with a GTX 1060 and a 180 W PSU. So it's not efficient compared to the MBP.
Check the Magnus EN1080 that I linked to. The size of a small shoebox, and it mops the floor with anything Apple currently offers on the desktop.
 
The discussion exists because people do not understand the difference between downclocking (throttling) parts to fit a thermal envelope, and thermal throttling within that envelope.

The D700s do not throttle under load in the Mac Pro 6,1. We have been discussing how good the cooling solution in the MP 6,1 was relative to its size. The only way to make the GPUs throttle under load was a power virus. The AnandTech review proved that; you can go to that review and check it for yourselves.
 
Possibly; torque is more than a product of displacement.
In general, a longer stroke means more torque. Cut the stroke to reduce the displacement, and torque suffers more than horsepower does.

At one point in time Ford was simultaneously selling cars with 6.997 L, 7.014 L, 7.030 L, and 7.046 L engines. Very different engines for very different purposes, even though the displacements were almost identical (427, 428, 429, and 430 cubic inches).

Anyway, comparing the external adjustment of the clock rate of a silicon chip to physically changing the bore or stroke of an internal combustion engine is a failed analogy.

Perhaps comparing an otherwise identical engine with two-barrel and four-barrel carburetors would be more appropriate - but decidedly last millennium since carburetors have virtually disappeared from the scene. (Or maybe not inappropriate, since Xeon E5-2600 v2 processors have disappeared except for still being used in the MP6,1.)
The discussion exists because people do not understand the difference between downclocking (throttling) parts to fit a thermal envelope, and thermal throttling within that envelope.

The D700s do not throttle under load in the Mac Pro 6,1. We have been discussing how good the cooling solution in the MP 6,1 was relative to its size. The only way to make the GPUs throttle under load was a power virus. The AnandTech review proved that; you can go to that review and check it for yourselves.
Would it hurt you to provide a link to that AnandTech story? It always seems that you're hiding something when you don't support your claims with a simple link.
The discussion exists because people do not understand the difference between downclocking (throttling) parts to fit a thermal envelope, and thermal throttling within that envelope.
In practice, what is the difference?

Say I have a part that can run at 1000 MHz with good cooling.

If I put it in a box with poor cooling, when it hits 80 °C it drops for thermal protection, maybe to 750 MHz.

So the innovating Amigos at Apple put that part in a *beautiful* box with poor cooling, and set the clock at 750 because the *beautiful* box compromises on cooling.

How is running at 750 MHz because of defective design of the system cooling different from running at 750 MHz because the designers set the clock at 750?
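To put the question in concrete terms, here is a toy model of the two cases being compared (all numbers are hypothetical illustration, not real D700 behaviour):

```python
# Toy model: a chip that thermally throttles vs. one factory-clocked lower.
# All figures are hypothetical, just to illustrate the distinction.
def sustained_clock(base_mhz, throttle_mhz, cooling_ok):
    """Clock the chip actually sustains under load."""
    return base_mhz if cooling_ok else throttle_mhz

# Case 1: a 1000 MHz part in a poorly cooled box thermally throttles to 750
case1 = sustained_clock(1000, 750, cooling_ok=False)
# Case 2: the same silicon shipped at a fixed 750 MHz never throttles
case2 = sustained_clock(750, 750, cooling_ok=True)
print(case1, case2)  # both sustain 750 MHz under load
```

Either way, the user sees 750 MHz under load; the argument is only about how the chip got there.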
 
Actually, thinking about this, it makes sense why Apple has done things the way they have, to a degree.

They want to run everything through Thunderbolt, which makes sense in laptops and the iMac, but with the Mac Pro it's not so simple. If they had a regular form factor, they still couldn't use regular GPUs, as (to the best of my knowledge) you can't route video back over PCIe and out through the Thunderbolt bus. So they couldn't use off-the-shelf GPUs like they did with the 4xxx and 5xxx series in the 2009/2010/2012 Mac Pro.

So when they made a new Mac Pro with the 2013 model, they needed to include Thunderbolt to match their other products. Since they're Apple, it should work the same way, so it needed to support video too, which meant creating bespoke GPUs that could be connected to TB switches for the TB2 ports. And if they had to go to that length, why not look at updating the design too? Makes sense, to a degree.

Perhaps they needed a different PCIe pin-out to support routing the display outputs to the TB switches, but that's wandering into an area I'm not familiar with. It's a shame they didn't go with something like MXM, but perhaps it wasn't technically possible for them to.

Sorry if other people have figured all this out already, but it just came to me :)

Yes, this is exactly the straitjacket that the Thunderbolt spec puts system designers in. The TB spec demands a DisplayPort signal, even for systems where it makes no sense, like workstations. TB on workstations would make sense if it could be used exclusively for GPIO, while allowing the GPUs to do what they do best: provide video output.

You can't change the PCIe pin-outs, as that is a standard affecting tons of existing hardware. And that is why you end up with solutions like taking the GPU's DisplayPort output and connecting it to a DisplayPort input on a TB PCIe card, just so you can get the signal you routed from the GPU back out of the TB output.

What's needed for workstations is a TB-lite that is GPIO only. They could call it ThunderClap! /soapbox There is no such thing as a thunder bolt, since thunder is a sound wave and does not form a bolt of any type. /soapbox
 
Would it hurt you to provide a link to that Anandtech story? It always seems that you're hiding something when you don't support your claims with a simple link.
Why is it so hard for you to type into Google: "Mac Pro 2013 review Anandtech"?
AidenShaw said:
In practice, what is the difference?

Say I have a part that can run at 1000 MHz with good cooling.

If I put it in a box with poor cooling, when it hits 80 °C it drops for thermal protection, maybe to 750 MHz.

So the innovating Amigos at Apple put that part in a *beautiful* box with poor cooling, and set the clock at 750 because the *beautiful* box compromises on cooling.

How is running at 750 MHz because of defective design of the system cooling different from running at 750 MHz because the designers set the clock at 750?
The difference is simple. A GPU is designed to work in a specific thermal envelope at a specific frequency. If the temperatures are too high, it will downclock itself to avoid overheating.

The GPUs in the MP 2013 do not do that; the only way they throttle is in a power-virus situation (which is rare; AnandTech quoted a 463 W total system load required for the GPUs to throttle. It's easy to find, I think).

The GPUs were most likely downclocked because of power limits, not thermals. That's the difference that you do not get.

How come? Ask yourself how far the D700 can be overclocked under Windows, and check whether the GPUs overheat. In essence, they do not. But they do exceed the power limit, and that shuts the computer down.
 
Why is it so hard for you to type into Google: "Mac Pro 2013 review Anandtech"?
If you've already typed that into Yahoo! (I don't use the evil Google engine) to double-check your post, how hard is it to cut and paste the link into your post? Unless you're hiding something.

The difference is simple. A GPU is designed to work in a specific thermal envelope at a specific frequency. If the temperatures are too high, it will downclock itself to avoid overheating.
So, Apple "declocked" it from the factory due to the failed design of the MP6,1: too little power and insufficient cooling. Thank you for admitting what we've been saying.

The GPUs in the MP 2013 do not do that; the only way they throttle is in a power-virus situation (which is rare; AnandTech quoted a 463 W total system load required for the GPUs to throttle. It's easy to find, I think).
If it's easy to find, then include a link. Unless you're hiding something.

The GPUs were most likely downclocked because of power limits, not thermals. That's the difference that you do not get.
What you "don't get" is that I don't care why the GPUs downclock - I simply want them to run at the advertised frequency.
 
I am not sure why we are talking about engines.....

Because the nMP and the Apple car are connected. Look at the picture of the piston and crankshaft I smuggled out of APL headquarters. As you can see, the Apple car will use nMPs as pistons: 4-core in 4-cylinder cars, 6 in 6, 8 in 8, and 12-core nMPs in the APL race car. You people must start paying attention! :p

Pistons:Crank.png
 
Overheating GPUs have been common in Macs since the 2007 MBP with the GeForce 8600M GT. The GT 330M was even worse in the 2010 model. And there have been numerous reports of failing GPUs in the nMP. I'm not convinced this isn't a thermal problem, even if Apple says it's a bad batch of cards.
 
Because the nMP and the Apple car are connected. Look at the picture of the piston and crankshaft I smuggled out of APL headquarters. As you can see, the Apple car will use nMPs as pistons: 4-core in 4-cylinder cars, 6 in 6, 8 in 8, and 12-core nMPs in the APL race car. You people must start paying attention! :p

View attachment 684684
Brilliant!!! And an Apple piston will have the same RRP as a Mac Pro.
 
First, both GPUs can be used for compute. I don't know why that myth still circulates around here. For example, Barefeats shows a benchmark with one and two GPUs enabled on the Mac Pro.

Second, in the current Mac Pro configuration you have three chips, each contributing roughly 125 W from its side of the thermal core triangle. That works because, in the highest-load case, the heat sources are spread out and dissipate evenly. When only one GPU is active, there is excess thermal dissipation capacity and the fan can stay quiet.

Now if you put a 250+ W GPU on one of those sides, it's no longer balanced and it's much more difficult to cool effectively, given that all that heat is coming from one side and the heat flux is much higher. It's not an impossible problem; maybe they could use heat pipes or something to spread out the heat, but it's certainly not what the current chassis was designed for.
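To put rough numbers on that, here is a sketch: the ~125 W per side comes from the post above, while the face area is a made-up illustrative figure, since only the ratio matters:

```python
# Heat flux through one face of the triangular thermal core.
# FACE_AREA is a hypothetical effective area; only the ratio matters here.
def flux_w_per_cm2(watts, face_area_cm2):
    return watts / face_area_cm2

FACE_AREA = 300.0  # hypothetical effective area per face, cm^2

balanced = flux_w_per_cm2(125, FACE_AREA)     # ~0.42 W/cm^2 per side
one_big_gpu = flux_w_per_cm2(250, FACE_AREA)  # ~0.83 W/cm^2 on one side
print(round(one_big_gpu / balanced, 1))  # 2.0x the flux through one face
```

Doubling the dissipation on one face doubles the flux that face must carry, which is why the balanced three-way split is central to the design.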
The truth is very few apps can do compute (not rendering) on both Mac GPUs; this is an old discussion.

The most likely scenario is the nMP toasting a single GPU and the CPU while the second GPU sits idle; even when both GPUs are running they are throttled to avoid overheating (the D700s are 220 W GPUs at full speed).

I won't teach you thermodynamics. Apple's thermal core isn't a car engine block where heat balance accounts for reliability. If the current thermal core operates unbalanced, it only means the metal close to the hotter CPU/GPU expands a few hundredths of a millimeter. Given that each CPU/GPU is on its own board, interconnected by a flexible flat ribbon cable, even the most unbalanced scenario is comfortably within hardware tolerances.


***********************************************************

Those who still believe in liquid cooling, please show me a single HPC supercomputer using liquid cooling.

Liquid cooling drawbacks:
  1. not as reliable as air cooling (forced or passive)
  2. coolant leaks as the system ages
  3. added noise/heat (pumps plus fans require more power than fans alone)
On the other hand, heat pipes, considering total system size, are much more compact and more effective (especially phase-change setups).

Imagine moving a Mac Pro with an external liquid cooling system... c'mon.
Don't think of integrating liquid cooling to save space; the resulting system will be bigger than one with heat pipes.
 
What you "don't get" is that I don't care why the GPUs downclock - I simply want them to run at the advertised frequency.
Do the GPUs throttle because of the thermal solution? No. Are you trying to move the goalposts to prove how bad, in your opinion, the design of the Mac Pro is? Yes.

We were talking about how efficient a solution the MP is when you consider how dense it is and how much thermal power it dissipates. Then you and H2SO4 jumped in with the ridiculous statement that the GPUs are downclocked because of the thermal design.

AnandTech disagrees with you both. The GPUs do not throttle because of the thermal core design; they can throttle because of a power virus. Then you spin it that you do not care, because they are throttled by design. By that logic, every professional GPU is throttled by design, because it uses lower core clocks than consumer parts.

Is it not?
 
Those who still believe in liquid cooling, please show me a single HPC supercomputer using liquid cooling.

Liquid cooling drawbacks:
  1. not as reliable as air cooling (forced or passive)
  2. coolant leaks as the system ages
  3. added noise/heat (pumps plus fans require more power than fans alone)
On the other hand, heat pipes, considering total system size, are much more compact and more effective (especially phase-change setups).

Imagine moving a Mac Pro with an external liquid cooling system... c'mon.
Don't think of integrating liquid cooling to save space; the resulting system will be bigger than one with heat pipes.

https://www.zotac.com/us/product/mini_pcs/magnus-en1080-10-year-anniversary-edition

Liquid cooled and very small. This is just one of many examples. It's well established by now that liquid cooling systems provide better cooling than simple air cooling. Even Apple knows that; hence the G5. There may be drawbacks, but I'd much rather top up the coolant every three years than replace a fried mainboard every year, as I did with the 2010 MBP.

And as you say, it doesn't even have to be an LCS; a good heat-pipe solution is already much better than pure chip-on-aluminium coolers. Heck, even chip-on-copper is better than that. Clever use of heat pipes in a modified nMP design could make for a much more even heat distribution, both against the die and in the balance of the three chips, and it could carry the heat away from other components to the top of the case where the fan is, before the fan even has to spin. Is a hunk of aluminium the cheapest solution, though? Sure is!
 
Except when the coolant leaked... I had one of those, and it ruined the whole computer, which was replaced by my trusty 2006 Mac Pro...
That was 10 years ago. Current loop designs are much more reliable than before, especially the be quiet! Silent Loop 280; it's possibly the best of them all.

For the Mac Pro it's fairly straightforward to re-engineer the design: throw away the thermal core, connect the silicon to the tubing, and add the radiator underneath, where the fan is. The GPUs will have HBM, so the cards will be smaller and will require a much simpler solution for cooling the whole package.
 