I realize that this is a quote and not your words - but the MP6,1 thermal core is anything but balanced.
There are literally twice as many fins cooling the CPU as there are fins per GPU. Hmm, and the GPUs often burn out.
It shouldn't matter if it looks like the CPU has more fins pointing towards it since all 3 components share the same triangular thermal core.
The part where the quote mentions 'balance' probably relates to total heat output, since the Thermal Core presumably has a total heat dissipation rating. So, let's say it is rated at 450 watts total. Split evenly, 450 divided by 3 equals 150, which means each of the three components could output at most 150 watts and still stay within the Thermal Core's heat dissipation rating.
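To put rough numbers on that (purely hypothetical, since Apple never published a dissipation rating for the Thermal Core), here's a toy budget split, with the fin-count skew from above thrown in:

```swift
// Toy thermal budget split, assuming a purely hypothetical 450 W total
// dissipation rating for the Thermal Core. All numbers are made up.
let totalRatingWatts = 450.0

// Naive even split: each of the three components gets a third.
let evenBudget = totalRatingWatts / 3.0  // 150 W each

// If the fin area really favors the CPU 2:1 over each GPU (per the
// fin-count observation above), the effective budgets would skew too:
let shares: [String: Double] = ["CPU": 2.0, "GPU A": 1.0, "GPU B": 1.0]
let totalShares = shares.values.reduce(0, +)  // 4.0
for (name, share) in shares.sorted(by: { $0.key < $1.key }) {
    print("\(name): \(totalRatingWatts * share / totalShares) W")
}
// CPU: 225.0 W, GPU A: 112.5 W, GPU B: 112.5 W. On those made-up
// numbers, the GPUs sit much closer to their limit under sustained load.
```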
I don't know what the actual heat dissipation rating is for the 2013 Mac Pro's Thermal Core. One would assume it's adequate. Or, given the GPU failures, perhaps it isn't?
If it isn't, why has Apple neglected to redesign or otherwise address it?
My own fail-safe would be to add a second fan at the bottom to create a push-pull config. That would ensure air is pushed up and out, and prevent heat buildup around components outside the Thermal Core; the components facing outward when you open the 2013 Mac Pro get minimal airflow and could risk heat buildup.
I don't know. I am not an engineer, though.
I never understood why the fan was at the top. If there's only one fan, it would make sense for it to be at the bottom, so that any heat building up around those outer-lying components on the opposite side of the Thermal Core is pushed out, instead of pulled out...
Again, not a thermal engineer.
Also, perhaps there are dead zones in the design, like the center of the core, which may get no airflow at all, since axial fans like the one used have a dead zone at the center where the hub sits.
Again, not a...
You got it!
I don't think there is one single cause for all of the failures. A sizable class, though, seems to be sustained workloads that light up one or both GPUs for relatively long periods, along with moderate-to-high utilization of the CPU.
Too much coupling of heat sources, combined with not quite enough airflow and no overall 'balancing' global fallback mechanism, probably was (and is) a major contributing factor to those. It seems doubtful that Apple extensively torture-tested the MP 2013 model as a high-performance computational node, as opposed to a single-user interactive "viewer" workstation.
Well, as I pointed out in my prior post (#102), I think the 2013 MP has a couple of obvious head-scratching design elements from a thermal perspective, even to someone who isn't a thermal engineer.
There were software/firmware glitches too. Lots of stuff piled up, but the 'differentness' of the Mac Pro was an easy thing to point a finger at in the blame game.
It did look different. But more than that, it looks like some aeronautical, military, NASA-grade piece of kit. Of course, just because it looks “advanced” doesn’t mean it’s “perfect.”
Perhaps the biggest Achilles' heel of the 2013 Mac Pro is looking "perfect." If anything can be learned from the 2013 Mac Pro, maybe it's that nothing is "perfect." Apple could have tweaked the design to actually make it "perfect," but they didn't. So either it really was the "perfect" design, or Apple didn't feel like tweaking it because what people say on here might actually be true: Apple may have been second- or third-guessing whether to continue the Mac Pro at all.
No. Pragmatically, Apple switched to Metal because there wasn't an "open standard" evolving quickly enough (and with the priorities Apple wanted emphasized). OpenGL had some previous development stages that turned into more of an exercise in herding cats than a diligent, focused committee solving problems. So Apple did their own thing and didn't ask for consensus input on direction. Nvidia was primarily pushing CUDA (and slow-rolling OpenCL to allow CUDA to develop more traction/inertia). Microsoft, as usual, was not particularly interested in either OpenGL or OpenCL (they have alternatives to both to push, which muddies the water further). Google/Android was off hemming and hawing. Imagination Tech, Intel, and AMD played along with Apple's requests, but they were apparently also open to Metal (to keep the Apple checks coming).
It makes sense for Apple to create their own API. It streamlines things and paves a path down the road they want to travel.
And even though travelers need to get used to new routes, or a new destination, it's sometimes better to start more or less from scratch and do it again, hopefully better this time.
Similar to Thunderbolt, where Apple pitched the notion to Intel and they sprinted off getting something done without trying to gather a large group into some sort of standards body. Standards are funny things. It isn't just technical issues; the timing has to be right for them to work. Too many cooks in the kitchen in the extremely early stages can defeat consensus building. Too late in the process, and others have already cooked up their own alternatives to participate well.
I don’t know. I feel like they need TB4 to be done, too. For that “modular” Mac Pro.
Apple probably felt they didn't have 'time' for all of that courtship stuff. iOS needed to move to a "next generation" graphics stack sooner rather than later, so Apple largely did their own, based on some of the factors then being discussed about what to do 'next' for OpenGL, some common baseline approaches from the video game systems (PlayStation and Xbox/DirectX), and AMD's Mantle. (What was 'wrong' with OpenGL from the 3D library developer perspective was relatively well researched and discussed at that point.) In the 2012-2014 timeframe none of that was lining up for a solid, well-formed consensus, so Apple opted to just do it themselves (i.e., similar to the Microsoft model: leverage your market inertia to get other folks to row to your cadence/design dictates). Vulkan and OpenCL 2+ eventually arrived in 2015-2016, but that was probably at least two years too late for Apple.
My understanding is that OpenGL is slow. And, perhaps, bulky.
Isn’t Metal supposed to also streamline coding and make use of computational hardware resources that OpenGL can’t? Like, next-generation heterogeneous compute stuff?
Metal incrementally picked up some compute/computational aspects, but it isn't trying to be as "general usage" as OpenCL (it is a shader/shading language that has taken on a few more general compute aspects). In that respect, Apple is probably a bit behind the alternatives.
That might not be the point of Metal today. It could be a 'two birds, one stone' thing: Metal was to make macOS more responsive, which let them make Mojave compatible with Metal GPUs only and freed them from redoing OpenGL code just to bring it up to modernity….
No. Metal (and Vulkan, and the latest 'next gen' graphics stacks) are much more 'low level' than OpenGL is. There is now more of a presumption that most applications will use another 'portable' stack on top of the lowest-level one (Apple's Foundation libraries, Qt, a gaming 'engine', a company's proprietary porting library, etc.).
Using Metal as a substitute for OpenGL requires very substantive rewrite (or port-to-3rd-party) work.
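For a sense of what 'low level' means in practice, here is a minimal sketch of a Metal compute dispatch in Swift. The kernel name, sizes, and inline source are made up for illustration (a real app ships kernels in a .metal file and handles errors instead of force-unwrapping), but it shows how much bookkeeping the app now owns that OpenGL used to hide behind global driver state:

```swift
import Metal

// Minimal Metal compute dispatch: the app explicitly manages the device,
// queue, pipeline, buffers, command encoding, and synchronization.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!

// Hypothetical kernel, compiled from an inline source string for brevity.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void double_values(device float *data [[buffer(0)]],
                          uint i [[thread_position_in_grid]]) {
    data[i] = data[i] * 2.0;
}
"""
let library = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "double_values")!)

// The app allocates and manages its own GPU-visible memory...
let input = [Float](repeating: 1.0, count: 1024)
let buffer = device.makeBuffer(bytes: input,
                               length: input.count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// ...and explicitly records, submits, and waits on the work.
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreads(MTLSize(width: input.count, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()
```

All of that explicit setup is exactly the kind of thing a portable engine layer ends up hiding from application code.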
I know. Metal is akin to DirectX12 or Vulcan.
Similarly, OpenGL (and DirectX) had shader languages attached to them previously. Metal having a shader language doesn't really replace/merge OpenCL. Again, folks can mutate their OpenCL solution to fit Metal's abilities, but it is a substantive change. (Khronos, which manages the development of OpenGL/Vulkan/OpenCL, does coordinate Vulkan and OpenCL so that compute aspects tend to get funneled through OpenCL, but it is a somewhat different approach.)
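To make that "mutate their OpenCL solution" point concrete: the kernel languages themselves are close cousins, so a kernel body often ports fairly mechanically, and it's the host-side code around it that's the substantive change. A hypothetical saxpy kernel in both, shown here as Swift string literals:

```swift
// The same hypothetical saxpy kernel in both kernel languages.
// OpenCL C version:
let openCLKernel = """
__kernel void saxpy(__global const float *x,
                    __global float *y,
                    const float a) {
    size_t i = get_global_id(0);
    y[i] = a * x[i] + y[i];
}
"""

// Metal Shading Language version; the body is nearly identical, and only
// the address-space and index-attribute spellings change:
let metalKernel = """
#include <metal_stdlib>
using namespace metal;
kernel void saxpy(device const float *x [[buffer(0)]],
                  device float *y       [[buffer(1)]],
                  constant float &a     [[buffer(2)]],
                  uint i [[thread_position_in_grid]]) {
    y[i] = a * x[i] + y[i];
}
"""
```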
Yeah, so, Metal doesn’t replace OpenCL, but sorta wants to get rid of OpenGL (I think). And since Metal and OpenCL share similarities, something can be done in either one and it would be fine. Whereas OpenGL, I think, was too old and clunky, right?
GPU compute and classic GPU graphics work all have to get along and 'share' the physical GPU, so the management has to be merged at some point (both can't do whatever they want whenever they want).
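As a sketch of what that merged management looks like in Metal (the function and its parameters are mine, and the pipeline and buffer are assumed to come from a compute setup like the one sketched earlier), render and compute passes get encoded into one command stream aimed at the same physical GPU:

```swift
import Metal

// One command buffer carrying both a render pass and a compute pass:
// graphics and compute share the GPU through the same command stream
// rather than through separate, uncoordinated drivers.
func encodeMixedWork(device: MTLDevice,
                     queue: MTLCommandQueue,
                     pipeline: MTLComputePipelineState,
                     buffer: MTLBuffer) {
    // Offscreen render target for a trivial, clear-only render pass.
    let texDesc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm, width: 256, height: 256, mipmapped: false)
    texDesc.usage = .renderTarget
    let target = device.makeTexture(descriptor: texDesc)!

    let passDesc = MTLRenderPassDescriptor()
    passDesc.colorAttachments[0].texture = target
    passDesc.colorAttachments[0].loadAction = .clear
    passDesc.colorAttachments[0].storeAction = .store

    let cb = queue.makeCommandBuffer()!

    // Graphics work...
    let render = cb.makeRenderCommandEncoder(descriptor: passDesc)!
    render.endEncoding()

    // ...and compute work, back to back in the same submission.
    let compute = cb.makeComputeCommandEncoder()!
    compute.setComputePipelineState(pipeline)
    compute.setBuffer(buffer, offset: 0, index: 0)
    compute.dispatchThreads(MTLSize(width: 1024, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
    compute.endEncoding()
    cb.commit()
}
```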
OpenGL/OpenCL versus Metal is pretty similar to putting a round peg in a square hole. For some apps the round peg is going to be smaller than the square hole, so it will fit with some small shim/scaffolding adjustments. For some apps the round peg is bigger than the square hole, and just hammering on the peg isn't going to make it fit. (When Apple just waves their hands and says "it is a simple port"... it is mostly that... hand waving.)
No, I am saying, OpenCL and Metal are both round and OpenGL is square…. wait… whatever… something like that….
I don't even pretend to understand the intricacies of this, but I think your response is from a very technical standpoint... for the poster you replied to, the simple answer is "Metal is kind of like OpenCL & OpenGL merged." That's the way Wikipedia puts it…
I understood it just fine.
Now that may be a very simplified way of putting it, but unless you're actually a programmer using these APIs, it's apt. And I don't think understanding the subtle technical differences lends further understanding to why the 2013 MP failed (though I'd be interested to hear otherwise).
Well, it could be that an OpenGL workload is not as intensive as, say, an OpenCL workload; an OpenCL-intensive task cranks up and lights up all the compute cores in those GPUs, creating heat.
Since the 2013 Mac Pro, with its two GPUs, was marketed as a compute unit, it might be used in scenarios where the CPU and both GPUs are lit up with OpenCL-intensive compute tasks for hours on end, or perhaps days. So this is perhaps where he is coming from in connecting graphics APIs to GPU failures.
Of course not capital... they bet the farm that the dual GPUs would propel the success of the 2013 Mac Pro. Was it the whole farm? Who cares, it's just a phrase. The point is that the dual GPUs were supposed to be the killer feature, and it didn't work out that way for a host of reasons.
Or you can look at it as having worked too well, if we take the anecdotal evidence that the GPUs in them burned out.
This is interesting... could be... but it would be kind of a weird approach: design a computer that needs custom GPUs, but because they know they can't sell enough Mac Pros with just a single GPU to make it worthwhile, they design it so each computer comes with two. Could be... but might be a stretch.
I think the dual-GPU thing is what made the 2013 Mac Pro the 2013 Mac Pro, right?
I can see my phrasing could be misconstrued, but what I was saying is that if they had taken the innards of the iMac and placed them in the form factor of the 2013 Mac Pro, it would have sold like hotcakes.
You mean, if they updated the 2013 Mac Pro?
Or just put a 7700K CPU and a Radeon Pro 580X in a 2013 Mac Pro chassis?