
Xteec

macrumors regular
Sep 21, 2012
146
71
Australia
Can't believe you guys are still having these kinds of threads; I admire your tenacity!

The fact is the nMP is based on two big bets. The first is that all but niche uses will transition to HBM-based video cards; this bet still seems right, but the transition is many, many years behind the expected schedule, and in the interim the nMP is making some rather large compromises.

The other bet is that "mid-sized" jobs aren't economically-significant enough to Apple for Apple to worry about when designing the nMP.

How do you know you have a "mid-sized" job?

It's easy!

Do you find yourself saying "man, if only I could upgrade my built-in video card to a Titan...", or "man, if only I could get a 2-cpu, one-gpu nMP..."?

If you do, congrats! If small changes like that have a big impact on time-to-complete, whatever you're doing is a "mid-sized" job.

By contrast, if you're doing a "big job", the actual processing work is far too big for any one box, and will necessarily get farmed out to servers.

And, of course, if you're doing a "small job" the nMP (or even a higher-end iMac/mbp) is already fast enough for you; all it has to do for "small jobs" is keep up with you during interactive use.

Moreover, once you stop caring about the "mid-sized" jobs, you get an opportunity to "think different": for "small jobs" interactive responsiveness is what matters, so you should emphasize that; for "big jobs", the cluster does the actual processing, with the workstation only used for interactive preview/configuration/setup...thus, once again, for use on "large jobs" interactive responsiveness is what matters.

Sure, if you wanted to improve performance on "mid-sized" jobs it'd help to add a 2nd cpu, but that 2nd CPU doesn't have a material benefit for either the "small job" or the "large job" customers; the "small jobs" won't need it, the "big jobs" won't actually use it. Adding an expensive component that doesn't benefit your intended customers isn't worthwhile...yes, you *could* say that about GPU 2, but the bet there is that it'd pay off to have dedicated "UI" and "compute" GPUs under interactive use.

I don't entirely agree with this reasoning, but like it or not such considerations are a significant part of why the nMP is the way it is; the perception is that the users with "mid-sized" jobs are the least-valuable segment (highly vocal, but also rather parsimonious...) and not valuable enough to cater to.

This is also why the updates have been so slow: if the nMP's most important metric is seen as interactive responsiveness, single-threaded CPU benchmarks are the best single proxy, and those tick up maybe 5%-8% a generation, at *best*, these days...thus new CPU generations on their own aren't enough to merit a spec-bump release, and the other components are similarly lagging.
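To put rough numbers on that, here's a quick back-of-envelope sketch; the 5%-8% per-generation figure is the one above, and the rest is just illustrative compounding arithmetic:

```python
# Back-of-envelope: cumulative single-threaded speedup after N CPU generations,
# assuming 5%-8% improvement per generation (figures quoted above, purely illustrative).

def cumulative_gain(per_gen_gain: float, generations: int) -> float:
    """Total speedup factor after `generations` generations of compounding."""
    return (1.0 + per_gen_gain) ** generations

for gain in (0.05, 0.08):
    for gens in (1, 2, 3):
        print(f"{gain:.0%}/gen over {gens} generation(s): "
              f"{cumulative_gain(gain, gens):.2f}x")

# Even three generations at 8% is only ~1.26x overall -- hardly a headline
# number for a spec-bump release on its own.
```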

As a betting man I'd guess there will be a 2016 nMP, and the high-end GPU will be a cut-down Fury Nano derivative updated to use HBM2 (thereby able to hit 8GB); the other GPU models will be updated, but of similar vintage. It'd be cool to jump straight to AMD Polaris, but it just seems unlikely. The CPU will be whatever is available.
Thanks for this post. This is a really interesting take on why we are where we are at with the Mac Pro.

However it does beg the question: shouldn't the Mac Pro be serving the mid-size job market as you define it given the iMac will eventually serve the small-size jobs? Your post provides a lot of insight into what Apple might be thinking. But it also further convinces me that the gap left in the Mac lineup is a mistake.
 
  • Like
Reactions: Aldaris

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Stacc: it's the other way around. If you're looking at a long-term roadmap from your vendors, and there's a cut-off point after which all but niche GPU boards will be at-most half-length or smaller, why *would* you stick with the cMP case? What's all that internal case volume going to be *for*, once that transition has happened?

I am not trying to justify the classic Mac Pro; my point was only that you can't justify the new Mac Pro design based on GPUs with HBM, since it currently uses GDDR5 and works just fine. Regardless, the size of many GPUs is dictated by how big the cooler needs to be rather than the actual PCB size. For instance, AMD Fury boards are 12" long to maximize cooling.

For the other point: on the one hand you're right; on the other hand, carefully consider the word "derivative". Either way, though, HBM1 is a horrible idea from a supply-chain standpoint, so expect either HBM2 (if lucky) or GDDR5 (the safer choice).

"Derivative" is very subjective. AMD would not redesign Fiji (Fury/Nano) with simply a new memory controller for HBM2. Usually when AMD creates a new high end chip, all the previous chips just get bumped down the lineup. So a Nano "derivative" would simply be the next generation of chips, which in this case will be Polaris. Rumors point to probably one high end GPU from AMD this year with HBM2 on it and 1 or 2 low to mid range GPUs with GDDR5. I am not sure what makes HBM1 "horrible" besides the fact that it is limited to 4 GB, making it unsuitable for professional graphics and the mac pro.

Thanks for this post. This is a really interesting take on why we are where we are at with the Mac Pro.

However it does beg the question: shouldn't the Mac Pro be serving the mid-size job market as you define it given the iMac will eventually serve the small-size jobs? Your post provides a lot of insight into what Apple might be thinking. But it also further convinces me that the gap left in the Mac lineup is a mistake.

While I don't want to turn this thread (once again) into one debating the merits of the new Mac Pro, it is clear that Apple chose to position the Mac Pro so it addresses the computing needs of someone who needs parallel processing from 6 to 12 cores (and soon 6 to ~22 cores when it gets updated). They chose to ignore the market of Mac users who need more than that, assuming those users would be building cheap Linux/Windows workstations or simply using computing clusters. The dual-GPU design was probably meant to motivate developers to develop better for OpenCL, since GPUs are very good parallel computing machines (for some workloads). Now obviously some people disagree with Apple's design decisions, but the closer you get to a pure number-crunching machine in which UI design doesn't matter, the less Apple competes. For a similar reason Apple stopped producing the Xserve: no one wanted to pay the Apple design premium for a headless machine that just served webpages or did other server duties.
 
Last edited:

goMac

macrumors 604
Apr 15, 2004
7,663
1,694
By contrast, if you're doing a "big job", the actual processing work is far too big for any one box, and will necessarily get farmed out to servers.

I keep telling people this and no one tends to listen...

The reality Apple is acknowledging is that large jobs are being done on server farms, not locally. You need a machine powerful enough to do previewing and editing, but you're not going to be doing rendering or encoding locally if you're a serious business. Pixar isn't rendering their next movie at their desks.

Shops that are still trying to do heavy lifting at their desks are either not very large, or are doing things really inefficiently.

If you're not storing the whole project at your desk, you don't need a lot of local storage either. You've probably just pulled down the part you're currently responsible for.
 

tuxon86

macrumors 65816
May 22, 2012
1,321
477
I keep telling people this and no one tends to listen...

The reality Apple is acknowledging is that large jobs are being done on server farms, not locally. You need a machine powerful enough to do previewing and editing, but you're not going to be doing rendering or encoding locally if you're a serious business. Pixar isn't rendering their next movie at their desks.

Shops that are still trying to do heavy lifting at their desks are either not very large, or are doing things really inefficiently.

If you're not storing the whole project at your desk, you don't need a lot of local storage either. You've probably just pulled down the part you're currently responsible for.

Except Apple isn't selling any of that server farm or backbone infrastructure... So what is the incentive to buy some Apple workstations to go with your HP or Dell servers, instead of negotiating a wholesale deal with HP or Dell for both the servers and the workstations, considering that besides FCPX just about everything else already runs better on Windows or Linux anyway?
 

MacVidCards

Suspended
Nov 17, 2008
6,096
1,056
Hollywood, CA
I keep telling people this and no one tends to listen...

The reality Apple is acknowledging is that large jobs are being done on server farms, not locally. You need a machine powerful enough to do previewing and editing, but you're not going to be doing rendering or encoding locally if you're a serious business. Pixar isn't rendering their next movie at their desks.

Shops that are still trying to do heavy lifting at their desks are either not very large, or are doing things really inefficiently.

If you're not storing the whole project at your desk, you don't need a lot of local storage either. You've probably just pulled down the part you're currently responsible for.

You need to explain your theory to the amateurs behind "Deadpool"

https://larryjordan.com/blog/deadpool-edited-with-adobe-premiere/

They apparently burned up 10 nMPs because someone mistook them for workstations and tried to get work done on them.

Fortunately they wised up and rendered the tough stuff (opening sequence) on some HP gear using CUDA.
 


SeaCaptainBiscuit

macrumors newbie
Aug 31, 2015
17
10
Thanks for this post. This is a really interesting take on why we are where we are at with the Mac Pro.

However it does beg the question: shouldn't the Mac Pro be serving the mid-size job market as you define it given the iMac will eventually serve the small-size jobs? Your post provides a lot of insight into what Apple might be thinking. But it also further convinces me that the gap left in the Mac lineup is a mistake.

From a functionality standpoint, maybe, but from an addressable-market perspective it's less clear (especially since most of the actual need for workstations capable of doing "mid-sized" jobs isn't really in domains where Apple has much strength). It's also the case that Apple's not afraid of cannibalizing itself (consider the recent color gamut changes to the iMac lineup in this light).

I am not trying to justify the classic Mac Pro; my point was only that you can't justify the new Mac Pro design based on GPUs with HBM, since it currently uses GDDR5 and works just fine. Regardless, the size of many GPUs is dictated by how big the cooler needs to be rather than the actual PCB size. For instance, AMD Fury boards are 12" long to maximize cooling.

If you're looking for strict logical entailment you won't find it; the cMP design is always an option, no?

You keep approaching this from the wrong angle, as if I'm claiming there's something magical about HBM that forces an nMP case, and failing to make that case. I've certainly failed to convince you, but that's not the claim I'm making.

I will try one last time here: the nMP case takes responsibility for cooling away from its GPU boards, at the cost of some trade-offs. The design timeline here is longer than you may realize, which should color what follows.

Without expecting a move to something *like* HBM, adopting an nMP-style case is risky, for all the reasons you guys have groused about for almost three years now: yes, it's an extremely-thermally-efficient case, but you give up *too much* to get that case.

If you expect a transition to something like HBM, you can justify the nMP design, because there's a reasonable argument to be made that adopting the case won't force significant compromises on the GPU.

It's, again, not because HBM is magic pixie dust; it's because, in a post-HBM world, you can be entirely confident that the *entire* functionality of the "GPU" part of a "GPU" board fits into a square ~1.5" (or less) on a side. This can leave you pretty confident you can fit *any* GPU into a case like the nMP without dropping any significant functionality (like, being stuck with half the video RAM or something like that); note that the rest of a board is either support electronics Apple usually does itself, or "dead space" used for cooling (which an nMP case handles).

This makes it look like the worst-case scenario is something like "your nMP has 2 top-end GPUs, but maybe 20% slower due to running at lower power levels"...which, yes, is a compromise, but it's at least a defensible one.

In a world where on-package GPU memory is still a decades-away pipe dream, an nMP design would be madness. Yes, in hindsight, GDDR-based GPUs have favored performance over memory capacity, but that wasn't necessarily a given; but, that's how it turned out, and even the GDDR-based GPUs have a smallish "functional block" these days, which makes it feasible to squeeze a reasonable GPU into the nMP case...but the compromise factor is a lot higher, and the amount of customization is a bit higher (if you're observant, you'll note the GDDR chips on the nMP are packed in closer-together than the norm, which increases the importance of very, very good cooling...consider this especially in light of reliability issues when worked hard).

But yes, nothing in HBM dictates the nMP design, but it makes it easier to justify that design as a low-compromise design (and again, the design timeline here is surprisingly long, making it easier to get things wrong).

Obviously the timing hasn't really played out too nicely; the bigger miscalculation is the power envelope being too low. It's a bit unexpected that "mainstream high end" GPUs have stabilized at such high power levels, but they have (for various reasons), and this means even in a best-case scenario the nMP is going to look like a pretty big GPU compromise compared to the obvious alternatives.

"Derivative" is very subjective. AMD would not redesign Fiji (Fury/Nano) with simply a new memory controller for HBM2. Usually when AMD creates a new high end chip, all the previous chips just get bumped down the lineup. So a Nano "derivative" would simply be the next generation of chips, which in this case will be Polaris. Rumors point to probably one high end GPU from AMD this year with HBM2 on it and 1 or 2 low to mid range GPUs with GDDR5. I am not sure what makes HBM1 "horrible" besides the fact that it is limited to 4 GB, making it unsuitable for professional graphics and the mac pro.

It's horrible from a supply-chain standpoint, like I said. Words aren't there just for decoration!

HBM1 is really only used by AMD, in only one product line; that product line has had at-best "ok" sales (at least relative to the overall market size).

HBM2 is better on essentially every relevant metric, has multiple customers already lined up, and is already beginning volume production. The HBM2 standard also includes a legacy mode making it easier to drive an HBM2 stack from an HBM1-targeted memory controller (it's not drop-in easy, and you lose some of the performance, but it's a lot less work than a full redesign).

If you're Apple, and you might not put out another update for 3 years, do you want to be on the hook to keep buying a dead-end product? Maybe you do, but probably you either stick with the tried-and-true stuff or you pick the obvious choice for the future (if it's ready on time).

If you're AMD, do you really want to be the only customer for a dead-end tech? Maybe, maybe not; it depends on sales projections and inventory levels into which I have no insight.
 

zephonic

macrumors 65816
Feb 7, 2011
1,314
709
greater L.A. area
Except Apple isn't selling any of that server farm or backbone infrastructure... So what is the incentive to buy some Apple workstations to go with your HP or Dell servers, instead of negotiating a wholesale deal with HP or Dell for both the servers and the workstations, considering that besides FCPX just about everything else already runs better on Windows or Linux anyway?

Many people prefer an OSX front end, regardless of what crunches numbers in the back.

iCloud runs on Windows Azure, I believe. Knowing that, would you now prefer a Windows machine because that's what works on the back end?

I think Apple feels most people don't care what does the heavy lifting; they just care how they interact with it.

As long as your machine is capable of handling this interaction smoothly, what's not to like? :shrug:
 

ManuelGomes

macrumors 68000
Original poster
Dec 4, 2014
1,617
354
Aveiro, Portugal
I could see a Nano as the entry-level D310, and Polaris with HBM2 as the mid and high tiers; 8GB and 16GB would be pretty awesome, although I doubt we'll see a 16GB card on the nMP. Maybe Tonga/Fiji/Polaris?
 

Ph.D.

macrumors 6502a
Jul 8, 2014
553
479
Some of us are curious about the potential for a nnMP. If others are not, or hate the nMP on principle, etc., then fine, you are welcome to your own opinions. However, I'd humbly suggest you remain respectful to the other forum members. This is not the place for vitriol, and we've heard it all before anyway.

Peace.

(Edited)
 
Last edited:

poematik13

macrumors 65816
Jun 5, 2014
1,397
2,046
I'm SO SICK of the repetitive, redundant, annoying and personally-insulting vitriol from a certain forum member, who seems to view this forum as his personal trolling sandbox.

Some of us are actually curious about the potential for a nnMP. If you, this "certain forum member", are not, then show some respect for others and just let it go.

We all know who it is. Best bet is to just report everything they do, and write a message detailing the issues to the mods. I've been making an effort to do that almost weekly, and also trying to line it up with whenever they post a new hate thread so I can use it as evidence. I agree with you that that certain user is definitely ruining this forum.
 

hollyhillbilly

macrumors member
Mar 30, 2012
73
23
You need to explain your theory to the amateurs behind "Deadpool"

https://larryjordan.com/blog/deadpool-edited-with-adobe-premiere/

They apparently burned up 10 nMPs because someone mistook them for workstations and tried to get work done on them.

Fortunately they wised up and rendered the tough stuff (opening sequence) on some HP gear using CUDA.
We all know that CUDA doesn't work as well on an nMP as on a cMP/Hackintosh. I am sure they will have a different workflow in the future. And they will use the nMP, not a giant tinkertoy Hackintosh.

For a workflow that works, look at "Whiskey Tango Foxtrot", which had 1200 VFX shots, used FCPX, and did not blow up any computers.

https://library.creativecow.net/wilson_tim/fcpx_whiskey-tango-foxtrot/1
 
Last edited by a moderator:

MH01

Suspended
Feb 11, 2008
12,107
9,297
You need to explain your theory to the amateurs behind "Deadpool"

https://larryjordan.com/blog/deadpool-edited-with-adobe-premiere/

They apparently burned up 10 nMPs because someone mistook them for workstations and tried to get work done on them.

Fortunately they wised up and rendered the tough stuff (opening sequence) on some HP gear using CUDA.

says the guy selling non-workstation GPUs for workstation machines for a living....... Careful or else people will wise up ;)
 
  • Like
Reactions: poematik13

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
says the guy selling non-workstation GPUs for workstation machines for a living....... Careful or else people will wise up ;)
From your signature, it seems like you're taking a "pass" on the MP6,1 - a machine with a couple of Radeons that Apple is selling as workstation cards.

;)
 
  • Like
Reactions: tuxon86

Stacc

macrumors 6502a
Jun 22, 2005
888
353
If you're looking for strict logical entailment you won't find it; the cMP design is always an option, no?

You keep approaching this from the wrong angle, as if I'm claiming there's something magical about HBM that forces an nMP case, and failing to make that case. I've certainly failed to convince you, but that's not the claim I'm making.

I will try one last time here: the nMP case takes responsibility for cooling away from its GPU boards, at the cost of some trade-offs. The design timeline here is longer than you may realize, which should color what follows.

Without expecting a move to something *like* HBM, adopting an nMP-style case is risky, for all the reasons you guys have groused about for almost three years now: yes, it's an extremely-thermally-efficient case, but you give up *too much* to get that case.

If you expect a transition to something like HBM, you can justify the nMP design, because there's a reasonable argument to be made that adopting the case won't force significant compromises on the GPU.

It's, again, not because HBM is magic pixie dust; it's because, in a post-HBM world, you can be entirely confident that the *entire* functionality of the "GPU" part of a "GPU" board fits into a square ~1.5" (or less) on a side. This can leave you pretty confident you can fit *any* GPU into a case like the nMP without dropping any significant functionality (like, being stuck with half the video RAM or something like that); note that the rest of a board is either support electronics Apple usually does itself, or "dead space" used for cooling (which an nMP case handles).

This makes it look like the worst-case scenario is something like "your nMP has 2 top-end GPUs, but maybe 20% slower due to running at lower power levels"...which, yes, is a compromise, but it's at least a defensible one.

In a world where on-package GPU memory is still a decades-away pipe dream, an nMP design would be madness. Yes, in hindsight, GDDR-based GPUs have favored performance over memory capacity, but that wasn't necessarily a given; but, that's how it turned out, and even the GDDR-based GPUs have a smallish "functional block" these days, which makes it feasible to squeeze a reasonable GPU into the nMP case...but the compromise factor is a lot higher, and the amount of customization is a bit higher (if you're observant, you'll note the GDDR chips on the nMP are packed in closer-together than the norm, which increases the importance of very, very good cooling...consider this especially in light of reliability issues when worked hard).

But yes, nothing in HBM dictates the nMP design, but it makes it easier to justify that design as a low-compromise design (and again, the design timeline here is surprisingly long, making it easier to get things wrong).

Obviously the timing hasn't really played out too nicely; the bigger miscalculation is the power envelope being too low. It's a bit unexpected that "mainstream high end" GPUs have stabilized at such high power levels, but they have (for various reasons), and this means even in a best-case scenario the nMP is going to look like a pretty big GPU compromise compared to the obvious alternatives.

I understand what you are saying; this actually has been a somewhat significant problem. Check out this picture of the AMD 390X. Look how many memory modules are around that card. For comparison, here is the D700. Basically, with GDDR5, if you want to up the memory bandwidth you have to add memory modules. It's not impossible though, as Nvidia manages to fit a reasonable number of memory modules on the Titan X with 12 GB of VRAM.
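To put some rough numbers on the module-count issue, here's my own back-of-envelope sketch, using typical figures (32-bit, ~6 Gbps GDDR5 chips versus 1024-bit, ~1 Gbps HBM1 stacks), not the exact specs of any particular card:

```python
# Rough peak-bandwidth math: GDDR5 widens the bus by adding chips around the GPU,
# while HBM gets a very wide bus from a few stacks on the package.
# Figures below are typical datasheet-style values, not exact specs of the 390X/Fury.

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = bus width (bits) * per-pin data rate (Gbps) / 8."""
    return bus_width_bits * data_rate_gbps / 8

# GDDR5: each chip provides a 32-bit channel at ~6 Gbps,
# so a 512-bit bus needs 16 chips spread around the PCB.
gddr5_chips = 16
print(f"GDDR5, {gddr5_chips} chips: {bandwidth_gb_s(gddr5_chips * 32, 6.0):.0f} GB/s")  # ~384

# HBM1: one stack is 1024 bits wide at ~1 Gbps, and four stacks sit on the GPU package.
hbm_stacks = 4
print(f"HBM1, {hbm_stacks} stacks: {bandwidth_gb_s(hbm_stacks * 1024, 1.0):.0f} GB/s")  # ~512
```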

However, I think the biggest constraint on GPUs in the Mac Pro is power/thermals and not PCB area. Reduced-size GPUs have existed for a long time in MXM format, so the only thing different here is that Apple used their own design. Of course, it did restrict them from using obscene amounts of VRAM, but Apple was never going to do that anyway. For instance, Apple did not use AMD's Hawaii GPU, not because it wouldn't fit in the Mac Pro, but because it was too hot to get much benefit over Tahiti. (However, now looking at that picture, it may also be because they couldn't get 8 GB of memory in there.)

It's horrible from a supply-chain standpoint, like I said. Words aren't there just for decoration!

HBM1 is really only used by AMD, in only one product line; that product line has had at-best "ok" sales (at least relative to the overall market size).

HBM2 is better on essentially every relevant metric, has multiple customers already lined up, and is already beginning volume production. The HBM2 standard also includes a legacy mode making it easier to drive an HBM2 stack from an HBM1-targeted memory controller (it's not drop-in easy, and you lose some of the performance, but it's a lot less work than a full redesign).

If you're Apple, and you might not put out another update for 3 years, do you want to be on the hook to keep buying a dead-end product? Maybe you do, but probably you either stick with the tried-and-true stuff or you pick the obvious choice for the future (if it's ready on time).

If you're AMD, do you really want to be the only customer for a dead-end tech? Maybe, maybe not; it depends on sales projections and inventory levels into which I have no insight.

Eh, HBM1's biggest failing is that it only appeared on a graphics card that appeals to a very small niche. Fury is only a high-end gaming card, and even 4 GB of VRAM is not that much for a card that sells for >=$500. I think they designed it as a competitor to the Nvidia GTX 980, and then Fury got delayed and trumped by the 980 Ti. HBM1 is not "dead-end tech"; it's just the first generation of a new tech. It will be interesting to see if Fiji and HBM1 live on in AMD's next graphics lineup, given how big and expensive the chip is.

FYI, it is possible to ignore (as in hide their posts) specific members of this forum ;)
 
Last edited:

mattspace

macrumors 68040
Jun 5, 2013
3,341
2,975
Australia
Otherwise every nMP is a ticking time bomb prone to failure under load. This would seriously impact second hand prices of systems no longer covered by warranty.

You have described the situation perfectly. The recall is due to a design issue - the gaming cards the Dxxx models were based on were never designed to run the amount of VRAM Apple uses. They are all susceptible to thermal failure, as are the replacement cards, which are the exact same part.
 
Last edited:

Hank Carter

macrumors 6502
Oct 1, 2015
338
744
I keep telling people this and no one tends to listen...

The reality Apple is acknowledging is that large jobs are being done on server farms, not locally. You need a machine powerful enough to do previewing and editing, but you're not going to be doing rendering or encoding locally if you're a serious business. Pixar isn't rendering their next movie at their desks.

Shops that are still trying to do heavy lifting at their desks are either not very large, or are doing things really inefficiently.

If you're not storing the whole project at your desk, you don't need a lot of local storage either. You've probably just pulled down the part you're currently responsible for.


Yes and no.

Nobody is going to attempt to render something as complex as a 500-frame V-Ray job on their local workstation unless they are a one-man band or it's an emergency. Jobs like that should be handled by a farm and distributed rendering. That said, it is extremely common in VFX facilities to tie local workstations into the render farm during off hours, and yes, these workstations get pounded 8-14 hours at a time just like the blades.

But regardless, there are many cases where you need an extremely powerful workstation just to set up a shot before it can be sent to the farm for rendering. That includes CGI lighting, particle systems, physics-based simulations, and tasks such as digital intermediate color timing, among others that require realtime feedback. Some of these tasks can't be executed on a distributed farm because of how they are calculated. Specialized programs like DaVinci Resolve, Autodesk Flame, and Lustre need extremely powerful workstations for realtime operation. So the need for heavy desktop iron is very real.

It is common to see dual-Xeon 10-core boxes with 1-2 Titan X-class cards under the desks of animators, lighters, and compositors. These days they tend to come from HP, often running Linux. Unfortunately the current nMP, even in its fastest configuration, is no longer competitive for heavy-duty desktop tasks.

And to be perfectly honest, outside of editorial, graphics, and sound departments, Mac workstations have never had a significant presence in post production, apart from a few companies like Pixar. First it was SGI workstations, then Windows NT and Linux. Now it's Linux and Windows 7, with 10 making inroads.

I still have an nMP 12/D700 on my desk for NUKE, but it's starting to show its age. Since it can't be upgraded, it will eventually be swapped for something like a dual-CPU HP workstation running Linux. The single CPU simply does not pack enough of a punch, and the GPU does not have enough RAM and is 4 years old. nMP boxes frying themselves under heavy sustained use are not uncommon, and on the whole the machine is a dead end because of the custom GPU cards that can't be upgraded.

Apple needs to pull their head out of the sand and release a dual-CPU box with PCI slots that can handle up to 4 GPU cards, and update it on a regular basis. Otherwise they may as well just call it a day and stick to prosumer machines.
 
Last edited:

pat500000

Suspended
Jun 3, 2015
8,523
7,515
Okay... this June, the month of WWDC, they may not even release the nMP, based on the previous rumor of OS X El Capitan containing internal Mac Pro code. Since they will have OS X 10.12 by then, I wonder if it's possible to release it after March... because if not, I doubt they will even release it this year.
 

Demigod Mac

macrumors 6502a
Apr 25, 2008
839
288
They apparently burned up 10 nMPs because someone mistook them for workstations and tried to get work done on them.

nMP boxes frying themselves under heavy sustained use are not uncommon, and on the whole the machine is a dead end because of the custom GPU cards that can't be upgraded.

This is the ultimate dealbreaker for any workstation. Companies and professionals are willing to shell out big bucks for even the [relatively] minor stability improvement that Quadros/Firepros, ECC and Xeons offer. If the GPUs are regularly burning themselves out under load then they won't give a damn about asynchronous compute or how thin/shiny/small/beautiful the machines are.

Time to admit it, folks: Apple screwed up with the nMP. It really is indefensible.
 

zephonic

macrumors 65816
Feb 7, 2011
1,314
709
greater L.A. area
And to be perfectly honest, outside of editorial, graphics, and sound departments, Mac workstations have never had a significant presence in post production, apart from a few companies like Pixar. First it was SGI workstations, then Windows NT and Linux. Now it's Linux and Windows 7, with 10 making inroads.

Exactly. Heavy animation software wasn't even available for Mac until a few years back. So that wasn't a market they were going to lose.

Macs have had a strong presence in sound and video editing, but 3D/animation stuff was never really an OSX thing to begin with. It looked like that was changing around 2011, when Apple's success with iPhone/iPad gave the Mac enough momentum for Autodesk etc. to finally port some of their titles to OSX, but that was an exception rather than the norm.
I remember looking at Pixar's Renderman back in 2009 or so, and even that wasn't available for OSX back then.
 
Last edited:
  • Like
Reactions: Hank Carter

MH01

Suspended
Feb 11, 2008
12,107
9,297
From your signature, it seems like you're taking a "pass" on the MP6,1 - a machine with a couple of Radeons that Apple is selling as workstation cards.

;)

I'll trust AMD firmware in those over said individual's cards ;)

A £300 - £400 markup on a 980 Ti, and Apple is the bad guy.

Apple left the pro workstation market about 5 years ago... I find it funny that people are still debating this. If you want a real workstation, buy a PC.

The nMP works really well for some, though; not everyone "needs" a workstation, and you will find quite a few of those people own a Mac Pro. Let's be honest, it's Apple's desktop where you can do heavy-lifting tasks without the thing melting, with the iMac being the closest.

Hope that is okay with you ;)
 
Last edited by a moderator:

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
One of the reasons people need dual-CPU workstations with CUDA hardware is that there is no scheduling done at the hardware level.

In essence, CUDA hardware needs software to work properly, because it lacks a hardware scheduler. The last Nvidia GPU that had a hardware scheduler was Fermi, and it was hot, inefficient, and burned itself to death. That's why Nvidia got rid of it: they were not able to control it properly. Kepler and Maxwell have the GigaThread Engine, which needs drivers to work properly; it cannot adapt itself to the application. If it had hardware scheduling, it would. All of the scheduling is on the CPU.
That's why the more CPU cores you have, the better for Nvidia hardware.

On the AMD side you don't need that: there are hardware schedulers, which make the hardware capable of adapting itself to the application. This is exactly why I have been writing for a very long time that in the future GPU farms will do the heavy-lifting jobs and CPUs will do much lighter jobs. You will not need that much power on the CPU side. All you will need is Thunderbolt 3 or an even faster external connection, GPUs connected with a coherent fabric like NVLink, OmniPath, or whatever AMD is currently working on, and a CPU that can handle work like draw calls. This is of course AMD's vision of GPGPU and GPU farms; Nvidia's can be different.

P.S. http://wccftech.com/amd-teases-standardized-external-gpu-solution-for-notebooks/
 
Last edited: