have you seen anybody running maxwell with 2 nMP 12cores yet?

I've never seen an nMP with 2 12 core chips but I'd buy one immediately if it existed. :D

Interesting, though, that relying on parallelism in CPUs for performance is viewed as 'old' while relying on parallelism in GPUs (even more so) is viewed as 'new'.

Who knows, some day it might work for production rendering, but I've been hearing that for about 10 years now.
 
I've never seen an nMP with 2 12 core chips but I'd buy one immediately if it existed. :D



[Attached image: nmp55.jpg]


2 :)
 
nMP owners are going to be stuck with the D700 for a long time (my guess: forever, or until they swap out their whole computer), while cMP users are just waiting for drivers to unlock tons of amazing new hardware that blows the D700 away. That's not to mention future video cards, beyond the 290X.

If the cMP had the proper power supply.

Which it doesn't.

The cMP can't even run the 290x at this point without a lot of spaghetti cable and a hacked together secondary power supply solution.

I could put a 290x in a G5 too. Well, I mean aside from all that firmware business...
 
I personally don't think that spending over $3500 to upgrade a classic Mac Pro with two X5690s and two 280Xs is really worth it to edge out the new Mac Pro in a handful of specific tests by a margin of around 2% on average. At least it goes to show that people shouldn't be too upset with their old machines just yet if they're willing to invest more in them.
 
have you seen anybody running maxwell with 2 nMP 12cores yet?

i don't think it would be entirely practical or cost effective to do that but i'm just curious if anyone has done it yet..

Not sure if you're being facetious or what :confused: But we all know the nMP only has one CPU, and the GPUs are, at the moment, proprietary. There are no Nvidia graphics offered for the nMP, either by Apple or the aftermarket.

Lou
 
I would not call it a good clock cleaning.

If you had $10k to drop today, you would not be buying a 5-year-old machine; you would be bonkers.

Now if you have a cMP already, it shows what an upgrade can help you achieve. If you hunt around on eBay, you can upgrade to those specs for way under $3500. You just need patience.

Personally I don't have $10k to drop, but I am confident I can buy a second-hand dual-processor 2010 MP and upgrade it to match a new 12-core MP for around $5k, probably less as time goes on.

Though as time goes on new revisions of the nMP will outpace the cMP, for now cMP owners can still be happy they have a machine capable of matching the new nMP if they choose to upgrade components.
 
Not sure if you're being facetious or what :confused: But we all know the nMP only has one CPU, and the GPUs are, at the moment, proprietary. There are no Nvidia graphics offered for the nMP, either by Apple or the aftermarket.

Lou

2 mac pros-- network rendering in maxwell.. that was the question

one, TWO, three, four, .... :)

[Attached image: nmpx2.png]
 
I personally don't think that spending over $3500 to upgrade a classic Mac Pro with two X5690s and two 280Xs is really worth it to edge out the new Mac Pro in a handful of specific tests by a margin of around 2% on average. At least it goes to show that people shouldn't be too upset with their old machines just yet if they're willing to invest more in them.

I'd be picking up a 2010 5.1 2x 2.4 like below, and upgrading it.

http://www.ebay.com/itm/Mac-Pro-5-1...1865477?pt=Apple_Desktops&hash=item2339b2a985

If you get the same performance for 1/2 the price..... Not bad
 
Has anyone actually bothered reading the article (OP included)? I mean, except for the small differences shown in those cute colorful bars.

The article concludes that the cMP of the comparison is pricier, way larger, and noisier; it has some installation caveats/tricky parts in order to achieve this performance; and, of course, it is still missing the latest technologies like USB 3.0 and Thunderbolt 2.

It is faster, indeed. But that's hardly an "ouch" case.

Rading skillz r overratd
 
Did YOU actually bother to read the article? You can get the cMP decked out as seen there for $7200 (over $2 grand less than the nMP) if you skip the $2500 PCIe expander and go with an external PSU instead; there are multiple guides on this forum on how to do it (I wrote one). Plus you have room for internal hard drives, optical drives, and highly economical/reliable PCIe expansion, where you can add *yet another* video card if you were so inclined (unclear if this will affect performance in these apps, but if it does, woohoo!).

I guess you missed the part of the article that says "But installation can be tricky if you lack 'McGiver' skills". Not a very reassuring sentence to make me invest in a "new" (deprecated/discontinued) machine.

Again, how does the linked article justify the "Ouch!" from the OP?

Also realize: This is just the beginning. When they get the R9 290X working with OS X, it's going to tear it up-- about 20% faster than the 280x (d700) at opencl.

Can you honestly say to a potential future Mac Pro buyer that this is just the beginning for the cMP? For real?


nMP owners are going to be stuck with the D700 for a long time (my guess: forever, or until they swap out their whole computer), while cMP users are just waiting for drivers to unlock tons of amazing new hardware that blows the D700 away. That's not to mention future video cards, beyond the 290X.

Maybe they will be upgradeable, maybe they won't. I don't know that, and neither do you. In any case, it's totally irrelevant to this thread's subject. Or is it? I mean, if this is another "bash the nMP" crusade thread, then my bad.

The 7970 (basically the same chip as D700) was released 2.5 years ago. How much longevity did anyone think these things were going to have?

There's a newer FirePro, then? Even if there were, again, it is totally irrelevant to the thread.

I still cannot see how the oh-so-mature "Ouch" from the OP is justified. I mean, beyond the obvious personal reasons he has to hate the nMP.
 
network rendering in maxwell.. that was the question

No it wasn't. Interactive working in Maxwell was. Test renders, tuning materials, etc. Having 2 separate machines is useless in this scenario. Network rendering is left to the farm with a wall of servers. :D
 
How many Mac minis would that be? ;)

ha.. probably like 8 or 10 to get similar samples per second.. i imagine the minis would throttle back more/sooner than the xeons

not sure if or how well maxwell could handle that many nodes but i wouldn't want to be the person running that setup ;)
 
No it wasn't. Interactive working in Maxwell was. Test renders, tuning materials, etc. Having 2 separate machines is useless in this scenario. Network rendering is left to the farm with a wall of servers. :D

yes, it was the question

"have you seen anybody running maxwell with 2 nMP 12cores yet?"
i see that the way i phrased it was confusing but that doesn't mean i was curious about anything other than if you've seen anybody using 2 nMPs.. with 12cores each


and argue it all you like, but you know full well that you don't run test renders at full rez and that 12 modern cores are fast enough to get judgeable feedback in an acceptable amount of time.

if time were of such importance as you're making it out to be, you wouldn't be using maxwell in the first place.. it's one of the slowest renderers out there these days and the tradeoff is quality.. if quality is important to you then be prepared to wait 30 seconds for a preview.. and do 100 previews for a project.. an hour's worth of waiting on previews isn't anything when considering A) the final images are going to be of great quality, B) you're going to spend much more than an hour doing things to set up the previews prior to pushing the start button, and C) once the previews are ready, there are many more hours for the computer(s) to chug away, during which it won't matter if the cpus are on a network or in one machine.
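A quick sanity check on that preview arithmetic, as a minimal Python sketch. The 30 seconds per preview and 100 previews per project are the post's hypothetical numbers, not measurements:

```python
# Back-of-envelope: total wait time across all preview renders in a project.
# Both figures below are the post's hypotheticals, not benchmarks.
PREVIEW_SECONDS = 30         # wait per Maxwell preview render
PREVIEWS_PER_PROJECT = 100   # rough count of previews per project

total_wait_hours = PREVIEW_SECONDS * PREVIEWS_PER_PROJECT / 3600
print(f"total preview wait: {total_wait_hours:.2f} hours")  # 0.83 hours
```

Which is the "an hour's worth of waiting" figure the post leans on.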
 
The lack of performance progress is pretty amazing. Release a new machine that is slower or similar in performance to the 4-5 year old predecessor? Brilliant!

That is more Intel dragging its feet with workstation processor upgrades than anything Apple has done.

Why is Apple being taken to task over processor clocks?
 
That is more Intel dragging its feet with workstation processor upgrades than anything Apple has done.

Why is Apple being taken to task over processor clocks?

Apple chose to make their Mac Pro line half as fast as it could be for my primary application. I'm not "taking them to task", just disappointed about being forced off the OS X platform. I like OS X, but not enough to warrant accepting a 50% slower machine.
 
In my opinion this new generation of the Mac Pro is a complete dud -- too expensive, TB peripherals are WAY too expensive, lack of future upgradability.

The classic Mac Pro is the best design possible for people: expandability, still future-proof in terms of updating video cards, and still extremely powerful.



I guess, in a way, this is to be expected with the first redesign of anything. I remember when Apple first switched to Intel and the MacBook Pro Core Duos came out... they lost features that were eventually returned about a year later with the launch of the upgraded Core 2 Duo models.

If the leaked next gen TB specs are true, then hopefully the next gen Mac Pro will incorporate the technology. It will make adding things like external GPUs worthwhile, albeit at a massive expense of having to buy a new external enclosure.



What baffles me is how anyone can think this current generation Mac Pro is innovative. It's a MASSIVE step backwards. You can't upgrade anything, add-on devices are freaking expensive, and cords for peripherals are now everywhere. Okay, so you've got your portability and small package out of it...but then you immediately lose your portability and small package if you depend on now necessary wired peripherals...so the size of it is a moot point.
 
12 modern cores are fast enough to get judgeable feedback in an acceptable amount of time.

It is never fast enough. Hardware is so much cheaper than artist time that using a slower machine than necessary is not an acceptable tradeoff.
 
It is never fast enough. Hardware is so much cheaper than artist time that using a slower machine than necessary is not an acceptable tradeoff.

I get that. The problem is, you're (seemingly) focusing on an area which doesn't offer the returns you're claiming: that being CPU speed directly correlating to the amount being paid waiting on the renderer.

buy a 96-core machine: it's 8x faster than the 12-core.. artist spends one hour a day waiting on previews; with the 96-core, it's about 8 minutes per day.

artist gets paid 4 hrs less per week due to a hypothetical computer which is 4x faster than current computers: savings = $150/wk x 52 wk ≈ $7,800/yr

a 96-core computer costs more than $8,000/yr (if it even existed in the first place)

meanwhile, the boss is sitting around wondering why you're still at work at 4:15 when he just spent all that dough on the computer to replace you.

it's because he's put his efficiency-enhancing focus in the wrong area (CPU core count).. why are you only leaving work less than an hour earlier?
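The break-even arithmetic above, as a hedged sketch. The $37.50/hr rate is an assumption inferred from the post's "$150 for 4 hrs saved", and the 4x and 8x speedups are the post's own hypotheticals:

```python
# Break-even sketch: yearly dollar value of time saved by a faster machine.
# HOURLY_RATE is an assumption ($150/wk for ~4 hrs implies ~$37.50/hr).
HOURLY_RATE = 37.50          # assumed artist rate, $/hr
WAIT_HOURS_PER_DAY = 1.0     # waiting on previews with the current machine
DAYS, WEEKS = 5, 52

def yearly_savings(speedup: float) -> float:
    """Dollar value per year of preview time no longer spent waiting."""
    saved_per_day = WAIT_HOURS_PER_DAY * (1 - 1 / speedup)
    return saved_per_day * DAYS * WEEKS * HOURLY_RATE

print(yearly_savings(4))   # 7312.5  -- the post's ~$7,800 4x scenario
print(yearly_savings(8))   # 8531.25 -- the 8x (96-core vs 12-core) case
```

Either way, the yearly savings land in the same ballpark as the post's $8,000 figure, which is the point: the hardware has to cost less than that per year to pay for itself.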
 
I get that. The problem is, you're (seemingly) focusing on an area which doesn't offer the returns you're claiming: that being CPU speed directly correlating to the amount being paid waiting on the renderer.

Thanks for the guesses, but they are entirely inaccurate in our situation.
 
it appears as if apple is breaking away from the GHz race (and more importantly, the core count race) and challenging developers to do the same.. get rid of that 1999 code which relies on hardware improvements to double the speed of their not-so-efficient algorithms every 1.5 yrs.. it's a big bet and it's going to take a couple of years (as in, at least 2 yrs) for us to see how well they've played their cards.. in the meantime, i'm optimistic about their decision.

[EDIT] and a bonus (a real bonus for users/consumers) if this pans out as apple is expecting and the developers start writing smarter, more efficient code which doesn't focus solely on the cpu: the software is going to run a lot better on laptops etc., where it's impractical to stuff more than a handful of cores in there.

Yeah, just like developers rewrote all of their code to take advantage of Altivec.

How did that turn out?

You would be looking at a minimum of 2 development cycles before a usable version would pop in. That is a minimum of 4 years down the road (assuming that companies ditched their current development cycle and started over). What are the odds that Apple would still be pushing OpenCL in 4 years?

Most of my mission critical software just got rewritten to ditch '90s code and go 64-bit. I don't see a lot of folks rushing to rewrite the software again.
 
Yeah, just like developers rewrote all of their code to take advantage of Altivec.

How did that turn out?

You would be looking at a minimum of 2 development cycles before a usable version would pop in. That is a minimum of 4 years down the road (assuming that companies ditched their current development cycle and started over). What are the odds that Apple would still be pushing OpenCL in 4 years?

Most of my mission critical software just got rewritten to ditch '90s code and go 64-bit. I don't see a lot of folks rushing to rewrite the software again.

hmm.. i'm not even talking so much about parallel processing.. that's the only stuff which really applies to multicore and/or gpgpu..

it would be different if someone finally cracked the puzzle of how to make inherently linear processes run on multiple threads.. pretty sure some of the smartest people in the world have been trying to do just that for at least a decade and it's not happening..

basically, developers can either recognize the GHz race is over and write their programs with the mindset that roughly 4GHz is the max they can depend on from hardware, or not worry about it and let their application grow stale while hoping for some magic new cpu which will once again start doubling their code's speed every year or two.
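The "inherently linear processes" point is essentially Amdahl's law: the serial fraction of a program caps the speedup no matter how many cores you add. A minimal sketch (the 95% parallel fraction is an illustrative assumption, not a measurement of any real renderer):

```python
# Amdahl's law: speedup(N) = 1 / (serial + parallel / N).
# Piling on cores stops paying off once the serial part dominates.

def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even a program that is 95% parallel gets nowhere near 96x on 96 cores:
print(f"{amdahl_speedup(0.95, 96):.1f}x")   # 16.7x
print(f"{amdahl_speedup(0.95, 12):.1f}x")   # 7.7x
```

Which is why going from 12 to 96 cores buys roughly 2x, not 8x, for code with even a small serial portion.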

idk, of the 8 main applications i use on a typical project, they're all constantly being reworked in seemingly 3 areas.. features, performance, and stability.

the three which could gain noticeable (very noticeable in some circumstances) enhancements via openCL are working on just that..

idk, i'm not too worried about it and my apps are improving over time.. i'm also aware of and willing to admit that of all the things which cause time to add up in a project, hardware specs rank very very low on that list.. (assuming i have something decent)
 
When the nMP was announced in that 1st video, I said it should have been compared to a "souped-up" cMP! Now it is obvious why Apple did not compare the two! For a 2013 nMP (yet to be released at the time) to not blow the doors off the cMP would have been embarrassing.
So yes, this is an "ouch"; a clock cleaning has occurred.
I don't think anyone is suggesting to purchase an oMP and spend $$$$ to get the specs of the Bare Feats machine. A large percentage of the oMPs were "souped-up" to perform certain tasks years ago. Also, the nMP was not even on the horizon when the "souping-up" was being done.
As far as USB 3.0 and TB v. PCIe speed goes, I'm still waiting for TB numbers to be posted. My PCIe 8HD RAID 0 numbers are:
 

[Attachment: DiskSpeedTest 8HD.png]
When the nMP was announced in that 1st video, I said it should have been compared to a "souped-up" cMP! Now it is obvious why Apple did not compare the two! For a 2013 nMP (yet to be released at the time) to not blow the doors off the cMP would have been embarrassing.
So yes, this is an "ouch"; a clock cleaning has occurred.
I don't think anyone is suggesting to purchase an oMP and spend $$$$ to get the specs of the Bare Feats machine. A large percentage of the oMPs were "souped-up" to perform certain tasks years ago. Also, the nMP was not even on the horizon when the "souping-up" was being done.
As far as USB 3.0 and TB v. PCIe speed goes, I'm still waiting for TB numbers to be posted. My PCIe 8HD RAID 0 numbers are:

Can I ask what you are doing for an 8hd raid? External enclosure or something like http://www.maxupgrades.com/istore/index.cfm?fuseaction=product.display&product_ID=354&ParentCat=414
 
A popular comment seems to be that the future of workstations is all about the GPU, not the CPU. I agree; that's why I stuck with my 4,1 and put a GTX 760 in it. I highly doubt you'll be able to upgrade the GPUs with COTS products like you can in the old Mac Pro. Yeah, I know there was an article where OWC said you can upgrade the GPU in the new Mac Pro, but when is Sapphire going to release those upgrades, huh? Or PNY? Or any other major GPU company? Probably never! Why would they? There's hardly any market for those highly customized GPUs. After this Mac Pro runs its course and I can no longer put new COTS GPUs in it, I'm building my own rig again or going the HP workstation route.


If you don't know, COTS = commercial off the shelf.
 