Thanks for the info, Mgmx!

I'm looking to build a system to run Mari, so I'm going with the D700, and I'm going back and forth between the 6- and 8-core because I do render at times.

How big is your Mari scratch disk? Or how much scratch does that 5 mil, 40x8K-texture scene use? The Foundry recommends 250 GB of scratch for large scenes and a minimum of 50 GB for small ones, so I have been thinking I want the 1 TB internal SSD. But I'm wondering if I can get away with the 512 GB internal and put the saved money towards a Pegasus2 R4. Any thoughts?
 
And that isn't true either because it is somewhere in the middle ;) This is what the German article says:


With most benchmarks, and even with Dirt II, they could barely hear the fan (i.e., you'd need to put your ear very close to it), but they could definitely and noticeably hear the fan when running FCP and LuxMark. They measured 2.7 sone and defined that as loud. This, however, is exactly the same as what other reviews have concluded.

Wikipedia has a nice article about the sone, with a table of various sone levels. A level of 1~4 sone is described as "people talking 1 m away", which for a fan is indeed loud. The other results are around what they call a "very calm room".
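For reference, here's the standard sone-to-phon relation (basic psychoacoustics, not from the article), which puts the measured figure in context:

```latex
% Loudness level L_N in phon, from loudness N in sone (valid for N >= 1):
L_N = 40 + 10 \log_2 N
% The measured 2.7 sone therefore corresponds to:
L_{2.7} = 40 + 10 \log_2 2.7 \approx 54 \text{ phon}
```

At 1 kHz, 54 phon corresponds to about 54 dB SPL, roughly conversational level, so the "people talking 1 m away" description fits.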

"Noisy under load" is what I replied to. I hate open ended and subjective statements lime that. I do not see "noisy under load" anywhere in the article. ;)
 
"Noisy under load" is what I replied to. I hate open ended and subjective statements lime that. I do not see "noisy under load" anywhere in the article. ;)
It's actually in the part I quoted. They used Dirt II for a more sustained load ("längeren spielen", i.e. longer play sessions). However, it is still a short test (Kurztest) like most of the other reviews, and thus comparable. I do agree that we need more tests that focus on sustained loads over several hours. To my knowledge only MacFormat has done something like that: one of their editors is test-driving the Mac Pro over a few days and is putting up a video diary on their YouTube channel. The sustained load is discussed in the day 3 video.
 
Hey spaz8,
My scratch disk is a 256 GB Pegasus J2 mini Thunderbolt array.
Super fast at around 800–900 MB/s (when plugged in), and a good size for Mari to work with. It's also how I can walk projects between machines easily.

That particular project was using about 6 GB of scratch when unarchived.
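If anyone wants to sanity-check their own scratch volume, here's a minimal sequential-write sketch in Python; the mount path and 1 GB test size are placeholders, and a dedicated tool like Blackmagic Disk Speed Test will give more representative numbers:

```python
import os
import time

TEST_FILE = "/Volumes/Pegasus/throughput_test.bin"  # placeholder mount path
CHUNK = 8 * 1024 * 1024      # write in 8 MB chunks
TOTAL = 1024 ** 3            # 1 GB overall

buf = os.urandom(CHUNK)      # incompressible data, fairer to SSDs
start = time.perf_counter()
with open(TEST_FILE, "wb") as f:
    written = 0
    while written < TOTAL:
        f.write(buf)
        written += CHUNK
    f.flush()
    os.fsync(f.fileno())     # make sure the data actually hit the disk
elapsed = time.perf_counter() - start

print(f"Sequential write: {TOTAL / elapsed / 1e6:.0f} MB/s")
os.remove(TEST_FILE)
```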
 
Hey spaz8,
My scratch disk is a 256 GB Pegasus J2 mini Thunderbolt array.
Super fast at around 800–900 MB/s (when plugged in), and a good size for Mari to work with. It's also how I can walk projects between machines easily.

That particular project was using about 6 GB of scratch when unarchived.

Cool, that's an interesting idea (the J2 as the scratch disk). Thanks for all the great info. I can't wait to put Mari and NukeX on this machine.
 
1. That is NOT Apple's plant. It is their contract manufacturer's plant.

2. That video isn't of just one plant. For one thing, the cases aren't made there.

Citations, please.

Edit: For the second point, I mean. I'm aware it's technically a Flextronics plant, but this sort of arrangement has more to do with financial structure than with the actual manufacturing process. Apple likely designed the whole line; it's not like they just handed some CAD files to Flextronics and came back later with cameras to film the results.
 
…that the nMP will be silent under full load. Because of the enormous 130W power budget the chip will spank any i7 you care to compare it to, per this article.

chuckle.

i7 4820K <== largely equivalent to ==> E5 1620 v2
i7 4930K <== largely equivalent to ==> E5 1650 v2
i7 4960X <== largely equivalent to ==> E5 1660 v2
http://www.cpu-world.com/news_2013/..._E5-1600_v2_and_E5-2600_v2_CPUs_launched.html

On price and TDP there isn't much of a difference at all (which doesn't make much of a case for using i7s that are derived from the E5's basic architecture implementation).

The mainstream variants of the i7 that are limited to 4 cores? Again, it is multi-core benchmarks that drive any substantive differentiation.

130W isn't a major problem.
 
As an aside, things like Cinebench and Geekbench have never been accurate estimates of a machine's performance in the field; the constant requests for those scores as a benchmark have always baffled me. They never really stress a system the way production does.

I'm one of those people asking for Cinebench numbers, and to be honest I don't understand why people think Cinebench isn't testing real-world performance.
Cinebench is basically a stripped-down version (no features, no saving, etc.) of Cinema 4D running real-world scenes and animations.

How fast a computer renders the Cinebench test scene is a clear indication of how fast a machine could render a still with Cinema 4D, an application used in production, with scenes a professional would create daily in his work.
And if I remember correctly, the still-image test scene was created by Aixsponza, a real studio using Cinema 4D as their 3D application. I also think the still was part of a bigger animation they did a few years ago.
You can't get any more real-world than that!

Maybe it's because I'm using Cinema 4D, but I find this test indispensable when I'm thinking of buying a new machine. When I see the results I know exactly how the new system will feel compared to the one I'm using at that moment.
 
I'm one of those people asking for Cinebench numbers, and to be honest I don't understand why people think Cinebench isn't testing real-world performance.
Cinebench is basically a stripped-down version (no features, no saving, etc.) of Cinema 4D running real-world scenes and animations.

How fast a computer renders the Cinebench test scene is a clear indication of how fast a machine could render a still with Cinema 4D, an application used in production, with scenes a professional would create daily in his work.
And if I remember correctly, the still-image test scene was created by Aixsponza, a real studio using Cinema 4D as their 3D application. I also think the still was part of a bigger animation they did a few years ago.
You can't get any more real-world than that!

Maybe it's because I'm using Cinema 4D, but I find this test indispensable when I'm thinking of buying a new machine. When I see the results I know exactly how the new system will feel compared to the one I'm using at that moment.

Because it's a benchmark using a rendering algorithm for Cinema 4D, so it's only a comparison for Cinema 4D users. Also, it's a benchmark with a runtime of about a minute; normal batch renders take waaaay longer than a minute, where iMacs and MBPs will underclock for thermal reasons and thus are not comparable with Mac Pro scores.
 
chuckle.

i7 4820K <== largely equivalent to ==> E5 1620 v2
i7 4930K <== largely equivalent to ==> E5 1650 v2
i7 4960X <== largely equivalent to ==> E5 1660 v2
http://www.cpu-world.com/news_2013/..._E5-1600_v2_and_E5-2600_v2_CPUs_launched.html

On price and TDP there isn't much of a difference at all (which doesn't make much of a case for using i7s that are derived from the E5's basic architecture implementation).

The mainstream variants of the i7 that are limited to 4 cores? Again, it is multi-core benchmarks that drive any substantive differentiation.

130W isn't a major problem.

I'm not talking about bare benchmarks but real-world use. It was discussed elsewhere, but I have a desk with a 2009 MP and a 2012 MBP (and other Macs) on it, and I switch between them frequently (software development). Consistently, the older MP is faster for what I do. I see the MBP pull ahead on purely computational, sustained tasks like transcoding, which matches the benchmarks.
 
I'm not talking about bare benchmarks but real-world use.

To be fair, you did say (and deconstruct did quote) ...

Because of the enormous 130W power budget the chip will spank any i7 you care to compare it to, per this article.

You didn't really qualify the statement to include the kinds of workloads and whatnot. And simply put, any Core i7 can be happily overclocked to 4.5 GHz or more, perfectly safely; that includes the 6-core K- and X-series chips. Once done, they'll run circles around the Xeons in most applications.

Highly threaded apps are obviously one of the exceptions here. A 12-core Mac Pro, with 24 virtual cores, will be able to get a lot of work done with an application that can slice itself up that many times without bogging itself down with context switching.
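As a toy illustration of that slicing (a generic sketch, not any renderer's actual code), here's how bucket-style work fans out across all logical cores with Python's multiprocessing; render_bucket is a hypothetical stand-in for real per-bucket work:

```python
import math
from multiprocessing import Pool

def render_bucket(bucket_id: int) -> float:
    """Hypothetical stand-in for rendering one bucket (pure CPU burn)."""
    acc = 0.0
    for i in range(1, 2_000_000):
        acc += math.sqrt(i)
    return acc

if __name__ == "__main__":
    buckets = range(96)  # e.g. one frame split into 96 buckets
    # Pool() defaults to one worker per logical core, so a 12-core/24-thread
    # machine stays saturated without excessive context switching.
    with Pool() as pool:
        results = pool.map(render_bucket, buckets)
    print(f"rendered {len(results)} buckets")
```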
 
Because it's a benchmark using a rendering algorithm for Cinema 4D, so it's only a comparison for Cinema 4D users. Also, it's a benchmark with a runtime of about a minute; normal batch renders take waaaay longer than a minute, where iMacs and MBPs will underclock for thermal reasons and thus are not comparable with Mac Pro scores.

I can't say I agree with your arguments, but that's OK. We all have our opinions!
I'll just try to answer your arguments quickly.

Yes, Cinebench is only a benchmark for Cinema 4D. But it's a good type of benchmark because it's a real-world scenario, compared to Geekbench and similar measuring tools where filling polygons or drawing lines on the screen doesn't mean much to the user. The same goes for ripping MP3s to iTunes and the other silly little benchmarks reviewing sites invent. They don't make much sense and have serious flaws in testing the hardware.
With Cinebench you have a controlled environment for repeating the same test over and over again, one that actually stresses the CPU and GPU and at the same time gives you a meaningful real-life result.
This way you can avoid situations like the current Mac Pro reviews, where most of the reviewing sites didn't actually stress the system...
It's a test that actually makes some sense for a professional user.

If there were more of these types of benchmarks for other programs, like a Final Cut benchmark tool or an After Effects tool, etc., I would be all for it. But discounting Cinebench because it only applies to Cinema 4D users means you're basically discounting a valid benchmark tool that stresses the CPU and GPU equally, which is exactly what you're interested in learning about, especially for a professional workstation (compared to running a game, I mean).

What you need is more of these types of benchmarks that apply to other programs, not fewer.

As for the runtime of a minute or two: it's perfectly fine for benchmarking purposes. I wouldn't want a benchmark to run for 5 hours or 24 hours. I have better things to do with my computer!
I don't know what MBPs would do in these situations, but at least the iMac I own, which is the maxed-out late 2012 model, never underclocks to compensate for the heat. I've done renders with it that lasted for days, and the machine didn't underclock; the temperature stayed the same during the whole render, from beginning to end. So there was no overheating and no underclocking. And even if this sort of thing occurs in some models, I wouldn't want to see a test that takes hours to complete. No one would care to run it!
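For anyone who wants to check their own machine for throttling during a long render, here's a rough sketch using the third-party psutil package; note that on some platforms (macOS included) cpu_freq() may report only the nominal clock, so treat it as a starting point rather than a definitive test:

```python
import time
import psutil  # third-party: pip install psutil

DURATION = 60 * 60  # watch for an hour while a render runs elsewhere
INTERVAL = 10       # seconds between samples

samples = []
deadline = time.time() + DURATION
while time.time() < deadline:
    load = psutil.cpu_percent(interval=INTERVAL)  # blocks for INTERVAL seconds
    freq = psutil.cpu_freq()                      # may be None on some systems
    if freq:
        samples.append(freq.current)
        print(f"load {load:5.1f}%  clock {freq.current:.0f} MHz")

if samples:
    drop = (max(samples) - min(samples)) / max(samples) * 100
    print(f"Reported clock varied by {drop:.1f}% over the run")
```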

Anyway, that's my 2 cents. I began to write a short answer but somehow ended up with a long, drawn-out one... Sorry!
 
If there were more of these types of benchmarks for other programs, like a Final Cut benchmark tool or an After Effects tool, etc., I would be all for it.

FWIW, there's an unofficial benchmark tool out there for Lightwave users. You have to have LW installed, of course, but the render test is included and this thread details the setup and has a fairly decent history of posted results you can refer to.

http://forums.newtek.com/showthread.php?133251-11-5-s-BenchmarkMarbles-lws-share-your-machine-s-render-time-here

Again, it's only of use if you are a Lightwave user, and all it really tells you is how fast you can bang out a very demanding image, but at least it's something else you can try.
 
To be fair, you did say (and deconstruct did quote) ...

Because of the enormous 130W power budget the chip will spank any i7 you care to compare it to, per this article.

You didn't really qualify the statement to include the kinds of workloads and whatnot. And simply put, any Core i7 can be happily overclocked to 4.5 GHz or more, perfectly safely; that includes the 6-core K- and X-series chips. Once done, they'll run circles around the Xeons in most applications.

For a time. My understanding is that it doesn't have the thermal budget to sustain that performance for too long. Again ...

http://www.marco.org/2013/11/26/new-mac-pro-cpus

While the MacBook Air can match the 13” MacBook Pro’s clock briefly, it won’t hold it for as long because it can’t afford the heat. Those giant 130 W TDPs in the Mac Pro can accommodate much more than the laptops and iMac under a sustained heavy workload — especially if the CPU is being stressed but the GPUs aren’t, due to the shared-giant-heatsink design. (And if the GPUs are being stressed, the Mac Pro should be justifying itself quite well already.)

But there’s little reason to get the higher-end Mac Pro CPUs unless you know you’ll use all of the cores. And if you won’t be sustaining heavy parallel loads and you won’t take advantage of heavy GPU power, there’s a lot less reason to get the Mac Pro at all.
 
FWIW, there's an unofficial benchmark tool out there for Lightwave users. You have to have LW installed, of course, but the render test is included and this thread details the setup and has a fairly decent history of posted results you can refer to.

http://forums.newtek.com/showthread.php?133251-11-5-s-BenchmarkMarbles-lws-share-your-machine-s-render-time-here

Again, it's only of use if you are a Lightwave user, and all it really tells you is how fast you can bang out a very demanding image, but at least it's something else you can try.

Even though I don't really care about Lightwave, it might be good for other users, and it fits the kind of real-world benchmarks we're talking about.
Maybe it would be worth compiling a list of all these types of real-world benchmarks somewhere in a different thread?
 
Here is my issue with Cinebench (and I'll make this as brief as possible).

It uses a scene, professionally designed or not, that was built to show only the benefits of techniques such as raytracing and GL performance, using geometry and shaders written for a non-production renderer over 4 years ago. That shader ball and car animation are both so antiquated that the things most renderers need in order to test hardware throughput are almost negligible on any modern hardware.

This is my opinion, obviously, but Cinema 4D's renderer is akin to the Maya Software renderer (an outdated renderer that's no longer updated) and the LightWave renderer.

Most of what I need to take into account when benchmarking hardware involves more than just a 300 MB scene with minimal textures.

Things like subdivision calculation, high-resolution output, and parsing large data sets are far better indicators of the speed of a machine BEFORE a single bucket is rendered. The scene pre-prep in many industry renderers offers a wealth of information.

I tend to render more than a single frame, because issues like motion blur calculation, motion vectors, and the like are very good system-wide performance indicators for the majority of a shot.

I also compare apples to apples, in that I will run a recent project with hard data I'm intimately aware of to get a better sense of performance on all counts, from file parsing time to the moment the final render bucket fills.

Another very common test is to use an RT renderer to determine how fast the GPUs can parse the same amounts of data at multiple resolutions.
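A rough sketch of that kind of apples-to-apples harness, timing the same scene at several resolutions; render_cli and its flags are hypothetical placeholders for whatever batch interface your renderer actually exposes (parse-versus-bucket timings would have to come from the renderer's own log):

```python
import subprocess
import time

SCENE = "shot_042.scn"  # placeholder: a recent project you know intimately
RESOLUTIONS = [(1280, 720), (1920, 1080), (3840, 2160)]

for width, height in RESOLUTIONS:
    start = time.perf_counter()
    # "render_cli" is a hypothetical stand-in for your renderer's batch command.
    subprocess.run(
        ["render_cli", SCENE, "--width", str(width), "--height", str(height)],
        check=True,
    )
    elapsed = time.perf_counter() - start
    print(f"{width}x{height}: {elapsed:.1f} s wall clock")
```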

That being said, Cinema 4D is great for a 10–15 second test of a scene without any frills. The harmony of multiple advanced, modern rendering techniques is not present in Cinebench's test scene or renderer, and for my explicit purposes that makes Cinebench a less-than-ideal solution.

Again, this is my personal opinion, but I'd never base a purchase recommendation on any of those scores.
It may be a perfect test for you and for many others, but I myself never use Cinema's renderer, even the few times I run the software. (We use V-Ray for Cinema 4D.)

Apologies for further polluting the thread with off-topic material.
 
LuxMark is probably the best benchmark for testing out the nMP in a way that highlights its pro points.
 
Apple releases its first bug fix

http://appleinsider.com/articles/13/12/19/apple-squashes-bugs-in-first-update-for-new-mac-pro

http://support.apple.com/kb/DL1714?viewlocale=en_US&locale=en_US

Notice the specific Boot Camp mention, so that is definitely supported.

----------

Engadget

http://www.engadget.com/2013/12/19/apple-mac-pro-2013-hands-on/

"Suffice to say, we've already seen it play back 16 simultaneous 4K streams in the new version of Final Cut Pro, with zero waiting time as effects were applied to the original footage."

Will these fixes already be included on the nMPs delivered in 02-2014?
 
I will enjoy seeing your honest review of the next one.

When is your ship date? Are you getting the D700?

Says January - ya, D700 8-core model. I ordered it really early in the morning.

----------

It uses a scene, professionally designed or not, that was built to show only the benefits of techniques such as raytracing and GL performance, using geometry and shaders written for a non-production renderer over 4 years ago. That shader ball and car animation are both so antiquated that the things most renderers need in order to test hardware throughput are almost negligible on any modern hardware.

This is my opinion, obviously, but Cinema 4D's renderer is akin to the Maya Software renderer (an outdated renderer that's no longer updated) and the LightWave renderer.

In my Ars Technica reviews, I benchmark mental ray for Maya, V-Ray for Maya, and Cinebench R11.5. I agree with what you're saying about Cinema 4D's renderer, but Cinema 4D is used for a lot of non-photorealistic stuff like logo animations for TV, so it's sort of a good indicator of how a machine performs with a different, less capable renderer that is popular. mental ray for Maya is a piece of garbage as far as support goes (I switched to V-Ray years ago), but both are good benchmarks as long as they saturate the cores, and both do. Just be glad Cinebench is fairly balanced across platforms, because mental ray for Maya is terrible on Windows and gets owned by Linux and OS X on the same hardware.

I don't benchmark Lightwave because no one uses that thing anymore and I couldn't be bothered to try and navigate its horrendous UI.
 