Are you trying to pretend that you're an expert on IT management too?
Oh well, back to the ignore list you go... No more time to waste on trolls.

no.. you asked me a question and i said i couldn't give you a real answer yet because i was unclear about a few things.. i just wanted to give you an accurate answer is all..
i still can't though because i see you're not interested in the answer.. you're not interested in anything i'm saying- especially because (or- only because) it's not in line with anything you're saying.
 
here's a video that's over 2 years old..
it's indigo which is a direct competitor to maxwell.. (meaning they're both unbiased render engines.. the slowest type but arguably the best results)


http://www.youtube.com/watch?v=-SQtyw3rEEw

are those previews not fast enough for you? just watching the video, it appears you can have a usable preview in under 5 seconds.. don't like it? switch it up and wait another 5 seconds..

if maxwell isn't going to go the gpu accelerated route and you'd rather blame apple for only offering a 12-core machine then that's your prerogative i guess..
but it's actually not a very good argument as to why someone needs more than a single socket.

Actually, watching the YouTube video, the CPU Fire preview in Maxwell is about as fast, and without the issues one runs into with GPU renders. We have evaluated Indigo in the past but it's missing a lot of features that we require. Plus with over 500 licenses of Maxwell and other renderers already, we'd need a big reason to switch.

My question is, why settle for one socket? Of course it's possible for us to do our work on one-socket machines, most of the time. But why settle for a slower machine when the cost of the second socket is nothing in a professional context?

Plus, all of our workstations join the farm during off-hours so even if there is no advantage to dual sockets for a particular user, they still get a dual socket machine. The total software cost for us is too expensive to waste a host license on a single socket box.
 
Please send me your copy of Maxwell that does this. Mine doesn't have this functionality and neither does the upcoming 3.0 release. Did you get your hands on 4.0?

V1.0 is supported actually. You're probably just not setting it up right.
 
My question is, why settle for one socket?

if that's really your question, you only have to look at the available answers to figure out you're asking a question which can't be answered.

because it's exactly the same question as 'why settle for two sockets?'

so instead of trying to have me answer why settle for one (which i already have answered a million times), how about you answer 'why settle for two sockets?'
 

We've tested higher socket counts and scaling tends to drop off a lot more quickly above 2 sockets in CPU bound tasks. Though we do keep a few 4P machines around for big memory jobs.

2 is clearly the sweet spot if you actually test it.
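
To give a feel for the shape of that drop-off, here's a toy sketch in Python. The serial fraction and per-socket penalty below are invented purely for illustration, not measurements from our tests; it's just an Amdahl-style model where each extra socket adds a flat NUMA/interconnect efficiency hit.

# Toy model of CPU-bound render scaling across sockets.
# Both constants are assumptions for illustration, not measured data.

def speedup(sockets, cores_per_socket=6, serial=0.05, numa_penalty=0.07):
    """Estimated speedup vs. one core: Amdahl's law plus a flat
    efficiency penalty for every socket past the first."""
    n = sockets * cores_per_socket
    amdahl = 1.0 / (serial + (1.0 - serial) / n)
    return amdahl * (1.0 - numa_penalty) ** (sockets - 1)

for s in (1, 2, 4):
    print(f"{s} socket(s): ~{speedup(s):.1f}x")

# Prints roughly 4.8x, 7.2x, 9.0x: the 1->2 step buys ~50% more
# throughput, the 2->4 step only ~25% -- the "sweet spot" shape.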

?

I'm talking about maxwell, specifically related to preview workflow. You said it didn't make sense as an example of wanting another CPU socket because it runs better over the network, which it does not in that context.

The generic example you pose is kind of accurate for other packages though. Except that waiting 5 minutes to preview a texture is unacceptable.

Also - I think you meant 1000BaseT. Otherwise your 100Base stuff is running 10x faster than anyone else's. ;)
 
so instead of trying to have me answer why settle for one (which i already have answered a million times), how about you answer 'why settle for two sockets?'

Because that's not really a comparable question in this situation. We're talking about the new Mac Pro and its competition. All of the major competitors offer a dual socket system. That's why the 1 vs. 2 argument is relevant here. Asking "why settle for two" instead just deflects from the actual discussion. No one can run off to HP or Boxx and spec a system with more than 2 CPUs.
 
We've tested higher socket counts and scaling tends to drop off a lot more quickly above 2 sockets in CPU bound tasks.

2 is clearly the sweet spot if you actually test it.
I've never tested it so I'm just going to have to take your word for it.

It's just I have a very (very!) hard time believing that three years from now, once everybody gets used to seeing two socket machines, you all won't be saying exactly the same thing except it's 4 sockets.

and really.. the only reason the argument is now for 2 sockets is because the new mac is showing as having 1... if the new mac had 2 sockets, we'd all be having the exact same argument, right this very minute, about 'the new mac should have 4 sockets'.. or are you saying that if the new mac had 2 sockets, you'd be arguing the opposite of what you're saying now..

as in you'd say "no, the new mac does not need to have four sockets.. ignore what hp is doing.. they're making a mistake.. I've tested this and have found 2 to be the sweet spot"

you know, maybe you would say that. I hope I can believe that's what you would do. but you can at least understand my doubt, right?
 

I doubt it. 2 sockets has been the standard in the high end workstation space for about 12-15 years now, and 4P and higher systems have been around too. There are, and always have been, use cases where 4P+ systems are worth all the tradeoffs they force.

But sure if Intel made a 4P Xeon, chipset, and memory architecture that scaled sufficiently well and made business sense to deploy more widely when all factors are taken into consideration, we'd go for it.

Maybe some day the industry will truly all decide that 1 socket is enough, and 2P+ systems won't be an option. At that point, we'll go to 1. Today is not that day.
 
Not sure why workstations need to be expandable.

The first Sun SPARCstations I worked on had no expansion.

Most early workstations didn't have expansion.

A workstation is a fixed box that does what it needs to do to get the job done. The new Mac Pro fits that definition perfectly.
 
That's why the 1 vs. 2 argument is irrelevant here.

I understand you. I understand why the one vs two argument is irrelevant when compared to what exactly most of you guys are talking about.

but in a different regard, it's entirely and completely relevant because the new mac has only one socket and it's not as if the designers simply picked numbers out of a hat to arrive at 1..

some of the smartest people in the engineering world arrived at 1 socket. and while I don't know exactly how they arrived at that number, I'm willing to bet at least some of what I'm saying came into play.. that's why it's relevant
 
?

I'm talking about maxwell, specifically related to preview workflow. You said it didn't make sense as an example of wanting another CPU socket because it runs better over the network, which it does not in that context.

The MP6,1's 12 cores and dual GPUs aren't fast enough for interactive previews? That's the topic here, right.. the MP6,1. :) How about another similar 12-core with dual FirePro cards - since the MP6,1 isn't actually available to test. ;)

The generic example you pose is kind of accurate for other packages though. Except that waiting 5 minutes to preview a texture is unacceptable.

That wasn't a preview render estimate. That was a final quality time. But if you're into Maxwell like you're showing, you already know all the possibilities: smaller frame sizes, GI or radiosity features switched off temporarily, clip-plane insertion, more cores (and I don't mean more than 12 LOL - if 12 fast cores and 2 top end GPUs can't do you right, you need a workflow change), faster GPUs, a workflow change to include external node(s) for "preview" frame rendering, mesh hiding, and so on. In the case of SquidNet, it handles Maxwell fine and can split a single frame between multiple nodes IIRC.
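
And to show why smaller frame sizes are the cheapest of those options: an unbiased engine's time to a usable preview scales roughly with pixel count. A rough sketch (the 60 second full-res baseline is an assumed number for illustration, not a Maxwell benchmark):

# Rough sketch: preview cost scales roughly with pixel count for an
# unbiased renderer. The 60 s full-res baseline is an assumed number.

full_w, full_h = 1920, 1080
full_time = 60.0  # assumed seconds to a usable full-res preview

for scale in (1.0, 0.5, 0.25):
    w, h = int(full_w * scale), int(full_h * scale)
    t = full_time * scale ** 2  # time ~ proportional to w * h
    print(f"{w}x{h}: ~{t:.0f} s to a comparable preview")

# Half resolution -> ~15 s, quarter resolution -> ~4 s.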

Also - I think you meant 1000BaseT. Otherwise your 100Base stuff is running 10x faster than anyone else's. ;)

Ya, it looks like I goofed on that one. I did in fact mean 1000BaseT - what the Mac Pro has. Actually I guess that's 1000BaseTX right?
 
IMO a workstation is just a powerful computer that gets the job done. It doesn't matter how it gets the job done, as long as the price is right.

Generally workstations have server components, are designed for operating at high capacity for extended periods of time, have error-correcting memory, stuff like that... so yeah, I'd consider it a workstation.

Edit: okay, so we have folks that build stuff on their own claiming they are workstations... no, that's just a desktop computer; they are different things. but I suppose everyone has a degree in tech management around here...
 
I doubt it. 2 sockets has been the standard in the high end workstation space for about 12-15 years now, and 4P and higher systems have been around too. There are, and always have been, use cases where 4P+ systems are worth all the tradeoffs they force.

But sure if Intel made a 4P Xeon, chipset, and memory architecture that scaled sufficiently well and made business sense to deploy more widely when all factors are taken into consideration, we'd go for it.

Maybe some day the industry will truly all decide that 1 socket is enough, and 2P+ systems won't be an option. At that point, we'll go to 1. Today is not that day.



thank you for providing sane reasoning.

one of the more consistent 'complaints' about apple is they jump the gun too quickly (ie- removing optical etc).. while some people are in scenarios in which they can adapt quickly, others cannot because they need to be more in tune with industry standards.
(an example using me as a carpenter.. if dewalt suddenly quit selling phillips bits because they think smiths bits are better.. well, I'm a bit screwed for a while because I do in fact need phillips for at least the near future.. regardless of smiths bits being a better technology)

[two p.s's.. 1. smiths bits are a fictional product. 2. my example started straying from what exactly you said.. but I do think I hear and agree with where you're coming from]
 
Looking through my applications folder...

Maxwell
Realflow
FumeFX
Maya's native simulations
V-Ray
PRman

There are others that can technically be distributed but the performance hit makes it not really worth it. Or you can do it, but why, when there are plenty of dual socket machines available on the market.

Maxwell: http://support.nextlimit.com/display/maxwelldocs/Setting+up+a+network+render
Realflow: http://www.realflow.com/product/realflow_nodes/
FumeFX: http://forum.cgpersia.com/f32/fumefx-network-rendering-settings-10939/
Maya Native Simulations are OpenCL, so I'm not going to bother talking CPU on that one...
V-Ray: http://www.spot3d.com/vray/help/150R1/distributed_rendering.htm
I'm not even going to look up network rendering for PRMan, given that I already cited that as an example.

Not a single one of these examples is what I asked for. I asked for apps that require all cores to be local to the machine. All these applications can still use more cores from other machines on the same network, still breaking you out of the 12 cores on the Mac Pro.
 

Yes they can, but less effectively, depending on the task, than doing it locally.
Are you really going to transfer gigs of data over the network for a simple preview run? Besides, are you taking into account the time to transmit and retrieve your data from the render farm? How about the bandwidth cost or the RF cost?
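
For a sense of scale on that transmit time, a back-of-the-envelope sketch assuming ideal wire speed (real-world throughput is lower once protocol overhead and disk I/O are counted):

# Idealized transfer times for shipping scene data to farm nodes.
# Wire-speed figures only; real throughput is lower in practice.

GIG_E = 1e9 / 8    # 1000BaseT: ~125 MB/s at wire speed
FAST_E = 1e8 / 8   # 100BaseT:  ~12.5 MB/s

for gb in (2, 10, 50):
    size = gb * 1e9  # bytes
    print(f"{gb} GB: ~{size / GIG_E:.0f} s over gigabit, "
          f"~{size / FAST_E / 60:.0f} min over 100BaseT")

# 10 GB is ~80 s on gigabit but ~13 min on 100BaseT -- per round trip.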
 
But sure if Intel made a 4P Xeon, chipset, and memory architecture that scaled sufficiently well and made business sense to deploy more widely when all factors are taken into consideration, we'd go for it.

I believe you need to go Xeon E7 for that..
 
Yes they can, but less effectively, depending on the task, than doing it locally.
Are you really going to transfer gigs of data over the network for a simple preview run? Besides, are you taking into account the time to transmit and retrieve your data from the render farm? How about the bandwidth cost or the RF cost?

Bandwidth cost? Render farms are typically on the local network... that doesn't have a monetary cost... Unless you really need that network link open it doesn't have much of an opportunity cost either... And if you're really worried about bandwidth, do what most pros do and use fibre channel...

RF cost? That only applies to wireless really...

And don't most preview renders support either OpenGL or OpenCL? The Mac Pro's arguably ample GPU power would be more important for a preview render, not the core count... And even if it was core count based, a preview render isn't going to stress the CPU that much at all.
 

RF = Render farm silly...

Local networks have a cost. Installing, maintaining and managing a network isn't free. Besides, small to medium enterprises don't have the means to build a render farm just so you can justify Apple's decision to drop one CPU...

GPU renderers have their share of problems and OpenCL isn't a panacea. At present it's playing catch-up with CUDA and not up to the task yet. As for OpenGL, OS X is slow to upgrade the version it officially supports. 10.9 will go to OpenGL 4.1, but the latest version is 4.4.
 
And don't most preview renders support either OpenGL or OpenCL? The Mac Pro's arguably ample GPU power would be more important for a preview render, not the core count... And even if it was core count based, a preview render isn't going to stress the CPU that much at all.

well, it's not just during the preview.. the entire thing can be rendered with the gpu in a fraction of the time the cpu takes..

the preview bit came into focus because that's a point at which the user has to sit at the computer and wait.. they generally can't just walk away or work on other tasks while running previews..
 
RF = Render farm silly...

Ahhh. Yeah, there is an investment in a render farm but... there is also an investment in the second CPU.

If you're a group, giving everyone a second CPU makes less sense. Why buy 10 dual CPU machines when you can buy 10 single CPU machines plus a few dual CPU machines for everyone to share?
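
To put rough numbers on that (every price below is hypothetical, invented just to illustrate the tradeoff):

# Hypothetical fleet costs -- all prices invented for illustration.

single_ws = 4000   # assumed 1P workstation price
dual_ws   = 6500   # assumed 2P workstation price
shared_2p = 7000   # assumed shared 2P render node price

all_dual = 10 * dual_ws
pooled   = 10 * single_ws + 3 * shared_2p

print(f"10 dual-CPU workstations:            ${all_dual:,}")
print(f"10 single-CPU + 3 shared 2P nodes:   ${pooled:,}")

# With these made-up prices the pooled setup comes out cheaper and
# gives burst access to 6 pooled sockets instead of 1 idle socket
# sitting under each desk.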

Local networks have a cost. Installing, maintaining and managing a network isn't free. Besides, small to medium enterprises don't have the means to build a render farm just so you can justify Apple's decision to drop one CPU...

Local networks do have a cost, but I'd guess this is probably a cost already being incurred by most render houses. What pros don't have their machines on a network?

Also, you don't have to buy a render farm. Buy some of these. 16 more cores each.
http://www.boxxtech.com/Products/renderpro

Problem solved. No huge network infrastructure. No render farm rooms.

GPU renderers have their share of problems and OpenCL isn't a panacea. At present it's playing catch-up with CUDA and not up to the task yet. As for OpenGL, OS X is slow to upgrade the version it officially supports. 10.9 will go to OpenGL 4.1, but the latest version is 4.4.

Partially agreed. You wouldn't use a GPU for a full quality render, but for a preview render?

4.4 just came out a week ago. It might take Apple a bit, but it's not really a problem yet.
 
you're worrying about thunderbolt and render farms? because no, there's not going to be a bottleneck.

a typical example of a render farm service:
http://www.renderrocket.com

put 2 and 2 together and realize you'll be getting 800 cores at your disposal over the internet.. i mean, yeah- you might want to upgrade your 56k modem if you're going to be using a render farm but thunderbolt speeds should be the least of your concerns..
Checked that link... they don't work on After Effects or Premiere. :(
 