Not sure why workstations need to be expandable.
The first Sun Sparcstations I worked on had no expansion.
Most early workstations didn't have expansion.
A workstation is a fixed box that does what it needs to do to get the job done. The new Mac Pro fits that definition perfectly.

Exactly. It's a marketing term. It started out as a product segment name to differentiate a terminal (that the secretary got) from a standalone computer (that the engineers wanted). As things changed, the definition changed. Everybody now has a PC. The only thing that has remained constant is that a workstation is bigger and better than what the grunts in the office have. At the moment, that is ECC RAM and lots of it, big GPUs with lots of RAM, and a CPU (or two) that can grind through the tough stuff, which these days is 3D and video.

Some people seem to be getting hung up on the number of CPUs. When dual CPUs started gaining traction, there was no such thing as a 12 core CPU. Now there is. Is that enough? You can always have more, but it's probably sufficient for most of the people for most of the time in this market. Time will tell.

Others are annoyed that they can't sell upgrade parts anymore and are channelling that annoyance into a complete dismissal of The Tube. It was a brief moment in the sun, now the sun has moved on. Same as it did for the guys who made a good living selling iMovie plugins.
 
Ahhh. Yeah, there is an investment in a render farm but... there is also an investment in the second CPU.

If you're a group, giving everyone a second CPU makes less sense. Why buy 10 dual CPU machines when you can buy 10 single CPU machines plus a few dual CPU machines for everyone to share?
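
Purely as illustration, with made-up prices (actual configs and quotes vary wildly), the arithmetic behind that argument looks something like this in Python:

# Hypothetical prices -- not real quotes from any vendor.
single_cpu_price = 4000      # single-socket workstation
dual_cpu_price = 7000        # dual-socket workstation
seats = 10

everyone_gets_dual = seats * dual_cpu_price                          # 70000
shared_render_boxes = seats * single_cpu_price + 2 * dual_cpu_price  # 54000, with 2 shared boxes

print(everyone_gets_dual, shared_render_boxes)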



Local networks do have a cost, but I'd guess this is probably a cost already being incurred by most render houses. What pros don't have their machines on a network?

Also, you don't have to buy a render farm. Buy some of these. 16 more cores each.
http://www.boxxtech.com/Products/renderpro

Problem solved. No huge network infrastructure. No render farm rooms.



Partially agreed. You wouldn't use a GPU for a full quality render, but for a preview render?

4.4 just came out a week ago. It might take Apple a bit, but it's not really a problem yet.

A full machine costs more than adding a second CPU.

You do know that there is a whole world of applications besides renderers for which a second CPU is vital. We have offices in places where the only link is via satellite or cellular... Moving multiple gigs of data up and down over half a continent isn't really practical. We also can't set up clusters everywhere. Some of those workstations are in trailers in the middle of nowhere.

People have to stop thinking that workstations are only used for rendering...
 

And simulation, and CAD, and databases, etc...
There is more than media/3D work out there...
 
Yes they can, but less effectively...

You do realize that you've taken the discussion into the realm of massive render farms where everyone else is (or was) talking about adding a single machine of between 8 and 12 cores in order to make up for the fact that the MP6,1 is a single socket system, right? And you do realize that none of your arguments about efficiency, maintenance, initial cost, etc. apply at all when talking about one or two (typically headless) machine nodes, right?

Just saying...

This started with someone saying the MP isn't a workstation because it is a single socket system - which of course is poppycock. And then it went to "you won't need more than 12 local cores with those two high grade GPUs" - which is completely true, because adding a second multi-core machine node is cheaper or the same and offers an almost exactly 200% more efficient workflow than trying to work with the same number of cores all locally. There's really not any studio, or any TD in any studio on the planet worth their salt (with any chops at all), who would disagree with this. CAD, CG, comp, matte, video post - it doesn't matter, it's true for all of them.

And now you're talking about massive farms with what, 32 to 256 machine nodes? OK, I guess. But realize these are two completely different worlds!
 

You're wrong, but what else is new. Maybe you should spend less time trying to tell people how they should be running their business, or being such a fanboy.

Having 2 FirePro GPUs doesn't mean squat if your applications are CPU bound... You do know that rendering isn't the only thing being done with workstations now, don't you?

I bet you don't even work.

In any case, go play with flat five in my ignore list.
 
Who said anything about Hypershot? I've never used it.

dunno.. you said bunkspeed and shot.. so me, being the idiot that i am, assumed you meant bunkspeed's hypershot because that's the only thing i know about bunkspeed from trial_ing it a few years ago..

gaahhh. i'm so embarrassed :eek:
 
You're wrong, but what else is new. Maybe you should spend less time trying to tell people how they should be running their business, or being such a fanboy.

Having 2 FirePro GPUs doesn't mean squat if your applications are CPU bound... You do know that rendering isn't the only thing being done with workstations now, don't you?

I bet you don't even work.

In any case, go play with flat five in my ignore list.

So that's how it is? You're completely wrong (absurdly so!) and if you had any experience you would know this, so instead of looking at the facts and admitting your error you go on the attack telling people with 25 years of experience in these specific fields, who try and correct you, that they are fanboys and should go away? And rudely so I might add? Yeah, got it. I have seen children act more adult and professional than this.
 
Maxwell: http://support.nextlimit.com/display/maxwelldocs/Setting+up+a+network+render
Realflow: http://www.realflow.com/product/realflow_nodes/
FumeFX: http://forum.cgpersia.com/f32/fumefx-network-rendering-settings-10939/
Maya Native Simulations are OpenCL, so I'm not going to bother talking CPU on that one...
V-Ray: http://www.spot3d.com/vray/help/150R1/distributed_rendering.htm
I'm not even going to look up network rendering for PRMan, given that I already cited that as an example.

Not a single one of these examples is what I asked for. I asked for apps that require all cores to be local to the machine. All these applications can still use more cores from other machines on the same network, still breaking you out of the 12 cores on the Mac Pro.
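
To make that idea concrete, here's a rough sketch in Python of handing frame ranges off to other boxes on the same network. The host names and the "render" command are hypothetical; real pipelines use each package's own render manager or a dedicated queue like Deadline.

# Split a frame range across machines on the LAN -- the frames don't care which box renders them.
# Hosts and the "render" CLI here are placeholders, not a real tool.
import subprocess

hosts = ["node01.local", "node02.local", "node03.local"]
frames = list(range(1, 101))                        # frames 1-100
chunks = [frames[i::len(hosts)] for i in range(len(hosts))]

jobs = [
    subprocess.Popen(["ssh", host, "render", "--frames", ",".join(map(str, chunk))])
    for host, chunk in zip(hosts, chunks)
]
for job in jobs:
    job.wait()                                      # block until every node finishes its chunk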


What?

Literally all of your example links are incorrect, except the V-Ray one which is partially right.

The Maxwell render link is about final renders over the network. I am talking about preview, interactive artist work. The only thing that speeds this up is more and/or faster CPU cores.

Both RealFlow and FumeFX are similar, sending a sim off to a network machine to run, but multiple machines do not work together except in some rare cases where you can IDOC it out. The vast majority of the time is spent iterating on the attribute settings to tweak the setup, not running the big final run. Even then, those secondary machines need, ahem, as many cores as you can throw at them.

Maya simulations are, in fact, not GPU accelerated at all. Not even a little. There have been some tech previews of it, so maybe some day, but not in the shipping version.

PRMan is not GPU accelerated at all, except for a new collaboration to provide some GPU preview inside Katana. This was announced three days ago at SIGGRAPH by Pixar and is not available publicly. Even so, it's based on OptiX, and requires NVIDIA hardware.

V-Ray does have some network interactive render capabilities, but, like its GPU component, it's pretty unreliable once you get to any reasonable level of complexity in the scene. It also doesn't scale very well past 1 or maybe 2 additional hosts. They are also dropping OpenCL support and moving to CUDA and/or CPU. We'll see if Vlado follows through on this when 3.0 comes out of beta.
 
You won't need more than 12 local cores with those two high grade GPUs - which is completely true.

Poppycock.....

If you had even a shred of credibility left that statement destroyed it.

:rolleyes:

----------

Dude, all the pros run their databases on GPUs. It's the way of the future. :D


Yes.... They're much more colorful that way ! :D
 
Sssshhhhhhh.....

Are you trying to get me banned from MacRumors?

:rolleyes:

no.. what i'm saying is more to do with the fact that if someone already knows windows, your choice of computers opens up super wide..

for me, i'd have to ask someone to show me how to get a web browser up in order to come on here and whine about how lost i am on windows..

i mean, i don't even read about or look at prices/spec of windows computers.. the few tidbits i get are from here listening to people do comparisons.

so for me to switch to windows, i have to spend the next 3 years learning it then probably another 3 years to get to where i'm at with osx.. i could give a rat's ass about the hardware cutting my render speed in half because i'd be paying a helluva lot more in learning time.. but if you already know windows? that's a good place to be in.

i started computing with sketchup on a G4.. then, maybe 4-5 years ago, i started to realize i was progressing in my skill and the software wasn't coming along with me.. waited it out for a few more years, hoping.. the development never came and i decided to quit.. i had to quit.. learning new software to its highest capability at max efficiency/speed is a very long and often frustrating process..
point being- i've been there.. i was faced with a difficult decision and chose the difficult option and it has worked out well so far.


but if this 1 socket thing is really bumming you out that hard -or- affecting your work, i would definitely suggest looking towards windows.. i hear it's great (try not to get too many of your opinions around these parts regarding that.. ask around in other places too ;) )... if you're not happy with your tool, then you have to get one you like.. even if it's going to take years' worth of adjustments.. or work sux

*assuming we're not talking about the tool you were born with
 
for me, i'd have to ask someone to show me how to get a web browser up in order to come on here and whine about how lost i am on windows..

Ha. I can actually relate. After being Mac based for so long, I recently took a full-time gig where I was thrown back into a Windows environment. I still keep inadvertently hitting that damn Windows key on the keyboard.

I'm still hoping to stay in the OSX environment for my personal setup, though I am happy to see the software constraints really aren't there anymore. Avid and Autodesk both provide Windows and OSX installers. And I suppose I could cave and subscribe to Adobe. Final Cut was the one thing that kept me on Apple hardware for so long, and I wouldn't mind having FCPX at my disposal, but it's no longer the deciding factor in which platform I work on.
 
Okay; I've had a coupla drinks so I'm not gonna hold back here . . .

All ye peeps squeaking on about CPU cores? You freakin' killin' me.

You only need ONE core.

...

Uhuh!; ALLA YOU PEEPS are doing the equivalent of arguing about who can lap the neatest edge on a piece of prime flint.

GIGA-Hertz?! Cmon, ma droogs!

...

You talkin' 'bout the ultimate 'puter? HAH! You're beating each other silly yapping over a piece of ELECTRonics.

PLASMonics will blow that hoo-hah right out of the water. It's right around the corner, kinda . . .

GIGAHertz? How about one core running at a coupla TERAHertz?

All this electronix stuff soon be so old-school.

...
(2° edit: wikiP is really dry on this; don't know where i read the really exciting article on this a few years ago....)
 
My response to this argument over 1 vs 2 CPU sockets is that yes, there are obviously going to be tasks that are better suited to a dual-socket setup. However:
1) Most of those tasks could, and should, be ported to GPGPU.
2) Many of the remaining tasks require the extra socket for the extra RAM; and considering that virtual memory on a PCIe SSD would be extremely fast, the benefits of extra RAM vs SSD-backed virtual memory are attenuated significantly.
3) The few people who are left would almost all be better off building a hackintosh with whatever specs they like.
4) It is unlikely to be in Apple's best interests to design a completely new computer, or even update the cheese grater, for these few remaining potential customers.
5) Whining and screaming at MacRumors is not going to change Apple's present course, which was set many years ago by SJ and has been steadily followed ever since across all their products.
 
Well, I guess this is all settled and everyone agrees that the new Mac Pro represents a real workstation.

It looks like there is no requirement for a particular number of CPUs, or expandability, or any other "rules" needed to define a workstation, since clearly history tells us these rules never really applied.
 

There is really no need for a second socket just to get more RAM slots. An HP Z420 has 8 RAM slots and 1 socket. It's just that Apple has not previously used more slots per socket.
 

True. However, dual sockets would also increase the available PCIe lanes (to 80 when using the E5).

Imagine a MP with 80 GB/s of available throughput -- five PCIe 3.0 x16 slots. That would be an exciting addition to the workstation market. Apple's taken a huge step back by having this [!future!] product equipped with less than the present standard (the nMP maxes out at about 38 GB/s with all TB2 and PCIe combined).
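
For anyone wanting to sanity-check that figure, the back-of-the-envelope arithmetic (rounded PCIe 3.0 numbers, nothing from Apple's spec sheets) works out like this:

# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding ~= 0.985 GB/s per lane, per direction.
per_lane_gbps = 8.0 * (128.0 / 130.0) / 8.0

dual_socket_lanes = 2 * 40     # two Xeon E5s expose 40 lanes each
single_socket_lanes = 40       # the nMP's lone E5

print(dual_socket_lanes * per_lane_gbps)    # ~78.8 GB/s -- five x16 slots' worth
print(single_socket_lanes * per_lane_gbps)  # ~39.4 GB/s -- roughly the nMP ceiling quoted above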
 
OK, so another vote for "It's good enough for me, therefore it's good enough for everyone, quit yerbichen". All the while doing verbal gymnastics to avoid admitting that it has moved down market.

Thanks for your valuable input.

What has moved down market is my impression of you, a vendor. I understand that the new MP doesn't fit in with your business. Fine, move on. Your constant bitching about the new MP, day in, day out, really makes you look like Glenn Close's character in Fatal Attraction.
 
True. However, dual sockets would also increase the available PCIe lanes (to 80 when using the E5).

Imagine a MP with 80 GB/s of available throughput -- five PCIe 3.0 x16 slots. That would be an exciting addition to the workstation market. Apple's taken a huge step back by having this [!future!] product equipped with less than the present standard (the nMP maxes out at about 38 GB/s with all TB2 and PCIe combined).

That's actually an extremely good point I hadn't considered deeply enough. I probably wouldn't have used the phrase "step back", but it ends up meaning the same thing, I guess.

So far I haven't given up hoping there will be a dual socket solution offered at some point (even though I still believe 12 cores at >3GHz each is enough for almost anything a workstation is likely to be tasked with) - but I had been looking at it merely from the POV of number of cores and compute horsepower.

Yup, good point, thanks!

I am talking about preview, interactive artist work. The only thing that speeds this up is more and/or faster CPU cores.

This is generally true for 3D content, but we're not there yet. If you only need a few billion polygons, a gig or two of textures, inexpensive lighting models, and no motion or lens effects, then 12 fast cores produce near-instant feedback (0.1s to 5s). It's not until you try to edit interactively with FX like SSS, volumetrics, depth of field (DOF), motion blur, etc., with expensive illumination models and FX like radiosity, reflection blurring, shadow mapping, etc., and of course very complex surface shading models too, that it becomes a problem.

And that problem isn't solvable yet. Currently, with 12 cores at 3.2GHz, scenes with those attributes can take anywhere from 1 min to several hours for an interactive render to show nicely - let alone complete. So, what will a good doctor tell you? If doing that causes you a problem... then don't do that. :)

And indeed this is the solution everyone I've worked with (and that's literally hundreds!) employs. I know some may think it silly for someone else to tell them how to work, but if no one ever did there would be chaos! People would be pounding nails with wrenches and wearing their clothes backwards - the world would actually fall apart. :).. No, really.

Just think how many 3GHz cores it would take to get a 1 min (12 core) interactive render down under 5 sec. I count 144 cores at 3GHz to get there, and 72 cores at 3GHz will get you there in 10 sec. We're really not there yet. So in almost all cases it's wireframe in the GUI all around, especially since it can't actually be fast enough anyway. To check things which need full render feedback we're mostly still at the point where we need to press the render button, and when those renders take longer than about 1 min or so it's always more advantageous to do this on a separate networked machine controlled by a render manager.
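
If you want to check those numbers, it's just the assumption of perfectly linear scaling with core count (which is itself generous):

# Assume render time scales perfectly with core count (it rarely does).
baseline_cores, baseline_seconds = 12, 60.0   # a 1 min interactive render on 12 cores

def cores_needed(target_seconds):
    # total work (cores * seconds) stays constant under linear scaling
    return baseline_cores * baseline_seconds / target_seconds

print(cores_needed(5))    # 144.0 cores to refresh in 5 sec
print(cores_needed(10))   # 72.0 cores to refresh in 10 sec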

Interactive rendering is great for scene and surface setup on simple work, but there's a point at which it's rendered useless (pardon the word-play). You're right in saying that, generally, only more cores will help, but the number needed to actually make a difference is currently beyond single desktop workstations. And we're not there yet. :)

It is getting closer though. Users have reported that the Intel Xeon Phi, for about $4k, is between 3 and 15 times as fast as an 8 to 12-core, 2.8 to 3GHz dual CPU system, depending on the software, API, etc. And the much faster "Knights Landing" 14nm process chips are right around the corner. I guess with 2 or more of those in a machine, or on TB2 cables, we can finally begin to accept interactive preview engines as more viable solutions? Currently it's interesting when it works, but most studio and professional people don't/can't base their workflow(s) on it yet. I wanna see some real-world Phi examples though.
 