I want to find this magical land where everything works on the GPU and the Phi with no issues whatsoever.

In 5 years, maybe, but I've got work to do in the meantime.

Or you could just put the GPUs and Phi in your tower and still be faster. :D
 
I work freelance in CAD Design and Product Visualization (rendering). Although I do pretty well, I haven't been able to convince my wife that we really need to set up a render farm in the spare bedroom.

However, if I can render on 24 cores as opposed to 12, guess what? My render times are cut in half! This allows me to produce twice the amount of work in the same amount of time. Show me one business person who would not appreciate that.

i've been through this before a couple of times around here, past and present.. your argument would make complete and total sense if you could show how you will be able to produce twice the amount of work in the same time.. you know the complete rendering process from blank cad file through finished image.. you know how much work you do in order to arrive there..
doubling the amount of processors is not going to take away even 1 second of that work (except maybe during the phase of running previews)

*your* work stays the same

----------

So, can we declare "12 Cores is enough for everyone, forever" officially "debunked" now?

well, the argument went from number of sockets to number of cores but i do think it's feasible to say "a single socket is enough for everyone.. if you choose to buy apple computers"
 
except maybe during the phase of running previews

That's the expensive part.

----------

well, the argument went from number of sockets to number of cores but i do think it's feasible to say "a single socket is enough for everyone.. if you choose to buy apple computers"

With Apple you can have any number of sockets you want, as long as that number is 1.
 
That's the expensive part.

see.. yeah. maybe i have a different type of work than you even though we're seemingly in the same ballpark.

do you draw the models prior to rendering as well? (or are you even working with cad files in the way i'm assuming?).. or are you straight setting up renders all day long?

----------

With Apple you can have any number of sockets you want, as long as that number is 1.

lol
 
maybe the question shouldn't be "is this a professional enough computer for me?"... it should be "is this computer too professional for me?"
:wink:

So everyone who wants more than 12 cores is likely not "professional" enough to own this computer.

Nice.
 
So everyone who wants more than 12 cores is likely not "professional" enough to own this computer.

Nice.

no.. it's just super likely that someone who wants more than 12 cores (for things other than gaming, i.e. geekbench) actually wants a few hundred cores..

maybe i'm just making assumptions here but if you're not happy with 12, you won't be happy with 24 either.. especially when you see hp just came out with 32.. now you need 32 all of a sudden..
we've seen this same scenario play out over and over again.. why in the world would # of cores not fall into the same trap?

(actually.. it's obviously already fallen into the same trap.. and you guys are stuck in there mucking about.. meanwhile, i'm over here sipping on this refreshing kool-aid)
 
:) Yes, MacPro sales are the major source of funding for the MacPro line - this is my assumption. Are you saying that's wrong?
Yes, that's wrong, because you've changed the subject to only Mac Pro sales. Mac Pro sales are a small fraction of Apple's sales, and the Mac Pro line can *lose* money while maintaining value to Apple by having a flagship product that stands above the competition.
My assumption is that people buy them in order to get work done. Not so they can tinker with the insides after the machine's usefulness is or should have expired. Again, is that wrong?
Yes, I also think that's wrong, because you've changed the subject again. They buy them because they can customize them with PCIe cards *in order to be capable* of getting work done. Those that don't need this feature often buy iMacs or Mac Minis.
Actually I think it works out to about the same maybe. Right, MP1,1 as an example... by the time MP3,1 released people were upgrading the 1,1 CPU which at that time were like $600 each ($1,200 total) and 3 months later a GPU the X1900 or whatever it was for like $450. Then a year to 18 months later another GPU at $400 and probably some RAM at $800. Right about then bigger faster hard drives for $250 a pop. And now recently a dual SSD PCIe card at around $1k. And with all that it's still not as fast as the current 5,1 stock system with just a single SSD placed in bay1. So it's not really twice the money, it's just a different sales model. A model I see as superior if you wanna have the fastest stuff possible or close to the fastest stuff - continually.
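Tallying those as listed (counting just one round of the $250 drives), the piecemeal total comes out something like this:

```python
# Quick tally of those piecemeal MP1,1 upgrades as listed above
upgrades = {
    "two CPUs":          1200,
    "X1900 GPU":          450,
    "second GPU":         400,
    "RAM":                800,
    "hard drives":        250,  # "$250 a pop" - counting one here
    "dual SSD PCIe card": 1000,
}
print(f"Total: ${sum(upgrades.values()):,}")  # roughly $4,100 over the years
```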
For me, the math is much different. I only spent about $300 updating my 4,1 to a 5,1 via the CPU. I also spent about $250 updating the GPU after selling my 4870, and I may update my current GPU yet again. Other upgrades... I would have spent that money anyway, so they don't apply.

That would be a legitimate point if the Mini was as fast as the MP6,1 with dual GPUs and all. Or even as fast as the MP5,1. What are the speed differences anyway? Isn't the MacMini about a quarter to an eighth the system the MP5,1 is?
Then perhaps Apple should sell two Mac Pros - an iTube MiniPro and a full-sized Mac Pro for people that want slots and everything else they cut back on. That way you won't have to pay a full Pro tax for full specs.

Both points have been long ago debunked. It will neither have cables going all over the place nor is it anything like a Mini.

Are you so opposed to purchasing a TB2 to PCIe adapter for an extra $150 to $250? So your MP6,1 will cost you $3,250 where everyone else pays $3,000 even. OK, sounds fine to me. You will have your SAS interconnect that way. And since I don't need or want one I won't have to pay $3800 for a system that could support a card I'll never own. Yup, I'm fine with that math. :)
Not even slightly debunked! It *will* have power bricks and cables that didn't exist in the old box. How can you deny this?

I don't see any TB2 to PCIe adapters for less than $700, and those only have a single TB port, which is only x4. I would need one that is:
  • PCIe v2 x8 lane - at least two TB2 ports passing that data
  • Full size, in order to fit a full length/height PCIe card

So no, that math does not work, nor are the devices you cite in existence. If the nMP is $3000, I have to buy that, plus the $700 TB external box that doesn't meet my needs. I've gone from two boxes (MP and RAID) to:
  1. iTube
  2. External PCIe box
  3. RAID box
  4. 2nd external PCIe box
  5. 2nd RAID box to hold all the internal drives I had in my old MP

Note, that's two boxes to five boxes, all else remaining unchanged, and doesn't even count the extra power and TB cables. $3000 + $700 + $700 + TB2 cables + another $300 RAID box. All that has to be strung out somewhere, and I've also lost half of the data throughput going from x8 to x4 lanes in that TB2 cable. When they make an x8 lane dual-TB2 external PCIe expansion box, how much more than $700 (x2) will *that* be, just to maintain my current available speed?
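To put rough numbers on that lane math, here's a quick back-of-the-envelope sketch in Python (the per-lane and TB2 figures are the standard published rates; treat it as an approximation, not a benchmark):

```python
# Rough bandwidth comparison: internal PCIe 2.0 slots vs. one TB2 link.
# PCIe 2.0 runs 5 GT/s per lane with 8b/10b encoding -> ~4 Gbit/s usable.
PCIE2_PER_LANE_GBPS = 4.0

x8_slot = 8 * PCIE2_PER_LANE_GBPS   # 32 Gbit/s - what a card gets in the old MP
x4_slot = 4 * PCIE2_PER_LANE_GBPS   # 16 Gbit/s
tb2     = 20.0                      # Thunderbolt 2: 20 Gbit/s per link, shared

print(f"PCIe 2.0 x8 slot: {x8_slot:.0f} Gbit/s")
print(f"PCIe 2.0 x4 slot: {x4_slot:.0f} Gbit/s")
print(f"TB2 link (raw):   {tb2:.0f} Gbit/s")
print(f"x8 card behind one TB2 link: at best {tb2 / x8_slot:.0%} of its slot bandwidth")
```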

I hope you're starting to see why it's not such a brilliant design.
 
i've been through this before a couple of times around here, past and present.. your argument would make complete and total sense if you could show how you will be able to produce twice the amount of work in the same time.. you know the complete rendering process from blank cad file through finished image.. you know how much work you do in order to arrive there..
doubling the amount of processors is not going to take away even 1 second of that work (except maybe during the phase of running previews)

*your* work stays the same

You're waxing semantical again. While he'll still need to do the same work modeling, rigging, etc., how do you not figure cutting render time in half would lead to more efficiency and better results (which would ultimately lead to more jobs)? Even though you're not physically doing anything while the render is going, the amount of time it takes certainly factors into the overall job time and cost. Your argument seems to paint a picture where core count doesn't matter at all.
 
Me. I would fire you. :) Using your edit machine to do your rendering? Yup, you're fired. :D Of course you use one system to edit and one [typically headless] to render. It doesn't need to be a farm. If it's the MP6,1, a single TB2-attached SBC with between 4 and 60 cores (per device!!!) is available, or you may select any other WS, including another MP. I dunno about the Phi, but in some scenarios you'll pay about the same for two separate 12-core systems as you would for one 24-core, and since you're operating the CAD while the other system or systems are rendering, you're already at twice the speed (even tho it's rendering on another [only] 12-core system). And the same is true for any SBC as well.

This argument should be a sign that you might wanna look for a different kind of work. If you don't have the tools to do the job and can't get them (because of wife or whatever) then it's time to expand your horizons. :)
I would fire *you* for being so rude as to tell someone they should find another line of work. There are little businesses like my own that take a lot of work from people that think they know better. I know, because I did precisely that, and continue to do so presently.

I think you're just trying to argue and piss people off, rather than debate rationally.

The only thing that is 'debunked' is that the nMP will work for everyone.
 
You're waxing semantical again. While he'll still need to do the same work modeling, rigging, etc., how do you not figure cutting render time in half would lead to more efficiency and better results (which would ultimately lead to more jobs)? Even though you're not physically doing anything while the render is going, the amount of time it takes certainly factors into the overall job time and cost. Your argument seems to paint a picture where core count doesn't matter at all.

in this instance, i wouldn't say i'm waxing semantical.. because i'm talking about things that i have experience in..

if i'm doing a project which is going to include rendered images, the renders happen while i'm off work.. i don't really care one bit if 4 images finish in 4 hrs instead of 8.. i mean, when i wake up the next day, they're done.. a computer can work 160hr work weeks.. i can't

if i _really_ needed an image asap (like in the next 15 minutes or so), i'd upload to a farm and pay a hundred bucks for 15mins on their 200 i7s.. but i personally don't run into that type of time crunch.. just go into the project with somewhat of a gameplan and the rendering times won't affect me.. even on my lowly quad2.66
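(rough math on that farm option, using my own made-up round numbers:)

```python
# rough cost check on renting the farm.. my round numbers, not a real quote
machines, minutes, price = 200, 15, 100.0   # 200 i7 boxes for 15 min, $100

machine_hours = machines * minutes / 60.0   # 50 machine-hours
print(f"{machine_hours:.0f} machine-hours for ${price:.0f}"
      f" -> ${price / machine_hours:.2f} per machine-hour")
```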
 
i've been through this before a couple of times around here, past and present.. your argument would make complete and total sense if you could show how you will be able to produce twice the amount of work in the same time.. you know the complete rendering process from blank cad file through finished image.. you know how much work you do in order to arrive there..
doubling the amount of processors is not going to take away even 1 second of that work (except maybe during the phase of running previews)

*your* work stays the same

You can't base my workflow on your workflow. My machine is unusable during hi-res final renders. If I can cut a 16 hour render to 8 hours by doubling the cores, I have freed up 8 hours of usable "work" time on my machine. Eight hours that I would have been twiddling my thumbs.
 
in this instance, i wouldn't say i'm waxing semantical.. because i'm talking about things that i have experience in..

if i'm doing a project which is going to include rendered images, the renders happen while i'm off work.. i don't really care one bit if 4 images finish in 4 hrs instead of 8.. i mean, when i wake up the next day, they're done.. a computer can work 160hr work weeks.. i can't

if i _really_ needed an image asap (like in the next 15 minutes or so), i'd upload to a farm and pay a hundred bucks for 15mins on their 200 i7s.. but i personally don't run into that type of time crunch.. just go into the project with somewhat of a gameplan and the rendering times won't affect me.. even on my lowly quad2.66

That works as long as you can easily schedule rendering during down time. But suppose I have a 24 hr render, or even 48? Cutting that in half dramatically increases usable machine time during "work" hours. Not only that, but being able to render much faster also opens up the ability to put more work/detail into the scene file itself, resulting in a higher quality end result. And it applies equally to the smaller projects as well. I've had numerous broadcast projects come in with a single day turnaround. Being able to render in a fraction of the time would allow me to attempt much more ambitious designs with such a close deadline.

I guess I really don't understand your argument.
 
I guess I really don't understand your argument.

it's the same one (or maybe one of the 6 or so ;) i've been having all along..

you're talking about making a 2 day render into a 1 day render in a fast paced/short scheduled environment?

really? a 24hr render under tight deadlines is acceptable? not for me it wouldn't be.. not even close..

and there are plenty (i don't know how many exactly but enough to matter) of people that are in those situations and they sure as heck aren't using a 2-socket, 12-core computer
 
That's not kool-aid, it's tainted Apple Juice....

haha
yum

----------

oh. fwiw.
I know most of the talk here lately is either for or against the new mac.. (definitely most of my talk at least)

but in this case, i've been spewing this same crap for a couple of years..

not that it really matters but this isn't me seeing the new mac's single socket and bending towards apple. I was already bent.
 
do you draw the models prior to rendering as well? (or are you even working with cad files in the way i'm assuming?).. or are you straight setting up renders all day long?

very rarely have to do any modeling. 98% of the time it's iterating on shaders or test render setups, or iterating simulation setups.

For instance with Maxwell, most of the artist time is spent tweaking the shaders, scene layout, lights, etc. to get things just right in CPU-driven preview mode. The CPUs are working at 100% during this time. Then you might launch a low(ish) res preview to evaluate what a final might look like. So your preview might take 5 or 10 minutes to get resolved enough to make a decision. Just long enough that it's annoying to wait but not long enough to send it to the farm.

We bill our designers time at $1K-$2K per day depending on skill level so the extra money for the 2nd socket is worth it. We've tested going to higher socket count machines but the lower core speed on those machines combined with the other disadvantages of bigger iron rarely makes it worth it. We do have a few 4 and 8P machines on the farm though for the occasional fluid sim that gets large enough to need that much memory.
 
haha
yum

----------

oh. fwiw.
I know most of the talk here lately is either for or against the new mac.. (definitely most of my talk at least)

but in this case, i've been spewing this same crap for a couple of years..

not that it really matters but this isn't me seeing the new mac's single socket and bending towards apple. I was already bent.

I'm beginning to think it's completely useless to try and discuss or debate anything with a few of the most vocal regulars here. They have their opinions usually not based on any actual facts, and logical or not (usually not) they're sticking to them (stubborn). If one tries to point out the irrationality of some of the ridiculous claims being made or show how their opinions don't fit current technology and long accepted work practices, they just flame the person (childish). It's a bit difficult to have a rational debate based on reason and logic which considers current facts when all three of those things (reason, logic, and fact) are rejected out of hand in preference to some arbitrary coveted opinion. Hehe, and oh the twistings, distortions, and contortions they go through to protect or promote those opinions... Sad and laughable.

At least the "crap you're spewing" is factual and logical. :p That's already two steps above. :)
 
if i _really_ needed an image asap (like in the next 15 minutes or so), i'd upload to a farm and pay a hundred bucks for 15mins on their 200 i7s.. but i personally don't run into that type of time crunch.. just go into the project with somewhat of a gameplan and the rendering times won't affect me.. even on my lowly quad2.66

Have you tried to locate a farm that can render Bunkspeed Shot files? There aren't any. Which is fine by me because as a business owner I prefer to do my own work rather than farm it out. Profit margins are higher too.

----------

I think you're just trying to argue and piss people off, rather than debate rationally.

You just pegged TESS to a tee.....

:D
 
basically works like so with rendering:

[image: bucket-rendering illustration]

so it's not as if the computer as a whole is looking at the image and combining all of its cpu power into one (which would be about the most awesome thing out there if someone could figure out how to do that ;) )

more like, it divides the image into smaller chunks and lets each processor go to town on its own little section..
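something like this, if you want the bucket idea in code.. a toy sketch only, nothing like how a real renderer is actually written:

```python
# toy bucket renderer: carve the frame into tiles and hand each tile to a
# separate process, like the buckets you watch crawl across a maxwell render
from multiprocessing import Pool

WIDTH, HEIGHT, TILE = 800, 600, 100   # made-up frame and bucket sizes

def render_tile(origin):
    x0, y0 = origin
    # stand-in for the expensive per-pixel work a real renderer would do
    shaded = [(x, y) for y in range(y0, y0 + TILE)
                     for x in range(x0, x0 + TILE)]
    return x0, y0, len(shaded)

if __name__ == "__main__":
    tiles = [(x, y) for y in range(0, HEIGHT, TILE)
                    for x in range(0, WIDTH, TILE)]
    with Pool() as pool:               # one worker per core by default
        for x0, y0, n in pool.map(render_tile, tiles):
            print(f"bucket ({x0:3},{y0:3}): {n} pixels done")
```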

What about this render?
 

[attached image: yes.jpg]
very rarely have to do any modeling. 98% of the time it's iterating on shaders or test render setups, or iterating simulation setups.

For instance with Maxwell, most of the artist time is spent tweaking the shaders, scene layout, lights, etc. to get things just right in CPU-driven preview mode. The CPUs are working at 100% during this time. Then you might launch a low(ish) res preview to evaluate what a final might look like. So your preview might take 5 or 10 minutes to get resolved enough to make a decision. Just long enough that it's annoying to wait but not long enough to send it to the farm.

We bill our designers time at $1K-$2K per day depending on skill level so the extra money for the 2nd socket is worth it. We've tested going to higher socket count machines but the lower core speed on those machines combined with the other disadvantages of bigger iron rarely makes it worth it. We do have a few 4 and 8P machines on the farm though for the occasional fluid sim that gets large enough to need that much memory.

yeah, see, we're doing some stuff the same regarding rendering but we're also different in that i don't have to test shaders and materials etc very often.. i have a custom library of 26 materials that i've made which covers all of my building materials.. it took a while to build up the library but after that, it's reusable.. so i don't have to test as much as i assume you do.. for lighting, i have 4-5 favorite hdrs which can cover most of my situations..

anyway-- point being, i usually only need to run 3-4 tests per scene because i've mostly already tested what i'm using.. at 800px wide, i can have a scene ready for the final in under 30 minutes.. if i doubled my cpus, i could cut that to 20 minutes.. (i used to use cuda during that time but i blew up an 8800gt in less than 2yrs doing that.. but those cuda-backed previews were coming up very quick.. maybe these firepros/openCL will give me a better experience)
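(fwiw, that 30 -> 20 instead of 30 -> 15 is just amdahl's law at work.. quick sketch with my own made-up split of "my time" vs "render time":)

```python
# amdahl's law on my numbers: ~10 min of my own work (serial) plus
# ~20 min of actual rendering (parallel) per test cycle.. made-up split
serial_min, parallel_min = 10.0, 20.0

for factor in (1, 2, 4, 8, 1_000_000):
    total = serial_min + parallel_min / factor
    print(f"{factor:>7}x cores -> {total:6.2f} min per cycle")
# doubling cores: 30 -> 20.. but even infinite cores never beat the 10 min floor
```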
-----------------------------------

regardless of all that and what our (not just me&you) needs may or not be.. i do think it's a little shortsighted to think software and/or different types of raytracing accelerators via thunderbolt aren't where the significant improvements are going to occur.. just saying- be careful with getting too caught up in this core count race..


for instance- check out this thing that just came out of development a couple of months ago.. Neon, rhino's realtime viewport rendering (this demo is with a Caustic card but those aren't required)

[embedded video: Neon realtime viewport rendering demo]
that's where the future is going to be regarding preview renders and renders in general.. and i don't mean 2-3 years in the future, i mean last week in the future.. (of course, rhino still hasn't been officially released on mac so if you want neon now, you're still going to have to go to windows but hey, i'm patient and mcneel is doing a great job on mac rhino so far)
 
it's the same one (or maybe one of the 6 or so ;) i've been having all along..

you're talking about making a 2 day render into a 1 day render in a fast paced/short scheduled environment?

really? a 24hr render under tight deadlines is acceptable? not for me it wouldn't be.. not even close..

and there are plenty (i don't know how many exactly but enough to matter) of people that are in those situations and they sure as heck aren't using a 2-socket, 12-core computer

No, you completely misread my statement. I addressed both planned long renders and short deadline driven ones. Dramatically reducing render times is extremely advantageous to both. I'm not trying to be mean here, but I seriously question your reading comprehension.

At least the "crap you're spewing" is factual and logical. :p That's already two steps above. :)

Logical? In this thread he's trying to argue that reduced render times (by up to half) don't affect the time spent "working", and in another he was trying to analogize displays and input devices with the kind of external expansion boxes to be used with the new MacPro.

I'm not privy to all of his posts here, but those seem pretty nonsensical to me.
 
Have you tried to locate a farm that can render Bunkspeed Shot files? There aren't any. Which is fine by me because as a business owner I prefer to do my own work rather than farm it out. Profit margins are higher too.


isn't hypershot pretty damn fast anyway? i remember trying it out a few years ago but didn't stick with it (didn't really like the navigation and it seemed, at the time anyway, geared towards industrial design - lots of metal and rubber emphasis).. i remember it being fast though.

----------

What about this render?

@seandempsey wins at internet this week

 
Dramatically reducing render times is extremely advantageous to both. I'm not trying to be mean here, but I seriously question your reading comprehension.

look.. you have to at least try to understand that i know _e x a c t l y_ what you're saying..

2 is more than 1.. of course it is

less computing time is better than more.

just so we're clear. that is what you're saying, right? i get it ok. it makes complete sense.

the problem is-- that's all you're saying
you have stopped thinking about it when you should in fact continue seeing the problem through because there's more to it..


likewise, you have stopped listening to anything i am saying after you reach the part where i say 1 socket makes more sense in a pc..
there's no more reading comprehension left for you because you've found where we are in disagreement in your mind.. but i'm saying more than what you're hearing
 