no.. i could.. i could also use openCL.
gpu acceleration options:
(indigo renderer)
..i (very) rarely use either.
current implementation is limited, with little to no observable benefit in most cases.. there are specific types of renderings (things like: lighting (sun vs lights vs .hdr vs global), materials, methods (MLT, bidirectional, alpha, path)) in which you'll experience perceptible gains, but the flip side-- using just one example-- is that enabling gpu assist limits you to path tracing mode.. however, in many conditions, rendering in (say) bidirectional mode will let the image resolve to the desired level faster than rendering the same scene with path tracing.. (as in, using cuda or openCL could actually lengthen the render time.. not because of anything wrong with either, but because you're locked into standard path tracing as opposed to MLT.)
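(to make that tradeoff concrete-- this is not indigo's actual api, just a hypothetical sketch of the constraint i'm describing:)

```c
/* not indigo's actual api -- just a hypothetical sketch of the
   constraint described above: flipping gpu assist on locks the
   renderer into path tracing, even when MLT or bidirectional
   would resolve the scene faster. */
typedef enum { PATH_TRACING, BIDIRECTIONAL, MLT } integrator;

integrator pick_integrator(int gpu_assist, int scene_favors_mlt)
{
    if (gpu_assist)
        return PATH_TRACING;  /* gpu path supports only this mode */

    /* cpu-only path is free to pick whatever converges fastest */
    return scene_favors_mlt ? MLT : BIDIRECTIONAL;
}
```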
another thing of note (not meant as a point or whatever) is that, under the current implementation, you're using only a small fraction of the potential power in the gpu.. if i put the 64MB radeon from my old powerbook alongside the much faster cpu/ram of my imac, then i might experience a maxed-out gpu.. other than that, any supported low to mid grade gpu will handle the load just fine (or-- if you use a high end gpu, all that extra capability just means more gpu potential sits idle)
i think much of the software you presently see labeled as supporting openCL or cuda incorporates it in a similar fashion as above.. which is more along the lines of keeping the core of the software the same, then adding a few side nuggets of gpgpu.. like a boost or a bonus or whatever.. under the hood, the software still behaves the same way as before, except a few of the routines can offload to the gpu to execute the same code/algorithm.. it's the easy way to bring gpgpu into the loop, but the benefits range from average to slim to detrimental in some cases.. and by easy, i don't mean easy.. it still takes a talented coder to get it hooked up.. just that the difficult way is considerably more difficult than the sidecar method (but results in orders of magnitude greater usage of the hardware).. it involves digging into the program core-- in some cases, we're talking about legacy code.. and re-writing nearly from scratch under the assumption that the target hardware meant to execute this code is a GPU instead of a CPU..
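(for anyone curious what that sidecar pattern actually looks like in code-- here's a bare-bones sketch.. the kernel and array names are made up and error checking is stripped out, but the openCL calls are the real api.. the point is that exactly one routine moves to the gpu while the rest of the program stays ordinary cpu code:)

```c
/* a minimal sketch of the 'sidecar' approach described above:
   the program stays a normal cpu program, but one hot routine
   (here, a trivial per-element multiply-add) gets offloaded to
   whatever openCL device is handy. error checking omitted. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

#define N 1024

/* the one routine we bothered to port -- everything else in the
   app keeps running on the cpu exactly as before */
static const char *kernel_src =
    "__kernel void fma_array(__global const float *a,\n"
    "                        __global const float *b,\n"
    "                        __global float *out) {\n"
    "    size_t i = get_global_id(0);\n"
    "    out[i] = a[i] * b[i] + a[i];\n"
    "}\n";

int main(void)
{
    float a[N], b[N], out[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f; }

    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

    /* copy inputs up to the device, make room for the result */
    cl_mem buf_a = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  sizeof(a), a, NULL);
    cl_mem buf_b = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  sizeof(b), b, NULL);
    cl_mem buf_out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(out),
                                    NULL, NULL);

    /* build the kernel at runtime and wire up its arguments */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "fma_array", NULL);
    clSetKernelArg(k, 0, sizeof(cl_mem), &buf_a);
    clSetKernelArg(k, 1, sizeof(cl_mem), &buf_b);
    clSetKernelArg(k, 2, sizeof(cl_mem), &buf_out);

    /* run N work items, then read the result back to the cpu side */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, buf_out, CL_TRUE, 0, sizeof(out), out, 0, NULL, NULL);

    printf("out[10] = %f\n", out[10]); /* expect 10*2 + 10 = 30 */
    return 0;
}
```

(notice how much boilerplate it takes just to offload one trivial routine.. that's the sidecar cost, and it only touches the one function-- everything else in the app is untouched.)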
yes, on certain levels, those two things are very similar.. on other levels, they're very different and the code simply isn't interchangeable (well, one of the positive things that can be said about openCL is that it attempts to negate these differences and let the code look at it all as just a PU instead of a Graphics or Central PU)
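(you can actually see that 'just a PU' idea right in the api.. a minimal sketch, assuming an openCL runtime is installed-- it asks for every device type at once and gets cpus and gpus back in the same list:)

```c
/* a minimal sketch of openCL's device-agnostic view: CPUs and
   GPUs come back from the same query, each just a "device" with
   some number of compute units. compile with e.g.
   `cc devices.c -framework OpenCL` on a mac, `-lOpenCL` elsewhere. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[16];
        cl_uint ndev = 0;
        /* TYPE_ALL: no distinction between graphics and central PUs */
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            cl_device_type type;
            cl_uint units;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof units, &units, NULL);
            printf("%s  [%s, %u compute units]\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU", units);
        }
    }
    return 0;
}
```

(that compute-units number also hints at the point above-- the sidecar style of usage rarely comes close to keeping all of them busy.)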
we've still seen very few examples of code that was written from scratch with gpgpu in mind.. one that more of us here have likely seen is filter usage in fcpx.. when people used that specific capability within fcpx, they were floored by the performance enhancement.. it wasn't 1.25x faster.. it wasn't 2x faster.. it wasn't 5x faster.. it was way effing faster.
that 'holy cow.. freaking incredible!' type of gain can be had in certain other computing tasks, but it's not something that can happen overnight.. it's not something that can happen in a month.. however long it took the original application to be written is a decent gauge of how long it might take to rewrite it.
software is one of the major 'problems' in computing today.. not hardware.. the hardware is very very good for nearly every single use case.. arguing over many of the hardware specs and benchmarks is just an exercise in futility right now.. that's great you get suchandsuch fps from that card, but how is that helping most people needing or wanting performance enhancements? games? ok, cool.. you'll have a better gaming experience.. that's awesome.
but what else? for what other applications can i get some super duper gpu and experience super duper enhancements? not many.. like maybe 5 or 6 that are commercially available.
Anyhow, to sum up, you made numerous posts in 2013 saying you were going to buy a 6,1.
and i was.. i was under the impression my software would be updated prior to my 1,1's end-of-life.. but that didn't happen.. as already explained in the wall-of-words post.
Eventually you realized that it failed to meet your needs and bought a different machine instead.
no, as mentioned earlier in the thread, the nmp met my needs and fell within my budget.. but i didn't buy a different machine instead.. i bought two machines which are capable of running in tandem in order to match the performance i expected from the 6,1..
in my flow, can an imac match the speed i was looking for in my modeling apps? yes.. these apps want fast clock rates-- usually in spurts instead of continuous 100%.. the top end imac is very fast.. the imac (or a similarly outfitted windows/linux/etc box) is a very well specced computer for running the majority of modern day modeling/cad functions..
what about multicore, which i also take advantage of (for beautifying(?) the models created on the above system)? no.. an imac can't match it, or even really come close to matching the mp.. it's four cores instead of six, and further, those four cores will run even slower when asked to do the things i need them to do.. (or else it will overheat)
in order to balance out and regain the speed lost to the lower core count/throttling.. i also had to buy a macbook pro..
those two computers, when running together, have given me a similar overall project experience (ie- start to finish) as the single mp hex would have.
and realize this as well.. i don't view my laptop as a companion to my desktop.. i use the laptop, often (everyday), away from my desk.. it was a worthwhile upgrade on its own accord..
anyway.. it took two new computers to replace one mac pro 6,1
(but once my software goes to the next version, this will be a completely different story.. it's not as if i'll be piggybacking the laptop to the desktop anymore.. if all goes according to (my understanding of) the plan, the laptop cpus would maybe contribute an additional 1% of overall processing power, instead of the 35-40% or so they're contributing now.. unless, of course, the laptop's gpu can be used via network alongside the two in the mac pro.. then i'd probably still link up.. but i don't imagine this will be the case upon initial release.. maybe down the line?)