if OS X decides to use them as the parallel processing powerhouses they are.

That's the key. If and when this technology gets implemented and refined enough to make a consistent, measurable and useful improvement for the guy working on his Mac Pro, a lot of sand will go through the fingers.

This isn't the first time; similar issues have happened in the past. Even in this very thread someone shared his experience. Being an early adopter of a "revolutionized" solution is generally not a good idea unless you know what you are doing (and I trust the nMP's target audience does know). As in paying a hefty extra for something you don't (or can't) use.

What I am saying is that they took a niche product and pushed it further "underground", making it even less mainstream than before. I believe this is what makes people (on MacRumors) unhappy, since they all seemingly wanted the opposite: a more accessible xMac.

I don't know whether this move is good for customers or bad. I don't get to vote on their new strategy because I will buy neither this nor the classic Mac Pro. Just sharing my opinion.
 
That's the key. If and when this technology gets implemented and refined enough to make a consistent, measurable and useful improvement for the guy working on his Mac Pro, a lot of sand will go through the fingers.

maybe.. but what would you expect to see instead? more cpus?

because there's a pretty big problem with putting more cpu cores into a system as a means of enhanced performance..

if you go from 4core to 8core, that's pretty good.. your 10 hour render can potentially finish in 5.. same with going from 6core to 12.. to get that type of speed increase again, you'll now have to go from 12 to 24cores.. then next time to 48 etc..

each time you want to double your performance, your cost increases exponentially.. it's a dead end street which pushes valuable improvements in speed beyond the reach of most people (most pros too) very early in the equation..

gpgpu brings that type of performance hope to the masses because the cost per core is way way cheaper in a graphics processor
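to put some rough numbers on the cost-per-core point (these are illustrative round figures i'm assuming, not actual xeon or firepro prices):

```python
# back-of-the-envelope cost per core -- all prices are assumed round
# numbers for illustration, not real ones.
cpu_extra_cost, cpu_extra_cores = 2000, 4      # e.g. a big CPU step-up option
gpu_cost, gpu_stream_processors = 1000, 2000   # a midrange workstation GPU

print(cpu_extra_cost / cpu_extra_cores)        # 500.0 -> ~$500 per CPU core
print(gpu_cost / gpu_stream_processors)        # 0.5   -> ~$0.50 per GPU core
```

obviously a gpu "core" is far weaker than a cpu core, but for embarrassingly parallel work the price gap is still orders of magnitude.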

i mean really, the computer is going to be the fastest one they've ever made on the day it's released.. by far in certain areas.. and it's prepackaged with a lot of reserve power which i think even most early adopters will have unlocked at some point throughout its lifespan.
what exactly aren't you satisfied with?
 
maybe.. but what would you expect to see instead? more cpus?

because there's a pretty big problem with putting more cpu cores into a system as a means of enhanced performance..

if you go from 4core to 8core, that's pretty good.. your 10 hour render can potentially finish in 5.. same with going from 6core to 12.. to get that type of speed increase again, you'll now have to go from 12 to 24cores.. then next time to 48 etc..

Since when is doubling the core count the only worthwhile improvement? Simply going from 12 to 16 is substantial. Though I agree about the GPU acceleration.


i mean really, the computer is going to be the fastest one they've ever made on the day it's released..

C'mon, that's the same tired argument we've heard time and time again from within the Apple bubble. Faster than the machine they last updated 3 years ago? And that's ignoring the competitors who can offer a more powerful configuration.
 
Since when is doubling the core count the only worthwhile improvement? Simply going from 12 to 16 is substantial. Though I agree about the GPU acceleration.

i guess i just don't see it that way.. there's really no notable difference..
4hr render in 3hrs?
does one really sound better than the other? they both suck to be honest

C'mon, that's the same tired argument we've heard time and time again from within the Apple bubble. Faster than the machine they last updated 3 years ago? And that's ignoring the competitors who can offer a more powerful configuration.

well, that and you gotta remember i'm coming from a 1,1 which is two years past replacement due to waiting on this thing.. so my enthusiasm towards performance gains is very likely exaggerated when compared to most other mp users.
 
i guess i just don't see it that way.. there's really no notable difference..
4hr render in 3hrs?
does one really sound better than the other? they both suck to be honest

1 hour is not a notable difference? And if all you do is rendering, you can buy multiple Mac minis and make a render farm.
 
Apple is inconsistent. Since ages ago they kept telling us that GPUs aren't what we want. They shipped computers with deliberately crippled graphics subsystems. See all the people raging about the lack of proper graphics power in every generation of every Mac. They forced integrated GPUs on us (even the atrocious abomination that was the Intel GMA). Deep inside I understood where Apple was coming from. Very few people play games. Even fewer use that one application that uses GPU power.

Now they suddenly put it into reverse and offer a computer with 2 pro-grade and extremely expensive GPUs as standard. Further, those GPUs will be utilized by only a fraction of the people who will actually buy this very niche computer.

I don't know how Apple comes up with this stuff.

This is one of the main reasons, in my opinion; the graphics/visualisation/video market is one of the few growth areas in the traditional PC market.

Personally, I see the nMP and the 4GB GPU option now in the iMac as their first move back into the pro market place.
 

i guess i just don't see it that way.. there's really no notable difference..
4hr render in 3hrs?
does one really sound better than the other? they both suck to be honest

A 25% reduction is a fantastic improvement in my opinion. Almost any double digit improvement is. Sure, we can keep reducing it down to make the difference seem negligible, say a 15min render is only reduced to 11min or a 7min render is only reduced to 5min. Now if that's for a 30 second spot (900 frames) then I'm saving about 30 hours of render time.

I can always build a farm or send it out to one, but there's still a need to render locally. A render farm doesn't solve all problems.
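To sanity-check that arithmetic (assuming a 30 fps spot, so 900 frames):

```python
# Total render time saved across a job when per-frame time drops.
# Frame count and minutes are the figures from the post above.
def total_saved_hours(frames, old_minutes, new_minutes):
    """Hours saved over the whole job from a per-frame speedup."""
    return frames * (old_minutes - new_minutes) / 60

frames = 900  # a 30 second spot at 30 fps
print(total_saved_hours(frames, 7, 5))    # 30.0 hours saved
print(total_saved_hours(frames, 15, 11))  # 60.0 hours saved
```

So even the "small" per-frame reductions compound into days of wall-clock time over a full job.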


well, that and you gotta remember i'm coming from a 1,1 which is two years past replacement due to waiting on this thing.. so my enthusiasm towards performance gains is very likely exaggerated when compared to most other mp users.


I'm pretty much in the same boat. I sold my G5 years ago and have been on a Macbook Pro ever since. Now I'm looking for a workstation again in order to bring more of my work home with me. So even the base Mac Pro will offer me a significant improvement. But I'm not comparing it to what I'm replacing. I'm looking to see what Apple and its competitors are offering, and whether it will fulfill my wants for the right price.
 
paying a hefty extra for something you don't (or can't) use.
That's not true for everyone; some apps already use OpenCL, and I'm sure there are plenty more around the corner. The people who should be buying new Mac Pros are people who use these apps, i.e. they know they can take good advantage of OpenCL now, or will be able to in the near future.

Otherwise you're wasting at least one GPU, unless you decide to assign it to a Bitcoin mining program or something.

Even if you don't really need the second GPU, it's going to be the best Mac offering if you don't think a high end iMac will cut it, or if a Mac Mini won't do, and you don't want a machine with integrated screen. That doesn't mean it won't suck to be essentially paying for a GPU you don't need, but the progression toward more OpenCL for complex tasks means there's a chance it isn't going to be wasted money, and might give the new Mac Pro some built in longevity, though I wouldn't want to bet too heavily on that.

what I am saying is that they took a niche product and pushed it further "underground", making it even less mainstream than before.
I'm not so sure about that; I think Apple is genuinely trying to predict a future trend, or at the very least to establish the Mac Pro in its own niche, rather than pitting it directly against workstations like it was in the past.
 
It's all about OpenCL. Pro apps will be using a lot of OpenCL, this is a pro machine.

Stuff like disk compression isn't part of the equation. Moving data from CPU to GPU to disk takes too long. This is more for big pro number-crunching tasks, not normal operation.
 
1 hour is not a notable difference? And if all you do is rendering, you can buy multiple Mac minis and make a render farm.

A 25% reduction is a fantastic improvement in my opinion. Almost any double digit improvement is. Sure, we can keep reducing it down to make the difference seem negligible, say a 15min render is only reduced to 11min or a 7min render is only reduced to 5min. Now if that's for a 30 second spot (900 frames) then I'm saving about 30 hours of render time.

I can always build a farm or send it out to one, but there's still a need to render locally. A render farm doesn't solve all problems.

yeah, i could have been less flippant in my "they both suck" remark.

it's just that a 25% decrease in render times doesn't do much of anything to change and/or speed up my workflow..

it's still going to be preview renders during the day then launch them during downtime.. it makes no difference to me if the renders finish at 5am instead of 6am.

it's similar to a painter whose paint dries 25% faster.. it's not going to affect him at all because A)he still has to do the same amount of work and B) the paint will still be drying overnight when he's not even there..
(and yes, watching a render resolve is no different than watching paint dry :) )

regardless, more of the point i was making was how expensive it becomes after a certain point when trying to increase performance via adding cpus..
i can increase my speed by 30% for ~$300 (from 4core to 6).. that's good and it's within reason for many people's checkbook.. but going from 12 to 16 cores costs a lot more than $300.. and once you start looking beyond 12 or 16core, the cost is just too crazy for most people to even consider.. it would make more sense, from a user/buyer pov, if it went something like this:

4core =$300
6core = +$300
10core= +$600
16core= +$900
24core= +$1200
36core= +$1500
54core= +$1800
etc

but it definitely doesn't work that way.. not even close. the law of diminishing returns kicks in way too fast and the cost penalties become very brutal very quickly in the core#-for-performance game.
(not to mention electric supply etc.. a similar diminished return thing happens in other regards besides upfront cost)
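just to tally up that wished-for ladder (my hypothetical numbers from above, definitely not real xeon pricing):

```python
# cumulative cost of the hypothetical pricing ladder above, with the
# speedup each tier would give versus the 4core baseline if scaling
# were perfect (it never is).
steps = [(4, 300), (6, 300), (10, 600), (16, 900),
         (24, 1200), (36, 1500), (54, 1800)]

running = 0
for cores, increment in steps:
    running += increment
    print(f"{cores:>2} cores: ${running:>5} total, {cores / 4:>4.1f}x baseline")
```

even under that generous schedule, each doubling of speed costs more dollars than the last.. and real cpu pricing is far steeper than this.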
 
Even if you don't really need the second GPU, it's going to be the best Mac offering if you don't think a high end iMac will cut it, or if a Mac Mini won't do, and you don't want a machine with integrated screen

Right, a mini doesn't cut it, and once I spec out an iMac I'm at $2.7k, for a computer with an integrated monitor I don't want. From that perspective an extra $300 for the Mac Pro is a no-brainer.
 
Apple is inconsistent. Since ages ago they kept telling us that GPUs aren't what we want. They shipped computers with deliberately crippled graphics subsystems. See all the people raging about the lack of proper graphics power in every generation of every Mac. They forced integrated GPUs on us (even the atrocious abomination that was the Intel GMA). Deep inside I understood where Apple was coming from. Very few people play games. Even fewer use that one application that uses GPU power.

Now they suddenly put it into reverse and offer a computer with 2 pro-grade and extremely expensive GPUs as standard. Further, those GPUs will be utilized by only a fraction of the people who will actually buy this very niche computer.

I don't know how Apple comes up with this stuff.

What happens if OS X decides to start using those GPUs for system tasks?

That was what i was thinking too, not just applications utilizing the GPU...but the operating system as well.

It's not about being inconsistent, it's about a changing market and where it's headed, or thinking about where it's headed. Technology is always changing.

See all the people raging about lack of proper graphics power in every generation of every Mac

As far as the previous Mac Pros go, they put in the bare minimum on specs to allow the user to customize as needed, either through build-to-order options or third-party sellers.

With the new design, obviously they will go with a much more powerful GPU since most of it is integrated.

Now they suddenly put it into reverse and offer a computer with 2 Pro-grade and extremely expensive GPUs as standard. Further, those GPUs will be utilized by only a fraction of people who will actually by this very niche computer.

As in paying a hefty extra for something you don't (or can't) use.

Same can be said for the people who always complain about substandard specs on the base models.

Not all people will utilize these dual graphics cards. I'm sure we will have some people who will go for the best-bang-for-the-buck thing, but I'm wondering if these are the people it was not intended for in the first place.

As was already mentioned about these graphics cards, one might be tasked primarily with driving high-res 4K monitors. It might be possible for the operating system to use OpenCL/GPU for tasks too.
 
I think innovation was at the core of this new machine. I'd like to hear some discussions about what other people are reading between the lines.

Bring your own everything else. The exact opposite of an all-in-one, the iMac, or the previous Mac Pro. When I look at the interior triangle I see computing power and a hub.
I design the storage, display setup, and external rig to fit my projects. Extremely flexible from that point of view. Obviously Apple thinks something is evolving, so let it happen outside the box. Why build an array of bays that will be outdated soon and cool that large area?

I think they should have a giant hand pointing at those TB ports ;)
 
it's still going to be preview renders during the day then launch them during downtime.. it makes no difference to me if the renders finish at 5am instead of 6a.

But you're still using an arbitrary "1 hour" improvement to suit your argument. That 1 hour could easily be 4 or 5 hours, or even the 30 I mentioned in my previous comment. Sure, if you're just rendering overnight and you only see a 1 hour improvement, that's negligible. I'm not sure what you work on, but all of my renders are different based on the complexity of my projects. I'm not setting up the same thing every night where the processing time is easily predictable. I could set something to go overnight and see what used to be a 16 hour render get knocked down to 12. So I could feasibly set that render up at 8pm the night before and be ready to get back to work again at 8am, whereas I'd be waiting until noon previously. But this hypothetical really only scratches the surface of what a gain like that could do for productivity.


regardless, more of the point i was making was how expensive it becomes after a certain point when trying to increase performance via adding cpus..
i can increase my speed by 30% for ~$300 (from 4core to 6).. that's good and it's within reason for many people's checkbook.. but going from 12 to 16 cores costs a lot more than $300.. and once you start looking beyond 12 or 16core, the cost is just too crazy for most people to even consider.. it would make more sense, from a user/buyer pov, if it went something like this:


I agree with you there and, in my opinion, that is one of the drawbacks of a single-CPU system. Spreading the total cores over 2 CPUs is more cost effective than just 1.

You're also right that dual CPUs are still highly expensive when you get into the higher core numbers. But for a lot of people, that one time cost of an extra few thousand dollars can mean a whole lot more return in productivity over the life of the system.
 
But I'm not comparing it to what I'm replacing. I'm looking to see what Apple and its competitors are offering, and whether it will fulfill my wants for the right price.

C'mon, that's the same tired argument we've heard time and time again from within the Apple bubble. Faster than the machine they last updated 3 years ago? And that's ignoring the competitors who can offer a more powerful configuration.

Apple increasing performance along with new technology in each new machine is consistent within the Apple ecosystem.

If you really want to compare performance/price with equivalent PC workstations, you also have to figure in the cost of workflow changes too.

If you're currently using FCP 7/FCP X or similar Mac-only professional software, you will possibly have to change the software you're currently using or change licenses to Windows versions. Hardware/driver compatibility issues have to be resolved.

All these will be added costs, and workflows will change.

Also, will these performance/price comparisons make sense if they constantly change from one platform to another year to year? The constant changing of platforms does not make sense.
 
But you're still using an arbitrary "1 hour" improvement to suit your argument. That 1 hour could easily be 4 or 5 hours, or even the 30 I mentioned in my previous comment. Sure, if you're just rendering overnight and you only see a 1 hour improvement, that's negligible. I'm not sure what you work on, but all of my renders are different based on the complexity of my projects. I'm not setting up the same thing every night where the processing time is easily predictable. I could set something to go overnight and see what used to be a 16 hour render get knocked down to 12. So I could feasibly set that render up at 8pm the night before and be ready to get back to work again at 8am, whereas I'd be waiting until noon previously. But this hypothetical really only scratches the surface of what a gain like that could do for productivity.

yeah, i'm definitely speaking loosely/arbitrarily with a lot of these numbers as there's an underlying tone which is more of my main point.. that being dual gpu - 1 cpu systems vs continuing to try to add more and more cpu to the computer..
the latter is more of a spec bump thing which requires me to spend a lot more money on nominal gains.. (just using virtualRain's poll, look how many people are buying a 12core nmp.. 1 person :) ...i'm sure everyone else in that poll wouldn't mind a 12 core but it's beyond their mean$.. )

whereas i can spend a lot less money on dual gpu and potentially see 20x or more speed increase in my renders.. speed increases that will actually change/benefit the way i work instead of what happens with more minor jumps in speed.

i guess what it comes down to is that i'm saying i really don't care if the nmp is single socket cpu.. i, along with the vast majority of other buyers, simply can't afford/justify much beyond an 8core system regardless of whether or not we want them..

i'm pretty sure apple knows that too which is why they did away with the dual sockets.. along with the fact that the next logical step in that type of development is a quad socket.. it's just way too expensive.. we can argue about it all we like but i honestly believe only a minuscule portion of buyers will even consider anything beyond 12core.. much less actually pay for it.

especially when considering the potential for meaningful speed increases in parallel processing is in the gpus.. cheap cores with considerable performance increase has won out over expensive cores with less beneficial improvements.. it just seems like we've reached the tipping point as far as multicore systems go.. it would be different if cpu prices were dropping over the years which could put ,say, 24core systems in the hands of many more people but that just doesn't appear to be happening.. that's why i call it a dead end street.. there's no future in that type of design (imo)

----------

oh.. and i guess it would also help to realize my main rendering program is already written in openCL (iteration #1 and more improvements in the pipeline) so i'm already seeing some of these types of gains i'm speaking of in my own work.. even with the 5770 which is currently in my desktop.

i understand it's a much different set of cards for someone whose current apps are reliant on cpu alone for performance increase.
 
i get it.. you want the old mac pro with a spec bump.. you're wanting increased electricity usage

What a childish argument. But I can stoop down to that level:

I get it.. you want to force people who only need one video card to have two. You're wanting increased electricity usage.
 
What a childish argument. But I can stoop down to that level:

I get it.. you want to force people who only need one video card to have two. You're wanting increased electricity usage.

You're going from 980 watts down to 450, so you're using less electricity. If the new Mac Pro is not currently using the second video card in the system, it's running at idle, not using much electricity anyway.

Oops, may not be applicable here...lol
 
I get it.. you want to force people who only need one video card to have two. You're wanting increased electricity usage.

actually, i don't want to force anybody to do anything.. buy what you like.. if you want a 16core system then go buy one.. i'm not trying to stop you.

all i'm saying is that it's a waste of money for me personally when considering the type of gains i'll actually be getting from it.
 
What a childish argument. But I can stoop down to that level:

hmm.. just saw this edit.

if it's coming across as a childish argument then i'm mis-communicating..

i thought i was just repeating things you've said or implied.. that being you would have liked apple to use the cheese grater and put a higher rated power supply in it.. that is what you want, right?

not only aren't they increasing the wattage - they're actually decreasing it..
not only aren't they increasing the cpu sockets - they're actually decreasing it..
not only aren't they decreasing the base gpu - they're actually increasing it..
etc.

dunno- i thought i was just pointing out the obvious in that your design ideals are different from -- as well as moving in the opposite direction of -- apple's ideals

or have i misinterpreted what you've said here?
 
If you really want to compare performance/price with equivalent PC workstations, you also have to figure in the cost of workflow changes too.

If you're currently using FCP 7/FCP X or similar Mac-only professional software, you will possibly have to change the software you're currently using or change licenses to Windows versions. Hardware/driver compatibility issues have to be resolved.

All these will be added costs, and workflows will change.

Also, will these performance/price comparisons make sense if they constantly change from one platform to another year to year? The constant changing of platforms does not make sense.

We're at a point where "Mac only" software is extremely limited. And what is available doesn't necessarily need a Mac Pro to run it. Most professional applications are compatible with multiple OSes, and they usually include installers for each. So moving to a Windows machine wouldn't change my workflow whatsoever. I wouldn't mind sticking with Apple in order to have something that can run FCPX, but it's no reason for me to stay if I can find a machine that would suit my other software better.

Hardware and driver compatibility checks should be done any time you upgrade a machine. So that's nothing new.

And who said anything about changing platforms year to year? Not sure where you got that.
 
hmm.. just saw this edit.

I took things personally and I apologize.

I do think dual low end GPUs is dumb. I think it would be dumb in the cheesegrater too. It's not a MP vs nMP argument.

To try to be concise, it is my opinion that, for the same budget, a faster GPU is superior to two GPUs in most scenarios today and probably tomorrow.

  • Anything not using GPUs won't benefit at all, like over on the musician thread, so it's unused cost and power.
  • Any GPU tasks not suited to parallel GPU computing also won't benefit.
  • Any GPU tasks that can be parallelized for greater speed would have simply benefited from the greater speed of a single faster GPU anyway.

I'm not sure what scenario is left for those dual, low-end GPUs. GPU tasks that perform far better when parallelized than they would when sharing cycles on a single GPU that's twice as fast? Is there such a thing? Is it common? Admittedly I don't know, but I don't see people talking about it.

To me, dual GPU setups make sense at the high end when one top performing card isn't powerful enough by itself.
 
That was what i was thinking too, not just applications utilizing the GPU...but the operating system as well.

I know I'm bumping a thread that is a little stale, but...

10.5 started using OpenGL for some heavier drawing tasks. 10.9 adds some OpenCL to the mix. And QuickTime's H.264 encoder is GPU compatible. So there is already a bit of this going on.

There is a limit though. There are some basic things that just aren't GPU compatible. And automatic OpenGL optimizations are still pinned to the GPU the monitor is on.
 
There is a limit though. There are some basic things that just aren't GPU compatible.

It's worse than that; there are actually very few things that can be GPGPU compatible. I've been programming for 25 years and I've not once parallelized an application this way, though I've wanted to. The most is simply some threaded design. The reason for this is simple: applications reflect how people work, because that's what they are written for. People spend a lot of time at the computer doing nothing, so most software is a callback system which simply reacts to user input. Not a lot of parallel in that. If you don't have an application that is specifically written for this, such as graphics rendering, you won't be using it.
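A minimal sketch of the contrast (a made-up example, not from any real app): event-driven code reacts to one input at a time, while GPU-friendly code is the same pure function applied over a huge array.

```python
from multiprocessing import Pool

def on_click(event):
    # Typical desktop-app shape: a callback that reacts to a single input.
    return f"handled {event}"

def shade_pixel(i):
    # Typical GPU-friendly shape: a pure per-element function over big data.
    return (i * i) % 255

if __name__ == "__main__":
    # The callback path is inherently serial -- there's nothing to fan out.
    print(on_click("button-1"))

    # The renderer-style path parallelizes trivially (here across CPU
    # processes; on a GPU each element would be its own work-item).
    with Pool(4) as pool:
        print(pool.map(shade_pixel, range(8)))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The second shape is rare outside renderers, encoders and simulations, which is exactly why most desktop software never touches the GPU.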

And automatic OpenGL optimizations are still pinned to the GPU the monitor is on.

It's possible, but I think unlikely that they plumbed these and enabled CrossFire support.
 