
resotek · macrumors member · Original poster · Sep 25, 2013
I agree that if you think in traditional terms it does seem quite odd. Those traditional terms dictate that X, Y, Z software/vendor will have to add support for OpenCL for the GPUs to be of much use. This is completely true, but it may also be missing an entire piece of the puzzle.

What happens if OS X decides to start using those GPUs for system tasks? For example, perhaps Mavericks uses the GPUs for its memory compression. There seem to be a lot of other computational OS tasks that could use these cards to great effect. How would these pro systems then compare to an iMac, Mini, Hackintosh, or workstation?
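For what it's worth, that speculation can be made concrete. OS-level memory compression squeezes cold pages instead of swapping them to disk, and compressing many pages is naturally parallel work; whether Mavericks does (or could) run it on the GPU is exactly the open question. A toy sketch in Python, with zlib standing in for whatever compressor the OS actually uses (that substitution is an assumption):

```python
import zlib

PAGE_SIZE = 4096  # bytes, a typical VM page


def compress_page(page: bytes) -> bytes:
    # Many pages could, in principle, be compressed in parallel,
    # which is why a GPU is an interesting (if speculative)
    # target for this kind of work.
    return zlib.compress(page)


def decompress_page(blob: bytes) -> bytes:
    return zlib.decompress(blob)


# A page of mostly repeated data compresses well, freeing RAM
# that would otherwise have to be swapped to disk.
page = (b"idle app data " * 512)[:PAGE_SIZE]
blob = compress_page(page)
assert decompress_page(blob) == page
print(f"{PAGE_SIZE} -> {len(blob)} bytes")
```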

I don't pretend to know, but I do think something bigger and more out-of-the-box is at play with the nMP. I can't wait to see how this all unfolds and, for the record, I don't think Apple is thinking in traditional terms with the nMP.
 
I want the D700s for the OpenGL and VRAM.. if lots of vendors hop on the OpenCL bandwagon in a timely manner, and Apple itself pushes all sorts of GPGPU implementations into the OS, then we are sitting pretty. OpenCL 2.0 sounds great.

Apple having a pretty limited hardware install base, as well as a lot of people moving to the free 10.9 Mavericks OS, puts a lot of people in the same pool. So interesting and extreme things can be done when most people benefit.
 
What happens if OS X decides to start using those GPUs for system tasks? For example, perhaps Mavericks uses the GPUs for its memory compression. There seem to be a lot of other computational OS tasks that could use these cards to great effect. How would these pro systems then compare to an iMac, Mini, Hackintosh, or workstation?

while i'm sure at least some of the OS stuff is on the gpus, it's not taxing it too hard.. and it's doing the same thing on all of the computers.

but making osx run better on the mac pro than my laptop (for instance) is not too good of an idea.. osx needs to run equally well on all the relevant hardware (imo)
 
This isn't exactly the first time Apple has released something that will make good use of something that isn't widely adopted yet.

Thunderbolt is a good example. They released Thunderbolt Macs before many Thunderbolt accessories existed. Now they are releasing a dual-GPU workstation to make use of OpenCL before much software uses it. Someone has to take the first step, either the hardware manufacturer or the software developers.
 
In my opinion it would have been better to offer a single-GPU option, be it FirePro or a regular GPU. A lot of professional applications and workloads will not benefit at all from these dual FirePros. A single-GPU option could have brought the entry-level price to 2500,- and would have made it possible to get more processor power for less overall cost for the tasks mentioned above, so the target audience would have been bigger.
 
I definitely think that looking to OpenCL for the future is a good move, the question is whether it'll take off during the lifetime of this first generation of new machine.

I made the mistake of betting on 64-bit adoption when I bought my PowerMac G5, and was sorely disappointed, as 64-bit didn't really take off until after I had replaced it with a Mac Pro. I wasn't wrong to expect broader 64-bit adoption and its advantages; I was just wrong to be an early adopter, as I never saw the benefit.

People who know they'll need GPUs for software they have now, or that is coming out soon, can't really go far wrong with the new Mac Pros. It might be a bit misguided to buy the machine expecting that it'll only get faster with time, as you might end up disappointed like I did :(


For the right applications, though, dual GPUs in the new Mac Pro mean you're getting a very powerful dual-processor machine, potentially a triple-processor one depending on whether the CPU and GPUs can both run at full speed at the same time.
 
In my opinion it would have been better to offer a single-GPU option, be it FirePro or a regular GPU. A lot of professional applications and workloads will not benefit at all from these dual FirePros. A single-GPU option could have brought the entry-level price to 2500,- and would have made it possible to get more processor power for less overall cost for the tasks mentioned above, so the target audience would have been bigger.

at some point, i think it's important to accept the fact that the nmp is a dual gpu system (not necessarily important but it will make things easier in your head)

look at the design and imagine apple selling it without one of the gpus on there.. they wouldn't do that strictly from a vanity point of view.. (and personally, i agree on that type of thinking.)

i'm actually surprised they're selling 3 sticks of ram on the 3g configuration.. that's like selling you a 3 legged dog.. (not that i don't have love for 3 legged dogs)

i don't know.. it is a professional grade computer and you should expect professional grade parts/prices.. if it helps, think of it as having 1x gpu @ $500 which, to me, is within reason of a base level pro system's gpu cost.
 
Was removing the PS2 ports dumb?
Was removing the floppy drive dumb?
Was removing the optical drive dumb?

Just because you are not a visionary and do not understand the reasons behind certain things does not mean that those things are dumb.
 
...
What happens if OS X decides to start using those GPUs for system tasks? For example, perhaps Mavericks uses the GPUs for its memory compression. There seem to be a lot of other computational OS tasks that could use these cards to great effect.

It isn't really an "if"; it is more a matter of how much. Apple frameworks already have hooks into OpenCL, and they are looking to add more, but this is already in flight:

"... Working with internal clients such as CoreImage to implement optimized OpenCL kernels, performance tuning and debugging issues ... "
http://www.linkedin.com/jobs2/view/6956703

Some vendors, like Pixelmator, tend to max out their Apple framework utilization. That is probably a contributing reason why Pixelmator is on the new Mac Pro's 'Performance' marketing page.

How would these pro systems then compare to iMac, Mini, Hackintosh, or Workstation?

If OS X (via GCD and the OpenCL dispatcher) tends to load OpenCL work onto the GPU not driving the screen first, then the Intel HDx000 systems (iMac, MBP 15") are going to lag behind any of the Mac Pro offerings, D300-D700. On systems with a single GPU, the OpenCL workload has to timeslice with the graphics workload.
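To illustrate the dispatch idea being described here: the device records and field names below are purely hypothetical, not any real OS X or OpenCL API, but they show why a second, non-display GPU avoids timeslicing with the graphics workload:

```python
# Hypothetical device records; the names and fields are
# illustrative only, not an actual OS X or OpenCL API.
devices = [
    {"name": "D700 #1", "type": "gpu", "drives_display": True},
    {"name": "D700 #2", "type": "gpu", "drives_display": False},
]


def pick_compute_device(devices):
    """Prefer a GPU that is not driving the screen, so compute
    work doesn't timeslice with graphics; fall back to the
    display GPU on single-GPU machines (iMac, MBP, Mini)."""
    gpus = [d for d in devices if d["type"] == "gpu"]
    free = [d for d in gpus if not d["drives_display"]]
    return (free or gpus)[0]


print(pick_compute_device(devices)["name"])  # -> D700 #2
```

On a single-GPU Mac the fallback branch fires, which is exactly the timeslicing case described above.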

The Hackintosh... I don't think Apple really worries about that much. If the Hackintosh got to be a significant percentage, the lawyers would solve that issue. As for Windows workstations: if OS X isn't providing significant differentiation, there are bigger issues than dual GPUs.

That said, it depends as much on the data workloads as on the apps. For smaller data workloads, the iMac/Mini/MBP will leverage OpenCL to do work that was once mostly confined to Mac Pros. The new Mac Pros move up a notch to avoid that encroachment on their old territory. Will they move up to cover every possible Hackintosh/Windows workstation configuration? No. The old Mac Pro didn't cover them all either.



Is the Apple Framework going to kick out enough OpenCL work to keep a second GPU busy constantly? Probably not in most situations.


I don't pretend to know, but I do think something bigger and more out-of-the-box is at play with the nMP.

Been true for a while. The MBP 15" models have been dual-GPU for a while, as have most of the recent iMacs. It isn't as if the OS X frameworks that leverage it would be restricted to just one Mac model. Even without dual GPUs, the entire 2013 Mac lineup is OpenCL-on-GPU capable, top to bottom. Since it is universally available, there isn't much good reason for the OS to ignore that resource. (Sure, there are more older Macs than newer ones, but there isn't much good reason to kneecap the newer models just because the older ones are slow.)


I can't wait to see how this all unfolds and, for the record, I don't think Apple is thinking in traditional terms with the nMP.

In this GPU-selection case, in some sense they are. If you want to look relatively thin, stand next to the fat lady in the circus sideshow. Custom FirePro cards that don't cost quite as much as mainstream FirePro cards will make it easier to pitch the price points they will have.

Apple tends to stay away from the lowest-priced major components, but the "drama" with systems built from those tends to narrow myopically to just price, followed closely by control.
 
With no Crossfire and little to no software in place to support it, I feel that the vast majority of users would prefer a faster single GPU option. In current systems, where people have a choice, this is what they choose. You only see people opting for dual GPUs in rare instances when a single fast one isn't enough by itself or when 3-4 monitors isn't enough.

Part of me cynically wonders if the dual GPU was included, even in the base configuration, because it's required to support so many Thunderbolt ports. Thunderbolt ports which in turn are required in large numbers because storage was forcibly made external.

In my opinion the nMP is a joke. A bad joke. A Mac Mini with faster CPU/GPU. But this has all been said before.
 
With no Crossfire and little to no software in place to support it, I feel that the vast majority of users would prefer a faster single GPU option. In current systems, where people have a choice, this is what they choose. You only see people opting for dual GPUs in rare instances when a single fast one isn't enough by itself or when 3-4 monitors isn't enough.

Part of me cynically wonders if the dual GPU was included, even in the base configuration, because it's required to support so many Thunderbolt ports. Thunderbolt ports which in turn are required in large numbers because storage was forcibly made external.

In my opinion the nMP is a joke. A bad joke. A Mac Mini with faster CPU/GPU. But this has all been said before.

This is a perfect example of applying completely traditional thinking to the nMP. This is not meant as an insult and I fully admit that you may end up being 100% correct in your assessment.

However, my point in posting this thread was to posit that if OS X brings the GPUs to bear for computational tasks, the nMP will have an opportunity to redefine the workstation. IOW, it will innovate, which I think was their goal from the beginning (look at the final product). I don't think they had any intention of building just another Apple-branded PC (like the cheese grater).

That's why I rhetorically asked how such a system would compare to traditional systems. Said another way, the nMP has the potential to be a disruptive technology. Traditional metrics and what is today understood to be good/bad would have to change to account for the new kid on the block.
 
With no Crossfire and little to no software in place to support it, I feel that the vast majority of users would prefer a faster single GPU option.

In current systems, where people have a choice, this is what they choose. You only see people opting for dual GPUs in rare instances when a single fast one isn't enough by itself or when 3-4 monitors isn't enough.


look at 64bit.. are you glad, now, apple did it? for as ram hungry as people seem to be these days, i'm pretty sure the answer is 'yes'

it's not like the programmers were going to sit around coding 64bit apps when there wasn't hardware which could utilize it.. the computer had to happen first.
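The 64-bit point can be put in numbers: a 32-bit pointer caps addressable memory at 4 GB, which is why RAM-hungry software needed the hardware to come first. A quick arithmetic check:

```python
# Why 64-bit mattered once RAM grew: a 32-bit pointer tops out
# at 4 GiB of addressable memory, while 64-bit is effectively
# unbounded for any RAM you can buy.
GiB = 1024 ** 3

assert 2 ** 32 == 4 * GiB          # the 32-bit ceiling
assert 2 ** 64 // GiB == 2 ** 34   # 64-bit: ~17 billion GiB

print(2 ** 32 // GiB)  # -> 4
```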



Part of me cynically wonders if the dual GPU was included, even in the base configuration, because it's required to support so many Thunderbolt ports.
i think it's much more complicated than that.. it's not like they can just say 'should we put one or two gpus in there' and that's that.. one gpu would require a complete redesign (well, i'm sure the one-gpu draft already happened quite a few times in the initial phases).. there are hundreds if not thousands of factors which must have been considered during the process and we're now seeing what the designers/engineers have found to be the correct balance..


Thunderbolt ports which in turn are required in large numbers because storage was forcibly made external.

i'm not really so sure about that.. there is internal storage, and seemingly very nice storage at that.. there does appear to be room for another ssd which, for me, would give the ability to have more storage in a nmp than my current mp.. (2x 1TB vs 1TB/500GB/400GB)

it's about the same as when the mp1 first came out, where having 2TB (4x500) would run you around $2000 at $1/gig

the ssds allow the high-price-of-storage game to happen again.. it's not like 1TB ssds are bleeding-edge tech and we're just hoping they can figure out a way to make 2TB drives.. that's already been figured out, but the technology is going to be trickled out to consumers in such a way that they can sell $1000 drives for the next 6-7 years.. (1TB now for a grand.. 2TB in a couple of years for $1000.. then 4.. etc)

point being, the computer appears to be designed to accommodate 8ish TB internally.. the designers didn't blow it in this regard.. it's the apple accountants (or whoever) that are holding it back by creating a guise of storage scarcity in order to make more money
 
However, my point in posting this thread was to posit that if OS X brings the GPU's to bear for computational tasks, the nMP will have an opportunity to redefine the workstation.

It's great to bring GPUs to bear on computational tasks, but you can do that whether you have one GPU or two. Having dual slow GPUs at the bottom end makes no sense to me over the same budget being used for one fast GPU, because the vast majority of use will be faster with the latter.

look at 64bit.. are you glad, now, apple did it? for as ram hungry as people seem to be these days, i'm pretty sure the answer is 'yes'

it's not like the programmers were going to sit around coding 64bit apps when there wasn't hardware which could utilize it.. the computer had to happen first.

Yes, but I cannot accept that analogy as valid.

As I replied above, you can bring GPUs to bear on computational tasks whether you have one GPU or two.

The coding should have come first because, unlike the 64-bit scenario, it can be used right away by existing single and dual GPU setups.

By forcing dual GPUs, you are preparing for something that may or may not be implemented, and even if it is, the already two-year-old GPUs that Apple has chosen will be obsolete. Meanwhile, you are also drastically slowing down all existing software that would have benefited from the same budget being applied to a faster single GPU (which is the overwhelming majority of software).

I know from Crossfire/SLI experience that whenever possible, one GPU with a given computational power is always preferable to two GPUs that add up to the equivalent power, because (A) software always takes better advantage of the single GPU, (B) even after years of work on this, it's still less compatible and more buggy than a single-card implementation, and (C) some types of work simply aren't suited to being split up for parallel processing.

It's the same reason why a quad core isn't twice as fast as everything that a same-clock dual core is.
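The scaling argument here is essentially Amdahl's law: the serial fraction of the work caps the speedup no matter how many compute units you add. A quick sketch (the 80% parallel fraction is an illustrative number, not a measurement):

```python
def amdahl_speedup(n_units: int, parallel_fraction: float) -> float:
    """Amdahl's law: overall speedup from n compute units when
    only a fraction of the work can run in parallel."""
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_units)


# With 80% parallelizable work (an illustrative figure),
# doubling from 2 to 4 units is nowhere near a 2x gain:
s2 = amdahl_speedup(2, 0.8)   # ~1.67x over one unit
s4 = amdahl_speedup(4, 0.8)   # ~2.5x over one unit
print(round(s4 / s2, 2))      # -> 1.5, not 2.0
```

The same arithmetic applies whether the "units" are CPU cores or GPUs, which is why neither a quad core nor a second GPU doubles real-world performance.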

In fact, had Apple simply improved the power supply connections, a 2013 cheese grater could be dual, triple, or even quad-GPU, so if you're right about the multi-GPU configurations being the cat's pajamas some day, then the nMP is actually worse in that respect than an updated cheesegrater would have been.

Maybe I'm missing something? I fully admit that I don't get it. But I feel like the counterpoints are very vague and optimistic about future possibilities. Seeing that the current state of multiple CPU and GPU setups rarely leads to linear improvements, I do not share your optimism.
 
There sure seem to be a few people really angry about a Mac Pro that nobody has used, since it isn't even for sale yet. Just seems strange to me.
 
It's great to bring GPUs to bear on computational tasks, but you can do that whether you have one GPU or two. Having dual slow GPUs at the bottom end makes no sense to me over the same budget being used for one fast GPU, because the vast majority of use will be faster with the latter.

Unless driving lots of displays is your priority.

I'm guessing that the dual GPUs in the nMP are there primarily to support up to 7 displays and 6 TB ports. If you acknowledge that this may be a requirement for some users, dual GPUs make sense. However, you might make a compelling argument for one powerful GPU along with one weak one. That might make more sense in many situations rather than two weak GPUs or two powerful GPUs.
 
Yes, but I cannot accept that analogy as valid.

As I replied above, you can bring GPUs to bear on computational tasks whether you have one GPU or two.

The coding should have come first because, unlike the 64-bit scenario, it can be used right away by existing single and dual GPU setups.

the coding has come first in certain regards.. openCL isn't new.
there are a handful of rendering programs which run entirely on gpu(s), producing great images in a fraction of the time of their predecessors.. fully noticeable jumps instead of what you'd see going from, say, an 8-core cpu to a 16-core cpu.

the proof of concept is there and it's not a blind leap..

By forcing dual GPUs, you are preparing for something that may or may not be implemented,

at some point it's about trusting that the engineers/designers/visionaries at apple have made the right choice.. the money people at apple make some screwy decisions from a customer point of view but they're not designing the computers.

and even if it is, the already two-year old GPUs that Apple has chosen will be obsolete.
meh.. does it work right? is it fast and stable? most of the stuff in a computer is tech a lot older than 2 years.

While in the mean time you are also drastically slowing down all existing software that would have benefited from the same budget being applied to a faster single GPU (which is the overwhelming majority of software).
like what?

I know from Crossfire/SLI experience that whenever possible, one GPU with the same computational power is always preferable over two GPUs that add up to the equivalent power. Because (A) software always takes better advantage of the single GPU, (B) even after years of working on this, it's still less compatible and more buggy than a single card implementation, and (C) some type of work simply isn't suited to splitting up for parallel processing.

these aren't regular gpus.. for instance, the drive plugs into the gpu..
but i'm not quite sure crossfire/sli experience translates so well in this case.. i'd imagine the people who designed these things have much more crossfire/sli experience than you do


It's the same reason why a quad core isn't twice as fast as everything that a same-clock dual core is.
i'm not quite sure i understand what you're getting at..
if something can't be parallelized on a gpu then it's running on one gpu core at ~850mhz.. might as well run that process on a cpu i would think..
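A rough way to see that point: a non-parallelizable task runs on one core, and a single ~850 MHz GPU core loses badly to a CPU core. The clock figures below are ballpark assumptions (the 850 MHz comes from the post above; the Xeon figure is a rough guess), and the model ignores IPC, memory, and transfer overheads:

```python
# Back-of-envelope only: both clock figures are assumptions,
# not measured specs, and real performance depends on far
# more than clock speed.
GPU_CORE_HZ = 850e6   # one GPU shader core, ~850 MHz (as above)
CPU_CORE_HZ = 3.5e9   # one Xeon core with turbo, assumed


def serial_time(ops: float, hz: float) -> float:
    """Idealized time for a purely serial stream of ops."""
    return ops / hz


ops = 1e9
ratio = serial_time(ops, GPU_CORE_HZ) / serial_time(ops, CPU_CORE_HZ)
print(round(ratio, 1))  # the lone GPU core is ~4x slower here
```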


Maybe I'm missing something? I fully admit that I don't get it. But I feel like the counterpoints are very vague and optimistic about future possibilities. Seeing that the current state of multiple CPU and GPU setups rarely leads to linear improvements, I do not share your optimism.

right. i'm just more optimistic about it than some people here.. a pessimistic viewpoint is equally valid.. so some minor headbutting is sure to ensue
 
Low-end Macs will drive OpenCL

I think the key piece of the puzzle that folks are overlooking is that the new (current) crop of embedded Intel GPUs are now first-class citizens when it comes to OpenCL support. Developers (and Apple) need not do extra work just to target the nMP sliver of the market. They will be smart to use the GPU-offload option to squeeze performance from high-volume, LOW-END hardware. nMP users will get broad-based benefits from their fast GPUs as an unintended byproduct.
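To make the "write once, benefits everywhere" point concrete: a data-parallel kernel is the same code whichever device runs it, so targeting the integrated GPUs automatically benefits the FirePros. The sketch below is a pure-Python stand-in for an OpenCL dispatch; no real OpenCL API is being modeled, and the device names and toy kernel are illustrative:

```python
# The same data-parallel kernel runs unchanged whether the
# device is a low-end integrated GPU or a FirePro; only the
# device list differs.
def run_kernel(kernel, data, device):
    # A real OpenCL runtime would compile and enqueue the kernel
    # on the device; here we just map it to make the point. The
    # `device` argument is deliberately unused in this stand-in.
    return [kernel(x) for x in data]


brighten = lambda px: min(px + 16, 255)   # toy image-processing kernel

for device in ["Intel HD 4000", "FirePro D700"]:
    out = run_kernel(brighten, [0, 100, 250], device)
    print(device, out)   # same results, different speed in reality
```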

Of course a few Pro apps will be tuned for nMP use.
 
It's great to bring GPUs to bear on computational tasks, but you can do that whether you have one GPU or two. Having dual slow GPUs at the bottom end makes no sense to me over the same budget being used for one fast GPU, because the vast majority of use will be faster with the latter.


By forcing dual GPUs, you are preparing for something that may or may not be implemented, and even if it is, the already two-year-old GPUs that Apple has chosen will be obsolete. Meanwhile, you are also drastically slowing down all existing software that would have benefited from the same budget being applied to a faster single GPU (which is the overwhelming majority of software).

It's the same reason why a quad core isn't twice as fast as everything that a same-clock dual core is.

In fact, had Apple simply improved the power supply connections, a 2013 cheese grater could be dual, triple, or even quad-GPU, so if you're right about the multi-GPU configurations being the cat's pajamas some day, then the nMP is actually worse in that respect than an updated cheesegrater would have been.

I kinda agree with you and share your sentiments here. Another factor to consider is how Mavericks handles multiple-monitor workflows, since the new Mac Pro has two GPUs and will ship with Mavericks. Here is a video from a Mac Pro user with a six-monitor setup who is having problems with Mavericks: http://www.youtube.com/watch?v=Mi6AhogZCeg

With Mountain Lion, all six monitors are interconnected and you can stretch an app window onto another monitor. With Mavericks you cannot; your work area is constrained to the center monitor. We may have to wait and see how multi-monitor workflows go on the new Mac Pro and whether a fix is in the works, and what percentage of users will need two GPUs for a multiple-monitor setup at a higher price point.
 

look at 64bit.. are you glad, now, apple did it? for as ram hungry as people seem to be these days, i'm pretty sure the answer is 'yes'

it's not like the programmers were going to sit around coding 64bit apps when there wasn't hardware which could utilize it.. the computer had to happen first.

There's something to that. Now that Apple is pushing an OpenCL platform, I'm looking at getting the nMP and seeing what I can do with OpenCL-style parallelization. I've heard similar thoughts from others.
 
Unless driving lots of displays is your priority.

I'm guessing that the dual GPUs in the nMP are there primarily to support up to 7 displays and 6 TB ports. If you acknowledge that this may be a requirement for some users, dual GPUs make sense.

I agree.

Is 7 really the limit? Doesn't Thunderbolt 2 support the newer Mini DisplayPort spec that allows two-monitor daisy chaining per port?

i'm not quite sure i understand what you're getting at..
if something can't be parallelized on a gpu then it's running on one gpu core at ~850mhz.. might as well run that process on a cpu i would think..

If something can't be parallelized on a GPU, then the dual GPU budget would have been better spent on a single GPU that's much faster.

I mainly think the dual FirePro is dumb at the low end. If you had instead one fast GPU you'd have the same computational power, no worries about inability to parallelize a task, and an open slot to plop in another card later.

Even if I concede that slower parallel GPUs are the cat's pajamas compared to a single faster GPU at the same price point (which I don't), it still brings me back to the fact that a nMP with dual GPUs from Apple wouldn't be as great as an updated cheese grater with 4 GPUs from anyone you want. But I know that's a bit off topic from this thread's original question.
 
then it still brings me back to the fact that a nMP with dual GPUs from Apple wouldn't be as great as an updated cheesegrater with 4 GPUs from anyone you want. But I know that's a bit off topic from this thread's original question.

i get it.. you want the old mac pro with a spec bump.. you're wanting increased electricity usage while it's pretty clear apple wants less.. you want more space and apple wants less.. etcetc
you and them have different design ideals.. it's not so complicated

that aside, while you might have liked another round of spec bumping, i'm not too sure that would have been too healthy for the line as a whole.. it seems like they're doing what they're doing at the right time.
 
Apple is inconsistent. Since ages ago they kept telling us that GPUs aren't what we want. They shipped computers with deliberately crippled graphics subsystems. See all the people raging about the lack of proper graphics power in every generation of every Mac. They forced integrated GPUs on us (even the atrocious abomination that was Intel GMA). Deep inside, I understood where Apple was coming from. Very few people play games. Even fewer use that one application that uses GPU power.

Now they suddenly put it into reverse and offer a computer with 2 pro-grade and extremely expensive GPUs as standard. Further, those GPUs will be utilized by only a fraction of the people who will actually buy this very niche computer.

I don't know how Apple comes up with this stuff.
 
Now they suddenly put it into reverse and offer a computer with 2 pro-grade and extremely expensive GPUs as standard. Further, those GPUs will be utilized by only a fraction of the people who will actually buy this very niche computer.

Ok, this is what I didn't want this thread to turn into. This thread is NOT about applying traditional metrics to this machine; the premise is that something different is afoot here. I have said before that this premise (my premise, if you will) may very well be incorrect.

So, when you say those GPUs will be utilized by only a fraction of people, you are going right back to traditional thinking. IOW, that the two FirePros are there to drive pretty pictures on lots of monitors. While it's certainly true they will do that with reckless abandon, they could also be utilized to a great extent by EVERYONE (not a fraction of anything) if OS X decides to use them as the parallel-processing powerhouses they are. Further, that means they are used all day, every day, for whatever you do, not just when you're editing 4K video in software that's OpenCL-aware. That could be a very big deal.

I mean no bashing towards you with this response. It's just that I've been reading the same old (traditional) arguments against this machine on here for months. That's fine, and to be expected when that's what everyone is comfortable with, but I think innovation was at the core of this new machine. I'd like to hear some discussion about what other people are reading between the lines. The nMP vs oMP arguments have been beaten to death at this point.
 