...wake me up when something actually ships, or maybe even when we know all the configurations.

We already know the configurations. They are listed on the specification pages. We do not know their prices though.
 
.... I also have to ask, how do you plan to control which it uses? ....

Really shouldn't have to do much unless OS X resource management is mind-numbingly clueless. OS X already switches to the dGPU by default when you start throwing higher-load graphics calls at it. Unless the user has explicitly restricted GPU usage to the iGPU only, that load will split under the normal policies OS X has been following for a couple of years now.

Once the OS has a choice between a completely idle iGPU and a loaded-down dGPU ... if it is picking the dGPU as the destination for the OpenCL work, there is something mind-bogglingly broken in the resource allocation heuristics. Assigning work to a completely idle resource isn't rocket science.
The same is true if the graphics workload is very lightweight and everything gets assigned to the iGPU while the dGPU sits asleep doing nothing.

Perhaps the only simple toggle Apple would need to add is an extension of the setting that already lets users "enable"/"disable" the dGPU: an additional mode which sends it no graphics work but still allows more power to be used when there are computations to be done.

Similarly, if the OpenCL dispatch allocator doesn't take into account how loaded a GPU or CPU is with its "normal" workload... again, that is a pretty spectacularly piss-poor job of OS implementation. The OS is handing out the work to the resources; there is really no good excuse for it not to "know" how loaded down those resources are. Even something as simplistic as looking at which way the "high power / low power" GPU 'switch' is set (i.e., where the core graphics framework is sending graphics calls) is a simple, yet highly informative, clue as to which GPU is far more idle.

If Apple can't implement that..... one really should question whether one should be using this OS at all, because there is much deeper work the OS should be doing.
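
For what it's worth, an application doesn't even have to wait on the OS heuristics: OpenCL will hand back every GPU in the machine and let the code pick one explicitly. A minimal sketch (error handling omitted; the "prefer the last GPU enumerated" policy is purely illustrative, not a claim about how Apple routes work):

```c
// Minimal sketch: enumerate the OpenCL GPU devices OS X exposes and pick one
// explicitly instead of relying on the OS to choose.
// Build on OS X with: clang list_gpus.c -framework OpenCL -o list_gpus
#include <stdio.h>
#include <OpenCL/opencl.h>

int main(void) {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    cl_device_id gpus[4];
    cl_uint num_gpus = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 4, gpus, &num_gpus);

    for (cl_uint i = 0; i < num_gpus; i++) {
        char name[128];
        clGetDeviceInfo(gpus[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        printf("GPU %u: %s\n", i, name);  // e.g. the iGPU and dGPU on a retina MBP
    }

    // Illustrative policy only: prefer the last GPU enumerated (often, but not
    // always, the discrete one) and build a context on it for compute work.
    if (num_gpus > 0) {
        cl_device_id chosen = gpus[num_gpus - 1];
        cl_context ctx = clCreateContext(NULL, 1, &chosen, NULL, NULL, NULL);
        clReleaseContext(ctx);
    }
    return 0;
}
```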
 
does anybody else see the current implementation of a discrete gpu in a laptop as a stopgap solution?
personally, i'd rather have a single gpu in a laptop. makes more sense (to me)

maybe more of the point i'm trying to make is that trying to figure out how to best utilize/code two gpus in a laptop is sort of a waste of time.. because ultimately, i think apple is working towards using a single gpu even in the high end mbp..
 
Do you have something that will control switching for optimal OpenCL performance? I'm genuinely curious, because it would be cool if such a thing would work.

Grand Central Dispatch is the answer. The following link contains some background information, and it is very revealing as to why the nMP has one CPU socket and multiple GPUs.

Concurrency and Application Design
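
To make that concrete: with GCD you describe the work as blocks and let the system decide how many run at once based on the cores it has free, which is the same philosophy the nMP hardware leans on. A rough, self-contained sketch (process_chunk is a made-up stand-in for real work):

```c
// Rough GCD sketch: fan independent chunks of work out across all CPU cores.
// dispatch_apply submits the block for each index and waits for all to finish.
// Build on OS X with: clang gcd_demo.c -o gcd_demo
#include <stdio.h>
#include <dispatch/dispatch.h>

// Hypothetical per-chunk work function; stands in for a real kernel.
static void process_chunk(size_t i) {
    double x = 0.0;
    for (int k = 0; k < 1000000; k++)
        x += (double)k * (double)i;
    printf("chunk %zu done (%f)\n", i, x);
}

int main(void) {
    // A global concurrent queue; GCD sizes the thread pool to the machine.
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    // 16 independent chunks; how many run in parallel is up to the system.
    dispatch_apply(16, q, ^(size_t i) {
        process_chunk(i);
    });

    return 0;
}
```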
 
In my opinion the nMP is a joke. A bad joke. A Mac Mini with faster CPU/GPU. But this has all been said before.

I couldn't agree more!

Next to nothing can be upgraded on this 'Pro' computer. I'd much rather buy a dedicated Mac Mini to perform longer tasks, like rendering or compiling, and use my current laptop for other tasks.
 
I couldn't agree more!

Next to nothing can be upgraded on this 'Pro' computer. I'd much rather buy a dedicated Mac Mini to perform longer tasks, like rendering or compiling, and use my current laptop for other tasks.

Yes, because nothing screams pro like upgrading your computer and being your own tech support guy.
 

I'll skip over some of the rest but this one was a bit much....

For a real awakening, take a look at the Iris Pro OpenCL benchmarks. One reason that GPU is such a speed demon is it's not moving data over a PCI bus.

[Chart: LuxMark OpenCL results from the AnandTech Iris Pro 5200 review]

Even the 4000 series IGPU is faster than the dedicated stuff due to the lack of the PCI bus hit.

Conclusions without much fact behind them.

A. The charts on either side of this one don't have that line-up.

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested/17

LuxMark is a ray tracer.... what incoming data stream is supposed to be traversing the PCI-e bus here?


B. The other GPUs in question have several issues. They were selected to represent some base comparison points and not so much to measure impact of PCI-e data transfer times or latencies.

i. The GT 640 is old. It is a rebadged GT 545 with DDR3 (not GDDR5... but the substantively cheaper, slower stuff) and a core clock of about 720MHz.
That it drags behind the mobile 650M should be a clue that it is there as a historical reference point (e.g., what desktop readers may own now from a purchase a couple of years ago).

ii. The AMD GPUs in this test suite are iGPUs, and outside of this benchmark they slip and slide above and below the dGPU alternatives. This specific chart is cherry-picked only because it puts all of the iGPUs on top; none of the other graphs do this. As somewhat noted on the graph, the AMD CPU+GPU options are in a lower TDP envelope. The GPU clocks are capped in the 723-260 range.

(Aside: AMD does a remarkable bang-up job of kneecapping their current, modern discrete GPUs by not implementing PCI-e v3 in their CPUs/chipsets at this point, years after Intel moved their whole mobile/desktop lineup forward (minus the lingering rebadged older models). [Largely because they have failed to add a PCI-e controller onto the CPU die or package. The whole "extra" transistor budget is being thrown at iGPUs, and that is kneecapping their dGPUs.])



iii. While the i7 4950HQ base graphics clock speed is 200MHz, the top-end Turbo speed is 1.3GHz. As mentioned, the GT 640 tops out at around 700MHz. The 650M can run from 745-900MHz (guess which end of that range Apple is using in their "ultra thin" retina models?). The AMDs are in the same zone.

Sure, there is not a one-to-one mapping between clock speed and performance due to differences in microarchitecture, but the functional-unit pipelines are moving at speeds too different for clock to be a non-factor.
On a single multiply-add instruction the throughput is going to differ if you do a back-of-the-envelope estimate in which each architecture issues one per cycle: a pipeline at 1.3GHz issues roughly 1.8 times as many per second as one at 720MHz.
 
I am not against it. I just find the concept of attaching meaningless labels, based on one's own ill-defined criteria, laughable and banal.

The Mac Pro was always a great machine mainly because it wasn't a disposable product. Prosumers acknowledged that and continue to do so. Label or not, 'pro' does relate to customization being part of the Mac Pro line.
 
The Mac Pro was always a great machine mainly because it wasn't a disposable product. Prosumers acknowledged that and continue to do so. Label or not, 'pro' does relate to customization being part of the Mac Pro line.

I mentioned it in another thread, but the notion of it being disposable is nonsense.
 
GPU computing is the future as Intel is hitting a wall. I think Apple knows exactly what they are doing, and they want to push developers to write their code to use the GPU power that is available. Yes, it sucks that there is only one CPU NOW, when developers haven't rewritten their code to use that power, but once they do, this conversation will be rewritten as a genius move by Apple.
There is not much going on from Intel, and nothing in the near future, whereas GPUs are gaining a lot of power in comparison. We just have to suffer for a while until the software catches up.
 
GPU computing is the future as Intel is hitting a wall. I think Apple knows exactly what they are doing, and they want to push developers to write their code to use the GPU power that is available. Yes, it sucks that there is only one CPU NOW, when developers haven't rewritten their code to use that power, but once they do, this conversation will be rewritten as a genius move by Apple.
There is not much going on from Intel, and nothing in the near future, whereas GPUs are gaining a lot of power in comparison. We just have to suffer for a while until the software catches up.
Intel themselves are pushing people into OpenCL.
 
I mentioned it in another thread, but the notion of it being disposable is nonsense.

I saw what you wrote in the other thread and agree with it.. (or, I took you as saying barely anyone upgrades GPUs anyway, which I agree with)

what happens though is that line of thought contradicts the argument which says apple won't allow user upgrades in order to make more money by forcing complete computer buys..
it makes more sense for apple to think "we'll continue to cater to the 95% of people who won't upgrade GPUs, and for the 5% who do, we'll happily sell them the updated cards we have lying around"
instead of "those 5% sure are causing us to lose money".. because they're not making apple lose money-- they're giving apple more money.
as in-- if the GPUs aren't upgradable, that 5% of people won't buy the initial computer in the first place-- much less completely replace it every 3 years.
 
I saw what you wrote in the other thread and agree with it.. (or, I took you as saying barely anyone upgrades GPUs anyway, which I agree with)

what happens though is that line of thought contradicts the argument which says apple won't allow user upgrades in order to make more money by forcing complete computer buys..
it makes more sense for apple to think "we'll continue to cater to the 95% of people who won't upgrade GPUs, and for the 5% who do, we'll happily sell them the updated cards we have lying around"
instead of "those 5% sure are causing us to lose money".. because they're not making apple lose money-- they're giving apple more money.
as in-- if the GPUs aren't upgradable, that 5% of people won't buy the initial computer in the first place-- much less completely replace it every 3 years.

That's where we disagree. I don't see Apple catering to that hypothetical 5%. No one expects 100% customer retention, and that's even more true than ever with such a radical new design. If they were that worried about pleasing everyone then we'd see options like dual CPUs, Nvidia, etc.
 
That's where we disagree. I don't see Apple catering to that hypothetical 5%. No one expects 100% customer retention, and that's even more true than ever with such a radical new design. If they were that worried about pleasing everyone then we'd see options like dual CPUs, Nvidia, etc.

hmm. yeah, I definitely agree with what you're saying.. I think it's impossible to cater to 100% no matter what type of product we're talking about (though I do feel power users are of higher concern to apple than you're suggesting-- but that's irrelevant to what I was trying to say)

the actual point I'm arguing against is the one about apple locking it down in order to force more sales.

if they're completely ignoring the desires of that 5% of users and letting them go their separate way.. then sure, that's entirely possible and I'm not saying they haven't done that.

but once those people are gone-- who does the idea of "forcing to buy every 3 years by not allowing upgrades" apply to?
the people who care about doing that aren't even their customers at that point.

do you see what I'm getting at?
 
if it is picking the dGPU as the destination for the OpenCL work, there is something mind-bogglingly broken in the resource allocation heuristics. Assigning work to a completely idle resource isn't rocket science.

Getting there

Ars Technica Mavericks review said:
Modern Macs with integrated GPUs get some nice improvements in Mavericks. Any Mac with Intel’s HD4000 graphics or better can now run OpenCL on the integrated GPU in addition to the CPU and any discrete GPU. (Core Image now uses OpenCL in Mavericks, though the old GLSL implementation remains for backward-compatibility with existing Image Units.)
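
Easy enough to check on a given machine; a few lines of OpenCL will list everything Mavericks is willing to hand compute work to (the 8-device cap is just for this sketch):

```c
// Quick check of what shows up as an OpenCL device under Mavericks: on a
// machine with an HD 4000/Iris plus a discrete GPU this should list the CPU
// and both GPUs.
// Build with: clang cl_devices.c -framework OpenCL -o cl_devices
#include <stdio.h>
#include <OpenCL/opencl.h>

int main(void) {
    cl_platform_id platform;
    clGetPlatformIDs(1, &platform, NULL);

    cl_device_id devs[8];
    cl_uint n = 0;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devs, &n);

    for (cl_uint i = 0; i < n; i++) {
        char name[128];
        cl_device_type type;
        cl_uint units;
        clGetDeviceInfo(devs[i], CL_DEVICE_NAME, sizeof(name), name, NULL);
        clGetDeviceInfo(devs[i], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
        clGetDeviceInfo(devs[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                        sizeof(units), &units, NULL);
        printf("%s: %s (%u compute units)\n",
               (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU", name, units);
    }
    return 0;
}
```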
 
We already know the configurations. They are listed on the specification pages. We do not know their prices though.
Maybe you're prescient, but I only see 2 dead links on the Apple Site.

----------

Yes, because nothing screams pro like upgrading your computer and being your own tech support guy.
If you want something done right...
 
Maybe you're prescient, but I only see 2 dead links on the Apple Site...


Base config - Quad-Core and Dual GPU

  • 3.7GHz Quad-Core Intel Xeon E5 processor
  • 12GB 1866MHz DDR3 ECC memory
  • Dual AMD FirePro D300 with 2GB GDDR5 VRAM each
  • 256GB PCIe-based flash storage

Upgrades for quad core:

  • CPU: Configurable to 3.5GHz 6-core processor with 12MB L3 cache, 3.0GHz 8-core processor with 25MB L3 cache, or 2.7GHz 12-core processor with 30MB L3 cache
  • Memory: Configurable to 16GB (four 4GB), 32GB (four 8GB) or 64GB (four 16GB)
  • GPU: Configurable to dual AMD FirePro D500, each with 3GB GDDR5 VRAM, 1526 stream processors, 384-bit-wide memory bus, 240GB/s memory bandwidth, and 2.2 teraflops performance; or dual AMD FirePro D700, each with 6GB of GDDR5 VRAM, 2048 stream processors, 384-bit-wide memory bus, 264GB/s memory bandwidth, and 3.5 teraflops performance
  • Storage: Configurable to 512GB or 1TB

Hex Core Config - 6-Core and Dual GPU

  • 3.5GHz 6-Core Intel Xeon E5 processor
  • 16GB 1866MHz DDR3 ECC memory
  • Dual AMD FirePro D500 with 3GB GDDR5 VRAM each
  • 256GB PCIe-based flash storage

Upgrades for hex core:

  • CPU: Configurable to 3.0GHz 8-core processor with 25MB L3 cache or 2.7GHz 12-core processor with 30MB L3 cache
  • Memory: Configurable to 32GB (four 8GB) or 64GB (four 16GB)
  • GPU: Configurable to dual AMD FirePro D700, each with 6GB of GDDR5 VRAM, 2048 stream processors, 384-bit-wide memory bus, 264GB/s memory bandwidth, and 3.5 teraflops performance
  • Storage: Configurable to 512GB or 1TB



We know the base prices and we know all of the configurations available. We do not know the prices of the upgraded configurations, as I said.

All of the information is here:
http://www.apple.com/mac-pro/specs/ (upgrade and base spec info)

and

http://store.apple.com/us/buy-mac/mac-pro (base spec configs and price)
 
Base config - Quad-Core and Dual GPU
Hex Core Config - 6-Core and Dual GPU

Given Intel pricing we can ballpark CPU upgrade costs too

  • Hex core: +$500 minimum upgrade cost, probably more like $700
  • Octo core: +$1500 minimum upgrade cost
  • 12 core: +$2500 minimum upgrade cost

Also knowing that you get the following for an extra $1000
  • 4 GB RAM
  • Quad-Hex boost
  • D300-D500 boost

Figure that you are getting some kind of discount for this package (going from the base config to the upgraded one), so given the CPU pricing you can figure the standalone D500 upgrade will cost another $500-$700.

No idea what a Flash upgrade will cost, but I'll throw out another $500-$700 for an added 256 GB.
 