Distributed computing, whether admitted or not. Maybe not specifically in OpenCL, but at this point OpenCL is the most open way to implement this on GPUs, which are only going to get more and more important in heavy-duty work. Apple can't choose to "drop" an entire model that the industry is moving towards. Might as well say they'll drop support for keyboards and mice, although even that would make more sense. Fact is, it's just not going to happen. This is the future, this is how competing is going to get done in the future.

Sounds like you're not aware that you can daisy-chain with TB? You sure as hell don't have to have all these spinning disks attached to their own TB port, would definitely save you some ports.



Apple can't drop? - Please.

Apple most certainly can and has dropped software and hardware in the past. Just a few years ago, Apple assured developers at WWDC that they were gung-ho on 64-bit Carbon. 365 days later, they informed those developers: oh, one more thing, all that work you did porting your 32-bit Carbon apps to 64-bit Carbon? Well, we finally got up off our asses & made the Cocoa libraries useful, so we flushed development and support for 64-bit Carbon. Sorry you wasted a year of coding.

When the vendors that make the software I depend on look at OpenCL & say "thanks, but we'll pass", it doesn't matter how much Apple champions it; they aren't going to use it if it doesn't add value to their products. Even if they changed their minds & started a changeover today, I would be looking at 2-4 years down the road before I would depend on it (version 2). I try not to use version 1 of software rewrites; I am content to let others be the guinea pigs. And none of that addresses the fact that Nvidia & CUDA are still king today.

Yeah, I could daisy-chain TB - which would fill that pipe up pretty fast, don't you think? There is a reason that TB speeds are listed in Gb as opposed to MB.

As far as my spinning disks, nothing amuses me more than people who don't know how much data I have telling me how to set up my system.

I have one RAID 5 for backup (6 TB for the moment, moving to 12 TB shortly) and two two-disk RAID 0 sets (1x4TB, 1x2TB) for iTunes & data, respectively. My goal is to consolidate & retire 15 or so hard drives (640 GB - 2 TB) down to 10 drives (8x4TB & my 2x240GB SSDs).

About a decade ago, P.T. Barnum, I mean Steve Jobs, was pushing "Make your computer the hub of your digital lifestyle." For me, that still works better than depending on iCloud & Time Warner Cable. I am old enough to remember the mainframe & I am not going back.
 
Yeah, I could daisy-chain TB - which would fill that pipe up pretty fast, don't you think? There is a reason that TB speeds are listed in Gb as opposed to MB.

Yeah, funny that people are still regurgitating the "Daisy-Chain up to 1 Million Devices on Each Port" stuff.

I can take a SINGLE (1) SM951 drive, put it in a TB enclosure and BINGO, the poor thing is throttled from the get-go by the lack of bandwidth in TB2. No RAIDs, no monitors, no other devices at all, and the entire TB controller is already past its "maxed out" state, throttling the drive.

I can put the same drive in a cMP and get more bandwidth from a x4 PCIe 2.0 slot.

So much for that TB myth.
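For anyone who wants numbers behind that claim, here's a quick back-of-envelope comparison. The throughput figures are rough spec-sheet assumptions, not measurements:

```python
# Rough spec-sheet numbers (assumptions, not measurements), in GB/s.
TB2_RAW = 20 / 8              # Thunderbolt 2: 20 Gbit/s -> 2.5 GB/s before overhead
TB2_USABLE = 1.4              # typical usable storage throughput after protocol overhead
PCIE2_X4 = 4 * 0.5            # PCIe 2.0: ~500 MB/s per lane -> 2.0 GB/s for a x4 slot
SM951_READ = 2.15             # approx. rated sequential read of an SM951

print(f"TB2 usable  ~{TB2_USABLE} GB/s vs SM951 read {SM951_READ} GB/s")
print(f"cMP x4 PCIe 2.0 slot: {PCIE2_X4} GB/s")

# The drive's rated read alone exceeds what a TB2 enclosure can deliver,
# while a x4 PCIe 2.0 slot in a cMP comes much closer to feeding it.
```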
 
what software is that?

I do 3D art. Poser Pro 2014 and Vue are my major go-to tools. Their render engines are CPU-based.

For Vue, I just load up my RenderCows on my render farm (a 1,1 Mac Pro & a 4-core Dell 690) and away I go. I don't have any available slots in my flashed 4,1 for additional GPUs, so upping the GPUs doesn't make any sense at this time.

In Poser, I can use LuxRender via the Reality Plug-in and feed it to my renderfarm. With LuxRender, the developers refer to their OpenCL version as "a work in progress", which is why I ignore it & stay with the CPU version.

For me, it is more cost-effective than tossing everything out & starting over. Not to mention the fact that just because Apple supports something today, it doesn't mean that they have an undying love for the technology. They could change direction in a heartbeat & then I would be stuck in yet another cul-de-sac.
 
Yeah, funny that people are still regurgitating the "Daisy-Chain up to 1 Million Devices on Each Port" stuff.

I can take a SINGLE (1) SM951 drive, put it in a TB enclosure and BINGO, the poor thing is throttled from the get-go by the lack of bandwidth in TB2. No RAIDs, no monitors, no other devices at all, and the entire TB controller is already past its "maxed out" state, throttling the drive.

I can put the same drive in a cMP and get more bandwidth from a x4 PCIe 2.0 slot.

So much for that TB myth.

I'm not doubting that multiple devices daisy-chained together might have bandwidth issues, but not everything may be in use at the same time, or even powered on.
 
Not sure why it matters,


i was curious. surely that's grounds enough to ask what the software is on a forum like this, right?

since current Nvidia GPUs are faster at OpenCL than ATI GPUs.

nvidia is a member/promoter of openCL and the khronos group.. it's not like openCL just happens to work well on nvidia gpus.. nvidia makes that happen.

thing is, cuda only runs on gpu.. the early adopters of cuda that now have gpgpu programs (not just 'gpu accelerated') are computing on the gpus alone.. it's apparently very difficult and possibly ineffective to have gpgpu code running alongside the more traditional cpu based code..

openCL on the other hand runs on gpus and cpus.. i could show you some previews of rendering apps running openCL but they aren't publicly available yet so i'm nearly positive you'd just revert to your go-to vaporware argument as a means of dismissal.. i'll just wait til it's public to show you.. but these devs, who are unarguably way smarter than you or i at this stuff, and who have in the past said openCL was too immature and finicky to work with, are now saying it's coming of age and surpassing what's possible with cuda.

but basically, openCL allows for the whole kit to be used.. not just the gpu.. not just cpu.. not cpu with a little boost from gpu.. but all available cores of any flavor to max out

If the vendor passed on OpenCL - it's at least a mismatch between OpenCL and the product.
really? that's the only conclusion you can come up with if a developer says they'll pass on openCL?

----------

I do 3D art. Poser Pro 2014 and Vue are my major go-to tools. Their render engines are CPU-based.

i'm somewhat familiar with both.. that said, i guess i misinterpreted your paragraph which started with the devs saying they'll pass on openCL then finishing with "cuda is king"..

i read it as if you were quoting CUDA developers saying openCL is not worthy when in reality, the developers aren't using either.

from what i can tell, the early adopters of CUDA are definitely interested in openCL and certainly see the advantages of it over cuda..
 
i was curious. surely that's grounds enough to ask what the software is on a forum like this, right?

I'm sorry that you took that as a slam....


thing is, cuda only runs on gpu.. the early adopters of cuda that now have gpgpu programs (not just 'gpu accelerated') are computing on the gpus alone.. it's apparently very difficult and possibly ineffective to have gpgpu code running alongside the more traditional cpu based code..

Having spent a week at the GPUtech conference earlier this month, it's really pointless to imagine that there's an advantage to generic code that runs on both CPU and GPU.

CPU-code is a complete failure for the tasks that CUDA accelerates. Literally "CPU-months" vs "GPU-minutes". (I'm bringing a system with three TitanX cards and 768 GiB of RAM up right now.)

The argument that "the same code runs on CPU and GPU" is irrelevant when the speed difference is many orders of magnitude. Do you want your results "tomorrow" or "before your grandkids graduate from college"?
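To put "CPU-months vs GPU-minutes" in perspective, the implied speedup is easy to sketch. The specific figures below (two months, ten minutes) are illustrative assumptions, not benchmarks:

```python
# Illustrative only: two CPU-months vs ten GPU-minutes.
cpu_seconds = 2 * 30 * 24 * 3600   # roughly two months of CPU time, in seconds
gpu_seconds = 10 * 60              # ten minutes on the GPU, in seconds

speedup = cpu_seconds / gpu_seconds
print(f"implied speedup: ~{speedup:,.0f}x")   # ~8,640x, i.e. roughly 4 orders of magnitude
```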
 
I'm sorry that you took that as a slam....




Having spent a week at the GPUtech conference earlier this month, it's really pointless to imagine that there's an advantage to generic code that runs on both CPU and GPU.

CPU-code is a complete failure for the tasks that CUDA accelerates. Literally "CPU-months" vs "GPU-minutes". (I'm bringing a system with three TitanX cards and 768 GiB of RAM up right now.)

The argument that "the same code runs on CPU and GPU" is irrelevant when the speed difference is many orders of magnitude. Do you want your results "tomorrow" or "before your grandkids graduate from college"?

heh.. i already get my (conversation applicable) results tomorrow. i'd rather get them in 10 minutes..

i don't really care how it happens be it nvidia or amd.. currently, my renderer is gpu accelerated via cuda and yes, it's advantageous but only under certain circumstances/scenes/lighting.. those same developers have now switched full on to openCL and are showing 10x speed increases on modest hardware.. promising more as well as saying it scales accordingly (with better gpus or more gpus)..

idk, we'll see.. i'm pretty sure i'll make some sort of 'see, i told you guys :p' type of post when the release happens.
:D

---
but really.. openCL vs CUDA is sort of a lame argument anyway.. if one is better than the other then so be it.. from what i gather, they both have advantages over the other.. but still, making the argument as if one tech bests the other is seriously selling the real heroes short.. that being the developers.. give a crappy dev cuda or openCL and so what.. the app still sucks anyway. ;)

----
[EDIT]

well, for a while at least, i won't care if it's openCL or CUDA which opens the doors for leaps&bounds render times.. i have a 4GB 780M which should give it to me either way.. down the road though, i will care as i'm still in the market for a nmp.. just on hold for a bit to see how this all plays out.. i don't see myself switching to windows so with the current config of mac pro, i'm obviously rooting for openCL to pull through.
 
I surrendered and just bought a maxed-out iMac Retina. I hope it's a decision I don't regret. :eek:

as long as your bread&butter software doesn't make your computer do this all the time:

[attached image: maxcpu.png]


..i honestly don't think you'll regret the decision.. you'll probably love the thing.. it's very fast and real sweet to look at.

(also assuming you don't get a lemon etc.)
 
Apple can't drop? - Please.

As far as my spinning disks, nothing amuses me more than people who don't know how much data I have telling me how to set up my system.

I have one RAID 5 for backup (6 TB for the moment, moving to 12 TB shortly) and two two-disk RAID 0 sets (1x4TB, 1x2TB) for iTunes & data, respectively. My goal is to consolidate & retire 15 or so hard drives (640 GB - 2 TB) down to 10 drives (8x4TB & my 2x240GB SSDs).

About a decade ago, P.T. Barnum,…..

I would not suggest RAID 5, especially for 12 TB. Why? As many have found out, when a disk fails and a replacement is inserted, all the bits in the 12 TB array must be read without error to regenerate parity, or the array is toast. Look up the unrecoverable-read-error rates of modern disks and compare that to how many bits are in 12 TB. Use RAID 6 at least. I use ZFS with two parity disks, sort of like RAID 6.
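For the curious, here is a back-of-envelope sketch of that rebuild risk. The 1-error-per-1e14-bits figure is an assumption based on common consumer-drive URE ratings (enterprise drives are often rated 1e15), and it treats each bit as an independent trial:

```python
import math

# Probability of at least one unrecoverable read error (URE) while
# reading back a full 12 TB array during a RAID 5 rebuild.
ure_rate = 1e-14                  # assumed errors per bit read (consumer-drive spec)
array_bytes = 12e12               # 12 TB to re-read during the rebuild
bits_read = array_bytes * 8       # 9.6e13 bits

# Treat each bit as an independent Bernoulli trial; use log1p for accuracy.
p_clean = math.exp(bits_read * math.log1p(-ure_rate))   # chance of a flawless rebuild
p_fail = 1 - p_clean
print(f"P(at least one URE during rebuild) ~ {p_fail:.0%}")   # prints "~ 62%"
```

In other words, under these assumptions a 12 TB RAID 5 rebuild is more likely than not to hit a URE, which is the argument for RAID 6 / dual parity.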
 
nMP needs a second CPU and at least one more physical internal drive before I consider it viable for anything useful.
 
it's not even useful as a trash can anymore?
tough crowd


(the drive will probably happen.. the cpu won't happen. it can't.)

They need to abandon the design. Classic case of form over function, this one. I don't care how big it is or what it looks like. A Mac Pro is supposed to be a workhorse that sits under a desk or in a rack, server room, or closet.

Apple no longer makes the machine I need, and that makes me sad...
 
They need to abandon the design. Classic case of form over function, this one. I don't care how big it is or what it looks like. A Mac Pro is supposed to be a workhorse that sits under a desk or in a rack, server room, or closet.

Apple no longer makes the machine I need, and that makes me sad...

you don't care what it looks like but you care about where it sits?
 
I have little confidence that Apple will rise to the occasion and offer something truly "great".

But they will have a watch to sell you.

Sad.

Playing that fun fanboy Apple game has turned out to be an activity for losers and people out of touch with reality.

Apple is no longer a great computer company though they still make some really nice ones. Other than OS X I can imagine nothing that a Mac Pro brings to the table for me that cannot be had in an appropriate Dell or HP workstation.

To be honest, most of the things that we owners of Mac Pro towers are talking about and fretting over on this forum do not exist at all for Dell or HP workstation owners. USB 3, workable high-quality GPUs? Good heavens!

Sure, we love our Mac Pros. I would even consider getting another one, but it's a dead-end alley we are traveling down. If I had the $3,500 I spent on my first Mac Pro, a CTO 2010, back in my hand, I would have an HP configured to work far into the future sitting on my desk within the shipping time plus a couple of days.

Consider the thread "What is the state of USB 3.0 on Mac Pro?" Are you kidding me? The days of excitement are long gone.
 
They need to abandon the design. Classic case of form over function, this one. I don't care how big it is or what it looks like. A Mac Pro is supposed to be a workhorse that sits under a desk or in a rack, server room, or closet.

Apple no longer makes the machine I need, and that makes me sad...

You would think that Sir Idiot Boy would have learned from the Cube.
 
They need to abandon the design. Classic case of form over function, this one. I don't care how big it is or what it looks like. A Mac Pro is supposed to be a workhorse that sits under a desk or in a rack, server room, or closet.

Apple no longer makes the machine I need, and that makes me sad...


Spot on! +1

But I still keep the door of the cabinet open to be able to take a glance at my cMP 5,1 when I walk into my study. As a Dutch resident, why not show off my beast of a cheese grater? I eat a lot of cheese ;-)
 
I have a nMP: D700s, 8-core, 64 GB. I do a lot of 4K (UHD) video using FCPX, underwater and wildlife photography (PS and LR), and a ton of scientific computing (C codes and IDL). My photographer wife has a loaded Retina iMac. Her machine wouldn't even come close to satisfying my needs.

Having used both her 5K monitor and my own 4K UHD monitor, I can't see any practical value in a 5K external monitor, nor would I have one even if the nMP could drive it. She has a second monitor (2560x1440) for her iMac, but she never runs her 5K monitor at anywhere near full res. I also have a (2560x1440) monitor which I use to display UIs, web pages, etc., and I always run my other UHD monitor at full res.

IMHO, 5K is way overhyped. I will probably buy the next nMP, but couldn't care less about 5K capability. Faster CPUs... hell yes! I would prefer the classic case as well, but I don't think they will ever go back. I hate all the daisy-chained TB peripherals.
 
The answer is no. $5,000 for my Mac cylinder is enough. I'm all tapped out.
 
I can imagine that a triangle is a more perfect shape than a square, but couldn't a square be comparably efficient?

In terms of cost effectiveness? No. Roughly the same dissipation capacity, yes, but that ignores some component-placement problems.

The triangle allows a series of non-overlapping "internal" fins that each touch all the other sides. This allows both high surface area (number of fins with sustained airflow) and maximum thermal-reservoir transfer between sides (heat sources).

A square requires touching three other sides for cross transfer. If you want to avoid overlap, you can't fill all of the internal volume with fins. A square formed by two triangles, with one edge of each touching, would max out the area but screws up the CPU placement, which has distance-coupling constraints.

CPU heat sinks are rectangular largely because what they individually sit on is rectangular, and the air moves across them, not up and through the internals.

In this context, the triangle isn't the perfect shape because of some mystical (or "it looks hip") objective. If you only had one or two major thermal sources, it wouldn't fit. If you have more than three, you probably don't want just one chimney either (given the low-noise constraint).
 
Finally

Sam,

I upgraded from a 2009 MP to the nMP - it was an amazing difference. I initially got caught up in the 4K wave, but to date I'm still utilizing my Apple Cinema Display 30. When the time comes, I'll upgrade the monitor.

If the primary purpose is for photography, I don't think there's anything worth waiting for - as in next, next or next Gen.

Avoid the rumors and don't concern yourself with what "may" be coming. Best of luck on your decision.
 