Maybe the PROBLEM is that people don't like to have one or the other crammed down their throats. They want to choose.

"Yeah, I know you ordered Coke, but drink this Pepsi instead, it's made of the same stuff, now DRINK"

False analogy.

Your computer and its software are a tool.
It doesn't matter how the hardware or software gets the job done as long as it gets the job done. There is no difference in the job between OpenCL and CUDA.

OpenCL, CUDA, a million Chinese doing the calculations by hand. Who cares as long as it works.

Apple could get rid of GPU acceleration altogether for all I care, as long as the tool (the combination of hardware and software) is the best tool on the market to get the job done.

People have no idea what software developers have been working on with Apple.

Hell, the developer of DaVinci Resolve said version 10 SCREAMS on the new Mac Pro.
That is the only thing that matters.

CUDA vs OpenCL / Intel vs AMD / Who gives a rat's ass.
Apple could release an ARM-based Mac and, as long as the software kicks ass, who cares?

Coke and Pepsi on the other hand are two distinct end products.
 
MADI? I know MIDI, but MADI? :D

Latency with TB could potentially be an issue. I know nothing about computer audio, apart from making a computing device play noises using a programming language, and I have no idea what that card does, but it looks awesome. I actually want one now because it looks so awesome.

It's a MADI card (Multichannel Audio Digital Interface). It uses either SC fibre or coax cable for up to 1km cable runs, and each cable pair carries 128 channels (64 in / 64 out) at 24-bit / 48kHz (or 32/32 at 96kHz, etc). Big studios use them to get lots of simultaneous channels from A to B and then into computers or big mixing consoles. The cable run length means big Outside Broadcast trucks parked... ahem... outside a venue can run just two cables into the venue to connect to all the mics / PA rig / whatever. Astonishing technology, and all the more amazing when you consider it's been around since 1991.
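
If you want to sanity-check those channel counts against the link speed, here's a rough back-of-the-envelope sketch (my own numbers, using the 24-bit / 48kHz figures above; the ~125 Mbit/s MADI line rate is from the AES10 spec as I recall it):

[code]
#include <stdio.h>

/* Rough sanity check of the MADI figures quoted above (a hypothetical
   sketch, not lifted from any spec document). */
int main(void) {
    const int channels = 64;      /* one direction of a cable pair        */
    const int bits     = 24;      /* audio payload bits per sample        */
    const int rate_hz  = 48000;   /* 48kHz; channel count halves at 96kHz */

    double payload_mbps = (double)channels * bits * rate_hz / 1e6;
    printf("Audio payload: %.1f Mbit/s for %d channels\n",
           payload_mbps, channels);
    /* Prints ~73.7 Mbit/s, which fits comfortably inside MADI's
       nominal 125 Mbit/s line rate once framing overhead is added. */
    return 0;
}
[/code]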
 
Yep, the Magma 3-slot one has been around for a year or so, and one can but hope they'll do a TB2 version of their 5U 8-bay rack. It's an added cost to the capital outlay for the machine though, and one which could have been avoided by different design decisions. Like I said above, it's not the straw that breaks the camel's back, but an example of one design suiting part of the market but not all. To drop 5k on a machine is one thing, to have to add another 3k on "accessories" to make that machine work for you is another.

"One part of the market. " That's really the important bit. For me this is not a concern, as I am used to buyng accessories, but I know other people will not be happy with the need for an additional purchase to get their job done. I think this article is a pretty good summation of my thoughts about the new MP.

http://arstechnica.com/apple/2013/06/a-critical-look-at-the-new-mac-pro/

----------

It's a MADI card (Multichannel Audio Digital Interface). It uses either SC fibre or coax cable for up to 1km cable runs, and each cable pair carries 128 channels (64 in / 64 out) at 24-bit / 48kHz (or 32/32 at 96kHz, etc). Big studios use them to get lots of simultaneous channels from A to B and then into computers or big mixing consoles. The cable run length means big Outside Broadcast trucks parked... ahem... outside a venue can run just two cables into the venue to connect to all the mics / PA rig / whatever. Astonishing technology, and all the more amazing when you consider it's been around since 1991.

Thanks for the explanation. I just like the shiny lights and silicon. :D
 
I have a lot of criticisms of the new Mac Pro, but then, I am a skeptical person. I also work in corporate IT at a broadcast facility, so I understand that my reservations are far from typical and represent a unique workflow even for broadcast.

That all aside, I am confused by a few design choices:
Why is the power button on the back with all the cables?
Why no SD card slot?
Why no headphone jack in a more accessible location?
Ditto for a USB port.

I know I will buy one to replace my old Mac Mini. I just wish the form-over-function principle were tempered by some common-sense thinking about usage.
There is no optical drive, and not all file transfers are going to work over the network or via AirDrop. Real people use SD cards and thumb drives to sneakernet or snail-mail large files.
 
Thunderbolt- new security risk?

Amongst all this we should be praising progress, and all you sticks in the mud need to lead, follow, or get out of the way.

What glaring deficiency did this solve, what can you now do with this computer that you couldn't do with the old one?

TB in the same quantity could have been added to the old chassis.


Please take a look at this article concerning Thunderbolt security:

http://www.theregister.co.uk/2011/02/24/thunderbolt_mac_threat/

>A company gave employees laptops that were secured using all the latest technology, such as encrypted boot disks and disabled USB ports. Users weren't given admin privileges. But the Firewire ports were open. We connected a device to the Firewire port on a laptop, and broke in with administrator access. Once in, we grabbed the encrypted administrator password (the one the owner of the laptop didn't know). We cracked it using L0phtcrack. That password was the same for all notebooks handed out by the company, so we now could log onto anybody's notebook. Worse -- that administrator account was also on their servers, so we could simply log into their domain controllers using that account and take control of the entire enterprise.
 
Of course anyone developing for CUDA instead of OpenCL already made the choice that they were going to lock themselves out of any non-NVidia hardware.

This isn't some brand new conundrum Apple has suddenly created. This is always the choice that's existed and if you chose CUDA you made that choice long ago. Now suddenly people want to cry about it. Should have seen this train coming when Apple came out with OpenCL years ago and started pushing it because it wasn't tied to proprietary hardware.

If any developer locked themselves onto Nvidia, or any user locked themselves onto an app that has no plans to move to OpenCL, they shouldn't be surprised when they're more limited in which machines they can buy. What's next, demanding that Apple offer both Nvidia and AMD GPUs in MacBook Pros? How about PowerPC chips for everyone who still has PowerPC apps?

This is a post just reveling in spurious logic:

- This is a brand new conundrum Apple has suddenly created. Right now, and for quite some time, you could develop against CUDA or OpenCL using a Mac Pro. Or hell, if you really wanted to, you could have an ATI/AMD and an nVidia in the same machine. Now you can't as far as we can tell. How is that not brand new?

- I saw this train coming, but I had hoped Apple would maintain their flexibility. Because there are industries, including industries where Apple machines are heavily used, where they are not what drives purchasing decisions. I develop for CUDA because the folks who built the multi-million dollar cluster care not at all about what Apple is putting into their workstations. So yeah, I've seen it coming, but that doesn't mean I necessarily need to be happy at a loss of flexibility in the Mac Pro, and at Apple having spent a couple of years now slowly drawing a box around what they define as "Pro" that no longer includes me.

- A MacBook Pro isn't a workstation. And MacBook Pros, and the PowerBooks before them, have always had "Whatever GPU comes in the box" as a limitation. A week ago, this wasn't a problem in the Mac Pro. Now it is.

- The PowerPC analogy is flawed. First, Apple had Rosetta in their OS for quite some time, allowing legacy PowerPC code to continue to function. Second, there was a technical reason to migrate from the PowerPC platform - there really isn't a clear technical reason to reduce the number of GPU options.

The cylinder Mac Pro represents a loss of options and flexibility, and it happens to be in an area I care about. Enough that when it comes time to replace my current Mac Pro, or buy more machines for my lab, it's going to involve a considerable amount of thought and weighing of options, instead of just "Buy a Mac and be done".

----------

There is no difference in the job between OpenCL and CUDA.

There is if you're using code optimized for CUDA, developing against CUDA, or using specific libraries.
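
To make that concrete, here's a minimal sketch (mine, purely illustrative, not from any shipping codebase) of the same trivial vector-add on both stacks. The kernels look superficially similar, but every host-side call is different, which is exactly why code written and optimized against CUDA doesn't just recompile against OpenCL:

[code]
/* Hypothetical illustration: the same vector-add via OpenCL, with the
 * CUDA equivalent noted in comments. Build against the OpenCL SDK/framework. */
#include <stdio.h>
#ifdef __APPLE__
#include <OpenCL/opencl.h>
#else
#include <CL/cl.h>
#endif

/* OpenCL ships the kernel as source, compiled at runtime: */
static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    int i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";
/* The CUDA version is compiled offline by nvcc instead:
 *   __global__ void vadd(const float *a, const float *b, float *c) {
 *       int i = blockIdx.x * blockDim.x + threadIdx.x;
 *       c[i] = a[i] + b[i];
 *   }
 * launched as vadd<<<blocks, threads>>>(a, b, c), with cudaMalloc /
 * cudaMemcpy on the host side instead of cl_mem buffers. None of the
 * host code below carries over. */

int main(void) {
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, c[4];
    size_t n = 4;

    cl_platform_id plat; cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
    clSetKernelArg(k, 0, sizeof da, &da);
    clSetKernelArg(k, 1, sizeof db, &db);
    clSetKernelArg(k, 2, sizeof dc, &dc);

    clEnqueueNDRangeKernel(q, k, 1, NULL, &n, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);
    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);  /* 11 22 33 44 */
    return 0;
}
[/code]

And that's before you get to the optimized, vendor-specific libraries sitting on top of those APIs.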

Apple could release an ARM-based Mac and, as long as the software kicks ass, who cares?

Professionals using Intel's math libraries. Like me.

Just because you can't see a difference between the products doesn't mean they aren't there.
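
On the Intel math libraries point, here's a minimal sketch of what that dependency looks like in practice (my own toy example, assuming MKL's standard CBLAS interface). Code like this ties you to an x86 BLAS the same way CUDA code ties you to NVIDIA silicon:

[code]
/* Toy example: a 2x2 matrix multiply through Intel MKL's CBLAS
 * interface. Link against MKL; mkl.h is MKL's umbrella header. */
#include <stdio.h>
#include <mkl.h>

int main(void) {
    /* C = 1.0 * A * B + 0.0 * C, all row-major 2x2 */
    double A[4] = {1, 2, 3, 4};
    double B[4] = {5, 6, 7, 8};
    double C[4] = {0, 0, 0, 0};

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                2, 2, 2,      /* m, n, k       */
                1.0, A, 2,    /* alpha, A, lda */
                B, 2,         /*        B, ldb */
                0.0, C, 2);   /* beta,  C, ldc */

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 19 22 / 43 50 */
    return 0;
}
[/code]

Swapping the architecture means re-validating against a different BLAS, and for big numerical codebases that is not a weekend job.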
 
Please take a look at this article concerning Thunderbolt security:

http://www.theregister.co.uk/2011/02/24/thunderbolt_mac_threat/

>A company gave employees laptops that were secured using all the latest technology, such as encrypted boot disks and disabled USB ports. Users weren't given admin privileges. But the Firewire ports were open. We connected a device to the Firewire port on a laptop, and broke in with administrator access. Once in, we grabbed the encrypted administrator password (the one the owner of the laptop didn't know). We cracked it using L0phtcrack. That password was the same for all notebooks handed out by the company, so we now could log onto anybody's notebook. Worse -- that administrator account was also on their servers, so we could simply log into their domain controllers using that account and take control of the entire enterprise.
The article talks about FireWire. Please explain what that has to do with Thunderbolt.
 
This is a post just reveling in spurious logic:

- This is a brand new conundrum Apple has suddenly created. Right now, and for quite some time, you could develop against CUDA or OpenCL using a Mac Pro. Or hell, if you really wanted to, you could have an ATI/AMD and an nVidia in the same machine. Now you can't as far as we can tell. How is that not brand new?

- I saw this train coming, but I had hoped Apple would maintain their flexibility. Because there are industries, including industries where Apple machines are heavily used, where they are not what drives purchasing decisions. I develop for CUDA because the folks who built the multi-million dollar cluster care not at all about what Apple is putting into their workstations. So yeah, I've seen it coming, but that doesn't mean I necessarily need to be happy at a loss of flexibility in the Mac Pro, and at Apple having spent a couple of years now slowly drawing a box around what they define as "Pro" that no longer includes me.

Just curious. Why are Nvidia/CUDA required? Why not code to an open standard like OpenCL?

Second, there was a technical reason to migrate from the PowerPC platform - there really isn't a clear technical reason to reduce the number of GPU options.

What if there were good reasons to integrate the GPU, though?

The cylinder Mac Pro represents a loss of options and flexibility, and it happens to be in an area I care about. Enough that when it comes time to replace my current Mac Pro, or buy more machines for my lab, it's going to involve a considerable amount of thought and weighing of options, instead of just "Buy a Mac and be done".

----

There is if you're using code optimized for CUDA, developing against CUDA, or using specific libraries.

----

Professionals using Intel's math libraries. Like me.

Just because you can't see a difference between the products doesn't mean they aren't there.

I don't particularly care for the cylinder design. It is an odd hybrid: mini-tower sized, but with certain necessarily expensive workstation components. For that much money, I would have preferred a 3U/4U many-PCIe-3.0 design with a tower/rackmount option and beefy power supplies. But, apparently, there just are not that many folks who want that kind of flexibility. Well, there are, but they are all running Linux-based servers, not OS X applications.

My preference would have been an inexpensive mini-tower based on the Xeon E3 series, and an expensive E5-based multislot Xserve with a rackmount/tower option. But they didn't ask me.
 
Just curious. Why are Nvidia/CUDA required? Why not code to an open standard like OpenCL?

Honestly, for the same reason stuff is still programmed in FORTRAN: because a good many libraries, code, etc. were written in CUDA, and since I'm not actually paid to code (I'm paid for research, which happens to involve code) and don't have the CS chops of the people who originally wrote the libraries, going back and recoding it isn't really an option.

Basically "Because CUDA got there first, and OpenCL is just starting to be a viable alternative".

What if there were good reasons to integrate the GPU, though?

An integrated GPU I can see - what with the whole unified thermal core thing. But the PowerPC -> Intel switch was a platform switch. I can't see a compelling technical reason for an integrated AMD GPU vs. an nVidia GPU. If you forced me to go with one, I'd pick the nVidia (what with being able to run both OpenCL and CUDA). But I'd rather not be forced - I'd prefer to see two options. I'd prefer not to have a net loss of flexibility for dubious benefit.

Right now, best case, this requires a bespoke, For New Mac Pros Only nVidia GPU to be built by someone. As a very, very long time Mac user, I generally don't put much faith in "Oh, there will be plenty of peripherals for our proprietary design."
 
Just curious. Why are Nvidia/CUDA required? Why not code to an open standard like OpenCL?

Largely only because AMD didn't get their act together sooner. One, they had problems getting their OpenCL stack optimized. Two, they were slow as molasses to get ECC onto their Pro cards.

Either one of those will get you tossed from an HPC cluster. Both means you're not even on the consideration list.

The problem it created is that in the interim, folks built up libraries of what is now proprietary, one-vendor code. Now that they have highly tweaked and hand-optimized software, they are quite OK with vendor lock-in for now.

The other problem in HPC clusters, though, is that Intel is quite competitive with their Xeon Phi option. There is an ever larger base of code that is hooked to x86 cores. The Phi allows ports of that code without switching to either OpenCL or CUDA.


What if there were good reasons to integrate the GPU, though?

Not going to convince HPC folks that Thunderbolt is a good motivation.
That context already has higher speed interconnects.
 
Largely only because AMD didn't get their act together sooner. One, they had problems getting their OpenCL stack optimized. Two, they were slow as molasses to get ECC onto their Pro cards.

Either one of those will get you tossed from an HPC cluster. Both means you're not even on the consideration list.

The problem it created is that in the interim, folks built up libraries of what is now proprietary, one-vendor code. Now that they have highly tweaked and hand-optimized software, they are quite OK with vendor lock-in for now.

The other problem in HPC clusters, though, is that Intel is quite competitive with their Xeon Phi option. There is an ever larger base of code that is hooked to x86 cores. The Phi allows ports of that code without switching to either OpenCL or CUDA.

Not going to convince HPC folks that Thunderbolt is a good motivation.
That context already has higher speed interconnects.

Deconstruct nails it in one.

I'll be really curious to see where Xeon Phi goes, because as you said, the only folks who can swing a larger "Legacy Code" stick are the x86/Intel folks.
 
The article talks about FireWire. Please explain what that has to do with Thunderbolt.

Please read the complete article: FireWire is tunneled over the Thunderbolt connection. Only if DMA is switched off does the hacking procedure become much more complicated. To get access to main memory or the CPU, you can use a FireWire device (via a FireWire-to-Thunderbolt adapter) or you can use the Thunderbolt connector directly. There are many examples even here on MacRumors in the MacBook Pro threads.
 
my 2 cents....

Oook, let's go. About the Mac Pro itself: I am so excited I can hardly wait to see the full details (price, upgradable parts) AND TO BUY ONE. I see a LOT of INNOVATION and VERY GOOD SIGNS from Apple regarding pro users. First, the changes within OS X: OpenGL 4.1 and OpenCL! Yeaaaah baby, finally moves for pros, not just eye candy! I see huge benefits in the new Pro: the ultra-fast flash storage and, FINALLY, after YEARS, AMD PROFESSIONAL GRAPHICS CARDS for all budgets. All these huge positive changes, but somehow folks are still complaining; I am not sure I fully understand this. For those who want/need more power: try building render farms out of Mac Minis, or just buy several Mac Pros.

My only concerns at this point are price and upgradable parts (and what they will cost). But I need to say this: Apple had only two choices ahead of them. Either they killed the line, or they made it less upgradable so we buy more often; otherwise the project would be unprofitable. Just how many of us swap the graphics card and postpone a new machine? Let's face it: Apple needs to sell each of us a machine every two years or less to keep the line viable. My guess is we will not be able to upgrade much on the machine, but we will see an amazingly low starting price... think under 2K. I am so excited that I have started saving for one! Yuuuuupiiiii! :p
 
....
Basically "Because CUDA got there first, and OpenCL is just starting to be a viable alternative".

Technically, x86 got there first. There is an even bigger library of it. Getting Intel interested in hot-plug-capable Xeon Phi card integration drivers is probably going to be hard, though.


I can't see a compelling technical reason for an integrated AMD GPU vs. an nVidia GPU.

There are economic and technical reasons why parts go in. It would not be surprising at all if AMD gave Apple better pricing terms than Nvidia for pro graphics components and certification work. AMD didn't win the design "bake off" for any of the other Macs in 2012 (and probably not for 2013 either). AMD needs the business; they don't have as large an HPC business printing money for them.

In part this is of Nvidia's making also. If they had offered a better part (lower TDP) at a lower price, more than likely Apple would have taken it. If they didn't, then they would lose the design "bake off".

Most likely Apple, not either vendor, built the card, just like the other current embedded GPU designs. If you don't sell Apple the parts, they won't get used.

Right now, best case, this requires a bespoke, For New Mac Pros Only nVidia GPU to be built by someone. As a very, very long time Mac user, I generally don't put much faith in "Oh, there will be plenty of peripherals for our proprietary design."

There is no custom hardware needed. It is largely drivers that are the missing piece. A Thunderbolt driver for a Tesla card would solve the CUDA problem for those who can afford one. The external Thunderbolt device would only need to be able to handle the load, but would not technically be custom for that. Any Xeon Phi, FirePro, or Tesla card would work with drivers.

But yes, getting folks to write IOKit drivers for OS X is going to be a challenge. However, custom hardware is not the hurdle at all to doing local development that bubbles up to bigger boxes for production runs.
 
I am going to condense this all down into one poorly constructed analogy that somehow happens to hit the nail on the head.

The new Mac Pro is the extremely hot girl or guy who happens to be totally crazy. You know you'd like to get with them at least once to try that out, but you know the ramifications of doing so would just be way too costly.
 
Technically, x86 got there first. There is an even bigger library of it. Getting Intel interested in hot-plug-capable Xeon Phi card integration drivers is probably going to be hard, though.

Yeah, I was only speaking to OpenCL vs. CUDA. x86 got there before anyone else was aware there was a race.

There are economic and technical reasons why parts go in...snip the rest.

This was in response to the analogy of the switch from PowerPC, where switching opened up a ton of doors for Apple. This switch...doesn't really. There clearly are a ton of business-related reasons to go with AMD or nVidia over the other, but they're not as end-user-benefiting as the switch from PowerPC was.

There is no custom hardware needed. It is largely drivers that are the missing piece. A Thunderbolt driver for a Tesla card would solve the CUDA problem for those who can afford one. The external Thunderbolt device would only need to be able to handle the load, but would not technically be custom for that. Any Xeon Phi, FirePro, or Tesla card would work with drivers.

But yes, getting folks to write IOKit drivers for OS X is going to be a challenge. However, custom hardware is not the hurdle at all.

The drivers are the missing piece for an external solution - and a functional GPU Thunderbolt enclosure, but it sounds like those are on the way. I more meant bespoke, custom hardware for replacing your internal AMD card with an nVidia one: for folks who don't want an octopus-styled setup of a central node and multiple enclosures strung together in the event a mix-n-match setup causes driver issues; as a BTO option for those of us who don't want to pay for AMD cards we're not going to use; or to replace the FirePros a few years down the line when there's something better coming down the pipe.
 
Wow

So many anti-iCan statements here about the world's smallest xMac. Yep, it's really called the iCan.
 
I am going to condense this all down into one poorly constructed analogy that somehow happens to hit the nail on the head.

The new Mac Pro is the extremely hot girl or guy who happens to be totally crazy. You know you'd like to get with them at least once to try that out, but you know the ramifications of doing so would just be way too costly.

And my ideal workstation is a girl/guy who is flexible, runs hot and makes a little noise when things get going.
 
I'll be really curious to see where Xeon Phi goes, because as you said, the only folks who can swing a larger "Legacy Code" stick are the x86/Intel folks.

Those people happen to own most of the Top500 supercomputers, so they are a rather influential subset. On top of that, there are "big wins" about to show up on that list that are Phi-powered. And on top of that, Intel bought:

a. Cray's interconnect technology.
b. An InfiniBand implementer.
c. Already an "affordable" 10GbE implementation.

..... I wouldn't be ignoring them, at least in the "big boy" leagues. Intel is out to remove Nvidia from that list. They may not do it entirely, but they are going to throw a ton of money at it.
 
Those people happen to own most of the Top500 supercomputers, so they are a rather influential subset.

That was what I was getting at with some of my posts. There are spheres, including scientific computing, where no one cares what Apple is up to.

On top of that, there are "big wins" about to show up on that list that are Phi-powered. And on top of that, Intel bought:

a. Cray's interconnect technology.
b. An InfiniBand implementer.
c. Already an "affordable" 10GbE implementation.

..... I wouldn't be ignoring them, at least in the "big boy" leagues. Intel is out to remove Nvidia from that list. They may not do it entirely, but they are going to throw a ton of money at it.

Indeed. And don't get me wrong, I'd be as pleased as punch if there's enough re-penetration by Intel into the HPC space that I could forget all this AMD/nVidia silliness and go back to just writing x86 code. Though even that would mean I'd really like Phi drivers for the Mac, and I'm not holding out much hope for that.

As an Intel shareholder, I certainly wouldn't mind them eating some of nVidia's higher margin business either.
 
Please read the complete article: FireWire is tunneled over the Thunderbolt connection. Only if DMA is switched off does the hacking procedure become much more complicated. To get access to main memory or the CPU, you can use a FireWire device (via a FireWire-to-Thunderbolt adapter) or you can use the Thunderbolt connector directly. There are many examples even here on MacRumors in the MacBook Pro threads.

I am far too lazy to click. You should have quoted more of the article so that it made sense, or none at all.
 
Why are you in this community if you believe the company this community is based around has the single priority of taking as much money as possible from you?

http://en.wikipedia.org/wiki/Capitalism

I'm a part of the community because I own a Feb 2008 3,1 that I upgraded. It was a good computer for the price at the time, and it's still kicking.

I assume when I buy a PC next, I won't be coming back as much... maybe if I buy an Apple laptop.
 
Ignoring the number of memory slots, how much RAM would it have to support to be a "pro level workstation"? 64GB? Quite easily done now. 128GB might be possible.

As for the number of cores, it depends on what the system is being used for.

I suspect that the next iteration (next year?) might have a dual-CPU option.
I am only finding 8GB parts at the new RAM speed.
So 32GB for now, maybe 64GB by year's end. 128GB? That would require 32GB DIMMs. Good luck.
BTW, our Mac Pros at this facility are running 64GB of RAM, and we are thinking about going to a higher RAM capacity next fiscal year. We just bought these in Feb, so the new Mac Pro won't be an option until 2016 for us. Stupid mandatory 3-year refresh cycle.
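
For anyone who wants the slot math spelled out, here's a quick sketch (assuming the four DIMM slots in the new Mac Pro):

[code]
#include <stdio.h>

/* Back-of-envelope: max RAM = DIMM slots x largest available module. */
int main(void) {
    const int slots = 4;                  /* new Mac Pro DIMM slots            */
    const int dimm_gb[] = {8, 16, 32};    /* 8GB ships now; 16/32 are "maybe"s */

    for (int i = 0; i < 3; i++)
        printf("%2dGB DIMMs -> %3dGB max\n", dimm_gb[i], slots * dimm_gb[i]);
    /* 8GB -> 32GB (today), 16GB -> 64GB (maybe year's end), 32GB -> 128GB */
    return 0;
}
[/code]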
 
This was in response to the analogy of the switch from PowerPC, where switching opened up a ton of doors for Apple.

Money drove the PowerPC situation too. If Apple had paid IBM to make what they wanted, it would have been built. That Sony, Microsoft, and Nintendo did exactly that showed it was really a matter of putting your money on the table. Apple kept their money in their pocket.

It was cheaper and less risky (which typically boils down to money) to go the Intel route: feed off the common WinPC infrastructure market, probably with the awareness that the broader market would drag its heels for a long time on BIOS. Your competitors providing segmentation primarily to your benefit... gotta love that.

I more meant bespoke, custom hardware for replacing your internal AMD card with an nVidia one.

If Apple largely designed and built the card, that isn't likely to happen. It is in the same class as the current Mac Pro's daughtercard.

The current Mac Pro was a very different process: Apple evaluated reference designs, and there were boot issues to tackle. Sharing a heatsink with the rest of the Mac is not likely something Apple is going to let some 3rd-party vendor (with its own agenda) do.

as a BTO option for those of us who don't want to pay for AMD cards we're not going to use,

Proprietary software is a big driver of that. CUDA arrived first, but so did Adobe Flash. It is taking HTML5 a while, but the proprietary solution is being pushed out of the market over time.

This new Mac Pro design is one that Apple probably plans to live with over the next 5-7 years. Where things will be 2-3 years from now is a bigger driver of whether it is a good match or not. CUDA is losing ground. Inertia may keep it around a bit longer in some HPC subsets (among some others), but with all 3 of the largest GPU vendors backing OpenCL and only one vendor behind CUDA, the long-term prospects for CUDA aren't that good. It's up in the same category as "Android is never going to catch iPhone", "Windows on open hardware is never going to catch Mac OS on proprietary hardware", etc.

or to replace the FirePros a few years down the line when there's something better coming down the pipe.

It is a trade-off. I think that is why they are going to try to go "overkill" (relative to how well the software currently aligns with the design), so that it will take a while for a broader spectrum of the software to catch up and take advantage of it.

Some subset of the rapid GPU churn is about trying to brute-force bad (or at least unoptimized) software into going faster: throwing hardware at it to flog it along when it is doing distinctly suboptimal things.

They may not be able to sit on one box infrastructure for 5-8 years, but it won't be every 2 either.
 