I recently got a research grant of $40k to buy a personal (super)computer, and am anxious about the possibility of a Mac Pro update. I would prefer a Mac, but right now there are no options for memory upgrades above 64GB or processor upgrades beyond 12 cores.
I know that the Mac Pro will likely be upgraded or killed in the relatively near future. Assuming it is upgraded:
- Does anyone know how likely it is that an upgraded Mac Pro would be expandable to 128GB of memory or more?
- Are there any hardware fixes to expand the current-generation Mac Pros beyond 64GB of memory?
- Also, does anyone know whether Mac Pros (current, or possible upgrades thereof) are compatible with NVIDIA's Tesla GPUs?
- And finally, I'd appreciate any advice on how long I should wait for an upgrade before investing.
Thanks!
A couple of thoughts from a research-science grad student who spends too much time musing about what he'd buy with infinite money.
1. Don't try to sink all of that money into a single computer. Just don't.
2. The current Mac Pros aren't compatible with the Tesla GPUs. If you're interested in using CUDA, there are other CUDA-capable cards available for the Mac Pro: the Quadro 4000 has a Mac version, there's a newer one as well, and the odd GTX card works too. See here:
https://forums.macrumors.com/threads/1323132/ . Alternatively, the ATI cards that ship in the current generation of Mac Pros are capable OpenCL cards.
3. With 40 grand, you're starting to get into smallish-cluster territory. I'd do the following:
3a. Buy yourself a nice workstation/client/code-mockup machine: a Mac Pro (or an MBP if you spend lots of time traveling) with plenty of memory, a nice screen or two (I suggest the Dell UltraSharp 23-inch), plenty of storage, a quality mouse and keyboard, etc. Make this machine nice, but don't try to turn it into a tens-of-thousands-of-dollars powerhouse.
3b. If you're at a university, talk to your university's high-performance computing people. Many will let you buy "condo" nodes: essentially, you pay to put nodes into the main cluster and get priority on those nodes, while they manage them, deal with software and hardware, etc. Heck, at my university, folks who do that even get higher-priority access to other nodes. That's a nice, stress-free way to give yourself computing power.
3c. If you don't want to do that, build yourself a small, modest cluster: maybe four nodes, a nice shared storage system, and (hopefully) the space to store it all. If you need more power than that, it's a good place to run your code as a testbed before using something like Amazon EC2 to build a much larger on-demand cluster for running a job.
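To make the "testbed" idea concrete: a lot of scientific workloads are embarrassingly parallel (independent parameter sets, Monte Carlo runs, etc.), and you can prototype that pattern on a single machine before farming it out to cluster nodes or EC2 instances. A toy sketch in Python, where the `simulate` function is a made-up stand-in for your real per-task code:

```python
# Toy sketch: prototype an embarrassingly parallel job locally before
# scaling it out. "simulate" is a hypothetical stand-in for one
# independent unit of real work (one parameter set, one MC chain, ...).
from multiprocessing import Pool

def simulate(seed):
    # Stand-in for an expensive, independent computation.
    total = 0
    for i in range(1, 1000):
        total += (seed * i) % 7
    return seed, total

if __name__ == "__main__":
    tasks = range(8)  # e.g. 8 parameter sets
    # 4 local workers here; on a real cluster you'd have one per core/node,
    # and on EC2 you'd just point the same task list at more machines.
    with Pool(processes=4) as pool:
        results = dict(pool.map(simulate, tasks))
    print(len(results))  # 8
```

If your code parallelizes cleanly at this level, moving from four nodes to forty is mostly a scheduling problem, not a rewrite.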
As much as I hate to admit it, for something like this I'd probably use Linux machines. The hardware is cheaper, the OS is more flexible, you have access to more potential components if you're building them yourself and more vendors if you're buying, you can use Tesla cards, etc.
But if you're dead set on using Macs, you can make credible compute nodes out of Mac Minis, or use another Mac Pro as a second, GPU-based computing machine. I've been musing about speccing out and testing Hackintosh compute nodes that are more purpose-suited than Minis, but I have neither the time nor the funding.
Either Pooch or Xgrid is a decent enough tool for administering an all-Mac cluster, or you can use something like Rocks if you go with a Linux setup. I will tell you that a Mac Pro workstation/client plays rather nicely with a Linux-based cluster.