More than two 6-cores?
That's what I'm thinking. Time will tell though.
$300 to $350 for 4x8GB (32GB) of RAM is not expensive - that works out to roughly $9-$11 per GB. Anyone making this claim is kinda out to lunch IMHO! It's actually cheaper (or about the same in some cases) than an 8x4GB configuration at present. Four years ago you might have been right, but not any longer - not for a very long while.
Apple is trying to force developers/users into a software model that is optimised for Apple hardware.
If they succeed then the new Mac Pro will be a powerful workstation for such use.
If they don't succeed, then much of its power will be restricted to a few programs whose market share may become too small for them to be properly supported and developed long term.
Apple have made the design decision to swap a CPU for a GPGPU.
BUT for many users (like myself) it has significant drawbacks.
1.) Many tasks are multi-data, such as running several copies of a program on different data sets, and these aren't suited to GPGPU speedups.
2.) For development software, academic experimental software, etc., the real cost is the sum of development/experimentation time and actual processor time, and writing code for efficient GPGPU speedup adds far too much development overhead. So the GPGPU won't be used (the exception might be if good library code is available - see the sketch after this list).
3.) Even for commercial code suitable for GPGPU speedup, much of it has been written for CUDA (which doesn't run on ATI hardware at all) or for OpenCL tuned to nVidia cards, so it doesn't run properly on ATI hardware. Users of such software will need to rely on it being rewritten, and this will not happen for more than the most popular programs.
Maybe not even for some of those - if most customers run it on Windows workstations with nVidia cards it won't make commercial sense to write a particular Apple version.
4.) If you're not making good use of the GPGPU it is annoying to have to pay for it!
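To make the overhead in point 2 concrete, here's a minimal sketch, assuming the standard OpenCL 1.x C API of the day (my own illustration, not code from Apple or anyone in this thread, and error checking is omitted for brevity). All it does is add two arrays on the GPU; the entire CPU equivalent is the single commented-out loop near the bottom.

    #include <stdio.h>
    #ifdef __APPLE__
    #include <OpenCL/opencl.h>
    #else
    #include <CL/cl.h>
    #endif

    static const char *src =
        "__kernel void add(__global const float *a,\n"
        "                  __global const float *b,\n"
        "                  __global float *c) {\n"
        "    int i = get_global_id(0);\n"
        "    c[i] = a[i] + b[i];\n"
        "}\n";

    int main(void) {
        enum { N = 1024 };
        static float a[N], b[N], c[N];
        for (int i = 0; i < N; i++) { a[i] = i; b[i] = 2.0f * i; }

        /* Locate a GPU; build a context, queue, program and kernel. */
        cl_platform_id plat;
        clGetPlatformIDs(1, &plat, NULL);
        cl_device_id dev;
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "add", NULL);

        /* Copy the inputs into device buffers and bind the arguments. */
        cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof a, a, NULL);
        cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, sizeof b, b, NULL);
        cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);
        clSetKernelArg(k, 0, sizeof(cl_mem), &da);
        clSetKernelArg(k, 1, sizeof(cl_mem), &db);
        clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

        /* Run the kernel and read the result back to the host. */
        size_t global = N;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

        /* The entire CPU version of all of the above:
           for (int i = 0; i < N; i++) c[i] = a[i] + b[i]; */
        printf("c[1] = %f\n", c[1]);
        return 0;
    }

That's roughly a page of setup for one line of maths, before any tuning of work-group sizes or memory layout - which is exactly why research code tends to stay on the CPU unless a good library hides all of this.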
That's what I'm thinking. Time will tell though.
Yah, that single processor only has 12 cores. I want 24 cores!!!!
And you are ready to pay for that?
If you need it for your work, the Mac Pro should meet your needs. If you're just wanting to show off that you have the latest and the greatest, go buy a BoXX and make the jump to Windows/Linux.
Meaning we could have 24 physical cores if they kept the current form factor.
Exactly.
Apple seems to have forgotten the purpose of a Mac Pro is to be a Mac workstation, not a Super Mac mini.
Frankly, any academic software that doesn't want to run on GPGPU systems also doesn't want to run on the top 10 of the TOP500 supercomputer list. In about a year, that will be the top 20.
There are more than 24 cores in the 2013 Mac Pro.
You must be confused, because Apple said up to 12 cores. Additionally, the 2013 Mac Pro has not even been released yet, so stop spreading false information.
HTT ≠ a physical core
Virtual core ≠ Physical core
He's referring to cores in the GPUs.
There are more than 24 cores in the 2013 Mac Pro. A lot more. No, they really didn't forget.
The issue is whether only x86 cores are worth counting. Apple has been and is still increasingly detaching itself from that viewpoint for future Macs (not just the Mac Pro). So is AMD. So is Intel. In fact, every major system vendor is.
The nature of the problem being solved by the software has far more to do with how the programs are built than whether or not the developers want to use the top however many supercomputers. Even small tasks in my field still often require 4GB of RAM per process or are simply naturally linear computations. So you just can't load that into a GPGPU. With some tasks you can do some gymnastics to make it work, but there are trade-offs being made that cut back the gains from being able to use 50 cores or more. Anyway, the point is, your use of "want" above is just incorrect. Of course everyone would love to throw an extra 50+ cores at a problem, but sometimes it just doesn't work that way.
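To make "naturally linear" concrete, here's a toy sketch (my own illustration, not any real workload): every iteration needs the result of the previous one, so no number of GPU cores helps.

    #include <stdio.h>

    int main(void) {
        /* Iterating the logistic map: step i+1 depends on step i, a
           loop-carried dependency that cannot be split across cores. */
        double x = 0.5;
        for (long i = 0; i < 100000000L; i++)
            x = 3.9 * x * (1.0 - x);
        printf("final x = %f\n", x);
        return 0;
    }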
Even small tasks in my field still often require 4GB of RAM per process or are simply naturally linear computations. So you just can't load that into a GPGPU.
Additionally, in research and development particularly, there is often the need to produce code to quickly try something. The code needs to be written quickly and also run quickly.
Wouldn't this be fixed if someone (e.g. Apple) made a library of easy-to-use GPGPU functions? Kind of like OpenCV, but for OpenCL? (Assuming it doesn't already exist.)
I'm sorry, but "there are more cores because they're on the GPU" is marketing speech and nothing more.
In the case of requiring 4GB+ RAM per thread, where those threads are linear and so can't be divided further, wouldn't a cluster of 6+ quad-core Mac minis be better, and likely cheaper, than a 24-core Mac Pro?
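That's essentially the "several copies on different data sets" pattern from point 1 earlier in the thread. A minimal sketch of it (my own toy example; process_data_set is a made-up stand-in): each job runs as its own process with its own memory, whether on one box or spread across a stack of minis.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    /* Stand-in for a long, memory-hungry, strictly sequential job. */
    static void process_data_set(int id) {
        printf("pid %d: crunching data set %d\n", (int)getpid(), id);
    }

    int main(void) {
        const int jobs = 6;            /* one per core, or one per machine */
        for (int i = 0; i < jobs; i++) {
            if (fork() == 0) {         /* child: fully independent worker */
                process_data_set(i);
                _exit(0);
            }
        }
        while (wait(NULL) > 0)         /* parent: wait for every worker */
            ;
        return 0;
    }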
Even if the OS uses the GPGPU cores as well as CPU cores, it won't matter until all of our professional applications get on board the GPGPU bandwagon.
Very well put. This is the point I was trying to make, but perhaps not clearly enough.
Additionally, in research and development particularly, there is often the need to produce code to quickly try something. The code needs to be written quickly and also run quickly. This is a very different model to a team developing a commercial product that will then be used by lots of people over a long period of time.
Regarding the growth of GPGPU in the HPC community:
HPC is a bit like Formula 1: fun to watch, and good to look at the leader board, but you're not going to want to go on an expedition across the Sahara in a 4x4 designed by an F1 team!
Even if the OS uses the GPGPU cores as well as CPU cores, it won't matter until all of our professional applications get on board the GPGPU bandwagon.
Don't tell that to the people living in the dream world where everything uses OpenCL and GPGPUs with perfect efficiency.
Heck, we only just crossed over from 32-bit to 64-bit not too long ago, and we still don't have 100% 64-bit apps.