I would personally say it depends on the brand. While HP has the Z820 workstation, which is many times more powerful (seriously, look this thing up, it's insane), fully configured it is terrifyingly expensive. And even though the Z820 starts at $2,340, at its base price it is not as powerful as a Mac Pro. I just configured both systems with similar specs (I wasn't able to get them exact):

Mac Pro:
3.7GHz Quad Core Xeon Processor (Single 12 Core Max.)
16GB 1866MHz RAM (64GB Max.)
512GB SSD (1TB Max.)
Dual AMD FirePro with 2GB each (Dual FirePro with 6GB each Max.)
Cost: $3,500 ($9,700 Max.)

Z820:
3.5GHz Quad Core Xeon Processor (Dual 12 Core Max.)
16GB 1866MHz RAM (512GB Max.)
500GB Hard Drive (15TB Max.)
Dual NVIDIA NVS with 2GB each (Dual NVIDIA Quadro K6000 with 12GB each Max.)
Cost: $4,960 ($69,990 Max.)

*Maximum prices are for the computer only; no external peripherals such as monitors, keyboards, or optical drives are included.

With the Z820, I suspect even the base model costs more because of its massive upgradeability, but I can't be sure. The huge price tag for a fully configured model is mostly from the RAM: a 512GB configuration alone adds $48,000 to the price.

So, I think the price generally depends on the brand, but to be honest, I think the Mac Pro is one of the least expensive workstations you can buy.
 
I wouldn't argue that the nMP isn't a good price when compared to other workstation-class computers.

However, I do feel that most people who buy nMPs don't need the workstation-class components.
 
With the Z820, I suspect even the base model costs more because of its massive upgradeability, but I can't be sure.

Yes - and that's upgradeability that the Mac Pro does not have. Repeat - upgradeability that the Mac Pro does not have.

If you'd compare the Z420 (single socket, like the Mac Pro) you'd find the prices much closer. And if you'd realize that the second GPU has little value for most people - you'd be closer still after deleting the second GPU.

All of these "equivalent config" exercises are nonsense unless they're based on "equivalent performance for task 'X'".

The price for eight, twelve, or 24 cores is irrelevant if your task has trouble using more than four. Powerful GPUs are irrelevant if your task isn't graphics heavy or able to use GPGPU frameworks. Support for 16 GiB or 512 GiB of RAM is irrelevant if your task fits in 4 GiB.

The NMP will most likely win any "equivalent performance for FCPX" bakeoff, since it's an unbalanced system tuned for that application (and that application is tuned for the NMP). GreenWater pointed that out earlier.
 
So get the HP. It seems to fit your needs.
 
I wouldn't argue that the nMP isn't a good price when compared to other workstation-class computers.

However, I do feel that most people who buy nMPs don't need the workstation-class components.

I agree with you. Unless you're doing a lot of HD/4K video work or lots of seriously intense 3D work, you're not going to have any use for what the Z820 or even the new Mac Pro offers. Sure, a base-model Mac Pro is probably great for most demanding users, but I don't think many people need to go even that high. Still, there are people and corporations who do need that much, and that's why workstations exist.

My opinion is that the new Mac Pro is probably one of the most decently priced workstations out there. Other ones can easily cost more.

----------

You basically mentioned what I forgot to mention in my earlier post. Most people don't need hardware that powerful, but some do, especially larger corporations, usually the ones producing those big blockbuster movies you see in IMAX theatres.

And, of course, if I had the money, I'd be dumb enough to buy something like that. For me, I'd consider it "future proofing". As long as the computer works and can handle the task, why upgrade? But that's my opinion.

But you mentioned Final Cut Pro X. Isn't that strictly a Mac application?
 
I ask myself...

Since these discussions about preferring a PC to a Mac, or the opposite, already have a thirty-year-old beard :) I ask myself whether, on web forums dedicated to Windows and PCs, one can also find useless and endless discussions by people advocating the use of a Mac and OS X as opposed to buying and using a PC :confused:

It reminds me of "someone" who ran to the Great Putin's Democracy (particularly beloved by Russia's neighbors) to save, in this way, the sacred principles of democracy in the USA. The right thing to do, of course, since only the USA acts illegally in such a way, while everybody knows that all the other great powers in this world do not even have the word "spying" in their dictionary. :D
 
Since these discussions about preferring a PC to a Mac, or the opposite, already have a thirty-year-old beard :) I ask myself whether, on web forums dedicated to Windows and PCs, one can also find useless and endless discussions by people advocating the use of a Mac and OS X as opposed to buying and using a PC :confused:

I've thought that more than once ... :)
 
If you'd compare the Z420 (single socket, like the Mac Pro) you'd find the prices much closer. And if you'd realize that the second GPU has little value for most people - you'd be closer still after deleting the second GPU.

The price for eight, twelve, or 24 cores is irrelevant if your task has trouble using more than four. Powerful GPUs are irrelevant if your task isn't graphics heavy or able to use GPGPU frameworks. Support for 16 GiB or 512 GiB of RAM is irrelevant if your task fits in 4 GiB.

Not needing a second GPU is irrelevant to what Apple is trying to do. When you start going past 8 cores you get steeply diminishing returns, known as Amdahl's law. Probably why Apple maxes out at 12 cores and relies so heavily on two GPUs.

Consider that even if they removed the second GPU it would not greatly affect the price, and it would still be competitively priced against other workstations. It's more of a bonus.
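
For what it's worth, the shape of those diminishing returns is easy to see in the numbers. Here's a minimal sketch of Amdahl's Law, assuming a hypothetical workload that is 90% parallelizable (the fraction is an illustrative assumption, not anything Apple has published):

```python
# Amdahl's Law: speedup S(n) = 1 / ((1 - p) + p / n), where p is the
# parallelizable fraction of the work and n is the core count.
# p = 0.90 is an illustrative assumption, not a measured value.

def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n cores for a workload with parallel fraction p."""
    return 1.0 / ((1.0 - p) + p / n)

p = 0.90
for n in (1, 2, 4, 8, 12, 24):
    print(f"{n:2d} cores: {amdahl_speedup(p, n):.2f}x")

# Prints:
#  1 cores: 1.00x
#  2 cores: 1.82x
#  4 cores: 3.08x
#  8 cores: 4.71x
# 12 cores: 5.71x
# 24 cores: 7.27x
```

Under that assumption, doubling from 12 to 24 cores buys only about 1.3x more speed, which is exactly the kind of curve that makes a second socket hard to justify.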
 
Besides, GPGPU processing isn't applicable to every task. Only in specific scenarios does it make a difference. Most applications today are still CPU-bound.
 
One thing does not deny the validity of the other.

They max out at 12 cores because that is the highest core count available from Intel in a single socket.

Amdahl's law has nothing to do with this.

........................................
The present maximum number of cores available in Intel's CPUs does not deny the validity of that principle. :D
He said "probably", which is much more cautious than your answer...
 
........................................
The present maximum number of cores available in Intel's CPUs does not deny the validity of that principle. :D
He said "probably", which is much more cautious than your answer...

If they were worried about Amdahl so much they'd have 1 core at 50 GHz, not be adding 4096 slow GPU cores.
 
They max out at 12 cores because that is the highest core count available from Intel in a single socket.

Amdahl's law has nothing to do with this.

It is related. Because of the diminishing scalability specified by Amdahl's Law, it wasn't worth putting a 2nd CPU in the nMP -- unlike the previous MP, which optionally had two CPUs. There are lots of professional users who would pay for additional cores if they provided significant performance increases.

Apple didn't provide that option in the nMP because the single-CPU core count has now risen to the point where Amdahl's Law dominates multi-thread scalability for most applications. As currently designed, most heavily multithreaded desktop and server software has a small serial code fraction which cannot be eliminated, and that fraction increasingly limits scalability as core counts rise.

This doesn't apply as much to GPUs as those algorithms are inherently parallel. Harnessing GPU parallelism for general-purpose computing is an area of ongoing academic research, but results to date have not shown huge breakthroughs. The world's second fastest supercomputer (Cray Titan XK7) used thousands of GPUs but was only marginally faster than the 3rd place pure CPU design: http://en.wikipedia.org/wiki/Titan_(supercomputer). Both use thousands of CPU cores but that's for specialized tasks not currently applicable to desktop productivity software.
 
Apple didn't provide that option in the nMP because the single-CPU core count has now risen to the point where Amdahl's Law dominates multi-thread scalability for most applications.

Have they stated this?
 
GPGPU hasn't shown those breakthroughs because not all tasks lend themselves to parallelism. Only in specific cases, and with a lot of thinking ahead, can you make use of it. To use parallel processing you have to make sure that no task or dataset depends on the result of another.

What GPUs are great at is matrix math and vector algebra. 3D graphics are built on those, so GPUs are designed and optimized for exactly that.

A CPU, on the other hand, while not optimized for this, can still do it, and it can also do many other things that a GPU isn't up to.

I'm not really dissing GPGPU processing here, just stating that GPUs aren't the ultimate tool in the box.
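
A toy illustration of that dependency point (my own example, not from any particular application): the first loop below is trivially parallel because every output depends only on its own input, while the second has a loop-carried dependency, so extra cores or GPU lanes don't help it as written:

```python
data = list(range(1, 9))

# Embarrassingly parallel: each output depends only on its own input,
# so the iterations could all run at once on separate cores or GPU lanes.
squares = [x * x for x in data]

# Inherently serial as written: each element depends on the previous
# result (a running product), so step i cannot start before step i-1.
running = [data[0]]
for x in data[1:]:
    running.append(running[-1] * x)

print(squares)  # [1, 4, 9, 16, 25, 36, 49, 64]
print(running)  # [1, 2, 6, 24, 120, 720, 5040, 40320]
```

(To be fair, running products can be parallelized with scan algorithms, but only after restructuring the code, which is the "lot of thinking ahead" part.)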
 
Have they stated this?

They haven't stated this, but that doesn't prevent it from being true. Anybody can see the facts for themselves by examining large multithreaded loads in Activity Monitor, iStat, or other tools. Even in CPU-oriented cases, it's common for parallel threads to run at less than 100% utilization (sometimes dramatically so) even though the process is not waiting on disk or network. They are internally blocked on each other due to synchronization of critical code sections, which is an aspect of Amdahl's Law: http://en.wikipedia.org/wiki/Critical_section

It made sense to have multiple CPU chips in the previous Mac Pro. But now that per-CPU core counts have increased to 12 there's no economic or performance incentive to provide a (say) 24-core two-chip nMP because Amdahl's Law constrains performance gains at those levels.

Intel will continue to integrate more cores per CPU because Moore's Law makes the transistor budget available and there's little else to do with those transistors besides GPU and cache. Apple knows this, and there's no need to burden the nMP with a multi-CPU design when (a) it wouldn't help performance due to Amdahl's Law, and (b) more per-chip cores are coming anyway due to Moore's Law.
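
To make the "internally blocked on each other" point visible, here's a toy sketch (my own construction, not a real workload) where each thread spends part of its time inside a lock-protected critical section; time.sleep stands in for work so the serialization shows up in wall-clock time even under Python's GIL:

```python
import threading
import time

LOCK = threading.Lock()

def worker(serial_s: float, parallel_s: float) -> None:
    time.sleep(parallel_s)   # parallelizable portion: threads overlap here
    with LOCK:               # critical section: threads queue up here
        time.sleep(serial_s)

def run(n_threads: int) -> float:
    threads = [threading.Thread(target=worker, args=(0.1, 0.9))
               for _ in range(n_threads)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

for n in (1, 4, 8):
    print(f"{n} threads: {run(n):.2f}s")

# The 0.9s parallel portions overlap, but the 0.1s critical sections
# execute one at a time: roughly 1.0s, 1.3s, and 1.7s total. Wall-clock
# time never drops below the serialized fraction, whatever the count.
```

That floor is the serial fraction Amdahl's Law is about; adding cores only shrinks the part outside the lock.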
 
It made sense to have multiple CPU chips in the previous Mac Pro. But now that per-CPU core counts have increased to 12 there's no economic or performance incentive to provide a (say) 24-core two-chip nMP because Amdahl's Law constrains performance gains at those levels.

You should share your analysis with the people building HP's 80-core x64 server, and Dell's 60-core x64 server -- also with all the misguided vendors making 24-core systems both for servers and for the desktop.

Also, think about the fact that a dual-socket system could have 12 cores at 3.5 GHz with 50 MiB of cache and 80 PCIe 3.0 lanes - instead of 12 cores at 2.7 GHz with 30 MiB of cache and 40 PCIe 3.0 lanes.

Even if you were correct about Amdahl, that's a powerful argument in favor of dual socket machines!
 
You should share your analysis with the people building HP's 80-core x64 server, and Dell's 60-core x64 server -- also with all the misguided vendors making 24-core systems both for servers and for the desktop...

I don't need to share my analysis -- they already know it which is why they don't make general-purpose desktop machines with 60 cores. I'm talking about the general case -- hence my above term "most...software", not a fringe case that represents a tiny fraction of the market.

You can obviously design highly specific software targeted at a narrow application which more effectively harnesses multiple cores. E.g., the top TPC-E transactional database results use 60-80 cores: http://www.tpc.org/tpce/results/tpce_perf_results.asp

However, this does not represent the common case of most desktop productivity software. Apple is not going to make a Mac Pro with high core counts just to run SQL Server or a few highly parallelized scientific applications. For most other applications there is limited economic or performance incentive to provide high core counts, since Amdahl's Law limits multi-thread scalability in these situations.
 
Large core counts in servers are a different matter than large core counts used for a single software application. Servers handle multiple requests from anyone on the network.

Trying to use a large number of cores to render a single video file will eventually max out the speed; adding more cores stops helping and can in some cases slow it down. The file has to be broken down into multiple threads, each part rendered, and the results reassembled.

The server example is different, because each request is not dependent on another request sent by someone else.
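
That split-render-reassemble pipeline looks roughly like the sketch below. The doubling stands in for real encoding work, and the names are placeholders of mine, not any real video API:

```python
from multiprocessing import Pool

def render_chunk(frames):
    # Placeholder for real encoding work. Each chunk is independent,
    # which is the only reason this step can run in parallel at all.
    return [f * 2 for f in frames]

def split(frames, n):
    """Break the frame list into roughly equal, independent chunks."""
    size = (len(frames) + n - 1) // n
    return [frames[i:i + size] for i in range(0, len(frames), size)]

if __name__ == "__main__":
    frames = list(range(100))        # stand-in for a video's frames
    chunks = split(frames, 8)        # break it down...
    with Pool(8) as pool:
        rendered = pool.map(render_chunk, chunks)      # ...render in parallel...
    result = [f for chunk in rendered for f in chunk]  # ...then reassemble
    print(len(result))               # 100 frames back, in order
```

Past a point, the splitting and reassembly overhead (plus anything the chunks share) eats the gains, which is the ceiling described above.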

Even if you were correct about Amdahl, that's a powerful argument in favor of dual socket machines!
Yes, that is correct.
 
I don't need to share my analysis -- they already know it which is why they don't make general-purpose desktop machines with 60 cores. I'm talking about the general case -- hence my above term "most...software", not a fringe case that represents a tiny fraction of the market.

Large core counts in servers are a different matter than large core counts used for a single software application. Servers handle multiple requests from anyone on the network.

So, you admit that
  • some applications scale linearly with increasing core counts
  • Apple doesn't care about those applications

I do think that the truth is coming....

----------

Trying to use a large number of cores to render a single video file will eventually max out the speed; adding more cores stops helping and can in some cases slow it down. The file has to be broken down into multiple threads, each part rendered, and the results reassembled.

And why would you not want more cores to do this?
 