Just had this through..... it would suit my requirements for visualisations, but it's a v1 Xeon, so no 4.0GHz turbo and only 1600MHz RAM, though it's capable of 16 slots. And obviously only 1 GPU at sale. Nice system, though. Just covering my bases.

Really need to see nMP benchmarks and real world performance so that I can make my plans for next year....:confused:
 


The v2 CPUs are available for configs and BTO from HP.com. Resellers usually have standard configs, and the v2 CPUs haven't gotten into the retail chain much.
 
At least that was what was put out by Blackmagic Design. Looking it up again, DaVinci Resolve can now use up to 5 FirePro cards, so I guess it also has multi-GPU support. Pretty cool.
I don't dispute that much of computing is going to GPGPU with multiple GPUs, but at the moment applications that actually do that are limited. It's pretty hard to say it's happening fast, either. Many programs are still working on being multithreaded at the CPU, or can hardly use more than 4 cores. You can point to a few examples where 2+ GPUs can be used effectively, but for most folks, what percent of their work does that actually represent? I can only speak for myself, but so far it's 0%, and very little GPGPU at all. Others may sit in places with much more, and if so, great, buy the nMP. I'm not pro- or anti-nMP; it's just that this MP wouldn't work for me.

But in regards to this thread...I see what virtualrain is saying. We could argue all day on exactly what configuration would be needed for our own uses.

Well, the basic problem has become that the pro-nMP crowd will point to two pretty damn good GPUs and a fast-as-hell PCIe SSD that won't be in any standard configuration from other retailers, and then not look at other things like 4x SATA ports or 3 open PCIe slots for customization of what you really need. That is why pricing out a full system for specific tasks would be more informative. Many of the advantages of other systems lie in the flexibility to match your workload. The nMP is pretty locked down in flexibility. If everyone wants to assume the nMP is the right thing, fine. Have fun.

I would have to use hardware that does not come with either Mac or PC natively. But that does not necessarily mean I should add the price on top of either configuration.

No, you should only care if the price to add that is different between the two configurations.

The only components you can add internally to the new Mac Pro are an SSD or RAM. It's not about having a PCIe SSD in the HP, but rather having to add the additional price of the hardware to support it.

Which can be done for <$500.

If you think spec-for-spec matching is a game, then adding all the different possible hardware combinations must be child's play.

Do you feel better now that you got that BS out of your system?

That's not what I said. I meant: which select workstations does this discount apply to? Applying it to only one configuration does not mean it would work for all. By your broad definition, based on each person's requirements, it has to be usable in all cases. Something I don't have the time to check or want to do.... sorry.

I suspect it applies to just about all configurations, as I've priced out z420s, z620s and z820s and never ran into the *does not apply* problem. Anyway, it's a pretty lame criticism even if you can find a single case where it doesn't work. If you don't have time to check on that, well..... sorry, not my problem.
 
Ok, you all convinced me. Even though my last PC had 5 PCI-e slots that only held 1 WiFi card, 1 video card, 1 thunderbolt card, 1 USB-3 card, and 1 PCI-e SSD card, which are all included in the nMP, somehow now I absolutely need the flexibility of a slower bigger louder expensive box that has "flexible" PCI-e slots!
 
There is a reason... At least for myself and the OP (and a few others who started out in this thread), the new Mac Pro is a perfect fit for our needs, so we are trying to determine how competitively priced it is. That's it. I understand others value internal storage and upgradable GPUs (even though those things may not cost anything) but that's not relevant to this, we're simply trying to figure out if Apple has priced the nMP competitively.

I think we found that answer already. Myself and others have said as much. Maybe you shouldn’t get your panties in such a knot if we bring up something you don’t want to hear...

I'm still waiting for the reason you can bring up differences in favor of the nMP but ridicule any differences in favor of other systems. Nothing you've said rationalizes that. Even if the nMP is perfect for you and you want nothing more and nothing less, that doesn't mean others wouldn't find value in taking note of those differences. And the OP never stated that comparable configurations had to be limited to exact 1-to-1 matches across the board. And even if that was the point, Apple has made it impossible to do. So get over it.

----------

Ok, you all convinced me. Even though my last PC had 5 PCI-e slots that only held 1 WiFi card, 1 video card, 1 thunderbolt card, 1 USB-3 card, and 1 PCI-e SSD card, which are all included in the nMP, somehow I now absolutely need the flexibility of a slower bigger louder expensive box that has flexible PCI-e slots!

I hope you know you're only demonstrating your own irrationality on the subject.

The pro-nMPers are so touchy....

----------

That's exactly my point. People are arguing about Dell and HP discounts below the retail price. Apple has discounts, MacMall has discounts below Apple's, and you can avoid the tax on a Mac by ordering from an out-of-state reseller, but you can't if you order from Dell.

It's all irrelevant. We can only do a general comparison using Retail costs since discounts vary by individual and by location.

But the HP discount is across the board, with no borders so to speak. If you don't accept that, you're just trying to make the Apple look more competitive.
 
I don’t dispute that much of computing is going to GPGPUs with multiple GPUs, but at the moment applications that actually do that are limited. Its pretty hard to say its happening fast either. Many programs are still working on being multithreaded at the CPU or can hardly use more than 4 cores. You can point to a few examples where 2+ GPUs can be used effectively, but for most folks what percent of their work does that actually represent? I can only speak for myself, but so far its 0% and very few GPGPU at all. Others may sit in places with much more, and if so, great, buy the nMP. I’m not pro- or anti-nMP, its just this MP wouldn’t work for me.

many programs are still working on being multithreaded?
like what? what programs, exactly, are still working on this?

also, what programs - exactly- do you use personally?.. and how do you use them in a way which dual 8core is going to benefit you so much better than single 12core?



most operations can't be multithreaded.. that's all there is to it.. and with the ones that truly benefit from it-- seeking performance gains via (multi)cpu is a dead end street..

it's much more likely we're going to see things such as real time rendering via gpu power instead of cpu (well, we already are seeing this).. unless of course, we expect to have 200 core cpus in the near future.. which would be incredibly expensive and an incredible waste of energy.
 
Apple offers an educational discount as well, the 6-core $3,999 Mac Pro is $3,699. People keep posting irrelevant variables on here.

20% off of a $4K machine would be $3200....

"I want internal slower drives so you probably can't notice how fast the Mac drive is anyway." or "I'll just RAID0 a bunch of slow drives and ignore the fact that SATA 6 has a 750MB/s limit." "You can probably blah blah blah for less even though I haven't tried it and don't have prices to show it."

Do you like to spew irrationality or what? SATA 6 has a 750MB/s limit per channel. 2 SATA 6 SSDs in RAID0 will be damn close to that PCIe SSD. Oh, and you could have 4 of them....

Some don't want a big bulky loud box so no matter what the price of the PC, the nMP is "better", you can't win when you argue a preference.

Exactly, you're arguing a personal preference. Also, my Supermicro dual-2630 workstation is damn quiet. You can hear the fans spin up under full load, but it's not that loud even with the computer sitting right on my desk next to the monitor. And of course, we don't actually know what the noise will be like with the nMP.

There are a million other products and choices out there, this thread is discussing the particular choice and configuration of the nMP, however you can buy or build it.

Except it's a rather impossible task. So, you get as close as you can and point out the differences.

Changing the rules to say you don't care about certain features or you want other features so something else is worth more to you, doesn't apply to this thread.

Then your thread was doomed. I never said I didn't care; I just pointed out the differences. Anyway, you'll never get the same thing as the nMP anywhere other than from Apple, so how about we just give up.

If you don't like the topic of this thread, then don't waste your time whining about not liking it; just pick a different thread.

I’m not the one whining.

There are 50 million other products and configurations out there that you probably don't care for, go post why they don't meet your needs on every one of those threads too. See you in a million years.

If this is the idiocy you have to resort to, then this thread should be closed.
 
Thanks to all of you and others for giving me the inspiration to do what I should have done in the first place, for no system made by anyone other than Apple is comparable to the nMP. It's one of a kind.

For my customers who want a compact Windows workstation to run dual 30-inch (and soon 4K) displays, it's pretty much ideal for their use. Unless there's a PCIe expander box in the works to run off TB2, it's definitely not for me either, but that's not what we're discussing here.

You, on the other hand, in your line of work had no other choice than to do what you've done, as Apple has deserted your requirements probably more than anyone's on here. Still, when it is released, Apple will assist you slightly by making it easier to run OS X on your rigs with LGA2011 support. Maybe the mobo makers will also make boards that fit your requirements, making it even easier to emulate the genuine thing at the top end of the market. As for the long-term outlook after the 5,1 goes obsolete, I can't see any other way than following your path at the moment.
 
"I want internal slower drives so you probably can't notice how fast the Mac drive is anyway." or "I'll just RAID0 a bunch of slow drives and ignore the fact that SATA 6 has a 750MB/s limit." "You can probably blah blah blah for less even though I haven't tried it and don't have prices to show it."

Do you like to spew irrationality or what? SATA 6 has a 750MB/s limit per channel. 2 SATA 6 SSDs in RAID0 will be damn close to that PCIe SSD. Oh, and you could have 4 of them....

{sigh} I suppose it does get frustrating dealing with such ignorance.
SATA is limited by the controller, you don't get 750MB/s on all of your SATA ports at once. Look it up. You would have to purchase an extra PCI-e x2 or x4 card to get 750MB/s from every drive attached to the controller at once. This was already discussed, tested, and shown on this thread.
 
many programs are still working on being multithreaded?
like what? what programs, exactly, are still working on this?

I could give you a list as long as my arm, but they will be specific to my field, which I don't think anyone else here is working in.

Just a couple of examples, though:

All the operations in R I've ever run into are single-threaded. There just isn't any way to break most statistical tests into a bunch of bits. It's all linear by nature.

Many other things are single-threaded in nature but, for very large jobs, can be broken apart by the user. For example, if you want to align the mouse genome to the human genome, you can align each chromosome in the human to each chromosome in the mouse and then merge them later. But each single pairwise alignment is still single-threaded.
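The split-and-merge pattern described here is easy to sketch: each chromosome pair is an independent, single-threaded job, so the user can fan the pairs out across cores and merge the results afterwards. A minimal sketch; `align_pair` is a hypothetical stand-in for a real pairwise aligner:

```python
# Fan out independent single-threaded alignment jobs, one per
# chromosome pair, then merge the results afterwards.
# `align_pair` is a hypothetical placeholder for a real aligner.
from multiprocessing import Pool
from itertools import product

def align_pair(pair):
    human_chrom, mouse_chrom = pair
    # A real aligner would run here; each call is still single-threaded.
    return (human_chrom, mouse_chrom, "alignment-result")

def align_genomes(human_chroms, mouse_chroms, workers=4):
    pairs = list(product(human_chroms, mouse_chroms))
    with Pool(workers) as pool:
        results = pool.map(align_pair, pairs)
    return results  # the "merge them later" step

if __name__ == "__main__":
    out = align_genomes(["chr1", "chr2"], ["chr1", "chr2"], workers=2)
    print(len(out))  # -> 4 pairwise jobs
```

The parallelism here is entirely at the job level, which is the point of the post: you get core scaling without the underlying aligner itself being multithreaded.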

also, what programs - exactly- do you use personally?.. and how do you use them in a way which dual 8core is going to benefit you so much better than single 12core?

A lot of the programs I use are great multithreaders, and going from 12 to 16 cores would be great (short-read aligners in particular can scale greatly, and it's a very common task). But the clock trade-off in those cases is pretty substantial. So, at the same cost, the 12x2.7GHz (2697 v2) is going up against 2x8x2.2GHz (2660 v2). Just multiplication gets us 32.4GHz total throughput vs. 35.2GHz, which is in favor of the dual-socket system. But if many of your applications are single-threaded, or at least lightly threaded, the faster-clocked 12 would maybe balance out to be better. However, climbing the ladder slightly to a 2x10x2.5GHz (2670 v2) setup would probably be the sweet spot if you're going to pay something like $3,000 for just the CPUs.
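The arithmetic in that comparison can be written out explicitly. A toy sketch — this "aggregate GHz" figure deliberately ignores IPC, turbo bins, and memory bandwidth, so treat it as a first-order proxy only:

```python
# First-order "aggregate GHz" comparison from the post above.
# Ignores IPC, turbo, and memory effects; a rough proxy only.
def aggregate_ghz(sockets, cores_per_socket, base_clock_ghz):
    return sockets * cores_per_socket * base_clock_ghz

single_12 = aggregate_ghz(1, 12, 2.7)  # E5-2697 v2
dual_8 = aggregate_ghz(2, 8, 2.2)      # 2x E5-2660 v2
dual_10 = aggregate_ghz(2, 10, 2.5)    # 2x E5-2670 v2

print(round(single_12, 1), round(dual_8, 1), round(dual_10, 1))  # -> 32.4 35.2 50.0
```

As the post says, the dual-socket 2660 v2 edges out the single 2697 v2 on raw throughput, and the 2670 v2 pair pulls well ahead — but only for workloads that actually scale across all those cores.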

most operations can't be multithreaded.. that's all there is to it.. and with the ones that truly benefit from it-- seeking performance gains via (multi)cpu is a dead end street..

it's much more likely we're going to see things such as real time rendering via gpu power instead of cpu (well, we already are seeing this).. unless of course, we expect to have 200 core cpus in the near future.. which would be incredibly expensive and an incredible waste of energy.

Ultimately, it's going to be a RAM limitation. If you need something like a GB per thread, the GPU just isn't the answer and you have to go to the CPU.
 
I could give you a list as long as my arm, but they will be specific to my field, which I don't think anyone else here is working in.

Just a couple of examples, though:

All the operations in R I've ever run into are single-threaded. There just isn't any way to break most statistical tests into a bunch of bits. It's all linear by nature.

Many other things are single-threaded in nature but, for very large jobs, can be broken apart by the user. For example, if you want to align the mouse genome to the human genome, you can align each chromosome in the human to each chromosome in the mouse and then merge them later. But each single pairwise alignment is still single-threaded.

A lot of the programs I use are great multithreaders, and going from 12 to 16 cores would be great (short-read aligners in particular can scale greatly, and it's a very common task). But the clock trade-off in those cases is pretty substantial. So, at the same cost, the 12x2.7GHz (2697 v2) is going up against 2x8x2.2GHz (2660 v2). Just multiplication gets us 32.4GHz total throughput vs. 35.2GHz, which is in favor of the dual-socket system. But if many of your applications are single-threaded, or at least lightly threaded, the faster-clocked 12 would maybe balance out to be better. However, climbing the ladder slightly to a 2x10x2.5GHz (2670 v2) setup would probably be the sweet spot if you're going to pay something like $3,000 for just the CPUs.

Ultimately, it's going to be a RAM limitation. If you need something like a GB per thread, the GPU just isn't the answer and you have to go to the CPU.

yeah, I don't know.. asked about software and got a hardware response..

that's typical around here.. hardly anyone ever talks about software even though it matters way more than hardware.. people just like to argue about numbers and specs here.. that's fine and I'm not knocking it.. just wish people could recognize it's not as important as you all would like to make it out to be.. then after that recognition-- carry on with the spec fight.
 
I don't dispute that much of computing is going to GPGPU with multiple GPUs, but at the moment applications that actually do that are limited. It's pretty hard to say it's happening fast, either. Many programs are still working on being multithreaded at the CPU, or can hardly use more than 4 cores. You can point to a few examples where 2+ GPUs can be used effectively, but for most folks, what percent of their work does that actually represent? I can only speak for myself, but so far it's 0%, and very little GPGPU at all. Others may sit in places with much more, and if so, great, buy the nMP. I'm not pro- or anti-nMP; it's just that this MP wouldn't work for me.

The amount of software that can support multicore / multi-GPU is small because it's not really needed for the majority of software.

It wouldn't make sense to use such power on a word processor.
That's why the majority is being used in processing-intensive applications.

That is what workstations & the new Mac Pro were made for: the people who use such intensive applications.

EDIT: Well, I was mostly describing the difference between the average user vs. the pro. But I could see some pro software needing more multicore / GPU support.

The broadcasting software I use has excellent multicore/GPU support. But it's still 32-bit software, so it can only use 4GB of memory; it would benefit from more.

But I am seeing more efficient use of average hardware in general, such as the use of WebGL in web browsers, which allows the GPU to run web video.


Do you feel better now that you got that BS out of your system?

Your reply is not really an effective rebuttal, but... comparing specs was already like a game; wanting to factor each person's hardware requirements into the mix just turned it into a gaming tournament.

Meaning: comparing specs is hard enough when the rules are already set, let alone when adding every hardware configuration known to man.
 
yeah, I don't know.. asked about software and got a hardware response..

You asked about both and I gave you both. I could tell you the names of all the programs, but they'll all be unfamiliar to you, since I'm pretty sure I'm the only one on these boards who does what I do and posts with much regularity. Should I post the program list in my tools directory and give a rundown on how often I use them and whether they are single-threaded or not? I can do that; it will take a while, though. It's 156 programs at the moment....


that's typical around here.. hardly anyone ever talks about software even though it matters way more than hardware.. people just like to argue about numbers and specs here.. that's fine and I'm not knocking it.. just wish people could recognize it's not as important as you all would like to make it out to be.. then after that recognition-- carry on with the spec fight.

That's actually my point here. I wasn't trying to say multithreading isn't catching on or something, just that the GPGPU transition is going to take a while, just like the multithreading transition has taken, and still is taking, some time.

----------

The amount of software that can support multicore / multi-GPU is small because it's not really needed for the majority of software.

There are certainly things I use where it would be nice if they could do it, but they don't, and some of them never will.

It wouldn't make sense to use such power on a word processor.
That's why the majority is being used in processing-intensive applications.

That is what workstations & the new Mac Pro were made for: the people who use such intensive applications.

Thanks Capt. Obvious.



Your reply is not really an effective rebuttal, but... comparing specs was already like a game; wanting to factor each person's hardware requirements into the mix just turned it into a gaming tournament.

That's hardly appropriate. The "gaming tournament" at least has real-world applicability. If the game is just to spec-match for no reason other than to see if we can do it, then it has no real-world applicability.

Meaning: comparing specs is hard enough when the rules are already set, let alone when adding every hardware configuration known to man.

There has to be a purpose to the rules, though. For example, if 1250MB/s over 750MB/s makes no difference in your workflow, then making that rule for our spec game is pointless. If it makes a difference, great; the nMP might make more sense for you. But so far we've had little discussion of the purpose for which we're playing this spec game. Just a bunch of abrasive comments after Aiden and I pointed out the areas where the z420 was superior to the nMP.

----------

{sigh} I suppose it does get frustrating dealing with such ignorance.
SATA is limited by the controller, you don't get 750MB/s on all of your SATA ports at once. Look it up. You would have to purchase an extra PCI-e x2 or x4 card to get 750MB/s from every drive attached to the controller at once. This was already discussed, tested, and shown on this thread.

I'd much rather be ignorant of a fact, something that's easily corrected, than have widespread problems with rationality, as you've been displaying throughout the last few posts.

I was not aware all SATA ports are limited to the same 6Gb/s. If you have a link with more detail, I’d love to read it.

Back to the point: those PCIe cards can be added within the <$500 price difference between the nMP and the HP. You obviously fancy yourself a smart and knowledgeable guy, so surely you know that.
 
yeah, I don't know.. asked about software and got a hardware response..

that's typical around here.. nobody hardly ever talks about software even though it matters way more than hardware.. people just like to argue about numbers and specs here.. that's fine and I'm not knocking it.. just wish people could recognize it's not as important as you all would like to make it out to be.. then after that recognition-- carry on with the spec fight.

I thought WallysB *was* talking about software.
 
I thought WallysB *was* talking about software.

I asked about app names and (implied) what a user would be doing when all cores are peaking..

but he's saying it's a specialized field / uncommon software and it wouldn't matter if I knew the names etc.. and I'm willing to bet he's right about that ;)
 
I was not aware all SATA ports are limited to the same 6Gb/s. If you have a link with more detail, I’d love to read it.

He won't find one, because it's not true. SATA is based on individual point-to-point serial links - in theory all of the SATA ports on a controller could run at full speed.

In practice, however:
  • The Intel chipset has two 6Gbps SATA ports and four 3Gbps SATA ports, so with the embedded chipset you'd be limited to 3Gbps for a quad RAID-0.
  • The controller shares one PCIe connection, so the summed throughput can't exceed that of the PCIe connection.
  • Some controllers might be I/O-processor limited, especially in hardware RAID mode.
  • HP does offer 6Gbps SAS/SATA controllers as BTO options, and third-party options are available.
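The controller bottleneck in those bullets can be modeled in one line: each drive's link may run at full speed, but the sum is clipped by the controller's shared uplink. A toy model with illustrative numbers (not real z420 figures):

```python
# Toy model: per-port SATA links can each run at full speed, but the
# aggregate is capped by the controller's single shared PCIe uplink.
# All numbers are illustrative, not measured.
def aggregate_throughput_mb(drive_speeds_mb, uplink_mb):
    return min(sum(drive_speeds_mb), uplink_mb)

# Four ~500 MB/s SSDs behind a controller with a ~1000 MB/s uplink:
print(aggregate_throughput_mb([500, 500, 500, 500], 1000))  # -> 1000, not 2000
```

This is why adding a discrete PCIe SAS/SATA card (a wider uplink) raises the RAID-0 ceiling even though the individual drives are unchanged.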
 
I asked about app names and (implied) what a user would be doing when all cores are peaking..

but he's saying it's a specialized field / uncommon software and it wouldn't matter if I knew the names etc.. and I'm willing to bet he's right about that ;)

I use After Effects and Cinema 4D primarily and my processors are usually maxed. Those are the standard apps in my industry.
 
I'd much rather be ignorant of a fact, something that's easily corrected, than have widespread problems with rationality, as you've been displaying throughout the last few posts.

I was not aware all SATA ports are limited to the same 6Gb/s. If you have a link with more detail, I’d love to read it.

Back to the point: those PCIe cards can be added within the <$500 price difference between the nMP and the HP. You obviously fancy yourself a smart and knowledgeable guy, so surely you know that.

You're basing your opinion of my irrationality on your simple ignorance of a fact. From your posts, you think I'm irrational for stating a fact that you didn't know was true, and also apparently because you don't understand the concept of sarcasm.

http://superuser.com/questions/245881/is-sata-bandwith-per-port-or-per-controller

If you had bothered to read this thread before displaying your belligerence, you would have seen that we already discussed the addition of the card, and the cost. Do you need a babysitter watching what you are about to do and say so she can correct you first?
 
LOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOL BREATH LOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOOL
So much troll. Can't tell if real. Is this real?
 
Thanks Capt. Obvious.

Actually, I updated my comment to be more geared toward professional-type software.


If the game is just to spec-match for no reason other than to see if we can do it, then it has no real-world applicability.

But so far we've had little discussion of the purpose for which we're playing this spec game.

Pretty much what has been brought up in the thread several times: whether the new Mac Pro is competitive with similar workstations.

A main point that still seems to elude some people.
 
Wha? Both. Sometimes neither.

PS: Are we arguing?

ha. no.

just trying to point out that throwing more cores at a multithreaded program doesn't necessarily solve much from a real-world user point of view.

at least when talking something like 12 cores vs 16..

when all cores are going, you're (well, me) generally looking at at least a 10min ordeal.. making that 10 min into 8 is cool and all, but the reality is that 8 mins still sucks.. I'd rather see it drop to around 30sec or real time.

from what I've seen till now, I'll never get to that point via CPU, but it's certainly possible to get that type of meaningful speed increase with openCL programming.
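The trade-off being described (a 12-to-16-core jump turning 10 minutes into roughly 8) is essentially Amdahl's law: any serial fraction caps the speedup more cores can buy. A minimal sketch with an illustrative 5% serial fraction, not a measured number:

```python
# Amdahl's law: overall speedup vs. one core, given the fraction of the
# job that is inherently serial. The 5% figure below is illustrative.
def speedup(serial_fraction, cores):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for n in (12, 16, 64, 1000):
    print(n, round(speedup(0.05, n), 1))
# -> 12: 7.7x, 16: 9.1x, 64: 15.4x, 1000: 19.6x
#    With 5% serial work, speedup saturates near 20x no matter
#    how many cores you add.
```

Going from 12 to 16 cores here buys about a 9.1/7.7 ≈ 1.18x improvement, so 10 minutes becomes roughly 8.5 — which matches the "8 mins still sucks" point; order-of-magnitude wins have to come from somewhere else, such as GPU offload.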
 
ha. no.

just trying to point out that throwing more cores at a multithreaded program doesn't necessarily solve much from a real-world user point of view.

at least when talking something like 12 cores vs 16..

when all cores are going, you're (well, me) generally looking at at least a 10min ordeal.. making that 10 min into 8 is cool and all, but the reality is that 8 mins still sucks.. I'd rather see it drop to around 30sec or real time.

from what I've seen till now, I'll never get to that point via CPU, but it's certainly possible to get that type of meaningful speed increase with openCL programming.

Yeah, just in the AE environment: Particular, Cineware, Element 3D and a myriad of plugins use different technologies for acceleration. Each task relies on something different, and often the solution I choose is based on the speed it will render, not how good it can look.

My current project is a high-profile doozie, and ultimately I'm torn between a CUDA-based Octane Render pipeline that is fast but only supports a single machine, and a traditional Cinema 4D render that isn't as fast but allows me to network-render my z820 with my MP. And there are plenty of pitfalls along the way.

Ugh.
 
I don't dispute that much of computing is going to GPGPU with multiple GPUs, but at the moment applications that actually do that are limited. It's pretty hard to say it's happening fast, either.

Not really. Used extensively all the time? Perhaps not. But used at all... it is not as small as you are making it out to be. Here is an open Apple job (which likely isn't something entirely new... just open).

" ... Working with internal clients such as CoreImage to implement optimized OpenCL kernels, performance tuning and debugging issues ..."
http://www.linkedin.com/jobs2/view/6956703

There are far more clients for the OpenCL library than just 3rd-party app libraries. Apple can add OpenCL to the same core libraries that applications already use. The usage uptake is largely transparent: as long as the APIs quickly return the result of the requested computation, it doesn't particularly matter where the computation gets done.

POSIX-library-only apps would have low uptake from this kind of transparent rollout, but Mac-centric apps wouldn't. That is one reason Pixelmator is on the Mac Pro's "performance" marketing web pages now.

Similarly, do many applications invoke enough work for GCD to spread an OpenCL workload over multiple GPUs? Maybe not, but that says more about the user and their data-processing needs than about the application's abilities.

It is also a narrow net to cast this over the new Mac Pro only. There are hundreds of thousands (if not millions) of other Macs that do have two GPUs, and lower thresholds at which GCD may spill workload onto two GPUs.
 