Don't forget that an 8 core Haswell-E will murder a 6 core Ivy-EP Xeon.

It's difficult to do apples-to-apples comparisons because no one in their right mind who is building a single-socket machine would use a Xeon to do so.

E3-1200 Xeons are being used for workstations rather than i5s and i7s because of price and a larger range of specifications. The E5-1650 v2 is faster than the i7-4930K for users who don't overclock, and it supports 4 times the memory, as mentioned. A single E5-2600 can offer extra cores for those running homelabs without the more expensive board, case, PSU and power bill of a dual-socket system. I'm seeing plenty of people in their right minds using Xeons in custom-built single-socket systems, especially the E3-1230 Xeons from v1 through v3.
 
LOL - all the new Mac Pros are single socket.

And the E5-1600 v2 can handle 256 GiB, 4 times what the nMP supports.

A single E5-2600 v2 can handle 768 GiB, 12 times the nMP's limit.
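
Both multiples point at the same nMP ceiling (256/4 = 768/12 = 64 GiB). A quick sanity check of the arithmetic, with the 64 GiB figure derived from those ratios rather than from a spec sheet:

```python
# Back-of-the-envelope check of the memory multiples quoted above.
NMP_MAX_GIB = 64  # implied by both the "4 times" and "12 times" claims

for name, cap_gib in [("E5-1600 v2", 256), ("E5-2600 v2", 768)]:
    print(f"{name}: {cap_gib} GiB = {cap_gib // NMP_MAX_GIB}x the nMP's {NMP_MAX_GIB} GiB")
```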

There's also the advantage that single-socket systems don't have NUMA issues.
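
If you want to check whether a given box even has multiple NUMA nodes, the Linux kernel exposes the topology under /sys. A minimal sketch (Linux-only, and just an illustration, not something from this thread); a single-socket system normally reports exactly one node, so all memory is local to every core:

```python
import glob

# Count the NUMA nodes the Linux kernel reports and show which CPUs
# belong to each one; a single-socket system should print one node.
nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"))
print(f"{len(nodes)} NUMA node(s)")
for node in nodes:
    with open(f"{node}/cpulist") as f:
        print(node.rsplit("/", 1)[-1], "-> CPUs", f.read().strip())
```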

I think that you struck out with this argument.

I have no idea what you are arguing about.

In the visual effects industry where I work, I have never seen a single-socket workstation using a Xeon, except for Apple machines where there was no choice.

Most are using single-socket Core i7 machines. FX artists and lighters who need more cores are on dual Xeons.

I suppose there could be a theoretical workload somewhere that needs massive memory but does not need the additional cores of a second socket. Stranger things have happened.
 
No theory about it: people are using large amounts of memory with ZFS while not needing huge numbers of cores, in production, development, trials and homelabs.
 
Honestly, I'm not a professional in any way. Either option would mostly be because I want the hardware.

The Mac because, well, I always wanted a Mac, so why not go for the top-of-the-line Mac.

The new PC because, well, they are amazing consumer parts, and it would be good for pretty much anything I want to do: games and such.

You should have your ego check your id.
 
Water cooling is easy if you buy a closed-loop system. You just plug it in, screw it in, and off you go. I'm using a Corsair H90. I used an H60 but it leaked...

This illustrates how "easy" things become complicated and time-consuming. When I built my Windows machine I first used an H50; it worked but was too loud and didn't cool adequately. This led to removing the cooler, cleaning it, applying upgraded thermal paste, and more tests.

That still didn't work well, so I spent *many* hours researching different coolers and finally got a Noctua NH-D14. It worked great, but verifying that took many more hours of tests. Each change required tearing down the PC and removing the motherboard because of physical clearance.

Then, because the internal airflow wasn't optimal for the air cooler, still more research: checking fan specs, sizes, etc. I added a Noctua rear exhaust fan and some slow-turning 200mm case fans. This all works great and is very quiet -- total time required: about one week.

That is just the cooling system. Much more time has been required for researching, speccing, installing and troubleshooting other components.

It was fun to do (in a way), but I can no longer afford to spend a week or more on an R&D project building and fine-tuning a PC. My 2013 iMac is faster than my 4 GHz overclocked PC, and I spent zero time making it so.
 
While that does sound like a bear, my experience with the newer models was simply: 1) buy H90, 2) install H90, 3) enjoy my working PC.

The new closed-loop systems are miles more reliable than the old ones. Quieter too. I had experience with an H60 which exploded -- again, the old design.

Not all technology stays crappy forever; sometimes it does become more reliable and simpler :)
 
You can buy a Lambo or tune up that Civic. It's up to you.

More like: you could buy this $90,000 race truck or, realizing you don't participate in the Baja 1000, you could buy a BMW M3, which will be faster, cheaper, and better at the tasks you actually use it for.

Keep in mind the nMP is using 2-year-old video cards, has limited computing power, and costs a freaking fortune. Also note the actual longevity of a machine you can't upgrade vs. one you can.

The computer this guy is building will have more computing power and will be superior in a lot of respects.

But will it run FCP 10.1 as well as the nMP (AKA the Baja 1000)? Nope.
 
Don't forget that an 8 core Haswell-E will murder a 6 core Ivy-EP Xeon.

So will an 8-core E5 v3 1600 series (Haswell-EP) "murder" a 6-core E5 v2 (Ivy-EP).

Neither the Haswell-E (probably the i7 59xx) nor the E5 v3 is on the market yet, because they are both variants of a future implementation. So yes, a future implementation with more cores will be faster than an older generation with fewer cores. And the sky is blue.

You can buy an i7 47xx (Haswell, not -E) now, but you are not going to see anywhere near 8 cores. If you want to be on Intel's latest released micro-architecture generation, then living with a four-core x86 cap is a tradeoff you have to resign yourself to. That probably isn't going to change any time soon. The "core count wars" at the mainstream desktop/laptop level have shifted over to non-x86 cores.
 
....
They're RAIDing 2 TB2 ports together. To run that kind of bandwidth you effectively use two whole controllers -- 4 ports. That means no monitors plugged in or daisy-chained to those ports, and no other devices.

I hadn't considered that, I stand corrected :)

DisplayPort monitors plugged directly into the Mac Pro don't consume any TB bandwidth. So no... users are not limited to zero other devices. Besides, no single one of those LaCie drives can soak up 100% of a single TB controller's bandwidth. So one drive on each of the two controllers leaves headroom on both.



Keep whacking away at it... eventually you'll iterate down to the truth. If you let go of the FUD that TB is simply "external PCIe" or in some kind of "war" with PCIe, you probably wouldn't have so much trouble getting there.
 
Eventually... meanwhile, everyone else who wants one will have their nMP and will have moved on, except for a few fanboys and haters who are just here to argue.
 
Keep in mind the nMP is using 2-year-old video cards, has limited computing power, and costs a freaking fortune. Also note the actual longevity of a machine you can't upgrade vs. one you can.

You keep harping on about the "two year old video card" issue as if it is somehow meaningful in absolute terms. One of the key design goals was to make a system which would perform well for developing and using OpenCL based software.

You are correct that the Tahiti micro-architecture is 2 years old. However, during the time the nMP was being designed, and at the time the nMP parts were cast in working silicon for impending release, the AMD GCN architecture in general, and the Tahiti-based GPUs in particular, remained the performance leader for OpenCL and double-precision math.

Do they perform as well as even much cheaper nVidia cards in tessellation-heavy tasks? No, they compare poorly. There are chipsets from both AMD and nVidia that currently trounce Tahiti in OpenGL, but they were not serious contenders at any point during the design cycle. Why? For the simple reason that they did not perform as well for OpenCL processing.

Even if the newest GCN entrant (the Hawaii architecture) had been ready sooner, it may still have been a poor choice. It has many more transistors and, at 28nm, is much more power-hungry than Tahiti. Expect Hawaii to be a contender when TSMC produces AMD parts at 22nm or lower.

There is nothing wrong with critiquing a design, but this particular criticism doesn't hold much water. They used the parts which best suited their design goals at the time. Claiming otherwise does not fit the facts.

The age of a design is irrelevant if it is well matched to the problem to be solved. In this case, there was no clearly better solution on the market. So please, give it a rest.
 
It holds plenty of water, bro. The age of the design IS relevant when there is no upgrade path or options outside of those provided by the manufacturer. The critique of the design isn't even about the cards per se; it's that you are stuck with them, and what you are stuck with is already behind the times. Saying that these cards were somehow cherry-picked as the "best at the time" is marketing speak for "this is what will make us the biggest buck while still being adequate for the consumer's needs." Apple historically has never been one to use top-spec cards. There is nothing great or exceptional about these cards. Apple has done nothing to make them amazing. It's all marketing hype. Hype hype hype.
 
Reducing it to just marketing is oversimplifying, I think.

Buyers want the latest most of the time, especially when they are putting this much money into a computer. But for a first-iteration product, it does pay off to be... conservative(?): use components that have proven to be reliable, components with much more real-world data to feed into the design process, thermal characteristics and so on. These are possible reasons of course, including those you have mentioned. But we do not know for certain; these are only guesses.

Maybe the best way of going about this is to compare it with other ready-made workstations from HP et al. A lot of people seem ready to compare the nMP with DIY computers, probably because that's what most people are familiar with, when realistically the best comparison would be against, for example, the HP Z820 (some reviews have done so). Even so, I am pretty sure the Z820 has plenty of thermal leeway, being a much bigger box, and wouldn't cost as much design-wise since it uses conventional form factors.
 
LoL. The last time I tried to even compare prebuilts, some dude tried to pass off the lowest-spec 12-core nMP as comparable to a 16-core Dell with a Quadro 5000. When I asked the guy if he was deliberately trying to be dishonest by attempting to compare D300s to a single Quadro 5000, he just told me I was being aggressive, then figuratively stuck his fingers in his ears and started to go "NaNaNaNa."
 
I guess I'm "that guy," but the description you gave is not true. The argument was about pricing. You kept repeating (over and over... and over) that the equivalent HP (or Dell, whatever) is much cheaper. But you kept ignoring the parts where the nMP in my example configuration was superior (dual SSDs, TB2, power consumption, etc.). No reason to keep going with the conversation, is there?

Then you just added up the GHz of the PC's cores with a calculator and used the sum as an argument that the PC costs less and offers more. Now that was hilarious (and of course you kept ignoring the points where the nMP was superior, no matter how many times I wrote them down).
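
In case anyone wonders why summing GHz is meaningless: throughput depends on how parallel the workload is, not on the clock total. A toy illustration using Amdahl's law; the 8 cores, 3.5 GHz and 80% parallel fraction are made-up numbers for the example:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p
# over n cores. Compare that with the naive "calculator" GHz total.
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

ghz, cores, p = 3.5, 8, 0.80  # hypothetical PC, 80% parallelizable workload
print(f'"Calculator" total: {ghz * cores:.1f} GHz')
print(f"Actual speedup over one core: {amdahl_speedup(cores, p):.2f}x "
      f"(~{ghz * amdahl_speedup(cores, p):.1f} GHz-equivalent)")
```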

In between you used some other funny arguments like "let me know when the nMP offers an nVidia option," which was totally out of context and can be played from both sides (e.g., let me know when the PC will officially run OS X. See? Easy).

So you are not a big fan of common sense. I respect that. To each his own I guess.
 
DisplayPort monitors plugged directly into the Mac Pro don't consume any TB bandwidth. So no... users are not limited to zero other devices. Besides, no single one of those LaCie drives can soak up 100% of a single TB controller's bandwidth. So one drive on each of the two controllers leaves headroom on both.

" Although you can daisy chain a 4K display onto the back of a Thunderbolt 2 storage device, doing so will severely impact available write bandwidth to that device. [...] I measured less than 4Gbps of bandwidth (~480MB/s) available for writes to a Thunderbolt 2 device downstream from the Mac Pro if it had a 4K display plugged in to it. " - Anand

And yes, at 1300 MB/s, the LaCie drives do appear to max it out.

"So far I’ve been able to sustain 1.38GB/s of transfers (11Gbps) over Thunderbolt 2 on the Mac Pro. Due to overhead and PCIe 2.0 limits (16Gbps) you won’t be able to get much closer to the peak rates of Thunderbolt 2." - Anand

You're absolutely right: TB2 is not PCIe.... I'll stop comparing it when others stop saying it's a substitute.

----------

You keep harping on about the "two year old video card" issue as if it is somehow meaningful in absolute terms. One of the key design goals was to make a system which would perform well for developing and using OpenCL based software.

I'm not saying they're not great cards, but an R9 290X would blow the D700 away at OpenCL. Also, that's assuming people even want to use OpenCL. What if I don't? The nMP becomes an even bigger waste of money for that group.

I know why they used them -- thermal limitations, solid proven technology, excellent binning. However, you are buying this $5,000 machine today and will probably never be able to upgrade the video cards, which are already not exactly top of the line. This is not a small problem.
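
For what it's worth, anyone can see what OpenCL reports for their own GPU before taking either side's word for it. A minimal sketch using the third-party pyopencl package (an assumption on my part, not something anyone in this thread is using):

```python
import pyopencl as cl  # third-party: pip install pyopencl

# List every OpenCL device on the system with a few of the properties
# that matter for compute comparisons like D700 vs. R9 290X.
for platform in cl.get_platforms():
    for dev in platform.get_devices():
        print(f"{dev.name.strip()}: {dev.max_compute_units} compute units, "
              f"{dev.global_mem_size // 2**20} MiB global memory, "
              f"{dev.max_clock_frequency} MHz")
```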
 

Remains to be seen. This smells a bit like back in Q4 '11, when Sandy Bridge-E/EP was running late and Intel launched subsets of the product lineup to limited markets to claim an "introduction," but that was disconnected from mass availability.

The i7 3960X rolled out in the Nov '11 timeframe, followed weeks later by the 3930K, and then finally in Jan '12 the 3820. The E5 1600 variants were announced in March '12 but were in volume shipment by the April-May timeframe.

It would not be surprising at all for Intel to pump out the very high-markup models in time to help with Q2's financial numbers.

But the phased rollout in '11 worked because the infrastructure was all ready to go in Q4 '11. If Intel has been consistently telling vendors Q3 and then moves to Q2 (as the article points out), it may be released, but vendors are likely going to use the same "high markup on low volume" approach to cash in on the limited supply.

Intel does seriously need something better than "Haswell Refresh" to talk about at Computex, though: talks and demos without associated NDAs, as opposed to shipping real products in high volume.
 
The Mac because, well, I always wanted a Mac, so why not go for the top-of-the-line Mac.

The new PC because, well, they are amazing consumer parts, and it would be good for pretty much anything I want to do: games and such.


Really just boils down to money then... if you're loaded and/or just REALLY want the Mac then go for it... otherwise save the cash.

I'm more of a "tools for the job" kind of guy. If you don't do much stuff around the house, you can score a decent tool kit from Wal-Mart for $79 that has most everything you'll need and even some stuff you won't. But if you're a paid contractor, and the tools really matter... there's a $499 kit from Home Depot you should get instead.
 
No no, Husky tools? Nah. It has to be the $499 wrench off the Snap-On truck ;)

I really like the people who sound like high-expectation douches who would snap at a waiter when they talk about how they need a machine that's fast and dependable.
 
This is a common thing around here: focus on what the PC can't do and ignore what the Mac can't do.

Because when people bring up these kinds of examples, they tend to compare consumer-grade PC parts to workstation parts. Obviously it's much cheaper to build it yourself, with no labor costs and overhead, than to buy an OEM-branded workstation or even a consumer PC.
 