Hi Folks,

I am eagerly waiting for the 21 October event, but so far my hopes for an updated nMP with a minor speed bump via Haswell-E v3 have been low.

However, I found this article today, and now I am more thrilled than ever about Haswell-E.
In my eyes it basically gives Apple the chance to really reduce the development workload for that update. Theoretically they would not even need to change the memory architecture for an update: just put the new CPU with its new socket in, keep all the rest, and that's pretty much it.

http://www.anandtech.com/show/8536/intels-haswellep-xeons-with-ddr3-and-ddr4-on-the-horizon

It would be minor, but better than nothing! I mean, do they want us to wait another 4 years for the next speed bump again?

What do you think?

If you look at the CPUs that will support DDR3, none of them are likely candidates for an nMP. Apple will use the 1600 v3 for all but the high end. Those 2600 v3s with DDR3 are not the most compelling CPUs.

An update to the nMP for Haswell would require a completely new CPU board, memory sockets, and I/O board (eliminating discrete USB3), and hopefully new GPUs.

If they skip Haswell, it won't be 4 years, possibly another year though.

They will not bother.

I agree.
 

I'm confused by your mentality over this. Everything you list there as costs for a Haswell update will also be present for a Broadwell update. Why would they wait, when all it does is cause confusion and disillusionment for customers, raise questions with analysts, and reduce sales for the period they don't release?
 
Of course the costs are the same; the problem is that Haswell doesn't offer any tangible performance benefits and runs hotter... so while they might do a Haswell refresh, I can certainly understand why they might not.

If they skip Haswell, it certainly shouldn't create confusion or disillusionment for the well-informed. People won't be missing out on much.
 

Ugh, what a horrible attitude to have. I know Apple share it to a degree, but that doesn't mean you need to go along with it. There are performance benefits from Haswell-EP, and there is no real reason for Apple not to introduce them for new Mac Pro customers. 10% performance might not mean anything to you, but to me it can mean thousands of pounds, and I'm a one-man shop.

The only people losing are end-users, and to me that just reinforces the 'Apple not giving a ****' mentality. Apple not keeping up with current platforms is not a good way to have things.

edit: I don't get the impression you are saying Apple shouldn't, and perhaps you are playing devil's advocate, but really this forum should be pressuring Apple to move forward and update with every generation.
 
Last edited:
10% performance might not mean anything to you, but to me it can mean thousands of pounds, and I'm a one-man shop.
It's always a good thing to get new technologies, and I hope (for new customers) that a refresh will come soon. But let's be honest... completing a 3D render or other computing-intensive, time-consuming tasks in 9h instead of 10h is not going to change our lives 99% of the time, unless your clients pay you £10,000/hour, which seems an unrealistic scenario for most of us... even if you earn £10,000/h, I think there's not a big difference between earning 29,200,000 and 28,908,000 during a year.
 
Last edited:
Ugh, what a horrible attitude to have. I know Apple share it to a degree, but that doesn't mean you need to go along with it. There are performance benefits from Haswell-EP, and there is no real reason for Apple not to introduce them for new Mac Pro customers. 10% performance might not mean anything to you, but to me it can mean thousands of pounds, and I'm a one-man shop.

The only people losing are end-users, and to me that just reinforces the 'Apple not giving a ****' mentality. Apple not keeping up with current platforms is not a good way to have things.

edit: I don't get the impression you are saying Apple shouldn't, and perhaps you are playing devil's advocate, but really this forum should be pressuring Apple to move forward and update with every generation.

As I've mentioned in previous threads on this topic, I started looking into the possibility Apple may skip Haswell-EP based on the fact they appear to have skipped Haswell on the Mac Mini, which is another computer that doesn't benefit much from a Haswell refresh. Now, the Mac Mini likely has no bearing on whether the Mac Pro will get a refresh, but it does offer some possible insights into why it might not.

I'm not saying they should skip it, but I have been looking into why they might skip it, and sharing that here.

I believe the performance gains are somewhere in the single-digit percentages, and this is a great example of where there is virtually zero improvement... in this CPU benchmark, the Haswell 5930K (which is very similar to the 1650v3) is sandwiched in the results between two Ivy Bridge CPUs that are clocked 100MHz faster and 100MHz slower. Of course there are some benchmarks which show some modest improvements, but let's face it, anyone benchmarking Haswell against Ivy is not seeing much.

[attached CPU benchmark chart: 67025.png]
 
Of course the costs are the same; the problem is that Haswell doesn't offer any tangible performance benefits and runs hotter... so while they might do a Haswell refresh, I can certainly understand why they might not.

If they skip Haswell, it certainly shouldn't create confusion or disillusionment for the well-informed. People won't be missing out on much.

Dude (or miss), you are literally posting in every single thread that even mentions the nMP refresh, saying the same exact thing, with the exact same benchmark screenshot (of processors that aren't and will never be used inside of the Mac Pro) to try and back up your theories that aren't based on any sort of sound facts.

Look, we get it. You have a Late 2013 nMP and you'd be highly upset if it were to be updated so soon, like within the year or soon after. That is what is so obvious to anyone who reads your post history. Even when you are confronted by people who know a lot more about processor technology, who understand how they work and could probably write a thesis paper on them, who all say that you're wrong, you still continue to post the same two or three key statements in different threads.

I've ignored you up until now, but it's really a shame that your insecurities about having a slightly outdated machine when the new model drops are leading you to knowingly post such false statements, because it spreads bad information to new or casual forum goers.
 
It's always a good thing to get new technologies, and I hope (for new customers) that a refresh will come soon. But let's be honest... completing a 3D render or other computing-intensive, time-consuming tasks in 9h instead of 10h is not going to change our lives 99% of the time, unless your clients pay you £10,000/hour, which seems an unrealistic scenario for most of us... even if you earn £10,000/h, I think there's not a big difference between earning 29,200,000 and 28,908,000 during a year.

Huh? If your job is largely CPU-bottlenecked and the amount of work you can get done is limited as such, why are we leaving 10% project throughput and pay on the table?

And as with every generation for the last 5 years or so, this is not really about the 10% over Ivy Bridge. Rather, it's about the 20% over Sandy Bridge or the 30% over Westmere. Most people don't upgrade every generation, but every 2 or 3 generations. If you bought with Westmere and decided to sit out Sandy (well, were forced to sit out) and Ivy to wait for a larger upgrade with Haswell, only to find you are now being forced to either sit out Haswell or go back and buy the Ivy stuff you decided to sit out, then I think you'd be pretty freaken pissed.

That's why this attitude that it wouldn't cause confusion and disillusionment for those in the know is just complete BS. Many toughed it out through skipping Sandy Bridge, waiting for the upgrade that never came (remember the Westmere "refresh" and price drop?), then saw a completely new product with Ivy Bridge that many folks thought was coming at least one generation too early given core count/PCIe lane issues. Now, with Haswell, Intel has made some progress that fixes some of the criticisms of the nMP, but somehow skipping this generation won't cause complaints? Err, no.
 
Last edited:
I believe the performance gains are somewhere in the single-digit percentages, and this is a great example of where there is virtually zero improvement... in this CPU benchmark, the Haswell 5930K (which is very similar to the 1650v3) is sandwiched in the results between two Ivy Bridge CPUs that are clocked 100MHz faster and 100MHz slower. Of course there are some benchmarks which show some modest improvements, but let's face it, anyone benchmarking Haswell against Ivy is not seeing much.

That's not really being fully truthful here. The 1660v3 makes getting into the 8-core about $1,000 cheaper once we add in the Apple markups. The 1660v3 is 8 cores at 3.0GHz and retails at $1,080; the 1680v2 is 8 cores at 3.0GHz and retails at $1,723. We'll have to see how the benchmarks for everything come out, but it's probably 5% clock-for-clock faster with the same core count, so you're going to see 5% better performance with a decent price drop. We will see what Apple does; they could stick with the 1680 v3, which is $1,700 and just clocked 200MHz faster. But if they go with the 1660v3, I'd bet the 8-core becomes the sweet spot for the nMP v2.
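
To put rough numbers on that claim, here is a back-of-the-envelope sketch in Python. The tray prices are the ones quoted above, but the 1.5x pass-through markup is purely an assumption to illustrate how a $643 tray-price gap could become "about $1,000" at Apple's BTO prices:

```python
# Intel tray prices quoted above (USD).
e5_1680_v2 = 1723  # 8 cores @ 3.0GHz, Ivy Bridge-EP
e5_1660_v3 = 1080  # 8 cores @ 3.0GHz, Haswell-EP

tray_delta = e5_1680_v2 - e5_1660_v3
print(f"tray price gap: ${tray_delta}")  # tray price gap: $643

# Hypothetical retail pass-through markup on BTO CPU options
# (an assumption for illustration, not a known Apple figure).
MARKUP = 1.5
print(f"retail gap at {MARKUP}x markup: ~${tray_delta * MARKUP:.0f}")  # ~$964
```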

It's too bad the 4-core and 6-core offerings from Intel didn't move much (it would have been nice if the 1630 had become a 6-core), so I can see some criticism there. And we'll see where Apple goes with the top end. The 2697 is now 14 cores with a 100MHz lower base frequency, but a 100MHz higher turbo. So I wouldn't read too much into the lower clock rate. It may actually do most of its work at the same frequency, and of course with the clock-for-clock increases.
 
Dude (or miss), you are literally posting in every single thread that even mentions the nMP refresh, saying the same exact thing, with the exact same benchmark screenshot (of processors that aren't and will never be used inside of the Mac Pro) to try and back up your theories that aren't based on any sort of sound facts.

Look, we get it. You have a Late 2013 nMP and you'd be highly upset if it were to be updated so soon, like within the year or soon after. That is what is so obvious to anyone who reads your post history. Even when you are confronted by people who know a lot more about processor technology, who understand how they work and could probably write a thesis paper on them, who all say that you're wrong, you still continue to post the same two or three key statements in different threads.

I've ignored you up until now, but it's really a shame that your insecurities about having a slightly outdated machine when the new model drops are leading you to knowingly post such false statements, because it spreads bad information to new or casual forum goers.

I don't think I'm being irrational here. I think it's a stretch to try and portray my persistence on this topic as some sort of self-serving interest or insecurity. Trying to guess at my motivations rather than disputing the facts isn't going to be very productive. I've been buying computers since 1984, and I'm fully aware that the Mac Pro I purchased 9 months ago is already obsolete (before I bought it, if you listen to some of the nMP detractors) and worth a fraction of what I paid.

And I wouldn't be so quick to judge who's qualified to write a thesis on CPU technology.

All this irrational behaviour nonsense aside, I guess my questions to you would be...
1. What makes you so certain that Apple will do a Haswell refresh of the nMP?
2. What would you say to all the people who thought a Haswell refresh of the Mac Mini was "certainly coming" in December 2013?
3. How would you respond to someone that asks... "Should I buy a nMP now or wait for a Haswell refresh?"

Believe it or not, I'm actually trying to answer these questions through research and discussion.

That's not really being fully truthful here. The 1660v3 makes getting into the 8-core about $1,000 cheaper once we add in the Apple markups. The 1660v3 is 8 cores at 3.0GHz and retails at $1,080; the 1680v2 is 8 cores at 3.0GHz and retails at $1,723. We'll have to see how the benchmarks for everything come out, but it's probably 5% clock-for-clock faster with the same core count, so you're going to see 5% better performance with a decent price drop. We will see what Apple does; they could stick with the 1680 v3, which is $1,700 and just clocked 200MHz faster. But if they go with the 1660v3, I'd bet the 8-core becomes the sweet spot for the nMP v2.

It's too bad the 4-core and 6-core offerings from Intel didn't move much (it would have been nice if the 1630 had become a 6-core), so I can see some criticism there. And we'll see where Apple goes with the top end. The 2697 is now 14 cores with a 100MHz lower base frequency, but a 100MHz higher turbo. So I wouldn't read too much into the lower clock rate. It may actually do most of its work at the same frequency, and of course with the clock-for-clock increases.

You're right, they could offer similar performance at a lower price or more performance at the same price (although again, how much is debatable). Based on Apple's past, and the deep pockets and performance expectations of their target market for this machine, I'm inclined to expect the latter. But perhaps the sales of the 8/12 core have been so limited that they feel compelled to bring the price point down to something a bit more realistic. I'm not sure I'd agree a $5K 8-core would be the sweet spot, but it's a little bit more palatable. And you're right, the rest of the line up would remain unchanged... 4-core at $3K, 6-core at $4K, and 12 or 14-core at $7K (and maybe even an 18-core at $10K or something). :eek:

EDIT: Some other random thoughts... When Anand reviewed the 2013 Mac Pro, he noted that significant performance gains for workstation users were going to come from future GPUs, not CPUs. While everyone remains fixated on Intel's CPU release cadence, I wonder if it's much less relevant than the GPU refresh cycle now. And I would suggest that despite Apple's stunning and most unexpected redesign of the Mac Pro (remember, most people here had assumed the product was dead)... Apple's focus is on mobile. And Intel's main competition and threat is ARM. Haswell and Broadwell are clear indications of where Intel is going: they are all about improvements in SoCs, efficiency, on-die GPUs, and increasing core counts (at lower clock speeds if necessary)... none of which caters to the workstation CPU crowd.
 
Last edited:
If your job is largely CPU-bottlenecked and the amount of work you can get done is limited as such, why are we leaving 10% project throughput and pay on the table?
I've already explained this... 10% more power is not going to significantly change our lives or our pay... There's little difference between being paid $1,000 or $1,100 (at least for me).
My work is largely CPU-bottlenecked for some tasks, and therefore I've done what every other pro is doing: I've purchased a small farm for intensive computation. Paying for 10% (or even 50%) more power is just silly and will not affect the quality or speed of my work, while a 500% speed increase makes a huge difference for me, my clients and my pay.
I'm not saying that Apple should skip Haswell, and if I have to purchase a new machine in 2015 I'll be disappointed if it's still Ivy Bridge. I'm just saying that in everyday work it's not a big improvement.
 
Last edited:
Apple is sitting on a billion dollars of R&D and an inventory of DDR3 Ivy Xeon main boards. They won't be doing Haswell and DDR4. Not for a lousy 10%. They will get the old chips out of Intel for the reduced price of the new ones (or even less) and pocket the cash.
 
Apple is sitting on a billion dollars of R&D and an inventory of DDR3 Ivy Xeon main boards.

Not sure what you are smoking. The R&D for the Mac Pro didn't cost anywhere near $1B. Even if you do a hand-waving sweeping-in of costs (a sweeping generalization of the total of OS X, all bundled software, the Mac App Store, etc., i.e., anything tangentially touching the Mac Pro), the CPU package, RAM, and chipset costs are nowhere near that. Did it cost Intel around $1B to develop the whole Xeon E5 vX line-up and associated chipset? Maybe, but that isn't really Apple's problem or burden.

Apple likely doesn't have a huge inventory of anything, let alone DDR3 Mac Pro boards. Every week Apple execs sit down and look at how much each of their products is selling. If sales crater for an extended period of time, they can turn the "order" switch off or down.

More than likely Apple does not have a huge investment in Mac Pro R&D. There are no overlapping large development teams that work on multiple generations at once. So with a very small team, the delay is far more likely because the previous Mac Pro wasn't done until relatively deep into the second half of 2013, so they could not spend full time on the 2014 version. It likely isn't coming abnormally quickly because they didn't start abnormally early.

If Apple's objective is to get the most return on board-design investment, they would be interested in getting off v2 so they can get onto a board design they can use for a full tock/tick cycle. If you wanted to minimize return on investment on Xeon E5 class motherboard R&D, you would jump from each "dead end" tick chip offering to the next dead-ender. Where the Mac Pro 2013 jumped into the design cycle was a bozo, not brilliant, long-term R&D ROI move. Apple had to do it because they screwed up in 2010-2011 and had TB v2 arrival sync problems, not because it was an optimal investment move.

Glut of Mac Pro parts? Which retailers are having "fire sale" discounts on Mac Pros? From Dec '13 until around April '14, Apple couldn't make enough Mac Pros. That is the polar opposite of an inventory glut. If you go to the online store and select a non-standard configuration (e.g. 8 cores), the delivery time changes from 24 hours to days. Doesn't sound like a huge pile of parts clogging the warehouse to me.

They won't be doing Haswell and DDR4. Not for a lousy 10%. They will get the old chips out of Intel for the reduced price of the new ones (or even less) and pocket the cash.

It pragmatically isn't 10%. A Xeon v3 Mac Pro is targeted at folks with Westmere (3600/5600) and earlier class machines. Three iterations of compounded 10% gains are much closer to a 33% increase; four iterations, 46%; five iterations, 61%. Coupled with more affordable access to higher core counts over much of the line-up, there are advantages to moving up if not primarily focused on just running 4+ year old software faster.
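
A quick sanity check of that compounding, as a sketch in Python (the flat 10% per-generation gain is the assumption from the paragraph above, not a measured figure):

```python
# Compound a flat 10% per-generation gain to estimate the cumulative
# speedup seen by someone upgrading after skipping several generations.
PER_GEN_GAIN = 0.10  # assumed uplift per CPU generation

for generations in (3, 4, 5):
    cumulative = (1 + PER_GEN_GAIN) ** generations - 1
    print(f"{generations} generations: ~{cumulative:.0%} faster")

# 3 generations: ~33% faster
# 4 generations: ~46% faster
# 5 generations: ~61% faster
```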

----------

I've already explain this... 10% more power is not going to significantly change our life or our pay... There's little difference to be payed 1000$ or 1100$(at least for me).

Your analysis is rather myopic. The difference of $100 per day (or week) over 3 years (a rather conservative product lifetime for a Mac Pro class system) is 3 * 230 days * $100 = $69,000 (in weeks: 3 years * 46 weeks * $100 = $13,800) [that is just a 46-week "work year"... so 6 weeks of non-revenue-generating "vacation"]. Both of those sums are more than enough to pay for a highly upgraded Mac Pro and still generate a profit (never mind the amortized write-off, if allowed through taxes).

"it is only an hour a day" adds up over 3-5 years product lifetime. That's what is being left on the table. Not just the hour of a single day. It is a years time frame since it is a years product.
 
I've already explained this... 10% more power is not going to significantly change our lives or our pay... There's little difference between being paid $1,000 or $1,100 (at least for me).
And if it's the difference between $50K and $55K?

My work is largely CPU-bottlenecked for some tasks, and therefore I've done what every other pro is doing: I've purchased a small farm for intensive computation. Paying for 10% (or even 50%) more power is just silly and will not affect the quality or speed of my work, while a 500% speed increase makes a huge difference for me, my clients and my pay.

Sure, if you do enough business to make it worth getting a small cluster, then great. But then you're not really the main person to gain from an nMP, are you?

I'm not saying that Apple should skip Haswell, and if I have to purchase a new machine in 2015 I'll be disappointed if it's still Ivy Bridge. I'm just saying that in everyday work it's not a big improvement.

I think this is how people who are on 2010 Mac Pros would feel. It isn't necessarily just about getting the 10% from Ivy to Haswell, but maybe rather about getting the 30% from Westmere to Haswell instead of the 20% from Westmere to Ivy in early 2014.

----------

You're right, they could offer similar performance at a lower price or more performance at the same price (although again, how much is debatable). Based on Apple's past, and the deep pockets and performance expectations of their target market for this machine, I'm inclined to expect the latter. But perhaps the sales of the 8/12 core have been so limited that they feel compelled to bring the price point down to something a bit more realistic. I'm not sure I'd agree a $5K 8-core would be the sweet spot, but it's a little bit more palatable. And you're right, the rest of the line up would remain unchanged... 4-core at $3K, 6-core at $4K, and 12 or 14-core at $7K (and maybe even an 18-core at $10K or something). :eek:

If Apple goes for the 1660v3, maybe we'll see the 8-core in the $4,500-5,000 range. I'd argue that would be better than the $4K 6-core. But it will depend on how other features line up, like the GPU and what Apple does with the standard RAM.

EDIT: Some other random thoughts... When Anand reviewed the 2013 Mac Pro, he noted that significant performance gains for workstation users were going to come from future GPUs, not CPUs. While everyone remains fixated on Intel's CPU release cadence, I wonder if it's much less relevant than the GPU refresh cycle now. And I would suggest that despite Apple's stunning and most unexpected redesign of the Mac Pro (remember, most people here had assumed the product was dead)... Apple's focus is on mobile. And Intel's main competition and threat is ARM. Haswell and Broadwell are clear indications of where Intel is going: they are all about improvements in SoCs, efficiency, on-die GPUs, and increasing core counts (at lower clock speeds if necessary)... none of which caters to the workstation CPU crowd.

Aren't the GPUs also lined up to be upgraded in the same time frame as the 1600 v3 will be available? Wouldn't that make it even more nonsensical for Apple to skip an early 2015 update?
 
Apple is sitting on a billion dollars of R&D and an inventory of DDR3 Ivy Xeon main boards. They won't be doing Haswell and DDR4. Not for a lousy 10%. They will get the old chips out of Intel for the reduced price of the new ones (or even less) and pocket the cash.

Source?

...and by the way, the CPU and memory are not on the main board - they're on a daughtercard.
 
And if it's the difference between $50K and $55K?
Like every other person doing my job, I have some fluctuation in my yearly revenues. Some years I earn $100,xxx, some years $90,xxx, other years only $80,xxx; that's just how things go for independent workers. Believe me, my lifestyle is about the same every year, and I do not feel doomed for losing a small percentage of my pay. If next year I earn only $20,xxx, now that will be a problem, but it will certainly not depend on a CPU upgrade.
That's why it's ridiculous to think that a 10% CPU speedup is so important; there's a much bigger chance that you'll have some revenue fluctuation (unless you are a salaried worker) that is absolutely unrelated to your hardware configuration.
 
Last edited:
Aren't the GPUs also lined up to be upgraded in the same time frame as the 1600 v3 will be available? Wouldn't that make it even more nonsensical for Apple to skip an early 2015 update?

Discrete GPU updates are independent of CPUs. TSMC provides the silicon to both AMD and Nvidia, whereas Intel has its own fabs. The interesting thing is that both AMD and Nvidia have been on a 28nm process for a few years now. While TSMC has 20nm in production, it seems the only silicon coming out of those fabs is Apple's A8 SoC. Maybe Apple has purchased all of TSMC's 20nm production for the foreseeable future?

Meanwhile, AMD and Nvidia are stuck in a rut trying to innovate on the same 3-year-old process. As a result, AMD's latest Hawaii and Tonga cores are incremental improvements over the Tahiti cores (used in the 2013 nMP). Nvidia's latest GPUs are showing better results, but still nothing like the doubling of performance you'd expect from new GPUs benefiting from a process shrink.

From AnandTech's opening paragraph on the GTX980 the other day...
At the risk of sounding like a broken record, the biggest story in the GPU industry over the last year has been over what isn’t as opposed to what is. What isn’t happening is that after nearly 3 years of the leading edge manufacturing node for GPUs at TSMC being their 28nm process, it isn’t being replaced any time soon. As of this fall TSMC has 20nm up and running, but only for SoC-class devices such as Qualcomm Snapdragons and Apple’s A8. Consequently if you’re making something big and powerful like a GPU, all signs point to an unprecedented 4th year of 28nm being the leading node.

The bottom line for nMP refresh options... really only consists of an incremental improvement in GPUs from AMD, which are reported to be running 15 degrees hotter than a Tahiti (meaning even more down-clocking, possibly negating any performance gains), or perhaps, and this would be most interesting, a switch to Nvidia's latest GPU. Although the latter is probably more in the hands of Nvidia and their willingness to spin a custom design for the nMP at lower margins than they're used to.

Only when AMD has access to TSMC's 20nm silicon will we see a clear and truly next-gen GPU option in the nMP.
 
Last edited:
EDIT: Some other random thoughts... When Anand reviewed the 2013 Mac Pro, he noted that significant performance gains for workstation users were going to come from future GPUs, not CPUs. While everyone remains fixated on Intel's CPU release cadence, I wonder if it's much less relevant than the GPU refresh cycle now.

I'm not seeing much in the way of workstation application support for GPUs outside of the video/3D world today, and while that's an important area, it covers only a small fraction of workstation users.

If and when GPUs do become relevant for a large proportion of the workstation market, I suspect it will be long after the current generation of GPUs is obsolete.

As to the notion that Haswell-E doesn't offer a meaningful improvement in performance over Ivy Bridge-E, I'd argue it makes for a bigger improvement than Ivy Bridge-E over Sandy Bridge-E in most CPU-intensive tasks. No, it's not the 20-30% we used to see (outside of certain specialized areas like AVX), but it's not meaningless.
 
I agree with you about GPU software optimization, but if software vendors want to start seeing any performance improvements in their apps, they are going to have to adopt it sooner rather than later.

As for the performance differences between Haswell, Ivy, and Sandy, there's no need to argue. AnandTech's recent review of the Haswell-E compared all three generations...

Sandy Bridge 3930K = 6-cores at 3.2GHz-3.8GHz
Ivy Bridge 4930K = 6-cores at 3.4GHz-3.9GHz
Haswell 5930K = 6-cores at 3.5GHz-3.7GHz

Handbrake: (higher is better)
3930K = 27.69
4930K = 29.69 (+7%)
5930K = 31.12 (+5%)

Cinebench: (higher is better)
3930K = 977
4930K = 1043 (+7%)
5930K = 1083 (+4%)

WinRAR: (lower is better)
3930K = 45.99
4930K = 43.79 (-5%)
5930K = 44.95 (+3%)

3DPM: (higher is better)
3930K = 967.68
4930K = 1024.55 (+6%)
5930K = 968.59 (-6%)

As you can see, Sandy to Ivy offered anywhere from a 5-7% improvement. Haswell is a different story... at best it's 5% better than Ivy, and at worst it's a regression to Sandy levels.
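
Those generation-over-generation percentages are easy to reproduce from the raw scores, give or take rounding (a quick sketch using the AnandTech numbers above; note that for the "lower is better" tests the sign is flipped so that positive always means an improvement, which is why the WinRAR signs differ from the raw deltas quoted above):

```python
# Generation-over-generation deltas from the AnandTech scores listed above.
# Tuples are (Sandy 3930K, Ivy 4930K, Haswell 5930K) scores.
benchmarks = {
    "Handbrake": ((27.69, 29.69, 31.12), True),    # higher is better
    "Cinebench": ((977, 1043, 1083), True),
    "WinRAR":    ((45.99, 43.79, 44.95), False),   # lower is better
    "3DPM":      ((967.68, 1024.55, 968.59), True),
}

for name, ((sandy, ivy, haswell), higher_is_better) in benchmarks.items():
    ivy_gain = ivy / sandy - 1        # Ivy over Sandy
    haswell_gain = haswell / ivy - 1  # Haswell over Ivy
    if not higher_is_better:          # flip sign: a lower time is a gain
        ivy_gain, haswell_gain = -ivy_gain, -haswell_gain
    print(f"{name}: Ivy {ivy_gain:+.0%}, Haswell {haswell_gain:+.0%}")
```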
 
True Innovation Is Creatively Using What You Do Have Or Can Readily Acquire

Most of my hammers, saws, drills, wrenches, screwdrivers and other household tools of today aren't significantly different from those my father owned when I was born more than 60 years ago. But what I use them to build, or otherwise work on, is not set in stone. I strongly suspect that my father, who died when I was a child, would never have imagined that I would use many of those same tools to modify and build computers.
Discrete GPU updates are independent of CPUs. TSMC provides the silicon to both AMD and Nvidia, whereas Intel has its own fabs. The interesting thing is that both AMD and Nvidia have been on a 28nm process for a few years now. While TSMC has 20nm in production, it seems the only silicon coming out of those fabs is Apple's A8 SoC. Maybe Apple has purchased all of TSMC's 20nm production for the foreseeable future? ... . Only when AMD has access to TSMC's 20nm silicon will we see a clear and truly next-gen GPU option in the nMP.

We shouldn't expect that enhancements of tools like those I listed above, or substantial improvements to computer parts like CPUs and GPUs, will always proceed at a regular, steady pace, because variables such as, but not limited to, market power may influence who gets what and when. But being in what could be characterized as a "rut", i.e., unable to get that which it wants most, shouldn't stop a business from innovating with what it has available. So I applaud Nvidia for creatively innovating with what it has. My guess is that there's a lot of performance-enhancing capacity in what's available here and now; it just isn't being used creatively. Moreover, it may turn out that a crisis, such as not having access to TSMC's 20nm silicon, is exactly the circumstance that spurs the most innovation. Thus, to Nvidia, its partners, its customers and insightful onlookers, being forced to continue using 28nm in production may have been a blessing in disguise.

I'm not seeing much in the way of workstation application support for GPUs outside of the video/3D world today, and while that's an important area, it covers only a small fraction of workstation users.

If and when GPUs do become relevant for a large proportion of the workstation market, I suspect it will be long after the current generation of GPUs is obsolete.

In the media and entertainment sectors, programmers have, e.g., incorporated CUDA into animation, modeling, rendering, color correction, grain management, compositing, finishing, effects editing, encoding and digital distribution, on-air graphics, on-set, review and stereo tools, and simulation applications. Although the programmers of applications in the media and entertainment industries are prolific in incorporating GPGPU computing into their software, programmers of applications in many other large fields have done the same.

According to Nvidia [ http://www.nvidia.com/content/tesla/pdf/gpu-apps-catalog-mar14-digital-fnl-hr.pdf ], GPU-accelerated applications have revolutionized the High Performance Computing (HPC) industry. There are over two hundred applications, across a wide range of fields, already optimized for CUDA. In addition to media and entertainment, these fields include the following:

1) Programmers have created CUDA applications to assist, e.g., in research at institutions of higher education and in HPC supercomputing, by creating applications used in chemistry, biology and physics that take advantage of CUDA. That includes molecular dynamics, quantum chemistry, materials science, visualization and docking, bioinformatics and numerical analytics applications;

2) Programmers of defense and intelligence applications have, e.g., incorporated CUDA to provide faster geospatial visualization and analysis, multi-machine distributed object store providing SQL style query capability, advanced geospatial query capability, heatmap generation, and distributed rasterization services;

3) Programmers of computational finance applications have, e.g., incorporated CUDA to provide for faster real-time hedging, valuation, derivative pricing and risk management (such as catastrophic risk modeling for earthquakes, hurricanes, terrorism, and infectious diseases), visual big-data exploration, insight tools, and regulatory compliance and enterprise-wide risk transparency packages;

4) Programmers of manufacturing CAD and CAE applications have, e.g., incorporated CUDA to enhance computational fluid dynamics, computational structural mechanics analyses, computer aided design, and electronic design automation;

5) Programmers of weather and climate forecasting applications have, e.g., incorporated CUDA to enhance weather graphics, and research and prediction with regional and global atmospheric and ocean modeling; and

6) Programmers of oil and gas industry applications have, e.g., incorporated CUDA to enhance seismic processing and interpretation and reservoir modeling.


Thus, GPGPU computing is currently relevant to lots of workstation users working in various industries. The GPGPU ball is on the software creators' side of the court - will we see a great swing or a horrible miss?

I agree with you about GPU software optimization, but if software vendors want to start seeing any performance improvements in their apps, they are going to have to adopt it sooner rather than later.
... .

Absolutely correct.

... .
Meanwhile, AMD and Nvidia are stuck in a rut trying to innovate on the same 3-year-old process. As a result, AMD's latest Hawaii and Tonga cores are incremental improvements over the Tahiti cores (used in the 2013 nMP). Nvidia's latest GPUs are showing better results, but still nothing like the doubling of performance you'd expect from new GPUs benefiting from a process shrink.

From AnandTech's opening paragraph on the GTX980 the other day...


The bottom line for nMP refresh options... really only consists of an incremental improvement in GPUs from AMD, which are reported to be running 15 degrees hotter than a Tahiti (meaning even more down-clocking, possibly negating any performance gains), or perhaps, and this would be most interesting, a switch to Nvidia's latest GPU. Although the latter is probably more in the hands of Nvidia and their willingness to spin a custom design for the nMP at lower margins than they're used to... .


Nvidia has innovated itself out of a rut. Nvidia (with the new GTXs) is achieving parity with AMD in OpenCL performance. [ http://www.brightsideofnews.com/2014/09/18/geforce-gtx-980-review-performance-lower-power/ ]. It's not that I never expected Nvidia to achieve parity; it's just that I wasn't expecting it so soon and under these circumstances. Bad times and competitiveness can motivate those who prefer not sitting in the corner, saying "Woe is me."

So I see a perfect reason for Apple to introduce a Haswell MP - if only to get Maxwell CUDA cards in them.

As to the notion that Haswell-E doesn't offer a meaningful improvement in performance over Ivy Bridge-E, I'd argue it makes for a bigger improvement than Ivy Bridge-E over Sandy Bridge-E in most CPU-intensive tasks. No, it's not the 20-30% we used to see (outside of certain specialized areas like AVX), but it's not meaningless.


I also believe, in the absence of Apple switching to CUDA cards for the next MP, that Apple should introduce a Haswell MP, because the added performance of Haswell, albeit not massive, may be enough for many others who don't own the 2013 MP - a vastly larger market than that composed of current 2013 Mac Pro owners - or, at the very least, that Apple should tell us now what its plans are. Apple could creatively innovate with Haswell. Also, when I spend that amount of money on a system, I want one that I can easily upgrade myself, like the cMP. Otherwise, I will continue to refrain from purchasing a new Mac Pro because I have no idea what Apple's Mac Pro roadmap is. Apple's recent handling of its plans for the Mac Pro has left a bad taste in my mouth - too much secrecy, then dropping the dual-CPU option. Apple shouldn't revel in surprises when it comes to purchasers who use their products to make money.
 
Last edited:
If they are cheaper, the hair on my head will grow back.

Though Intel will be giving Apple a discount, yields will be lower at the start, and the new chips may be slightly dearer than what Apple was paying for an already proven bit of Intel silicon. The extra cost of DDR4 ECC sticks over the current DDR3 will probably make the machine more expensive, I reckon.

It would be great if they built a bigger can with a bigger hexagonal thermal core instead of the current three-sided one; having had a couple to bits, they look quite feasibly scalable to cool more cards around the core. Sadly, I'm not holding my breath for that happening.
 
Also, historically server-class Xeons have not dropped in price soon after the next generation starts shipping.

The main reason is that many server manufacturers continue to build older-generation systems for customers who want to expand current setups (e.g. I have 31 dual-socket E5-2630-v2 systems in my lab. I need to buy six or seven more shortly - so I'll order E5-2630-v2 to have identical systems. For the next project I'll likely move to E5-x6xx-v3 CPUs - but even then I may still buy a few v2 systems for the current setup).

There's also strong upgrade demand - I'm looking at swapping the 6-core CPUs for 8-core or 12-core in a couple of servers that need more power.

The price dynamics around CPU generation changes for consumer Core i* chips don't apply to Xeons.

For consumer chips, Intel drops production and retires the older generation fairly quickly. For Xeons, Intel continues sales in parallel for a longer time, and gives plenty of notice as to when the "last order date" will be.
 
Are there any good reasons why Apple doesn't use Core i-series chips instead of Xeons?

Is it possible for them to build their Mac Pros with Core i-series chips?
 
Are there any good reasons why Apple doesn't use Core i-series chips instead of Xeons?

Some tasks/applications may require ECC memory.

Is it possible for them to build their Mac Pros with Core i-series chips?

Absolutely. Since i7s don't support ECC RAM, they could dispense with that RAM for such systems, offering/supporting even faster consumer-grade RAM, as lots of motherboards sold to builders do, and offer Xeons for those who want/need ECC memory. It would also be a bit easier now because Apple doesn't offer dual-CPU solutions - Apple could use the same hardware except for the RAM and the CPU. For example, for gamers, Apple could offer a utility to benefit from memory faster than 1866 MHz, the current top speed for ECC memory in an Ivy Bridge system. That's why Newegg and others offer and sell memory much faster (2600 MHz+) than that speed.
 
Last edited: