....It's actually a bit damning that the v3 10-core shows a 20% improvement over the v2 8-core in that benchmark with 25% more cores...

The 12-core E5-2690 v3 has 19.5% more performance on the multi-threaded Whetstone 2 test than the 10-core E5-2690 v2, despite having a base clock of 2.6GHz vs 3.0GHz (13.3% slower). So the v3 has 20% more cores that each run 13.3% slower, yet it produced nearly 20% more performance.
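To make that arithmetic concrete, here's a quick back-of-envelope sketch using the figures above: naive cores-times-clock scaling predicts only a small gain, well short of what was measured.

```python
# Naive prediction: throughput scales with core count times clock speed.
cores_ratio = 12 / 10        # v3 has 20% more cores than the v2
clock_ratio = 2.6 / 3.0      # ...each at a 13.3% lower base clock
naive_scaling = cores_ratio * clock_ratio
observed_gain = 1.195        # 19.5% more multi-threaded Whetstone throughput

print(f"naive cores*clock prediction: {naive_scaling:.2f}x")  # ~1.04x
print(f"observed gain:                {observed_gain:.2f}x")
```

The gap between ~1.04x predicted and ~1.20x observed is what per-core (IPC) improvements and scaling behavior have to explain.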

Amdahl's Law states that performance increases rapidly taper off with increasing core counts: https://en.wikipedia.org/wiki/Amdahl's_law

Considering this, the v3 achieving that amount of performance increase is impressive.
 
Amdahl's Law states that performance increases rapidly taper off with increasing core counts: https://en.wikipedia.org/wiki/Amdahl's_law.

You don't even understand the link that you posted.

The topic sentence of the second paragraph at that link says "The speedup of a program using multiple processors in parallel computing is limited by the time needed for the sequential fraction of the program".

In other words, if 50% of your app is single-threaded ("sequential"), then an infinite number of cores can only double your speed.

That's completely different from "Amdahl's Law states that performance increases rapidly taper off with increasing core counts". If the sequential fraction of your program approaches zero, then additional cores give you nearly perfect scaling. According to Amdahl's Law.
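Amdahl's Law is just one formula: speedup(N) = 1 / (s + (1 - s)/N), where s is the sequential fraction and N the core count. A quick sketch of the two cases described above:

```python
def amdahl_speedup(serial_fraction: float, cores: float) -> float:
    """Amdahl's Law: speedup = 1 / (s + (1 - s) / N)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# 50% sequential: even a vast number of cores only approaches 2x.
print(round(amdahl_speedup(0.5, 1_000_000), 3))   # ~2.0
# Sequential fraction near zero: almost perfect scaling on 12 cores.
print(round(amdahl_speedup(0.001, 12), 2))        # ~11.87
```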
 
I read through the Tom's review and found it extremely difficult to draw conclusions from because they're comparing very different CPUs (8-, 10-, and 12-core parts with similar model numbers across generations).

While I've posted a subset of this previously in another thread on the subject of Haswell, I think it's worth repeating as it's the best cross-generational set of benchmarks I've found.

In AnandTech's Haswell-E Review (effectively the same as EP without ECC) they compared fairly equivalent 6-core CPUs from the last three generations...

Sandy Bridge 3930K = 6-cores at 3.2GHz-3.8GHz
Ivy Bridge 4930K = 6-cores at 3.4GHz-3.9GHz
Haswell 5930K = 6-cores at 3.5GHz-3.7GHz

All of these are going to top out at around 3.6-3.7GHz under load.

Here's the results summarized...

Handbrake LQ: (higher is better)
3930K = 473.8
4930K = 520.8 (10% improvement)
5930K = 527.36 (1% improvement)

Handbrake 4K: (higher is better)
3930K = 27.69
4930K = 29.69 (7% improvement)
5930K = 31.12 (5% improvement)

AgiSoft: (lower is better)
3930K = 14.92
4930K = 13.87 (7% improvement)
5930K = 14.08 (2% regression)

WinRAR: (lower is better)
3930K = 45.99
4930K = 43.79 (5% improvement)
5930K = 44.95 (3% regression)

Hybrid x265 4K: (higher is better)
3930K = 1.84
4930K = 2.05 (11% improvement)
5930K = 2.04 (0%)

Cinebench Single Thread: (higher is better)
3930K = 132
4930K = 140 (6% improvement)
5930K = 146 (4% improvement)

Cinebench Multi Thread: (higher is better)
3930K = 977
4930K = 1043 (7% improvement)
5930K = 1083 (4% improvement)

3DPM Single Thread: (higher is better)
3930K = 120.19
4930K = 126.35 (5% improvement)
5930K = 120.0 (5% regression)

3DPM Multi Thread: (higher is better)
3930K = 967.68
4930K = 1024.55 (6% improvement)
5930K = 968.59 (6% regression)

FastStone: (lower is better)
3930K = 44
4930K = 41 (7% improvement)
5930K = 41 (0%)

As you can see, Sandy to Ivy offered anywhere from 5-11% improvement. Haswell is a different story... at best (in this set of benchmarks) it's 5% better than Ivy and at worst, it's a regression to Sandy levels.
 
You don't even understand the link that you posted...If the sequential fraction of your program approaches zero, then additional cores give you nearly perfect scaling. According to Amdahl's Law.

Yes and no. I've heard several theorems that back up that multicore performance improvements will eventually taper off. One that uses Amdahl's Law is that as core counts approach a very large number, code itself can only be divided a finite number of times. Therefore, unless a lot of algorithms expand in complexity dramatically, our core counts will eventually exceed what we can actually use.

The other theorem is that eventually the cost to manage one more core will be computationally more than that core contributes, so each additional thread would create a net loss in performance.
 
Yes and no. I've heard several theorems that back up that multicore performance improvements will eventually taper off...so each additional thread would create a net loss in performance.

Whether true or not for particular applications, neither of those "theorems" is the same as "Amdahl's Law". (The topic sentence of the second paragraph.)
 
You don't even understand the link that you posted...That's completely different from "Amdahl's Law states that performance increases rapidly taper off with increasing core counts". If the sequential fraction of your program approaches zero, then additional cores give you nearly perfect scaling...

I assure you I understand it and have worked on multiprocessor computers for 30 years at a system level.

At a fixed, unchanging % of serialized code -- IOW a single benchmark -- scalability quickly plateaus as core count increases. That's what each line graph on the Amdahl's Law chart illustrates.

The benchmark in question is multi-threaded Whetstone. Whetstone uses mainly global variables requiring protection of critical sections via synchronization mechanisms such as spinlocks (Nanda, 1992).

While we don't know the exact serialized code fraction of the Whetstone benchmark, the literature is very clear that global variables are a significant internal component. This variant of the test is multi-threaded, not multi-process: all threads run within a single process and share access to global data, which requires synchronization protection. Without that, the program would corrupt data and likely crash.

For such a workload, showing any significant scaling as the core count increases from 10 to 12 is an achievement.
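Putting rough numbers on that: under Amdahl's Law, even a small serial fraction eats most of the ideal 1.20x from two extra cores. A quick sketch:

```python
def amdahl_speedup(s: float, n: int) -> float:
    """Amdahl's Law: speedup = 1 / (s + (1 - s) / n)."""
    return 1.0 / (s + (1.0 - s) / n)

# Ideal gain from 10 -> 12 cores is 1.20x; watch it shrink as the
# serial fraction s grows.
for s in (0.0, 0.05, 0.10, 0.20):
    gain = amdahl_speedup(s, 12) / amdahl_speedup(s, 10)
    print(f"s = {s:.2f}: {gain:.3f}x from two extra cores")
```

With just 5% of the work serialized, the 1.20x ideal drops to about 1.12x; at 20% serialized it is roughly 1.05x.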
 
I assure you I understand it and have worked on multiprocessor computers for 30 years at a system level...Running such a workload, showing any significant scalability as core count increases from 10 to 12 is an achievement.

Then we agree that it's application-dependent - some apps may scale nearly perfectly going from 10 to 12 cores.
 
Thinking more and more of selling my iMac and picking up a refurbed quad, 1TB SSD, D700s, with the intention of upgrading the CPU to a 10-core or the higher-clocked 8-core.
 
The Buyer's Guide was updated now with the status "Caution". Yes, it's automated and yes, it's all about the number of days that have passed... But still... :cool:
 
Don't buy now because of a web script?

Yes, but also because for the next two weeks Mercury is retrograde in Libra. And there will be a solar eclipse on Oct. 23 in the house of Scorpio. October is also Dog month, so that means new products will be fetched and brought to consumers. But the new Mac Pro won't be released until after the new moon on October 24th.
 