[Attached image: Capture2.PNG]
Here it is: link

Cheers

I created a summary computed from the benchmark numbers, plus a relative performance index against the fastest machine in each test.

Price-wise (quick arithmetic check below):
  • the 10-core is 78% the price of the 18C for 83% of its performance (in these 6 benchmarks)
  • the 8-core is 68% the price of the 18C for 74% of its performance (in these 6 benchmarks)
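
For anyone who wants to reproduce those ratios, here is a minimal sketch of the arithmetic. The prices are the approximate U.S. configurator prices I'd assume for these configs ($4,999 / $5,799 / $7,399), and the scores are placeholders chosen only to illustrate the relative-index formula, not the actual numbers from the summary.

[CODE]
#include <stdio.h>

/* Sketch of the price vs. relative-performance comparison.
   Prices are assumed approximate U.S. configurator prices;
   scores are placeholders, NOT the real benchmark numbers. */
int main(void) {
    double score_8c = 74.0, score_10c = 83.0, score_18c = 100.0;      /* placeholder aggregate scores */
    double price_8c = 4999.0, price_10c = 5799.0, price_18c = 7399.0; /* assumed prices, USD */

    /* relative performance index = score / score of the fastest machine (18C) */
    printf("10C: %.0f%% of the 18C price for %.0f%% of its performance\n",
           100.0 * price_10c / price_18c, 100.0 * score_10c / score_18c);
    printf(" 8C: %.0f%% of the 18C price for %.0f%% of its performance\n",
           100.0 * price_8c / price_18c, 100.0 * score_8c / score_18c);
    return 0;
}
[/CODE]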
 
Wow... not that big of a performance increase vs. the price difference between the 18- and 10-core (or even the 8-core). It will be interesting to see whether there are benchmarks of apps where the performance difference is more significant.
 
  • Like
Reactions: SFjohn
"the 8 core is 68% the price of the 18C for 74% of it's performance (in these 6 benchmarks)"

That's telling. Add in the still-available $1000 discount at MicroCenter (for those lucky enough to live near one) and the 8-core gets closer to 55% (?) of the price for 74% of the performance. That's mighty impressive.

The 10-core value holds up even better. It is definitely the P/P "sweet spot" if you have enough $ to splurge but not enough to go crazy.
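
A rough check of that "55% (?)" guess, assuming approximate U.S. list prices of $4,999 for the base 8-core and $7,399 for the 18-core CPU option (my assumptions, not figures from this thread):

[CODE]
#include <stdio.h>

/* Back-of-envelope check of the discounted 8-core vs. 18-core price ratio.
   Prices are assumed approximate list prices, not exact quotes. */
int main(void) {
    double price_8c  = 4999.0;   /* assumed 8-core list price         */
    double price_18c = 7399.0;   /* assumed 18-core CPU configuration */
    double discount  = 1000.0;   /* the MicroCenter discount          */

    printf("discounted 8C / 18C = %.0f%%\n",
           100.0 * (price_8c - discount) / price_18c);   /* prints 54% */
    return 0;
}
[/CODE]

So the back-of-envelope answer is about 54%, right in line with the 55% guess.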
 
  • Like
Reactions: SFjohn and 762999
Totally depends on how you look at it. According to PassMark, the Xeon W-2195 (the 18-core processor in the iMac Pro) is the 4th-best processor in the world for compute performance per dollar. It was 2nd last time I checked; maybe something new came out while I wasn't looking. Statistics are a funny thing, and you can shape them however you want to suit your argument.

I'm not saying anything here is wrong, quite the opposite actually. But I am bringing new information here that does favor the 18 core.

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+W-2195+@+2.30GHz&id=3149

Edit: Something else to consider is that these performance benchmarks are all for video transcoding and music editing. Not all pros work in those areas, and benchmarks in other domains may show different performance gains or losses as well.
 

There are benchmark tools and there is real work. A benchmark tool can use all the cores efficiently, but that's not the case with a lot of software. I simply took an average of the results (as stated in my first post). If someone doesn't use these tools, I know the stats are meaningless to them. If more tools had been benchmarked, I would have included them. I did this for fun and don't even plan to buy one; I can't justify it for my current work.

Overall, PassMark & Geekbench are fine for showing the potential of a CPU since they saturate it, but when it comes to doing REAL work it's a different story.
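
To illustrate what "saturating" means here (a sketch of the concept only, not any particular benchmark's methodology): a fixed amount of pure CPU work split across one thread versus one thread per core scales almost linearly with core count. Real applications, with I/O, synchronization and serial sections, rarely do.

[CODE]
/* Minimal saturation sketch: time the same fixed amount of pure CPU work
   on 1 thread and then on one thread per logical core.
   Compile with: cc -O2 -pthread saturate.c -o saturate */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define TOTAL_ITERS 400000000ULL

static void *worker(void *arg) {
    unsigned long long iters = *(unsigned long long *)arg;
    volatile double x = 1.0001;
    for (unsigned long long i = 0; i < iters; i++)
        x *= 1.0000001;                      /* pure CPU work: no I/O, no locks */
    return NULL;
}

static double timed_run(int nthreads) {
    pthread_t tid[64];
    unsigned long long per_thread = TOTAL_ITERS / (unsigned long long)nthreads;
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, worker, &per_thread);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    return (double)(t1.tv_sec - t0.tv_sec) + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void) {
    int ncores = (int)sysconf(_SC_NPROCESSORS_ONLN);
    if (ncores > 64) ncores = 64;            /* stay within the fixed tid array */

    printf("1 thread   : %6.2f s\n", timed_run(1));
    printf("%2d threads : %6.2f s\n", ncores, timed_run(ncores));
    return 0;
}
[/CODE]

On a many-core machine the second run finishes close to N times faster; most real software falls well short of that, which is why the benchmark spread overstates what a lot of users will actually see.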

Cheers
"But can it run Crysis?" Haha, I love YouTube comments.

I'm puzzled; I didn't see any Dwarf Fortress benchmarks.
 
  • Like
Reactions: SFjohn
Very interesting; it holds up to what most of us believed, though I'm actually surprised the single core is as close to the 10-core as it is.

Geekbench won't give an accurate single-core benchmark. I was already able to prove that the more cores you have, the slower a single core will run under sustained load because of thermal limits. Geekbench's single-core test is too short and too light to hit those limits.
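
For anyone who wants to see the thermal effect themselves, here is a minimal sketch of a sustained single-core load (as opposed to Geekbench's short bursts); run it while watching the clock speed in Intel Power Gadget. The five-minute duration and the loop body are arbitrary choices, not a calibrated benchmark, and the OS may still migrate the thread between cores, as discussed further down the thread.

[CODE]
/* Sustained single-core load sketch. Compile with: cc -O2 spin.c -o spin
   Run it and watch the reported frequency in Intel Power Gadget once
   temperatures reach steady state. */
#include <stdio.h>
#include <time.h>

int main(void) {
    const double run_seconds = 300.0;        /* long enough to reach steady-state temps */
    volatile double x = 1.000001;
    time_t start = time(NULL);

    while (difftime(time(NULL), start) < run_seconds)
        for (int i = 0; i < 10000000; i++)
            x *= 1.0000001;                  /* keeps one core's worth of work running */

    printf("done, x = %f\n", x);
    return 0;
}
[/CODE]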
 
  • Like
Reactions: SFjohn
"...Add in the still-available $1000 discount at MicroCenter... and the 8-core gets closer to 55% of the price for 74% of the performance..."

Tempted to get one since I live near a MicroCenter... but hesitant because I'm really holding out for the Mac Pro.
 
  • Like
Reactions: SFjohn
"Geekbench won't give an accurate single-core benchmark. ..."
How can I test that on the 10-core?
 
  • Like
Reactions: SFjohn

Using Xcode's Instruments, one can disable 9 cores on the 10-core iMac Pro and then run Geekbench 4 to see how a single core performs.

I've done this and tested single-core performance using my own CPU-intensive program, and was able to reach 4.5 GHz. The same 4.5 GHz can also be reached on two cores when the other 8 cores are disabled.
 
  • Like
Reactions: SFjohn
The point is that Geekbench is not sufficient to test a single core. Manually turning off the rest of the cores does not help, as that is not even close to real-world conditions. You need to run a different single-core benchmark tool, for a much longer time, that actually keeps a single core at 100% to get accurate results.
 
  • Like
Reactions: SFjohn
Yes... I don't disagree with that. What I was trying to show is that one can get a single core on the 10-core to run at 4.5 GHz, since many people were saying that was not possible... and yes, that single-core configuration is not practical by any means for a running system.

With all 10 cores enabled, launching a program that uses just one core's worth of CPU is not enough to exercise a single core, because the OS will bounce the program that wants 100% of a core from core to core rather than keeping it on one. However, if one could write a program that locks itself to a single core, that might be a way to address the issue.
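
On the "lock itself to a single core" idea: macOS doesn't expose hard core pinning the way Linux does (there is no sched_setaffinity equivalent), but on Intel Macs there is the Mach affinity-tag hint, which nudges the scheduler rather than forcing placement. A minimal sketch of that approach, under those caveats:

[CODE]
/* Sketch of hinting the scheduler on an Intel Mac. THREAD_AFFINITY_POLICY is
   only a hint (threads with the same tag should share an L2 cache); it may
   reduce, but does not eliminate, core-to-core migration.
   Compile with: cc -O2 pin.c -o pin */
#include <mach/mach.h>
#include <mach/thread_policy.h>
#include <pthread.h>
#include <stdio.h>

int main(void) {
    thread_affinity_policy_data_t policy = { .affinity_tag = 1 };
    kern_return_t kr = thread_policy_set(pthread_mach_thread_np(pthread_self()),
                                         THREAD_AFFINITY_POLICY,
                                         (thread_policy_t)&policy,
                                         THREAD_AFFINITY_POLICY_COUNT);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "thread_policy_set failed: %d\n", kr);
        return 1;
    }

    /* ...then run the CPU-intensive work on this thread and watch which core
       it lands on in Activity Monitor or Power Gadget. */
    volatile double x = 1.000001;
    for (long i = 0; i < 2000000000L; i++)
        x *= 1.0000001;
    printf("done, x = %f\n", x);
    return 0;
}
[/CODE]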
 
  • Like
Reactions: SFjohn and Trebuin
I have run a single instance of the yes command and watched the core being used bounce around as you describe. Power Gadget showed the processor running mostly around 4.26 GHz, dropping occasionally to a little under 4.2 GHz. What does that tell us?
 
  • Like
Reactions: SFjohn
There are benchmark tools and there is real work. A benchmark tool can use all the cores efficiently, but that's not the case with a lot of software... Overall, PassMark & Geekbench are fine for showing the potential of a CPU since they saturate it, but when it comes to doing REAL work it's a different story...

That's exactly right. I have extensively tested both 8-core and 10-core iMac Pro vs a top-spec 2017 iMac on real-world video editing tasks in FCPX, and for most cases involving H264, the iMac Pro isn't that much faster.

This is because the top-spec 2017 iMac is *really* fast on H264 in FCPX -- it is about 2x faster than the top-spec 2015 iMac at importing H264 and creating proxies, much faster timeline responsiveness on 4k H264, and about 2x faster at rendering and exporting to 1080p or 4k H264. So that's a high bar for the iMac Pro to exceed.

The iMac Pro is much faster than the 12-core D700 nMP at these tasks, so anyone upgrading from the trash can or a 2015 or earlier iMac will see a big improvement. But anyone editing H264 video on an i7 2017 iMac in FCPX won't see much improvement on most things. OTOH the iMac Pro is much quieter, even under high load. Is a quiet computer that's not much faster on this common workload worth $8,000?

On an all-ProRes or RED workflow, it's different. The iMac Pro is a lot faster than the 2017 iMac (in FCPX). This shows how the exact codec and workload is important in characterizing performance.

In the above-referenced video he mostly tested 8k RED RAW footage and transcoding from that to ProRes. The only H264 test he did was using ScreenFlow. If that isn't using any transcoding acceleration it's a pure CPU task and of course more cores would improve things. So he did virtually no meaningful testing of the world's most common codec -- H264.

Also each NLE is different. My results in FCPX don't necessarily translate to Premiere or Resolve. Each one must be tested and characterized separately.
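
Related to the hardware-acceleration point above: you can at least ask the OS whether it reports hardware codec support on a given machine. Note this is a decode-capability check only; it says nothing about whether FCPX or ScreenFlow actually uses acceleration for a given task, which would have to be verified per app.

[CODE]
/* Ask VideoToolbox whether hardware decode is reported for H.264 and HEVC.
   This is a capability check only, not proof that any particular app uses it.
   Compile with:
   cc check.c -framework VideoToolbox -framework CoreMedia -framework CoreFoundation */
#include <VideoToolbox/VideoToolbox.h>
#include <stdio.h>

int main(void) {
    printf("Hardware H.264 decode: %s\n",
           VTIsHardwareDecodeSupported(kCMVideoCodecType_H264) ? "yes" : "no");
    printf("Hardware HEVC  decode: %s\n",
           VTIsHardwareDecodeSupported(kCMVideoCodecType_HEVC) ? "yes" : "no");
    return 0;
}
[/CODE]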
 

I agree, mostly. But broadly speaking, H264 becomes more irrelevant by the day and will likely meet the same fate as Flash in a few years. H265 is becoming more common and will only grow.
 
  • Like
Reactions: SFjohn

That does not reflect current or near-term reality. As you can see in this 2017 global media report, the worldwide split was H264 at 79%, VP9 at 11%, FLV at 5%, and HEVC/H265 at 3%: https://www.encoding.com/files/2017-Global-Media-Formats-Report.pdf

There are probably thousands of petabytes of H264/MPEG-4 content that will never be transcoded to anything else. I myself manage 200 terabytes of H264 documentary material. Every Blu-ray disc is encoded in H264, and most satellite and cable channels are encoded in H264.

H265 is heavily patent-encumbered, hence deployment of H265/HEVC has been slowed for non-technical reasons. There have been major disputes over licensing, royalties and intellectual property. At one point the patent holders were demanding a % of gross revenue from *each* individual end user who encodes H265 content. That is one reason Google developed the open source VP9 codec. The patent holders have recently retreated from their more egregious demands, but that negatively tainted H265 and has delayed deployment.

This licensing and royalty issue is why the evaluation version of Premiere Pro does not have H265.

VP9 is replacing H264 on YouTube, and they will transition to VP9's successor AV1 soon. AV1 is also open source, not patent-encumbered, and significantly better than H265/HEVC.

After many years, when all current cameras and most computers have been replaced, and IF HEVC/H265 survives, and IF the world doesn't settle on Google's open-source AV1, THEN it might be a good time for hardware and NLE designers to reduce support for H264 hardware acceleration.

That is a long way off. For today and for several years, the iMac Pro will have to handle lots of H264/MPEG-4 material. It is better at this than the 2013 Mac Pro but it's barely faster than a 2017 iMac overall on this workload.
 
Wow! There is so much there that I had no idea of. Thank you.
 
  • Like
Reactions: SFjohn

Great insight and commentary as always joema2. Thank you.
 
  • Like
Reactions: SFjohn