Radeon Pro WX 5100. Polaris 10LE. 1792 GCN cores. 1.086 GHz core clock. 75W TDP.

51 GFLOPs/watt. Is it really a failure in efficiency?

It's efficient, but it's also only 3.9 TFLOPS. That's better than the D700, but barely.

If macOS had better dual-GPU support it might be a more compelling choice. But a single one of those doesn't hold a candle to Nvidia or Vega.

And as soon as you try to scale Polaris up it falls apart.
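For anyone checking the math, here is a quick back-of-the-envelope sketch (plain Python, my own illustration rather than anything official) of where those figures come from, assuming the usual 2 FLOPs per shader per clock for FMA:

def peak_gflops(shaders: int, clock_ghz: float) -> float:
    # Theoretical single-precision peak: shaders x 2 FLOPs/clock (FMA) x clock
    return shaders * 2 * clock_ghz

# Radeon Pro WX 5100: 1792 shaders at 1.086 GHz inside a 75 W TDP
wx5100 = peak_gflops(1792, 1.086)            # ~3892 GFLOPS, i.e. the "3.9 TFLOPS"
print(f"WX 5100: {wx5100:.0f} GFLOPS, {wx5100 / 75:.1f} GFLOPS/W")   # ~51.9 GFLOPS/W

# Rumored Vega: 12 TFLOPS in a 225 W envelope
print(f"Vega (rumor): {12000 / 225:.1f} GFLOPS/W")                   # ~53.3 GFLOPS/W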
 
Huh, interesting. If Vega can maintain that efficiency and fill out the high end of AMD's lineup, then it too would make for a killer Mac Pro. It's a shame it's taking them so long...

If we trust the rumors, Vega will be 12 TFLOPS at 225 W. That puts it at about 53 GFLOPS/W.
AMD's approach to GPU architecture is... slightly different from what people think at first glance. When I read people's opinions about Vega and its efficiency, they equate that efficiency with the idea that the GPU will somehow use less power. Well...

AMD's approach was to increase both compute and graphics capabilities within the same thermal envelope. Don't expect anything other than a 150-175 W mid-range GPU and a 200-225 W high-end GPU.

There is an upside. The Vega architecture has been designed to reach higher frequencies at lower wattages. So not only can we expect higher clocks, but also much better scalability when downclocking.
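To see why downclocking helps perf per watt at all, here is a toy model (my own sketch with made-up constants, not anything AMD has published): throughput scales roughly with clock, while dynamic power scales roughly with the cube of the clock once voltage drops along with frequency.

# Toy DVFS model: perf ~ f, dynamic power ~ f^3 (voltage assumed to track frequency),
# plus a fixed static-power fraction. All constants are illustrative only.
def relative_perf_per_watt(freq_scale: float, static_fraction: float = 0.2) -> float:
    perf = freq_scale
    power = static_fraction + (1 - static_fraction) * freq_scale ** 3
    return perf / power

for scale in (1.0, 0.9, 0.8, 0.7):
    print(f"{scale:.0%} clock -> {relative_perf_per_watt(scale):.2f}x perf/W")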
 
51 GFLOPs/watt.
Vega will be 12 TFLOPS at 225 W. That puts it at about 53 GFLOPS/W.
The GPGPU industry needs to move beyond naïve performance claims based on clock speeds and shader counts, and establish useful, comparative benchmark standards for delivered performance.

And definitely remove the "per watt" from most claims. It's not irrelevant, but it should be a footnote in the perf reports. Many people are primarily concerned with absolute performance and treat power consumption as a minor concern.

But, that won't be useful until ATI supports CUDA.
 
The GPGPU industry needs to move beyond naïve performance claims based on clock speeds and shader counts, and establish useful, comparative benchmark standards for delivered performance.

But, that won't be useful until ATI supports CUDA.
Why don't you ask Nvidia to open up CUDA? We have open and industry-standard benchmarks. They are called OpenCL benchmarks.

Your post is such a complete logical fallacy that I cannot comprehend how you get away with it.

You are asking a company to support a proprietary API, and you call that an open, established, comparative benchmark standard.

Why don't we look at OpenCL benchmarks and call them open, useful, comparative benchmark standards for delivered performance?

Ah, yes. No, because then Nvidia would not win against AMD.

And you dared to call me a fanboy?
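For what it's worth, the vendor-neutral part is easy to demonstrate. A minimal sketch using the pyopencl bindings (my choice of wrapper, nothing specific to this thread) just enumerates whatever OpenCL platforms and devices a machine exposes, and the same code runs unmodified against AMD, Nvidia, Intel, or Apple drivers:

# List every OpenCL platform and device the installed drivers expose.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.vendor})")
    for device in platform.get_devices():
        print(f"  Device        : {device.name}")
        print(f"  Compute units : {device.max_compute_units}")
        print(f"  Max clock     : {device.max_clock_frequency} MHz")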
 
Why don't you ask Nvidia to open up CUDA? We have open and industry-standard benchmarks. They are called OpenCL benchmarks.

Your post is such a complete logical fallacy that I cannot comprehend how you get away with it.

You are asking a company to support a proprietary API, and you call that an open, established, comparative benchmark standard.

Why don't we look at OpenCL benchmarks and call them open, useful, comparative benchmark standards for delivered performance?

Ah, yes. No, because then Nvidia would not win against AMD.

And you dared to call me a fanboy?
Hasn't OpenCL been abandoned? Why use it as a basis for benchmarks?

ps: "hook, line and sinker"
pps: where is "fanboy" in my post? (also, I spell it "fanboi")
 
Hasn't OpenCL been abandoned? Why use it as a basis for benchmarks?

ps: "hook, line and sinker"
pps: where is "fanboy" in my post?
You called me an AMD fanboy months ago, when you did not like some things I posted about Nvidia, which were simply facts.

OpenCL may have been abandoned by Apple, but not by software developers. There is still support for it in many applications, is there not?

Oh, there is one more professional API, and it is open source. We can use it as an open, established, comparative benchmark. The API is very simple; it is just a plug-in for every application, so the developers do not need to optimize anything. Every OEM can use it: AMD, Intel, Nvidia... even Apple, on their little SoCs.

It's called AMD ProRender. Never heard of it, I presume?
 
It's called AMD ProRender. Never heard of it, I presume?

Yes, hopefully the help in development from Maxon can push that into a useful production renderer.

Maxon makes Cinebench, so perhaps that will give us the cross-platform GPU benchmarking in real-world software we are looking for. That renderer is the last great hope for Mac users stuck on AMD GPUs who use Cinema 4D.
 
Yes, hopefully the help in development from Maxon can push that into a useful production renderer.

Maxon makes Cinebench, so perhaps that will give us the cross-platform GPU benchmarking in real-world software we are looking for. That renderer is the last great hope for Mac users stuck on AMD GPUs who use Cinema 4D.
Benchmarks that will only be relevant for C4D users.

The useful benchmarks test a number of different applications, and call out the performance of each app as well as the aggregate mean.

This becomes even more complicated with GPU-accelerated apps, since both the CPU and GPU affect the results. At the least, one needs a chart of different CPUs with one GPU - and a separate chart of different GPUs with one CPU.

And for those stuck with the deprecated OpenCL GPUs, I hope that they show the performance of cross-architecture apps using both the standard CUDA API and the deprecated OpenCL API.
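As a sketch of what that kind of multi-application reporting looks like (the application names and speedups below are invented purely for illustration), a SPEC-style suite publishes each app's speedup against a reference system plus a geometric mean, so no single application dominates the aggregate:

from math import prod   # Python 3.8+

# Hypothetical per-application speedups versus a reference system
speedups = {
    "render_app":   1.45,
    "nle_export":   1.10,
    "dcc_viewport": 0.92,
    "sim_solver":   1.30,
}

geo_mean = prod(speedups.values()) ** (1 / len(speedups))

for app, s in speedups.items():
    print(f"{app:13s}: {s:.2f}x")
print(f"{'geomean':13s}: {geo_mean:.2f}x")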
You called me an AMD fanboy months ago, when you did not like some things I posted about Nvidia, which were simply facts.
Link?
It's called AMD ProRender. Never heard of it, I presume?
Because it’s built on industry-standard OpenCL™, Radeon ProRender works across Windows®, OS X and Linux®, and supports AMD GPUs and CPUs as well as those of other vendors.

http://www.amd.com/en-us/innovations/software-technologies/radeon-pro-technologies/radeon-prorender

Deprecated. And you didn't even get the name right.
 
If they can make a heavier MBP with 32 GB of RAM and 6 cores running at 3+ GHz, I'm in. They have already rolled the MP into a cylinder 1/3 the size of the previous generation, and they have reduced the MBP to a glorified "Air". Let's meet somewhere in the middle!
 
What relevance does any benchmark software have for you if it's not TensorFlow or FCPX?
Thank you for supporting my point that a benchmark on a very small number of applications is useless unless your cash flow depends on one of those few applications.

As I said, the popular benchmark suites run a fair number of applications, and aggregate a performance number. None of them seem to give much attention to GPU performance, and since OpenCL was deprecated any OpenCL numbers are not very interesting.
 
Thank you for supporting my point that a benchmark on a very small number of applications is useless unless your cash flow depends on one of those few applications.

Snip - you've edited your reply. I'll edit mine.

I see what you're getting at now. I think this thread has people on edge, lol.


RELEASE A NEW MAC PRO, APPLE. The forums depend on it.
 
Ok, now you are beginning to fascinate me.

You can't look at benchmarks from other apps and come to a general conclusion about hardware speeds and the associated costs? Data from other apps doesn't inform you in any way?
If the benchmark shows speeds for 10 or 20 apps in different categories - like SPEC - then certainly I can look at various apps similar to mine and make some guesses.

Your post implied that if we get some C4D numbers we can apply them to other apps. Bull####. It lets us compare C4D numbers, not TensorFlow or FCPX.
 
If the benchmark shows speeds for 10 or 20 apps in different categories - like SPEC - then certainly I can look at various apps similar to mine and make some guesses.

Your post implied that if we get some C4D numbers we can apply them to other apps. Bull####. It lets us compare C4D numbers, not TensorFlow or FCPX.

Edit - never mind. Onward to other threads.
 
Some people have an agenda...

On another note, I was almost tempted to get a refurbished classic Mac Pro for 699€ - E5462 quad-core 2.8 GHz, 10 GB RAM and a 250 GB SSD, with a 2600 XT.
If only I wasn't eyeing the nMP 2017...
 
If they can make a heavier MBP with 32 GB of RAM and 6 cores running at 3+ GHz, I'm in.

This goes against many of Apple's design directions, namely power efficiency (all-day battery life), thickness, weight, etc.

We're not going back in time. If you don't care about the size and weight of a MOBILE DEVICE, then go get a PC laptop and enjoy burning your thighs.
 
Does Apple have all of the components on the market for a possible Mac Pro update, and could they get some improved performance with that? CPU/GPU? We know that they can use the MBP SSD; what about RAM?
This goes against many of Apple's design directions, namely power efficiency (all-day battery life), thickness, weight, etc.

We're not going back in time. If you don't care about the size and weight of a MOBILE DEVICE, then go get a PC laptop and enjoy burning your thighs.
I think they could add 32 GB and 6 cores... in the same thickness and with all-day battery life only if they made a 17" MBP... so 0% chance.
 
You can search for it in the forum trash, because that's where the thread, and both your posts and mine, are.
AidenShaw said:
Because it’s built on industry-standard OpenCL™, Radeon ProRender works across Windows®, OS X and Linux®, and supports AMD GPUs and CPUs as well as those of other vendors.

http://www.amd.com/en-us/innovations/software-technologies/radeon-pro-technologies/radeon-prorender

Deprecated. And you didn't even get the name right.
There is a difference between opinions and facts. You are mistaking your opinions about OpenCL for facts. OpenCL is still alive.

ProRender is just a plug-in for applications. It is funny that you are arguing with reality, and with the reception ProRender got in the industry.

Maybe Manuel is right, after all...?
 
Anyone know what these 22-core MacPro6,1 Geekbench results are? Hackintosh, fake, or the real deal?

http://browser.primatelabs.com/v4/cpu/1357458


Single-Core Score: 4279
Multi-Core Score: 38395
Geekbench 4.0.3 Tryout for Mac OS X x86 (64-bit)

Upload Date: December 17 2016, 11:28 AM
Model: MacPro6,1
Operating System: macOS 10.12.2 (Build 16C67)
Processor: Intel Xeon E5-2696 v4 @ 2.20 GHz (1 processor, 22 cores, 44 threads)
Processor ID: GenuineIntel Family 6 Model 79 Stepping 1
L1 Instruction Cache: 32 KB x 22
L1 Data Cache: 32 KB x 22
L2 Cache: 256 KB x 22
L3 Cache: 56320 KB
Motherboard: Apple Inc. Mac-F60DEB81FF30ACF6 MacPro6,1
BIOS: Apple Inc. MP61.88Z.0116.B04.1312061508
Memory: 131072 MB 2400 MHz DDR4

More results here: http://browser.primatelabs.com/v4/cpu/search?dir=desc&q=macpro6,1&sort=multicore_score

Isn't that Geekbench single-core result too high for a 2.20 GHz core?
I could believe the multi-core result, but the single-core score must be a joke.
 
Looks like an X99 Hackintosh to me. Single-core turbo boost is 3.6 GHz, so the scores could be legit.
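A quick sanity check (my own arithmetic, using only the clocks and scores quoted above) supports that: the single-core run benefits from the ~3.6 GHz turbo rather than the 2.2 GHz base clock, and the multi-core result only scales about 9x across 22 cores, which is plausible once all-core clocks and memory contention are factored in.

base_ghz, turbo_ghz = 2.2, 3.6        # E5-2696 v4 base vs. single-core turbo as cited above
single_core, multi_core = 4279, 38395  # Geekbench 4 scores from the linked result
cores = 22

print(f"Turbo headroom      : {turbo_ghz / base_ghz:.2f}x over the base clock")       # ~1.64x
print(f"Multi/single scaling: {multi_core / single_core:.1f}x across {cores} cores")  # ~9.0x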
 