You are making a lot of assumptions about what "we" are thinking. I didn't say the RX 480 was rubbish, I said that the GTX 1060 is much more efficient. Given that Apple tends to design thermally constrained systems, this matters a great deal.
I'm not assuming anything.

What I meant by "we" is what has been discussed over the past few pages.

Aaaaand secondly, lowering the voltage on the core of the RX 480 saves 30 W of power. You do not even have to change the core clocks. The WX 7100 is a 130 W GPU with a 1243 MHz core clock, 21 MHz less than the RX 480 we are discussing. I'm sure that makes a ginormous difference in GPU performance.

The GPUs Apple uses are undervolted anyway...

[Image: http://media.bestofmicro.com/8/2/631442/original/Radeon-Pro-WX7100.png]

[Image: http://media.bestofmicro.com/T/E/633506/original/07-Power-Draw-All-Scenes.png]


Look at the broader picture. I have brought up enough compute benchmarks with properly optimized software showing what AMD GPUs can do.

P.S. I am not claiming that they are better, worse, or on par in efficiency.

Believe me, I would love them to be more efficient, but they are not. They are also not as bad as people make them out to be. That is the whole point.
 
You are making a lot of assumptions about what "we" are thinking. I didn't say the RX 480 was rubbish, I said that the GTX 1060 is much more efficient. Given that Apple tends to design thermally constrained systems, this matters a great deal.

This is all I've been saying as well. There are games where the RX 480/580 win. If you play those games, buy the AMD card. There are games where the 1060 wins. If you play those games, buy the NVIDIA card. Performance is close in general, but given the absolutely enormous difference in power consumption, it should be clear to everyone that NVIDIA has a more efficient architecture overall. It's a shame that Apple will likely stick with AMD and choose a less efficient architecture for their extremely power-limited systems.
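For a rough sense of scale, here is a back-of-the-envelope perf-per-watt comparison in Python. The TFLOPS and board-power figures are the commonly quoted reference specs, not measurements from this thread, so treat it as a sketch rather than a verdict; it also shows why the two sides keep talking past each other, since on paper the RX 480 leads in peak FLOPS per watt while delivered gaming performance per watt favors the GTX 1060.

```python
# Back-of-the-envelope perf/W from commonly quoted reference specs
# (peak FP32 throughput and board power). Real, measured efficiency in
# games or pro apps will differ from these paper numbers.
cards = {
    "RX 480 (8 GB)":   {"tflops": 5.8, "watts": 150},
    "GTX 1060 (6 GB)": {"tflops": 4.4, "watts": 120},
}

for name, c in cards.items():
    gflops_per_watt = c["tflops"] * 1000 / c["watts"]
    print(f"{name}: {gflops_per_watt:.1f} GFLOPS/W (peak, on paper)")
```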
Aaaaand secondly, lowering the voltage on the core of the RX 480 saves 30 W of power. You do not even have to change the core clocks. The WX 7100 is a 130 W GPU with a 1243 MHz core clock, 21 MHz less than the RX 480 we are discussing. I'm sure that makes a ginormous difference in GPU performance.

This is irrelevant unless OEMs are actually selling cards with lower voltages by default. The exact same change could be applied to the GTX 1060 and you'd lower the power consumption of that card as well. All this indicates is that for AMD to compete on overall performance, they had to crank up the voltage to the absolute limit. After all, the 580 is basically just an overclocked (and over-volted) RX 480, right?

So yeah, you can keep trying to explain this away, but it's really irrelevant in the grand scheme of things.
 
  • Like
Reactions: tuxon86
This is all I've been saying as well. There are games where the RX 480/580 win. If you play those games, buy the AMD card. There are games where the 1060 wins. If you play those games, buy the NVIDIA card. Performance is close in general, but given the absolutely enormous difference in power consumption, it should be clear to everyone that NVIDIA has a more efficient architecture overall. It's a shame that Apple will likely stick with AMD and choose a less efficient architecture for their extremely power-limited systems.

This is irrelevant unless OEMs are actually selling cards with lower voltages by default. The exact same change could be applied to the GTX 1060 and you'd lower the power consumption of that card as well. All this indicates is that for AMD to compete on overall performance, they had to crank up the voltage to the absolute limit. After all, the 580 is basically just an overclocked (and over-volted) RX 480, right?

So yeah, you can keep trying to explain this away, but it's really irrelevant in the grand scheme of things.
Do you not see that you contradicted yourself, in the context of Apple computers, in one and the same post?

And what is funnier is that, in quoting me, you cut out the most important part:

"The GPUs Apple uses are undervolted anyway."

So why is it a shame to use a GPU that draws 12 W more for much higher compute performance?
 
Power consumption can be a reason to buy NVIDIA; FreeSync can be a popular reason to buy AMD.
 
I'm not assuming anything.

What I meant by "we" is what has been discussed over the past few pages.

Aaaaand secondly, lowering the voltage on the core of the RX 480 saves 30 W of power. You do not even have to change the core clocks. The WX 7100 is a 130 W GPU with a 1243 MHz core clock, 21 MHz less than the RX 480 we are discussing. I'm sure that makes a ginormous difference in GPU performance.

The GPUs Apple uses are undervolted anyway...

[Image: http://media.bestofmicro.com/8/2/631442/original/Radeon-Pro-WX7100.png]

[Image: http://media.bestofmicro.com/T/E/633506/original/07-Power-Draw-All-Scenes.png]


Look at the broader picture. I have brought up enough compute benchmarks with properly optimized software showing what AMD GPUs can do.

P.S. I am not claiming that they are better, worse, or on par in efficiency.

Believe me, I would love them to be more efficient, but they are not. They are also not as bad as people make them out to be. That is the whole point.

You can undervolt with Nvidia too and see similar improvements in efficiency. Like the upcoming mobile GTX 1080, which will fit 6.6 TFLOPS into 110 W.
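As a sanity check on that 6.6 TFLOPS in 110 W figure, here is the clock such a part would need to sustain, assuming it keeps the desktop GTX 1080's 2560 shaders (peak FP32 = 2 ops per FMA × shader count × clock):

```python
# What clock does a 2560-shader Pascal part need for 6.6 TFLOPS of peak FP32?
# Peak FP32 FLOPS = 2 (FMA) * shader_count * clock_hz.
shaders = 2560          # desktop GTX 1080 count, assumed carried over
target_tflops = 6.6

clock_ghz = target_tflops * 1e12 / (2 * shaders) / 1e9
print(f"Required sustained clock: {clock_ghz:.2f} GHz")   # ~1.29 GHz
```

That is well below the desktop card's roughly 1.6-1.7 GHz clocks, which is exactly the lower-voltage, lower-clock trade-off being described.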
 
Do you not see that you contradicted yourself, in the context of Apple computers, in one and the same post?

And what is funnier is that, in quoting me, you cut out the most important part:

"The GPUs Apple uses are undervolted anyway."

So why is it a shame to use a GPU that draws 12 W more for much higher compute performance?

Every time you post that AMD has higher compute performance, I can just find examples that show otherwise, if you like.

[Image: compute benchmark chart]
 
Every time you post that AMD has higher compute performance, I can just find examples that show otherwise, if you like.

[Image: compute benchmark chart]
How does it turn out in real-world applications?

Do you want me to bring real-world compute benchmarks comparing the RX 480 and the GTX 1060?

By real world I mean professional workloads. Professional applications.
 
How does it turn out in real-world applications?

Do you want me to bring real-world compute benchmarks comparing the RX 480 and the GTX 1060?

By real world I mean professional workloads. Professional applications.
Perhaps professional particle physicists would object to your snide comment.

Your "surreal world" is yours and yours alone.
 
Do particle physicists use Macs, or Apple hardware at all?
Many scientists and engineers use Macs. One area where AMD has no presence is machine learning. This space is absolutely dominated by CUDA.
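You can see that dominance from the frameworks themselves: the GPU backends of the major deep learning libraries are built on CUDA/cuDNN, so the standard capability check from Python effectively asks "is there an NVIDIA GPU?". A minimal sketch with PyTorch (assuming it is installed):

```python
# PyTorch's GPU backend is CUDA; on a machine with only an AMD GPU this
# reports False and training falls back to the CPU.
import torch

if torch.cuda.is_available():
    print("CUDA device:", torch.cuda.get_device_name(0))
else:
    print("No CUDA GPU found - running on the CPU.")
```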
 
Many scientists and engineers use Macs. One area where AMD has no presence is machine learning. This space is absolutely dominated by CUDA.
My experience is that many scientists and engineers use Apple laptops - as dumb terminals to SSH into their Linux computers and supercomputers running CUDA applications.

That's certainly the case in my group. Most people have MacBooks (we have a generous budget) - but the laptops are used for email, browsing, and remote connections to our Linux (and Windows) lab systems.
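That front-end workflow is easy to script from the laptop side, too. A minimal sketch with Paramiko; the hostname, username, and job script are made up for illustration:

```python
# Use the MacBook purely as a terminal: inspect and drive a remote CUDA box.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("gpu-node.example.edu", username="researcher")  # hypothetical host

# Check what the remote NVIDIA GPUs are doing, then submit a job (Slurm here).
for cmd in ("nvidia-smi --query-gpu=name,utilization.gpu --format=csv",
            "sbatch jobs/train_model.sh"):                     # hypothetical script
    stdin, stdout, stderr = client.exec_command(cmd)
    print(stdout.read().decode())

client.close()
```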
 
My experience is that many scientists and engineers use Apple laptops - as dumb terminals to SSH into their Linux computers and supercomputers running CUDA applications.

That's certainly the case in my group. Most people have MacBooks (we have a generous budget) - but the laptops are used for email, browsing, and remote connections to our Linux (and Windows) lab systems.

Our research center is like that too. And those needing more than a terminal also have one or two HP Zxx workstations under their desk.
 
How does it turn out in real-world applications?

Do you want me to bring real-world compute benchmarks comparing the RX 480 and the GTX 1060?

By real world I mean professional workloads. Professional applications.
Perhaps professional particle physicists would object to your snide comment.

Your "surreal world" is yours and yours alone.
Do particle physicists use Macs, or Apple hardware at all?
The old saying that a picture is worth a thousand words applies here.

My experience is that many scientists and engineers use Apple laptops - as dumb terminals to SSH into their Linux computers and supercomputers running CUDA applications.

That's certainly the case in my group. Most people have MacBooks (we have a generous budget) - but the laptops are used for email, browsing, and remote connections to our Linux (and Windows) lab systems.
Rrrrrright.

I'm guessing it is me who is moving the goalposts here.

So how does particle simulation help with connecting over SSH to Linux computers and supercomputers running CUDA applications?

Do not respond. I am laughing my ass off right now.
 
Rrrrrright.

I'm guessing it is me who is moving the goalposts here.

So how does particle simulation help with connecting over SSH to Linux computers and supercomputers running CUDA applications?

Do not respond. I am laughing my ass off right now.
It's a sad thing when someone is laughing his ass off alone in his mom's basement.
Have you considered therapy?
 
Rrrrrright.

I'm guessing it is me who is moving the goalposts here.

So how does particle simulation help with connecting over SSH to Linux computers and supercomputers running CUDA applications?

Do not respond. I am laughing my ass off right now.
You know how badly it's gone for Apple execs who have said "my ass" in public - welcome to the club.

It's not clear what your argument is - but it seems to be that you claim that ATI GPUs are faster at compute, and you denigrate people who contradict you with real world benchmarks by implying that the real world task isn't important for Apple users.

And to do this, you crib images from Tom's Hardware of Windows 10 Enterprise x64 benchmarks without attributing the source. (See http://www.tomshardware.com/reviews/amd-radeon-pro-wx-7100,4896-4.html and http://www.tomshardware.com/reviews/amd-radeon-pro-wx-7100,4896.html .) Really, you use Win10 x64 numbers to promote cards for Apple users?

And if someone shows a clear advantage to a Pascal card - your response is that the ATI card is cheaper.

Slow but cheap isn't a winner.

You move the goalposts so often that you've got one with wheels.

[Image: goalposts on wheels]
 
You know how badly it's gone for Apple execs who have said "my ass" in public - welcome to the club.

It's not clear what your argument is - but it seems to be that you claim that ATI GPUs are faster at compute, and you denigrate people who contradict you with real world benchmarks by implying that the real world task isn't important for Apple users.

And to do this, you crib images from Tom's Hardware of Windows 10 Enterprise x64 benchmarks without attributing the source. (See http://www.tomshardware.com/reviews/amd-radeon-pro-wx-7100,4896-4.html and http://www.tomshardware.com/reviews/amd-radeon-pro-wx-7100,4896.html .) Really, you use Win10 x64 numbers to promote cards for Apple users?

And if someone shows a clear advantage to a Pascal card - your response is that the ATI card is cheaper.

Slow but cheap isn't a winner.

You move the goalposts so often that you've got one with wheels.

[Image: goalposts on wheels]
Again: is there any software on the Apple platform that particle scientists use to do their simulations?

Is it because of the hardware on the Apple platform, or simply because there is no software?
Real-world benchmarks: for example, how does the GTX 1060 using CUDA fare in Blender against the RX 480 using OpenCL? How does the GTX 1060 fare against the RX 480 in Final Cut Pro X? How do the two GPUs compare in OpenCL benchmarks, in real-world applications?

AMD is faster in real-world applications in the Apple ecosystem, and it is starting to be in the Windows software ecosystem as well (Blender's OpenCL optimization for AMD GPUs vs. CUDA on Nvidia), in similar price brackets. Why? Because software matures over time. Hardware cannot.
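For what it's worth, this is what vendor neutrality looks like from the software side. A minimal PyOpenCL sketch (assuming pyopencl and numpy are installed): the same kernel source builds and runs on AMD and NVIDIA alike, and the performance you then get depends heavily on the driver and compiler stack underneath, which is exactly the part that matures over time.

```python
# Enumerate OpenCL devices and run one trivial kernel on whichever is picked.
import numpy as np
import pyopencl as cl

for platform in cl.get_platforms():            # e.g. AMD, NVIDIA, Apple
    for dev in platform.get_devices():
        print(platform.name, "->", dev.name)

ctx = cl.create_some_context()                 # selects an available device
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void scale(__global float *x, const float a) {
    int i = get_global_id(0);
    x[i] *= a;
}
"""
prg = cl.Program(ctx, kernel_src).build()      # the vendor's compiler does the rest

x = np.arange(16, dtype=np.float32)
buf = cl.Buffer(ctx, cl.mem_flags.READ_WRITE | cl.mem_flags.COPY_HOST_PTR, hostbuf=x)
prg.scale(queue, x.shape, None, buf, np.float32(2.0))
cl.enqueue_copy(queue, x, buf)
print(x)                                       # doubled, regardless of GPU vendor
```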

It is not me who is moving the goalposts. This is the biggest problem you have with the Apple professional platform. You want it to be like the Windows PC/workstation platform.

It is not. It never was, and it never will be. I am surprised that you, and others, have always been too blind to see this.

Just because the Cheese Grater was... well, the Cheese Grater, never meant that it was the same as the Windows PC/workstation platform.

I am not arguing about your needs, Aiden, however. So far there is nothing useful from AMD for very specific markets, but ROCm and other GPUOpen initiatives show that over time, as software is developed and optimized for AMD hardware, AMD hardware will catch up. And, even more likely, outpace Nvidia hardware. How come?

It is actually a historically proven fact that when software matures (drivers, optimization, APIs, etc.), AMD hardware starts to be faster than its Nvidia price competitors and approaches the higher tier. I like to keep hardware for years, and from my personal point of view it is even better if I can pay less over time than I paid for an Nvidia GPU that I have to replace after two years. This is about the PC side, though - nothing that actually matters for Apple, nothing that is important to you.
 
Again: is there any software on the Apple platform that particle scientists use to do their simulations?

There are many uses for particle engines or physics computations on the Mac. You can't just discount a task that you don't think has "real world" applications.
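And a particle engine is a textbook data-parallel workload, which is why GPU compute benchmarks are relevant to it at all: the same few arithmetic operations are applied to every particle independently each frame. A toy sketch (NumPy standing in for what a GPU kernel would do per particle):

```python
# Toy particle update: identical math applied to a million independent
# particles per frame - exactly the shape of work a GPU compute kernel runs.
import numpy as np

n = 1_000_000
pos = np.random.rand(n, 3).astype(np.float32)              # positions
vel = (np.random.randn(n, 3) * 0.01).astype(np.float32)    # velocities
gravity = np.array([0.0, -9.81, 0.0], dtype=np.float32)
dt = np.float32(1.0 / 60.0)

def step(pos, vel):
    vel = vel + gravity * dt                 # accelerate
    pos = pos + vel * dt                     # integrate
    below = pos[:, 1] < 0.0                  # crude floor collision
    vel[below, 1] *= -0.5                    # bounce with damping
    pos[below, 1] = 0.0
    return pos, vel

pos, vel = step(pos, vel)
```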

It is not me who is moving the goalposts. This is the biggest problem you have with the Apple professional platform. You want it to be like the Windows PC/workstation platform.

What does being like a Windows PC/workstation mean? If it means that it's a flexible platform good for all kinds of professional work, then isn't that what the Mac should strive for?

Why should the Mac Pro be limited to just being a Final Cut Pro machine? If you believe that, then you are right: Apple should just take Final Cut Pro, run it on an NVIDIA card and an AMD card, pick the fastest, and then only ship Macs with that GPU.

It is actually a historically proven fact that when software matures (drivers, optimization, APIs, etc.), AMD hardware starts to be faster than its Nvidia price competitors and approaches the higher tier. I like to keep hardware for years, and from my personal point of view it is even better if I can pay less over time than I paid for an Nvidia GPU that I have to replace after two years. This is about the PC side, though - nothing that actually matters for Apple, nothing that is important to you.

Citation needed...
 
There are many uses for particle engines or physics computations on the Mac. You can't just discount a task that you don't think has "real world" applications.
Where are they? What software uses particle simulation on the Mac? What software uses physics engines on the Mac?

In other words: citation needed ;).


What does being like a Windows PC/workstation mean? If it means that it's a flexible platform good for all kinds of professional work, then isn't that what the Mac should strive for?

Why should the Mac Pro be limited to just being a Final Cut Pro machine? If you believe that, then you are right: Apple should just take Final Cut Pro, run it on an NVIDIA card and an AMD card, pick the fastest, and then only ship Macs with that GPU.
Is the Mac flexible with only Apple-approved GPUs, and requiring a hack to run other GPUs on a Mac?


Citation needed...
Erm, the previous pages are full of benchmarks from this year showing that AMD GPU performance improves over time...

(Actually, it is AMD's software performance that is improving over time, but people still do not see the impact of software on GPU performance...).
 
Where are they? What software uses particle simulation on the Mac? What software uses physics engines on the Mac?

In other words: citation needed ;).

Motion includes a particle engine.

Is the Mac flexible with only Apple-approved GPUs, and requiring a hack to run other GPUs on a Mac?

I think what a lot of people want here is a choice of GPU. Not to be stuck with a single vendor.

Erm, the previous pages are full of benchmarks from this year showing that AMD GPU performance improves over time...

(Actually, it is AMD's software performance that is improving over time, but people still do not see the impact of software on GPU performance...).

Since none of those benchmarks seek an answer to your question, I'll do your work for you. HardOCP looked at drivers over a year for AMD and Nvidia. Guess what, both vendors saw improvements in performance.
 
Motion includes a particle engine.
Anything else? One application so far. How is that reflected in real-world performance between the vendors?


I think what a lot of people want here is a choice of GPU. Not to be stuck with a single vendor.
And lock yourselves into the CUDA software ecosystem? Would that not be a lock-in to a single GPU vendor?


Since none of those benchmarks seek an answer to your question, I'll do your work for you. HardOCP looked at drivers over a year for AMD and Nvidia. Guess what, both vendors saw improvements in performance.
First of all, I think you should watch the benchmarks again...

Secondly, I know those numbers.

The problem is here:
http://www.hardwarecanucks.com/foru...945-gtx-1060-vs-rx-480-updated-review-23.html
[Image: Hardware Canucks GTX 1060 vs. RX 480 updated review benchmark chart]

In the benchmarks on the previous pages, like the linked video from Hardware Unboxed, there was a comparison between the RX 480 and the GTX 1060 from last year and this year. At the start the RX 480 was 12% slower than the GTX 1060. Right now it is 1% slower. So pretty much in line with the numbers from the Hardware Canucks review.

Even if Nvidia is gaining performance, AMD is gaining even more. So the RX 480 is on par with the GTX 1060 on average right now.

I will repeat the question: how long until the RX 480 is faster than the GTX 1060? And we are talking about GPUs that were sold at the $160 and $250 price points.

Software matures, and there is still a lot left to be unlocked in AMD hardware.
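For the record, the arithmetic behind that claim, using the 12% and 1% figures quoted above and treating the GTX 1060's own driver gains over the same period as an unknown:

```python
# If the RX 480 went from 12% slower to 1% slower than the GTX 1060,
# how much did it gain relative to the 1060 - and in absolute terms,
# under a few assumed values for the 1060's own gains over the same year?
old_ratio = 0.88      # RX 480 at 88% of GTX 1060 performance (12% slower)
new_ratio = 0.99      # now at 99% (1% slower)

relative_gain = new_ratio / old_ratio - 1
print(f"Gain relative to the GTX 1060: {relative_gain:.1%}")      # ~12.5%

for nvidia_gain in (0.00, 0.05, 0.10):        # assumed GTX 1060 driver gains
    amd_absolute = (1 + relative_gain) * (1 + nvidia_gain) - 1
    print(f"If the 1060 gained {nvidia_gain:.0%}, the RX 480 gained ~{amd_absolute:.1%}")
```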
 
How out of touch with reality can someone get... I've never seen such an acute case before...
 