The GPU in the same review is at 28 dB at desktop. If you measure the average over a 20-minute timespan, the spikes to 50 dB would have to be very short to average 33 dB.
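To make that concrete, here is a minimal sketch (assuming the meter readings can be treated as sound intensity levels, so a dB value converts as I = 10^(dB/10)) of how long 50 dB spikes could last while a 28 dB baseline still averages 33 dB:

```python
# How long could 50 dB spikes last if a 20-minute average still reads
# 33 dB over a 28 dB desktop baseline? Treats dB as intensity levels,
# i.e. I = 10 ** (dB / 10) in arbitrary units.

def db_to_intensity(db):
    return 10 ** (db / 10)

baseline_db, spike_db, average_db = 28.0, 50.0, 33.0

# Solve (1 - f) * I_base + f * I_spike = I_avg for the spike time fraction f.
i_base = db_to_intensity(baseline_db)
i_spike = db_to_intensity(spike_db)
i_avg = db_to_intensity(average_db)
f = (i_avg - i_base) / (i_spike - i_base)

print(f"spike fraction of the time: {f:.4f}")              # ~0.0137
print(f"spike seconds per 20 minutes: {f * 20 * 60:.0f}")  # ~16 s
```

Under those assumptions, 50 dB spikes could occupy only about 16 seconds of a 20-minute window, which is the "very short" point being made above.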

Your theory is falling apart. You are too attached to your vision of the world. You have a faulty GPU, and you can blame only yourself for resisting reality. The Red Devil is not going to 50 dB, regardless of what you say. Max RPM for the fans on the Red Devil is 2300; at 1600 RPM they are at 33 dB, and at 1800 RPM they are at 34.5 dB. So 200 RPM higher gives a 1.5 dB increase. Maximum sound output for the Red Devil is, by simple calculation, 38-39 dB, which is in line with other silent cooler versions for the RX 480. It is all in the computerbase review.
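For what it's worth, that "simple calculation" is just a linear extrapolation of the two computerbase data points (a sketch, assuming noise grows roughly linearly with RPM over this range, which is only an approximation):

```python
# computerbase data points: 33 dB at 1600 RPM, 34.5 dB at 1800 RPM.
# The Red Devil's fans max out at 2300 RPM.
rpm1, db1 = 1600, 33.0
rpm2, db2 = 1800, 34.5
max_rpm = 2300

slope = (db2 - db1) / (rpm2 - rpm1)      # 0.0075 dB per RPM
max_db = db1 + slope * (max_rpm - rpm1)
print(f"estimated noise at {max_rpm} RPM: {max_db:.2f} dB")  # ~38.25 dB
```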

Stop this, Tuxon. We know you are displeased with your RX 480, but you are now being dishonest with readers.

So you don't know what you are talking about; nothing new under the sun, then...
For each +3 dB, the sound level doubles. So from 33 dB to 39 dB, guess what, the sound level doubled TWICE! And if you go to 50 dB, guess what happens? Yeah, for each 3 dB the volume doubles.
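That doubling rule is easy to verify numerically (a quick check; note that strictly it is sound intensity that doubles every 3 dB, while perceived loudness is usually said to double roughly every 10 dB):

```python
# Each +3 dB multiplies intensity by 10 ** (3 / 10) ~ 1.995, i.e. roughly 2x.
print(10 ** (3 / 10))   # 1.9952...
# Going from 33 dB to 50 dB (+17 dB) is 10 ** 1.7 ~ 50x the intensity.
print(10 ** (17 / 10))  # 50.118...
```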

And again, I'm not the only one having this problem.

https://www.newegg.com/Product/Product.aspx?Item=N82E16814131692

Read the customer reviews about overheating.
 
I know this perfectly well, but it has nothing to do with how sound pressure increases as RPM increases. Again, max RPM of the fans on this GPU is 2300. At 1600 RPM the GPU outputs 33 dB, and at 1800 RPM it outputs 34.5 dB. Simple math says that at max fan RPM the GPU would reach 39 dB.

The problem is that you believe your GPU is at 50 dB because the reference blower cooler is able to output that level of noise. The Red Devil does not.

GPUs can be faulty. Some people have problems, some don't. Read properly, and keep an open mind too, because that link also shows people with perfectly cool and quiet Red Devil GPUs. You may well have a problem with your particular card, but the GPU itself will not go to 50 dB as you have said.

I am not saying you are lying about having a problem.
 
And I'm telling you that I don't have any problem, nor am I lying. Why are you so intent on defending a product that you have never used? All you keep posting are stats on paper, while people are talking about an actual product that they own and use daily. Stats on paper are nice, but they're not the ultimate arbiter of what happens in the real world. All your tech sites and YouTube hardware testers do their tests either on an open bench or in an enthusiast case, not in an HP or Dell prebuilt system.

So yeah, MY PowerColor Red Devil is noisy; that you disagree because of something you read on a tech site, without experimenting with it yourself, doesn't mean much. This PC of mine was silent with my old GTX 660, was also silent with my reference 7870, and was silent with the GTX 745 that I got when the 7870 went belly up. But it sure ain't silent now...
 
The software creators have pretty much decided who the winner is by choosing CUDA over OpenCL in far more pro software packages.

Not really true. FCPX and Apple's video apps work better under OpenCL, Blackmagic's Resolve works better under OpenCL, and Adobe's apps are being recoded and will use OpenCL more efficiently.

https://forum.blackmagicdesign.com/viewtopic.php?f=21&t=49713

Here are the Standard Candle results for the new $199 AMD Radeon RX 480 in RED, the Nvidia Titan X in GREEN, and the GTX 1060 in BLACK.
Resolve 12.5 Studio, X99 motherboard, 12-core Xeon.

Blur:
09 nodes: RX 480: 24 fps | Titan X: 24 fps | GTX 1060: 24 fps
18 nodes: RX 480: 20 fps | Titan X: 16 fps | GTX 1060: 14 fps
30 nodes: RX 480: 13 fps | Titan X: 11 fps | GTX 1060: 9 fps
66 nodes: RX 480: 6 fps | Titan X: 5-6 fps | GTX 1060: 4 fps

TNR:
1: RX 480: 24 fps | Titan X: 24 fps | GTX 1060: 24 fps
2: RX 480: 18 fps | Titan X: 17-20 fps | GTX 1060: 17 fps
4: RX 480: 9 fps | Titan X: 11 fps | GTX 1060: 8 fps
6: RX 480: 7 fps | Titan X: 8 fps | GTX 1060: 6 fps
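To see the scaling more clearly, here is a small sketch that computes the RX 480's lead over the GTX 1060 from the Blur numbers above (where a range was reported, the midpoint is assumed):

```python
# DaVinci Resolve Standard Candle Blur results from above, in fps.
# nodes -> (RX 480, Titan X, GTX 1060); "5-6" is taken as 5.5.
blur = {
    9:  (24, 24,  24),
    18: (20, 16,  14),
    30: (13, 11,  9),
    66: (6,  5.5, 4),
}

for nodes, (rx480, titanx, gtx1060) in sorted(blur.items()):
    print(f"{nodes:2d} nodes: RX 480 is {rx480 / gtx1060:.2f}x the GTX 1060")
# 1.00x, 1.43x, 1.44x, 1.50x: the RX 480's lead grows as the load increases.
```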


Big thanks to Peter Richards of fixafilm.com for purchasing the 480 card. As there are no compute-based reviews of the card, he thought it important that it be tested for OpenCL performance. Let me know if you want any other benchmarks run.

EDIT: Added GTX 1060 (Gigabyte Gaming G1 overclocked 1060) results.

I think you still have to choose your software first and base your hardware decisions on that. I think that's the main problem with the TDP, and why they have to change their design so professionals can choose video cards based on requirements.
 
Currently, Blender using OpenCL on an AMD platform is faster than using CUDA on an Nvidia GPU:
[Chart: Blender render timings, Timings.png]

Of course, this reflects the raw compute horsepower difference between the GPUs (4.4 vs 5.7 TFLOPs), but it shows that a proper implementation of OpenCL is not worse than CUDA.

And in general you can get much better results with AMD hardware for less, because their cards are cheaper and offer higher compute performance than Nvidia's. I would actually like to see a comparison in those tests between the RX 480 and GTX 1070, because both GPUs have similar compute performance.
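For context, the TFLOPs figures quoted throughout this thread come from a simple formula: shader count × boost clock × 2 (one fused multiply-add, i.e. two FLOPs, per shader per clock). A sketch using the commonly quoted reference specs (an assumption; actual boost clocks vary by card and vendor):

```python
def fp32_tflops(shaders, boost_mhz):
    """Theoretical FP32 throughput: shaders * clock * 2 FLOPs per cycle."""
    return shaders * boost_mhz * 2 / 1e6  # shaders * MHz * 2 -> TFLOPs

gpus = {  # name -> (shader count, reference boost clock in MHz)
    "RX 480":   (2304, 1266),
    "GTX 1060": (1280, 1708),
    "GTX 1070": (1920, 1683),
}

for name, (shaders, mhz) in gpus.items():
    print(f"{name}: {fp32_tflops(shaders, mhz):.2f} TFLOPs")
# RX 480 ~5.8, GTX 1060 ~4.4, GTX 1070 ~6.5: the figures used in this thread.
```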
 
If you're referring to the 5,1 model, I believe the only reason there was a 2012 refresh with 'newer' CPUs using the same socket was that the 2010 model's Nehalem CPUs were about to be discontinued by Intel. 2012 was considered a speed bump at the time, but Apple was forced to act by something out of its control.

Forced to act, but not because of those specific 2010 CPUs. The 2012 Mac Pro's parts were 2009-2010 based too. The W3530 used in the baseline 2010 Mac Pro remained available for tray purchases until 2014; the boxed retail part was discontinued in September 2013.

http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon%20W3530%20-%20AT80601000897AB%20(BX80601W3530).html

The 2012 Mac Pro's entry processor was the W3565. Exact same retirement dates.

http://www.cpu-world.com/CPUs/Xeon/Intel-Xeon%20W3565%20-%20AT80601002727AB%20(BX80601W3565).html

In fact, the W3565 has an even older introduction date than the W3530.


What happened in June 2012 that forced Apple to act was that all of their workstation competitors introduced Xeon E5 1600 v1 (Sandy Bridge) systems, and Apple had nothing to ship in the same class. There were processors that Apple hadn't used in the 2010 models (and price cuts to be passed along), and Apple was forced to put on some externally visible show that they were not completely abandoning the Mac Pro.

There certainly were newer, better GPUs, and Apple did nothing with those. Again, the GPUs they stuck with were on the same slippery slope, not far from discontinuation, just like the CPUs. [A third-party card appeared that they could have offered as a standard build, but they didn't.]


And finally, we have the evolution of the Metal API, which replaces OpenCL, the big thing Apple was trying to push when they launched the 6,1. Crucially, this is billed as Apple's own version of DirectX, so they are at least taking control of their own graphics destiny rather than relying on ancient implementations of OpenGL or OpenCL.

Metal doesn't really replace OpenCL any more than Vulkan replaces OpenCL. There are some computational overlaps, but it isn't a replacement.


Just make sure the cooling solution can deal with an Nvidia 1080 Ti or an AMD Vega, and Apple can set about designing a silent case around a single GPU and maybe a set of E3 Xeon CPUs to keep the hackintoshers at bay, all the while getting Nvidia or AMD to create Mac Editions of their hardware built to fit the slot (custom size if need be), which would also help prevent people from flashing Mac ROMs onto a variety of non-standard hardware.

The core issue is that standard-form PCIe cards don't deal with Thunderbolt well. A secondary, optional GPU card doesn't have to tie into Thunderbolt; the primary GPU card, however, does have that constraint. A single secondary, optional slot still allows those who want some third-party GPU to have one. That way there would be two GPUs; if you want to ignore one, you can.

Mac Pros are in the 1-3% zone of the Mac market. Getting along with the other 97+% of the Mac market supersedes some smaller subset of that 1-3% wanting to throw Thunderbolt completely off the Mac Pro.

Using E3 doesn't make much sense either. The RAM and core-count limitations don't help much, and the PCIe limitations make it a non-starter for a Mac Pro. Skylake-W may have some E3-like limits on the 4-core option (if the uncore PCIe limitations are expanded out to match the rest of the lineup), but at least it will be a shared socket and chipset design. A forked logic board likely doesn't make sense; it isn't going to save that much money in Apple's context of limited R&D resources. I don't think cost differences are going to be a major factor here. (The baseline E3 die that Intel stuffs into a Skylake-W socket probably won't be that much more expensive even if it adds more PCIe lanes. And if the price is the same because the lane limitations are the same, then it is still a non-starter.)


The Hackintosh folks aren't really a problem. Their market impact is likely in the sub-1% range. They moan and groan extremely loudly, but the dollars they represent probably aren't worth chasing. Trying to get that number as close to zero as possible has diminishing financial returns.

One secondary GPU slot would likely cut way down on the moaning and groaning; highly likely enough to keep the Hackintosh share down in the sub-1% zone. Some folks are going to complain no matter what.

What I think Apple will be taking their time over is making a small case that is as silent or quiet as the 2013 Mac Pro while still fitting on the desktop, and they may find themselves in competition with a lot of vendors if that's all they were aiming for. It's also Apple, so we should expect more innovation than that; just don't involve dousing modules in mineral oil or something equally exotic :rolleyes:.

Desktop-targeted doesn't mean it is an xMac. And when doesn't Apple have lots of competition across the whole Mac lineup? Apple has about 7% of the traditional-form-factor PC market; on any given day, 90+% of folks are buying something other than macOS. For Apple executives that is a wake-up-and-"same stuff, different day" fact. It isn't new, nor is it likely to change in the future.
 
Two things:

"Currently Blender using OpenCL is faster on AMD platform"​

Clearly "fake facts", meant to mislead. The chart that you included clearly says in the header that it was comparing 200 € cards - in other words a lower middle range GTX 1060 against the high end RX 480.

It's very dishonest to say "AMD is faster" without noting that you're comparing the fastest ATI chip to a low-midrange Nvidia chip. Dishonest.

Why didn't you cite a source comparing a Titan Xp to the latest ATI chip? Do you have an agenda?

Second thing:

Why didn't you cite a source?

No links from your post to the original site. Hiding something, perhaps?

Don't you realize that tricks like this destroy any credibility that you might have had? Your posts are misleading and dishonest.
 
The fastest AMD GPU is the Fury X. Maybe you do not know that? The GPU you call the fastest AMD chip is still a mainstream GPU. So, by your own account, a mainstream AMD GPU is faster than Nvidia's midrange (it's actually still mainstream) GPU.

Why didn't I cite a source? Actually, I cited the source for this in the "Waiting for Mac Pro 7,1" thread. And they compared €200 GPUs against each other. Why do you accuse me of being dishonest when Nvidia is not able to compete with AMD at that particular price level? I have already said the difference in performance is due to the higher compute performance of the RX 480 (5.7 TFLOPs vs 4.4). And I would like to see the difference between a 6.5 TFLOPs and a 5.7 TFLOPs GPU from the two vendors (GTX 1070 vs RX 480).

I'm sorry, Aiden, but your love for Nvidia is destroying your ability to understand what is written without projecting onto it. We can see the difference between two mainstream GPUs. Why can't we expect the same thing to happen with higher-performing GPUs from both vendors? Would the Fury X be on par with a GTX 1080?
 
https://wiki.blender.org/index.php/Dev:Source/Render/Cycles/OpenCL

This is the actual source, if anyone is interested.

Quote from the site:

OpenCL on other platforms
OpenCL works fine on NVIDIA cards, but performance is reasonably slower (up to 2x slowdown) compared to CUDA, so it doesn't really worth using OpenCL on NVIDIA cards at this moment.

Intel OpenCL works reasonably well. It's even possible to use OpenCL to combine GPU and CPU to render at the same time, for until some more proper solution is implemented.

The tests are done with OpenCL (AMD) vs CUDA (Nvidia).

And the results are this:
[Chart: Blender render timings, Timings.png]

The point stands. Currently, OpenCL implemented in software is not worse than CUDA (at least, it does not have to be). AMD GPUs have higher compute performance than Nvidia GPUs, and you will get better results with them, which we see in the chart above.

I am actually interested in comparing the $170 RX 480 vs the $350 GTX 1070, because both GPUs have similar (theoretical) compute performance, but it would all come down to the software which one ends up faster. It would be pretty interesting to see.
 
The 500 series is a bit of a letdown. We knew it was a rebrand, but still.
It seems you can upgrade your 480 into a 580, but beware of the power/heat issues.
One thing is new and welcome: the mid-power memory clock. About time.
Let's hope Vega brings some fresh stuff to talk about, soon.
 

You left out the really important information from that page:

Cycles has a split OpenCL kernel since Blender release 2.75. It's an alternative approach to what is used on CPU (so called megakernel). The idea behind splitting the kernel is to have multiple smaller kernels which are not only simpler and faster to compile, but also have better performance. Initial split kernel patch was done by AMD. Further work was also funded by AMD.

The OpenCL Split Kernel now supports nearly all features. Only Correlated multi jitter is missing. Baking works, but uses the mega kernel. Volumetrics, SSS, Branched path tracing, HDR lighting and Denoising are fully supported.

With current drivers, all production files from the official cycles benchmark pack, including the huge one from Gooseberry, render now pretty fast.

AMD made changes to make this application run faster on their GPUs, perhaps at the expense of NVIDIA performance. Their new path doesn't even support all the features of the standard path.

Also, please refrain from making blanket statements like "AMD GPUs have higher compute performance than Nvidia GPUs" because this is simply not true in general. As I've tried to explain to you many times, some applications will run better on AMD and some will run better on NVIDIA. In this specific case, Blender might run faster on AMD when OpenCL is used, but given that this chart appears to be comparing OpenCL performance between AMD and NVIDIA, and not AMD/OpenCL vs NVIDIA/CUDA as you'd expect, I think it's somewhat misleading w.r.t. overall Blender performance.
 
You forget one important thing. The tests compare OpenCL performance on an AMD GPU vs CUDA performance on an Nvidia GPU. So what you are saying is that AMD cheated everybody by gimping Nvidia's CUDA performance? THIS IS THE POINT OF THE POST! It is a comparison between CUDA and OpenCL on different vendors.

And this is the actual source: tweets in which the author specifically says OpenCL is on par with CUDA.
https://twitter.com/tonroosendaal/status/851850578112151556
https://twitter.com/tonroosendaal/status/852103617742073857
"We tested OpenCL vs Nvidia CUDA." Any questions?

Thank you; you can resist reality all you want, but that does not change anything. The likes under your post from Tuxon and Aiden make me want to cry out loud from laughing.
 

Where on that page does it say the graph is measuring CUDA performance on NVIDIA? I specifically said it appeared that they were using OpenCL on both, because that whole page is talking about OpenCL. If you have information that suggests otherwise, great, thanks for sharing.

It doesn't change my fundamental point: AMD altered the application to run better on their hardware. So, if you care about Blender performance, great, go and buy an AMD card (assuming that an RX 580 can beat a GTX 1080 Ti of course, which seems rather unlikely). As I keep saying, some applications are tuned to run well on AMD, some are tuned to run well on NVIDIA. That's why you shouldn't make blanket statements like "AMD is better at compute than NVIDIA", you'll notice that I've never said that but it's something that you continually post here.
 
https://docs.google.com/spreadsheets/d/1YC0R06lLDn0pECDDridUTxEZDboAzzyjotZLQmOi3Og/edit#gid=0
Spreadsheet.

Tweet from this guy: https://twitter.com/tonroosendaal/status/852103617742073857
He specifically says: "We tested NVidia CUDA and AMD OpenCL." The wiki page links to a spreadsheet with detailed information.

And he is the chairman of Blender Foundation. You are accusing him right now of blatantly lying to people. Come on.

In my first post I specifically said that the difference in performance between the GTX 1060 and RX 480 is because of the AMD GPU's higher compute performance. Many times I have written to you that in properly optimized software you will not see a difference between similarly powerful GPUs. That is why I also said that I would like to see a comparison between the GTX 1070 and RX 480, because both GPUs have similar compute performance.

P.S. It is funny that Nvidia can use their proprietary API to make applications run faster on their hardware, but when AMD funds optimization for their own hardware, without affecting their competitor (who uses their own proprietary API), it's all cheating.

Laughable.
 

Where did I say they were cheating, or that that guy is lying to us? Please re-read my posts, I said no such thing. All I did was point out that you omitted the fact that AMD went in and tuned Blender to run better on their GPUs, and that it wasn't clear from that page alone that the NVIDIA results were using CUDA (since the rest of the page was all about OpenCL). There's nothing wrong with AMD tuning Blender to run better for them, though it's worth knowing that they did it. As I said, if you really care about Blender performance, then you should absolutely go and buy an AMD GPU because they've made that application run very well on their GPUs. Again, my underlying point is that for every instance of an application running better on AMD, there are applications that run better on NVIDIA. That might be because it uses CUDA, or because NVIDIA helped to tune the application, or a variety of other reasons. Again, nobody said this was cheating, however I do take issue with your continued efforts to claim that AMD simply has superior compute performance across the board.
 
I don't care about Blender performance. I care about compute performance, and about the spread of misinformation on forums.

I don't claim that AMD has a superior compute lineup. I say that in a particular price range AMD has superior compute GPUs compared to Nvidia. That is why I asked for a comparison between the GTX 1070 and RX 480, because both have similar compute performance, in theory. The difference in the quoted links exists because there is a huge difference in performance between the GTX 1060 and RX 480 (4.4 vs 5.7 TFLOPs). Properly optimized software can work miracles, as you can see, and it is a good demonstration that with proper optimization OpenCL is no worse than CUDA. That is the whole point.
The RX 550 is not worth it; it is priced too close to the RX 460.
I actually agree with this :).
 
I don't claim that AMD has a superior compute lineup. I say that in a particular price range AMD has superior compute GPUs compared to Nvidia.

Actually, no you didn't, you've been making very broad statements like:

"And in general you can get much better results with AMD hardware, for less, because they are cheaper, and offer higher compute performance, than Nvidia."

If you're talking about Blender, which you appear to be doing, then please say something like:

"And in Blender you can get much better results with AMD hardware, for less, because they are cheaper, and offer higher compute performance, than Nvidia."

because AMD has tuned that specific application to work better on their architecture. Here are some counterexamples, where the 1060 destroys the RX 480:

[Benchmark charts (83299.png, 83301.png, 83302.png): results including CompuBench FaceDetection, particle simulation, and Folding@Home]


So again, it all depends on the application and it's not really accurate to say "AMD is better at compute than NVIDIA" just as it's not accurate to say "NVIDIA is better at compute than AMD".
 
Erm, do you know what CompuBench checks in FaceDetection? ;)

If you knew, you would know why it is faster than the AMD GPU ;).

Let's end this conversation; it will not go anywhere from here. Back to the topic.
 

Sure, let's just ignore the particle simulation results or Folding@Home, and keep making blanket statements that AMD is better at compute than NVIDIA.

I have no objection to you pointing out specific cases where AMD performs better, as long as your statements are specific to that case. I do object to you cherry-picking examples and using that as the basis for broad sweeping claims about how AMD is better at compute than NVIDIA. If you stop doing that, I'll stop complaining about your posts.
 
Let it rest, Asgorath... Koyoot is on a crusade. He's just not realising what he's doing.
After all, he's here to fight the "spread of misinformation over forums"!

 
Seems like everyone is in a pissing match; it's interesting to hear both sides of the argument... but geez.
 