Links?

I don't remember that, and I have dozens of them. Nvidia has been shipping drivers and CUDA ready for the latest GPUs for quite some time.

We got pretty much immediate speedups from upgrading - just upgrade CUDA to the new version that supports Maxwell/Pascal and things are faster immediately. (Note that my focus is compute-only - no games, and no 3D graphics.)

If you have apps that don't use the new features - no speedups until the apps are updated. That can be fixed by a strong 3rd party early adopter program to help apps be ready on launch day. Nvidia mostly seems to have that - CUDA 8 was out when Pascal shipped for real. If your apps use the CUDA libraries - they're mostly ready on day 1. (Many GPU advances enhance existing APIs without code changes. Using new APIs does need some app code changes.)
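
To make the "no code changes" point concrete, here's a minimal sketch (a hypothetical kernel, not from any real app): the same source targets Pascal either through the driver's PTX JIT or via a straight recompile with CUDA 8.

```cpp
// vecadd.cu - hypothetical minimal example, not from any real app.
#include <cstdio>

__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory keeps the example short (available since CUDA 6).
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vecAdd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // one thread per element
    cudaDeviceSynchronize();
    printf("c[0] = %f\n", c[0]);  // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Built with `nvcc -arch=sm_35 vecadd.cu`, the embedded PTX lets the driver JIT-compile it for a Pascal card; rebuilding with CUDA 8 and `-arch=sm_61` targets Pascal natively. Either way, the source is untouched.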

CUDA 9 with Volta support is nearing the end of beta, and with GV100-based cards showing up, CUDA 9 should be public soon.

ATI should be embarrassed for coming out with a 345 watt liquid-cooled card - and saying that "the software/drivers aren't ready".
You should apply to be the next Trump White House communications director. Your belief in alternative facts makes you a shoo-in.

I bet he was talking about the macOS driver, which arrived very late and still underperforms, not the Windows driver.
 
ATI should be embarrassed for coming out with a 345 watt liquid-cooled card - and saying that "the software/drivers aren't ready".

AMD doesn't have a de facto coding language that most apps use, à la CUDA. They've tried many things, and DX12/Vulkan/Metal seems to be bearing some fruit on the gaming side, but not on the pro app side. CUDA is just so much easier to approach than OpenCL ever was.

So this is definitely a software problem. AMD can't help it as long as CUDA is closed and remains the de facto language for Windows & Unix pro apps.

This could turn around on the Mac platform, because Apple showed Nvidia the finger three years ago. But on Windows, AMD won't have a good recipe for success before they can get decent CUDA support. And that won't happen any time soon, I suppose.

Metal 2 and some gaming consoles are AMD's last resort, because there the software is not the obstacle. On the Windows side they can only overclock the chip.
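
To illustrate the approachability gap, a rough sketch (hypothetical kernel, assuming the standard CUDA and OpenCL 1.x APIs): the device code is nearly identical in both, but the host-side ceremony is not.

```cpp
// Hypothetical example. The device code is near-identical in both APIs:
//   CUDA:   __global__ void scale(float* x, float s, int n) { ... }
//   OpenCL: __kernel void scale(__global float* x, float s, int n) { ... }
#include <cstdio>

__global__ void scale(float* x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1024;
    float* x;
    cudaMallocManaged(&x, n * sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    // CUDA host side: one line to launch, compiled and type-checked by nvcc.
    scale<<<(n + 255) / 256, 256>>>(x, 2.0f, n);
    cudaDeviceSynchronize();
    printf("x[0] = %f\n", x[0]);  // expect 2.0

    // The equivalent OpenCL host side needs, at minimum: clGetPlatformIDs,
    // clGetDeviceIDs, clCreateContext, clCreateCommandQueue, clCreateBuffer,
    // clCreateProgramWithSource (the kernel ships as a source string compiled
    // at run time), clBuildProgram, clCreateKernel, clSetKernelArg per
    // argument, clEnqueueNDRangeKernel, and clEnqueueReadBuffer.

    cudaFree(x);
    return 0;
}
```

That runtime-compiled source string is also part of why OpenCL debugging tends to hurt more: kernel build errors surface at run time rather than at compile time.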
 
It will be (at least) late next month before RX Vega ships.

I'll bet the BIOS isn't locked down yet...

I can't believe how AMD has kept people talking...
 
This could turn around on the Mac platform, because Apple showed Nvidia the finger three years ago.

Unlikely, unless Apple decides to stop putting the "Pro" label on amateur computers and ups their game.

[Image: 4xTitans.jpg]


If Apple wants to "turn it over", then let's see 60+ cores and 120+ threads with quad double-slot GPUs and multiple TiB of RAM. Like everyone else has.
I can't believe how AMD has kept people talking...
Who is talking other than koyoot and cube?
 
Unlikely, unless Apple decides to stop putting the "Pro" label on amateur computers and ups their game.


If Apple wants to "turn it over", then let's see 60+ cores and 120+ threads with quad double-slot GPUs and multiple TiB of RAM. Like everyone else has.
:D
I suppose the modular Mac Pro is not going to be that modular. And what you mentioned is (kind of) a niche market. Sure, it has a lot of PR value, but it doesn't make billions for Apple.

Pro as a marketing term has started to lose its value even among regular users, so I wouldn't be surprised if Apple invents another great term for its more expensive lineup. First it was Power, then Pro; what next... Business Class? Einstein Edition?
 
:D
I suppose the modular Mac Pro is not going to be that modular. And what you mentioned is (kind of) a niche market. Sure, it has a lot of PR value, but it doesn't make billions for Apple.
But it would cost Apple almost nothing to address that market.

Pro as a marketing term has started to lose its value even among regular users, so I wouldn't be surprised if Apple invents another great term for its more expensive lineup. First it was Power, then Pro; what next... Business Class? Einstein Edition?
How about "Mac Workstation"? Or "Mac Z-series"? (oops, taken) Or "MacStation"? (lame)

Any name should allow for both a desktop variant and a "big and heavy and powerful" portable variant.
 
The only cMP-capable one is the Vega 56 XL (without science). I'm reading some conflicting information, however. Some say 210 W, some say 285 W. Some say dual 8-pins, some say an 8-pin + 6-pin.

It would be nice once that's solidified, because that's a 10.5 TFLOPS card. If it performs similarly to a GTX 1070 in real-world gaming, however, then I must say the GTX 1070 is still the better card/deal.
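
For reference on those connector rumours: the PCIe spec allows up to 75 W from the slot, 75 W from a 6-pin connector, and 150 W from an 8-pin, so an 8-pin + 6-pin card tops out at 300 W (enough for a 285 W board), while dual 8-pins allow up to 375 W.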
 
The only cMP-capable one is the Vega 56 XL (without science). I'm reading some conflicting information, however. Some say 210 W, some say 285 W. Some say dual 8-pins, some say an 8-pin + 6-pin.

It would be nice once that's solidified, because that's a 10.5 TFLOPS card. If it performs similarly to a GTX 1070 in real-world gaming, however, then I must say the GTX 1070 is still the better card/deal.

For me, if the Vega 56 works OOTB, I'd prefer that over the 1070. However, if not, then relying on the Nvidia web driver is a better option than a possible kext edit.
 
The only cMP-capable one is the Vega 56 XL (without science). I'm reading some conflicting information, however. Some say 210 W, some say 285 W. Some say dual 8-pins, some say an 8-pin + 6-pin.

It would be nice once that's solidified, because that's a 10.5 TFLOPS card. If it performs similarly to a GTX 1070 in real-world gaming, however, then I must say the GTX 1070 is still the better card/deal.
I'm mostly interested in the RX Vega Nano card, but it remains to be seen what performance it is capable of and when it will even be available to purchase. Safe to say I spent more time looking at Nvidia cards last night than AMD. The 1070 does indeed look like the better option.
 
You keep moving the goal posts here. First, graphics workloads are professional. Apple is advertising the iMac Pro for VR, featuring a Vega graphics chip. There are many other "professional" workloads that stress graphics performance.

Second, Vega FE/RX Vega 64 with a TDP of 295 W and GTX 1080-level graphics performance is pathetic efficiency-wise. Sure, a downclocked chip may slightly improve this. But even if there were zero performance loss between the 295 W RX Vega 64 and the 210 W Vega Nano, it would still be less efficient than the 180 W GTX 1080.
But the graphics workloads in professional applications are run on compute kernels, not geometry kernels.

I'm sometimes dumbfounded by the level of knowledge on this forum.

CUDA is purely compute kernels; OpenCL is purely compute kernels. How can you not know this and believe that professional applications run on geometry kernels?

Have you seen a professional application that works on DirectX?

The 295 W GPU is at the performance level of the GTX 1080 in gaming, at least according to AMD and given the current state of drivers, and it will still be faster than the Titan Xp in compute-oriented applications, which I think is what matters most to professionals.
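
To make the compute-kernel point concrete, here's the shape of a typical pro-app effect as a minimal sketch (hypothetical code, not from any shipping application): a brightness pass over a frame is pure data-parallel compute and never touches the geometry pipeline.

```cpp
#include <cstdio>

// Hypothetical brightness pass - the shape of many pro-app effects:
// one thread per pixel, no vertices or rasterization anywhere.
__global__ void brighten(const unsigned char* in, unsigned char* out,
                         int numPixels, float gain) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels) {
        float v = in[i] * gain;
        out[i] = v > 255.0f ? 255 : (unsigned char)v;  // clamp to 8-bit range
    }
}

int main() {
    const int n = 1920 * 1080;  // one 1080p frame of luma samples
    unsigned char *in, *out;
    cudaMallocManaged(&in, n);
    cudaMallocManaged(&out, n);
    for (int i = 0; i < n; ++i) in[i] = 100;

    brighten<<<(n + 255) / 256, 256>>>(in, out, n, 1.5f);
    cudaDeviceSynchronize();
    printf("out[0] = %d\n", out[0]);  // expect 150

    cudaFree(in); cudaFree(out);
    return 0;
}
```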
AMD doesn't have a de facto coding language that most apps use, à la CUDA. They've tried many things, and DX12/Vulkan/Metal seems to be bearing some fruit on the gaming side, but not on the pro app side. CUDA is just so much easier to approach than OpenCL ever was.

So this is definitely a software problem. AMD can't help it as long as CUDA is closed and remains the de facto language for Windows & Unix pro apps.

This could turn around on the Mac platform, because Apple showed Nvidia the finger three years ago. But on Windows, AMD won't have a good recipe for success before they can get decent CUDA support. And that won't happen any time soon, I suppose.

Metal 2 and some gaming consoles are AMD's last resort, because there the software is not the obstacle. On the Windows side they can only overclock the chip.
You are mistaking compute kernels for geometry kernels.

There is NOTHING in CUDA that would stop it from running on AMD GPUs, because it is an architecture-agnostic compute language, just like OpenCL. But there is no CUDA compiler for AMD, apart from HIP.

Nvidia has the benefit of CUDA being heavily optimized for their architecture by Nvidia engineers. In AMD's case, OpenCL requires you to do a lot of optimization work yourself as a software developer. It's easier for developers to buy CUDA-capable graphics cards, use CUDA, and save money and time.
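
On the HIP point, its runtime API deliberately mirrors CUDA's, so porting is largely mechanical renaming. Here's a minimal sketch assuming AMD's documented HIP runtime (hipify-perl automates exactly this translation):

```cpp
// Hypothetical CUDA-to-HIP port. The kernel source is unchanged; the host
// calls are one-for-one renames (cudaMalloc -> hipMalloc, and so on).
#include <hip/hip_runtime.h>
#include <cstdio>

__global__ void scale(float* x, float s, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= s;
}

int main() {
    const int n = 1024;
    float host[1024], *dev;
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    hipMalloc((void**)&dev, n * sizeof(float));
    hipMemcpy(dev, host, n * sizeof(float), hipMemcpyHostToDevice);
    // hipLaunchKernelGGL replaces CUDA's <<<grid, block>>> launch syntax.
    hipLaunchKernelGGL(scale, dim3((n + 255) / 256), dim3(256), 0, 0,
                       dev, 2.0f, n);
    hipMemcpy(host, dev, n * sizeof(float), hipMemcpyDeviceToHost);
    printf("host[0] = %f\n", host[0]);  // expect 2.0

    hipFree(dev);
    return 0;
}
```

Compiled with hipcc, the same file targets AMD hardware - but as noted above, Nvidia's hand-tuned library optimizations don't come along for the ride.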

AidenShaw said:
You should apply to be the next Trump White House communications director. Your belief in alternative facts makes you a shoo-in.
Yes, obviously. At least I know how GPUs work and what affects performance. Maybe you should learn a thing or two about them as well?

You said the same thing when, based purely on a high-level analysis of the Ryzen architecture, I claimed the CPUs would be at Haswell/Broadwell levels of IPC. You claimed that I believed in alternative facts and was doing AMD's PR. It turned out that I was correct. What makes you believe that this time I will not be correct?
 
These TDP numbers are a bit insane, aren't they?

Certainly looking like the Vega 56 and Vega Nano are the cards to watch. The reference-design Vega 56 pulling 210 W and requiring an 8-pin plus a 6-pin connector is a bit frustrating, though.

Temps on the Vega Nano should be interesting too. The R9 Nano got pretty hot; hopefully they'll resolve that. Perhaps let AIB partners do their own versions this time?
 
Unlikely, unless Apple decides to stop putting the "Pro" label on amateur computers and ups their game.


If Apple wants to "turn it over", then let's see 60+ cores and 120+ threads with quad double-slot GPUs and multiple TiB of RAM. Like everyone else has.
Who is talking other than koyoot and cube?
What are GTX cards doing in a "pro" computer? I thought those cards belonged to the consumer category. Why not Quadro cards - price?

Is this computer used as a workstation or as a server? It looks like overkill for Photoshop, FCPX, and other similar content creation software. Apple has chosen to build computers for this market segment for decades. If this computer is a server, there are lots of even more impressive PFLOPS machines around. Do you suggest Apple should compete with those vendors as well?
 
Certainly looking like the Vega 56 and Vega Nano are the cards to watch. The reference-design Vega 56 pulling 210 W and requiring an 8-pin plus a 6-pin connector is a bit frustrating, though.

I was wondering myself if this wouldn't be a great use for the EVGA PowerLink. I remember reading on here that another member noted the PowerLink helped balance the power consumption evenly between the two PCIe cables. The design/location of the power inputs on the Vega cards looks like they would be compatible with the PowerLink.
 
You are mistaking compute kernels for geometry kernels.

There is NOTHING in CUDA that would stop it from running on AMD GPUs, because it is an architecture-agnostic compute language, just like OpenCL. But there is no CUDA compiler for AMD, apart from HIP.

That is what I meant. I didn't mix up CUDA with DX12; I used DX12 just as an example of how a different coding approach has proved AMD's hardware is pretty good. CUDA is a software problem AMD cannot get around (for now). Programmers love to use CUDA and don't like OpenCL. Debugging OpenCL takes a lot more time, the tools are not as nice, and support is lacking as well. If AMD had similar tools that everybody loved to use, it wouldn't have any difficulty competing with Nvidia at the hardware level, and AMD wouldn't need to overclock its products. The Nano and Radeon Pro products are proof that AMD can be quite power efficient.

DX12 has given new life to older AMD GCN cards as well, and if AMD can find a solution to compete with CUDA, it will do the same for pro apps. There's a lot of horsepower inside AMD's GPUs, but the software doesn't know how to use it well. The same problem lies within the still-popular DX11 games. Overclocking will continue.

Summa summarum, software is AMD's Achilles' heel. Not just drivers, but the programming culture software companies have. They favour CUDA, and that is the best asset Nvidia has.

Metal 2 with its new tools is about to change this on the Mac platform, but I cannot see AMD competing with Nvidia on other platforms in the near future. Not without overclocking the h*** out of their GPUs.
 
That is what I meant. I didn't mix up CUDA with DX12; I used DX12 just as an example of how a different coding approach has proved AMD's hardware is pretty good. CUDA is a software problem AMD cannot get around (for now). Programmers love to use CUDA and don't like OpenCL. Debugging OpenCL takes a lot more time, the tools are not as nice, and support is lacking as well. If AMD had similar tools that everybody loved to use, it wouldn't have any difficulty competing with Nvidia at the hardware level, and AMD wouldn't need to overclock its products. The Nano and Radeon Pro products are proof that AMD can be quite power efficient.

DX12 has given new life to older AMD GCN cards as well, and if AMD can find a solution to compete with CUDA, it will do the same for pro apps. There's a lot of horsepower inside AMD's GPUs, but the software doesn't know how to use it well. The same problem lies within the still-popular DX11 games. Overclocking will continue.

Summa summarum, software is AMD's Achilles' heel. Not just drivers, but the programming culture software companies have. They favour CUDA, and that is the best asset Nvidia has.

Metal 2 with its new tools is about to change this on the Mac platform, but I cannot see AMD competing with Nvidia on other platforms in the near future. Not without overclocking the h*** out of their GPUs.
Gaming software is different from compute. Why do you believe that gaming software reflects the compute performance of AMD GPUs? That is what I meant.
 
Gaming software is different from compute. Why do you believe that gaming software reflects the compute performance of AMD GPUs? That is what I meant.
DX12 games have proved that DX11 was not the correct software approach for AMD's GCN. It was just an illustration I used to show how coding can make a difference. Don't stumble over it any more. :) Compute is another thing, but there as well AMD would look better, for instance with Adobe software, if the CUDA/OpenCL mess didn't exist.

As I said, AMD's GCN is quite an elegant solution, but if the software support is lacking, then... so much for the elegant solution. Game consoles and the Mac may be where AMD can (start to) thrive.
 
DX12 games have proved that DX11 was not the correct software approach for AMD. It was just an illustration I used to show how coding can make a difference. Don't stumble over it any more. :) Compute is compute, and AMD would look better, for instance with Adobe software, if the CUDA/OpenCL mess didn't exist.
The problem is that MOST of this forum, which is supposed to be professionals, is completely forgetting that Vega is faster than the Titan Xp in compute. And yet they call it a failure.

Why is that? Incompetence? Pure hatred of AMD?

The fact that Nvidia started a huge marketing campaign targeted at the Titan Xp after the release of Vega should show you something. The fact that they have just released new drivers for the Titan Xp should tell you something.

Why? Because RX Vega costs $499 and is faster in compute than a GPU that costs $1,199, while drawing around the same amount of power.
 
The problem is that MOST of this forum, which is supposed to be professionals, is completely forgetting that Vega is faster than the Titan Xp in compute. And yet they call it a failure.

Why is that? Incompetence? Pure hatred of AMD?

The fact that Nvidia started a huge marketing campaign targeted at the Titan Xp after the release of Vega should show you something. The fact that they have just released new drivers for the Titan Xp should tell you something.

Why? Because RX Vega costs $499 and is faster in compute than a GPU that costs $1,199, while drawing around the same amount of power.
Could be, and I won't deny that. But how that raw power translates to real-world use is up to the software. I hope AMD is able to improve matters on that side. We've seen how powerful FCPX can be with AMD, and how lacking the same system can be with Adobe Premiere Pro CC.
 
Could be, and I won't deny that. But how that raw power translates to real-world use is up to the software. I hope AMD is able to improve matters on that side.
AMD drivers are good. It's up to developers to make good use of the architecture.

And that goes for both compute and gaming.
 