They call the S a "GPU accelerator", and the R a "graphics card".ATI's thinking is complete disregard for the facts.
They call the S9300-x2 a dual GPU professional card, and the newer Radeon Pro Duo the *first* dual GPU professional card.
They call the S a "GPU accelerator", and the R a "graphics card".ATI's thinking is complete disregard for the facts.
They call the S9300-x2 a dual GPU professional card, and the newer Radeon Pro Duo the *first* dual GPU professional card.
So, adding a couple of VGA connectors makes it something completely different?
But that's an improvement, no?

And why do you think I claim anything otherwise? Because, out of technological curiosity, I focus on the changes in the GV100 chip and on the architecture layout at both high and low level?
You've been claiming, without references, that consumer Pascal is reused Maxwell. So isn't reused Pascal a big leap?
Can you drive a monitor if you use a GPU accelerator instead of a graphics card in a workstation?
I'd call that "renaming the goal posts". Seriously.
Wait for answers to my question? That's what I am actually doing.
No, wait for the actual product to come out instead of speculating.

I am not speculating. I asked about people's thoughts, predictions, hopes, and dreams about the architecture.
The funniest part is that I am actually enjoying the doom and gloom over the "very long wait time for Vega" on the forums.
Is this part of your crusade?

Is that another part of your crusade?
GPU allocation seems nice.
So, Vega will be here this quarter? Meaning by the end of June?
Let's hope this time AMD will really deliver and make our jaws drop. I believe they're going in the right direction, but... we'll see.
With this new Pro Duo, I'm not sure there will be a Vega Pro soon.
ATI's thinking is complete disregard for the facts. They call the S9300-x2 a dual GPU professional card, and the newer Radeon Pro Duo the *first* dual GPU professional card.

Well, I wrote months ago that AMD's PR and marketing teams are a complete atrocity.
Blender Cycles was written for CPU and CUDA first, and it was an awful mess to get it to work properly using OpenCL on any card. The story goes back a few years, but I'll try to summarize:
I believe AMD was forced to support the large render kernel in Blender Cycles as it did not even compile. One issue was an error somewhere (I don't remember if it was hardware or software) that meant the AMD card could not compile a kernel if it was too big, despite having enough RAM. If I recall correctly, the people over at LuxRender tried a split-kernel approach in order to be able to run LuxRender, and it turned out that a split kernel (microkernels) had some positive speed effects as well. It seems that OpenCL on AMD cards now performs comparably to CUDA in Blender Cycles on comparable cards. However, the split-kernel approach still needs to be implemented for Mac (according to the Blender home page).

At any rate, it is a great achievement by the open source community, with AMD's help, to make OpenCL competitive, which of course limits vendor lock-in. Vendor lock-in does not seem to go down well with the open source community. Furthermore, I think the competition in the GPU market has driven the development of GPU renderers because GPU compute is cheap. In order for that competition to work, OpenCL (or Metal for Apple) needs to work efficiently; otherwise we will have a de facto Nvidia monopoly, and we have seen what the lack of competition for Intel has resulted in... very little, very slowly.
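To make the megakernel vs. split-kernel ("microkernel") idea concrete, here is a minimal OpenCL-C-style sketch. This is not Blender Cycles or LuxRender code; the types, helper functions, and kernel names are invented for illustration, and a real renderer carries far more per-path state.

```c
/* Toy sketch of "one big megakernel" vs. a split kernel.
 * Types and helpers are stand-ins, not real Cycles/LuxRender code. */

typedef struct { float4 origin; float4 dir; } Ray;
typedef struct { int valid; float4 normal; } Hit;
typedef struct { Ray ray; float4 radiance; int alive; } PathState;

/* Stub device functions standing in for BVH traversal, shading, sampling. */
Hit intersect_scene(Ray r)
{
    Hit h;
    h.valid = (r.dir.y > 0.0f);
    h.normal = -r.dir;
    return h;
}

float4 shade(Hit h, Ray r)
{
    return fabs(dot(h.normal, -r.dir)) * (float4)(1.0f);
}

Ray next_bounce(Hit h, Ray r)
{
    r.dir = h.normal;
    return r;
}

/* Megakernel: the whole path-tracing loop lives in one kernel, so the
 * OpenCL compiler has to swallow everything (traversal, shading,
 * sampling) as a single unit -- the thing that reportedly got too big
 * to build on some OpenCL stacks. */
__kernel void megakernel(__global const Ray *rays,
                         __global float4 *output,
                         const int max_bounces)
{
    int gid = get_global_id(0);
    Ray ray = rays[gid];
    float4 radiance = (float4)(0.0f);

    for (int b = 0; b < max_bounces; b++) {
        Hit hit = intersect_scene(ray);
        if (!hit.valid)
            break;
        radiance += shade(hit, ray);
        ray = next_bounce(hit, ray);
    }
    output[gid] = radiance;
}

/* Split kernel: the same loop chopped into small kernels that the host
 * enqueues in sequence, with per-path state parked in global memory.
 * Each piece compiles separately and stays small. */
__kernel void kernel_intersect(__global PathState *s, __global Hit *hits)
{
    int gid = get_global_id(0);
    if (s[gid].alive)
        hits[gid] = intersect_scene(s[gid].ray);
}

__kernel void kernel_shade_and_bounce(__global PathState *s,
                                      __global const Hit *hits)
{
    int gid = get_global_id(0);
    if (!s[gid].alive)
        return;
    if (!hits[gid].valid) {
        s[gid].alive = 0;
        return;
    }
    s[gid].radiance += shade(hits[gid], s[gid].ray);
    s[gid].ray = next_bounce(hits[gid], s[gid].ray);
}
```

The practical difference is that the host enqueues the small kernels in a loop (one pass per bounce) instead of launching one giant kernel, so the compiler never has to digest the whole path tracer at once, and finished or divergent paths can be retired between passes.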
While we are at it: can we please get a proper "core war" between AMD and Intel, so those of us who use CPU render engines can get lots of cheap cores?
What do you mean by "core war" and "cheap cores"?

8 cores you can buy right now for a lot less than the price of 6 cores.
https://www.amazon.com/Gigabyte-Radeon-Windforce-Graphics-GV-RX460WF2OC-2GD/dp/B01K1JV83C/

A GT 710 costs $40, so I would say $65 would be a fair price for the RX 550.
Yay! Just what we need, more Vega speculation!
If ATI can't do actual Vega "shipments", then "speculation" is the next best thing.
I'm also amused by the "regain platform leadership" bit in the title. When was the last time that AMD led in CPUs? Or GPUs?
They've been the "cheap bargain basement" for ages.
ATI Radeon HD 5870. September 2009.
In my eyes, the last time AMD was undisputed in CPUs was in the Athlon 64 era. ...

GPU-wise, I would say Tahiti (HD 7970) was pretty good and better than Nvidia's best at the time.
So, from time to time AMD has led by some measure for a few months. That's hardly "platform leadership" if it isn't sustained across several generations of products.
I notice that the ATI/AMD fans here today seldom say "performance" unless they qualify it with "performance per watt" or "performance per dollar". It's OK for it to be "slow but cheap".
And they haven't been able to claim superior perf/watt in a very long time (certainly not since the Maxwell generation from NVIDIA, and probably all the way back to Kepler). It's pretty sad that a 185W RX 580 can barely beat a 120W GTX 1060, and gets destroyed by a 180W GTX 1080.

They had superior performance per watt in the 28 nm era. Maybe you guys were so fond of gaming performance that you completely forgot about it?