ATI's thinking shows a complete disregard for the facts.

They call the S9300-x2 a dual GPU professional card, and the newer Radeon Pro Duo the *first* dual GPU professional card.
They call the S9300 x2 a "GPU accelerator", and the Radeon Pro Duo a "graphics card".
 
But that's an improvement, no?

You've been claiming, without references, that consumer Pascal is reused Maxwell. So isn't reused Pascal a big leap?

:rolleyes:
And why do you think I claim anything otherwise? Because, out of technological curiosity, I focus on the changes in the GV100 chip and on its architecture layout at both the high and low levels?

If you want to see one of the references:

There is absolutely no core-for-core performance improvement in Pascal GPUs compared to Maxwell. Reusing the GP100 chip would bring at least a 30% core-for-core improvement. For example, a GP100 chip with a 1.2 GHz core clock would be around 35% faster than the GTX Xp, even while using slightly less power due to the lower core clock (around 230 W). Of course, consumer GPUs will not have as much FP64 focus as GP100 does, so power consumption should also be lower in the end.
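As a side note on the arithmetic: a common back-of-the-envelope proxy for this kind of claim is throughput ≈ shader count × clock × per-core gain. Here is a minimal sketch of that proxy; the core counts, clocks, and the 1.3× per-core factor below are assumptions to plug your own numbers into, not benchmark results, and the model ignores memory bandwidth, caches, and drivers entirely.

```python
# Back-of-the-envelope FP32 throughput proxy (illustrative only).
# Ignores memory bandwidth, caches, ROPs and drivers -- all of which matter.

def relative_throughput(cores, clock_ghz, per_core_gain=1.0):
    """Crude proxy: throughput ~ cores * clock * per-core gain."""
    return cores * clock_ghz * per_core_gain

# Hypothetical comparison -- these numbers are assumptions, not measurements:
gp100_style = relative_throughput(cores=3584, clock_ghz=1.2, per_core_gain=1.3)
gp102_style = relative_throughput(cores=3840, clock_ghz=1.6, per_core_gain=1.0)
print(f"GP100-style vs GP102-style: {gp100_style / gp102_style:.2f}x")
```

Whether that ratio lands above or below 1.0 depends entirely on the clocks and per-core gain you assume, which is exactly where the disagreement is.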

Volta may be a new architecture from Nvidia, just like Vega is from AMD. That is why I am curious to see what they will change.
 
So, adding a couple of VGA connectors makes it something completely different?

I'd call that "renaming the goal posts". Seriously.
Can you drive a monitor if you use a GPU accelerator instead of a graphics card in a workstation?

It is as reasonable as making a distinction between a consumer dual-GPU graphics card and a professional dual-GPU graphics card.
 
No, wait for the actual product to come out instead of speculating.
I am not speculating. I asked about people's thoughts, predictions, hopes, and dreams about the architecture.

The funniest part is that I am actually enjoying the doom and gloom over the "very long wait time for Vega" on the forums.
 

Is that another part of your crusade? :p
 
GPU allocation seems nice.

So, Vega will be here this quarter? Meaning by June?
Let's hope this time AMD will really deliver and make our jaws drop. I believe they're going in the right direction, but... we'll see.
With this new Pro Duo, I'm not sure there will be a Vega Pro soon.

AMD will announce Vega in May, since Nvidia is announcing new products.
 
ATI's thinking shows a complete disregard for the facts.

They call the S9300-x2 a dual GPU professional card, and the newer Radeon Pro Duo the *first* dual GPU professional card.
Well, I wrote months ago that AMD's PR and marketing teams are a complete atrocity.
 
Blender Cycles was written for CPU and CUDA first, and it was an awful mess to get it to work properly using OpenCL on any card. The story goes back a few years, but I'll try to summarize:

I believe AMD was forced to support the large render kernel in Blender Cycles, as it did not even compile. One issue was an error somewhere (I don't remember if it was hardware or software) that meant the AMD card could not compile a kernel if it was too big, despite having enough RAM. If I recall correctly, the people over at LuxRender tried a split-kernel approach in order to be able to run LuxRender, and it turned out that a split kernel (microkernels) had some positive speed effects as well. It seems that OpenCL on AMD cards now performs comparably to CUDA in Blender Cycles on comparable hardware. However, the split-kernel approach still needs to be implemented for Mac (according to the Blender home page).

At any rate, it is a great achievement by the open source community, with the help of AMD, to make OpenCL competitive, which of course limits vendor lock-in. Vendor lock-in does not seem to go down well with the open source community. Furthermore, I think that competition in the GPU market has driven the development of GPU renderers, because GPU compute is cheap. In order for that competition to work, OpenCL (or Metal, for Apple) needs to work efficiently; otherwise we will have a de facto Nvidia monopoly, and we have seen what a lack of competition for Intel has resulted in... very little, very slowly.
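For readers unfamiliar with the megakernel-versus-split-kernel distinction, here is a toy CPU-side sketch of the control-flow difference. It is not Blender's or LuxRender's actual code, just the shape of the idea, with a fake intersection test standing in for real ray tracing.

```python
# Toy sketch: "megakernel" vs "split kernel" structure for a path tracer.
# NOT Blender/LuxRender code -- only the control-flow shape, run on the CPU.
import copy
import random

def generate_rays(n):
    return [{"x": random.random(), "alive": True, "color": 0.0} for _ in range(n)]

def megakernel(rays, bounces=3):
    # One big per-ray loop: intersection and shading live in a single program,
    # which on the GPU compiles to one very large kernel.
    for r in rays:
        for _ in range(bounces):
            hit = r["x"] > 0.5          # fake intersection test
            if not hit:
                break
            r["color"] += 0.5           # fake shading

def split_kernel(rays, bounces=3):
    # Several small passes over the whole wavefront of rays: each stage would
    # be a small, separately compiled kernel, which is what made the OpenCL
    # build tractable.
    for _ in range(bounces):
        live = [r for r in rays if r["alive"]]
        if not live:
            break
        for r in live:                  # stage 1: intersection
            r["hit"] = r["x"] > 0.5
        for r in live:                  # stage 2: shading / termination
            if r["hit"]:
                r["color"] += 0.5
            else:
                r["alive"] = False

random.seed(0)
rays = generate_rays(8)
a, b = copy.deepcopy(rays), copy.deepcopy(rays)
megakernel(a)
split_kernel(b)
print([r["color"] for r in a] == [r["color"] for r in b])  # True: same result
```

The point is only that both structures produce the same result; the split version trades one huge compilation unit for several small ones.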

While we are at it: can we please get a proper "core war" between AMD and Intel, so those of us who use CPU render engines can get lots of cheap cores?
 
What do you mean by "core war" and "cheap cores"? :)

You can buy 8 cores right now for the price of a 6-core part.
 
big, announcing is one thing; having real products is another.
We've heard so much about Vega, and still no sign of it. It's still supposed to come in Q2, but let's see if that means availability or just smoke.
Performance seems to be good, if you believe the recent info.
 
Yay! Just what we need, more Vega speculation!
If ATI can't do actual Vega "shipments", then "speculation" is the next best thing.

I'm also amused by the "regain platform leadership" bit in the title. When was the last time that AMD led in CPUs? Or GPUs?

They've been the "cheap bargain basement" for ages.
 

In my eyes the last time AMD was undisputed in CPUs was in the Athlon 64 era. Zen has brought them a long way back and they are competitive with Intel's HEDT parts, but it remains to be seen how they compete across the rest of the lineup. Intel still has no competition in the server and mobile spaces, both of which are very important. It's also important to remember that Intel isn't standing still: it will have Skylake-X/W within a few months and will likely be bringing 10 nm CPUs to market before AMD.

GPU-wise, I would say Tahiti (HD 7970) was pretty good and better than Nvidia's best at the time.
 
ATI Radeon HD 5870. September 2009.
In my eyes the last time AMD was undisputed in CPUs was in the Athlon 64 era. ...

GPU-wise, I would say Tahiti (HD 7970) was pretty good and better than Nvidia's best at the time.
So, for a few months from time to time, AMD has led by some measure. That's hardly "platform leadership" if it isn't sustained across several generations of products.

I notice that the ATI/AMD fans here today seldom say "performance" unless they qualify it with "performance per watt" or "performance per dollar". It's OK for it to be "slow but cheap". ;)
 

And they haven't been able to claim superior perf/watt in a very long time (certainly not since the Maxwell generation from NVIDIA, and probably all the way back to Kepler). It's pretty sad that a 185W RX 580 can barely beat a 120W GTX 1060, and gets destroyed by a 180W GTX 1080.
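To put rough numbers on the perf-per-watt point (the TDPs are the board power figures quoted above; the relative-performance values are ballpark assumptions for illustration, not measurements):

```python
# Rough perf-per-watt comparison. TDPs are the board power figures quoted above;
# the relative-performance values are ballpark assumptions, not benchmark data.
cards = {
    "RX 580":   {"tdp_w": 185, "rel_perf": 1.00},  # baseline
    "GTX 1060": {"tdp_w": 120, "rel_perf": 0.97},  # assumed roughly on par
    "GTX 1080": {"tdp_w": 180, "rel_perf": 1.55},  # assumed well ahead
}

for name, c in cards.items():
    print(f"{name}: {100 * c['rel_perf'] / c['tdp_w']:.2f} relative perf per 100 W")
```

Even under these rough assumptions, the efficiency gap described above comes out clearly.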
 
They did have superior performance per watt in the 28 nm era. Maybe you guys were so fond of gaming performance that you have completely forgotten about it?

The R9 Nano: compute and gaming efficiency. The absolute star of the 28 nm era.
 