There is one other rumor: Polaris 11 has 8.6 TFLOPs of compute power at a 125 W TDP.

Hmmmm ;).
 
Ok, I'll start a rumor too.

Nvidia has a Titan that is powered by a perpetual motion machine, with a TDP of 0.

Starting rumors is fun!
Another rumour - Nvidia had a monster quarter on increased GPU sales, stock up 15%.

Oh wait, that's fact ;)

http://finance.yahoo.com/echarts?s=NVDA+Interactive#{"range":"5d","allowChartStacking":true}

http://www.fool.com/investing/gener...ce=yahoo&utm_medium=feed&utm_campaign=article

http://www.fool.com/investing/gener...ce=yahoo&utm_medium=feed&utm_campaign=article
Apple will update it when there are parts worth upgrading to.
The parts arrived a year ago (E5-x6xx v3).

Apple didn't update.
 
Here's another rumor: NVIDIA and AMD filed for bankruptcy and gave up making graphics cards.
I gotta say....

MacVidCards is doing excellent advocacy for the disappointed long-term Mac Pro folks.

Geeze, can't Apple come up with something "insanely great"?
Something better than a closed system tube with 3 year old GPUs and CPUs called a Mac Pro.

I am one of the folks who would happily purchase a new Mac Pro if Apple would "think different".
LOL. Tim Cook has been saying "it's my way or the highway."
Apple has been "thinking differently."
 
There is another very consistent rumour: Apple, disappointed by nMP sales, will buy Cray Computer and launch the üMac Pro (Über Mac Pro, not related to cabs). It's yet another trash-can-like machine, this time a barrel-shaped trash can, capable of holding six vertically mounted standard PC GPUs around a thermal core, on top of four 20-core Xeon processors, 2 TB of ECC RAID-3 DDR5 or HBM3 RAM banks, and 16 SSD blades.

The only drawback with this Mac is that it's a special-order CTO, assembled entirely by hand at some small video card shop in Nevada.

Order yours soon: due to supply chain constraints and anticipated explosive demand, this üMac Pro will be sold only one day a year, specifically on Feb 31.

I have my credit card ready to grab one. Is yours?
 
There is another very consistent rumour: Apple, disappointed by nMP sales, will buy Cray Computer

That's a name I've not heard since... re-reading Jurassic Park, actually. Didn't realize they were still around.
 
Well, I am nowhere near as up to date on all of these component issues as most of you. But, in layman's terms, are most of you saying that the components released in the last three years (GPU / CPU / memory & bus / ??) would not provide a significant performance improvement over the outdated 2013 nMP?

For the first time ever I am looking into upgrading my 4,1 via an external GPU enclosure - 6-core CPU / whatever. Not really what I want to do, but...
 
Lenovo laptops are coming with mobile 400-series GPUs in April. I'm guessing these are the rebrands, probably the older-generation parts seen on Zauba.

AC, you can expect the GPUs to show a significant improvement in performance and power draw. Whether you really notice it depends on what you do with them; in heavy or demanding tasks you surely will.
CPUs might have better IPC, but performance improvements are now more conservative: the core count goes up, TDP remains the same, and you might get an additional 10-15% at best with each new generation.
Memory performance isn't skyrocketing with DDR4 either, but it's better nonetheless.
SSDs tend to be faster as well.
TB3 will help with those eGPUs.
Don't expect an overall WOW factor to write home about, though.
 
It seems it's time to push drivers and multi-threading/GPGPU in software development, to see the big numbers come back...
 
Much of the "low-hanging fruit" has already been parallelized.

Some programs cannot be made parallel, and others can exploit some parallelism but are limited by serial segments (see https://en.wikipedia.org/wiki/Amdahl's_law).
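
To put a rough number on that, here is a minimal sketch of Amdahl's law (the 90% parallel fraction below is just an assumed example, not a figure from any real program):

Code:
# Amdahl's law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n the number of workers.
def amdahl_speedup(parallel_fraction, workers):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / workers)

# Assumed example: a program that is 90% parallelizable.
for n in (2, 8, 64, 2048):
    print(n, round(amdahl_speedup(0.9, n), 2))
# Prints roughly 1.82, 4.71, 8.77, 9.96 - even with 2048 workers the
# speedup tops out near 10x, because the 10% serial part dominates.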

Stuff that runs on GPUs tends to be very highly parallelizable. askunk is right. There have been limitations in drivers that have been slowing down the ability to use the GPU in that way. Metal, Vulkan, and DirectX 12 are all trying to optimize around those known issues.

GPGPU is still relatively new in the industry, and graphics drivers never really caught up until now. Stuff like GLSL made a good start, but there were still fundamental issues with GPUs and threads.

A lot of software has not yet been optimized to get around these performance issues, which is why DirectX 12 benchmarks tend to come with a warning that they're just an initial set of optimizations.
 

A minor addition: for GPGPU code to be efficient it also has to be "vectorizable". Not every algorithm (even one that can be parallelized) can run efficiently on a GPGPU setup; for example, things like multiple recursive calls to the same function applied over a large table of data, along the lines of f(x, y, z) = (z == K ? f(x, y, z+1) + Pi/x - cos(y) : K).

What you have inside a GPU are thousands of mini-CPUs, each solving a small portion of an algorithm while also waiting for results from the other CPUs teamed on the task. As soon as a CPU is free it loads the next piece of data to process and passes its result on to the next CPU; when the last CPU in the chain finishes, the output data goes back to the host system's memory.
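
To illustrate the "vectorizable" point, here is a minimal NumPy sketch (run on the CPU, with made-up functions just for the example, not taken from any real GPGPU code):

Code:
import numpy as np

x = np.random.rand(100_000)
y = np.random.rand(100_000)

# Vectorizable: every element is independent, so the whole table can be
# computed in one data-parallel pass - the kind of work that maps well
# onto a GPU's thousands of small execution units.
good = np.pi / (1.0 + x) - np.cos(y)

# Hard to vectorize: each step depends on the previous result, so the
# work is inherently serial no matter how many units you throw at it.
acc = 0.0
for xi in x:
    acc = acc * 0.5 + xi  # data dependence across iterations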
 
A lot of software has not yet been optimized to get around these performance issues, which is why DirectX 12 benchmarks tend to come with a warning that they're just an initial set of optimizations.
It is much easier to optimize software for hardware than the other way around ;).
 
What you have inside a GPU are thousands of mini-CPUs, each solving a small portion of an algorithm while also waiting for results from the other CPUs teamed on the task. As soon as a CPU is free it loads the next piece of data to process and passes its result on to the next CPU; when the last CPU in the chain finishes, the output data goes back to the host system's memory.

I vaguely remember studying predictive gates that anticipate the result with some statistical probability to speed up the process. Nevertheless, they might not be a game changer.
 
It is much easier to optimize software for hardware than the other way around ;).
However, one of the most spectacular leaps in performance in the last two decades was when the P6 came out - with the ability for the hardware to dynamically recompile and optimize the software.

It was huge then, and still reverberates today.
 
I vaguely remember studying predictive gates that anticipate the result with some statistical probability to speed up the process. Nevertheless, they might not be a game changer.

Most CPUs have features like this. You could add them to a GPU core, but that adds complexity/heat/power issues. GPU cores also don't usually work on tasks that large, or on tasks that branch much. So it's possible, but given the power/size issues GPUs already have, it's probably not worth the trade-off right now.

It looks like there is research out there about compiler-based branch prediction on GPUs, but I'm not aware of any sort of branch prediction in GPU hardware.
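
A rough sketch of how small branches are often handled on GPUs instead: they tend to be turned into predication (compute both sides, select with a mask) rather than predicted. This is a CPU-side NumPy illustration of the idea, not real GPU code:

Code:
import numpy as np

data = np.random.randn(100_000)

# Branchy version: a per-element if/else. On a GPU, threads in the same
# group that take different sides of the branch execute both paths
# (divergence), wasting work.
branchy = np.array([v * 2.0 if v > 0 else -v for v in data])

# Predicated version: compute a mask and select. Every lane runs the
# same instructions, so there is nothing to predict and no divergence.
predicated = np.where(data > 0, data * 2.0, -data)

assert np.allclose(branchy, predicated)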
 
If anyone wants to see the real value of Nvidia hardware compared to AMD hardware, without any bottlenecks, there are new benchmarks of Ashes of the Singularity, with a complete analysis.

Also, if anyone questions how Asynchronous Compute affects performance, there is a page on that as well.

http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta
Have a good read. It looks like, in the end, the only thing that makes an Nvidia GPU "better" than AMD is CUDA. The question is how much longer that will keep benefiting customers...

Edit. One more benchmark to consider on the topic: http://www.nordichardware.se/Grafik...12-och-multi-gpu/Prestandatester.html#content

For Apple and the next Mac Pro, I think it will be better to stay with AMD cards.

Edit2: http://www.tomshardware.de/ashes-of...rectx-12-dx12-gaming,testberichte-242049.html
http://www.computerbase.de/2016-02/ashes-of-the-singularity-directx-12-amd-nvidia/2/
R9 390X ties with GTX 980 Ti. Unbelievable.
 

That's a single game... not a trend. I know you like to fish for a single thing to push your agenda, but can you at least wait until there is more than one unfinished and unoptimized video game before declaring victory?
 
That's a single game... not a trend. I know you like to fish for a single thing to push your agenda, but can you at least wait until there is more than one unfinished and unoptimized video game before declaring victory?
I am not pushing my agenda; that is only your opinion. Ask developers about this. They will say EXACTLY the same thing as I do, because I am only repeating what they are saying.

Ashes, and how the GPUs behave in that scenario, is only one manifestation of the whole situation.
 
I am not pushing my agenda; that is only your opinion. Ask developers about this. They will say EXACTLY the same thing as I do, because I am only repeating what they are saying.

Ashes, and how the GPUs behave in that scenario, is only one manifestation of the whole situation.

It's a single data point, nothing more.
 
It's a single data point, nothing more.
Hitman is coming, and from what has already been hinted at on forums, it will show exactly the same thing...

Why? Because AMD GPUs were underutilized in the DX11 scenario. DX12 lifts the bottlenecks for all current GPU architectures. Imagine a situation where you compare a 6 TFLOPs GPU (GTX 980 Ti) to a 6 TFLOPs GPU (R9 390X): they have to tie.

However, you are right; at this moment it is only a single data point.
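
For what it's worth, the "6 TFLOPs on both sides" figure comes from the usual shader count × clock × 2 (one fused multiply-add per clock) arithmetic; the shader counts and clocks below are the publicly listed specs as I recall them, so treat the result as approximate:

Code:
# Peak single-precision throughput = shaders * clock (GHz) * 2 FLOPs (FMA).
def peak_tflops(shaders, clock_ghz):
    return shaders * clock_ghz * 2 / 1000.0

print(round(peak_tflops(2816, 1.050), 2))  # R9 390X          -> ~5.91
print(round(peak_tflops(2816, 1.075), 2))  # GTX 980 Ti boost -> ~6.05

Of course, that peak number only shows up in practice when the API can actually keep all those shaders fed, which is exactly the DX11 underutilization point above.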
 
Hitman is coming, and from what has already been hinted at on forums, it will show exactly the same thing...

Why? Because AMD GPUs were underutilized in the DX11 scenario. DX12 lifts the bottlenecks for all current GPU architectures. Imagine a situation where you compare a 6 TFLOPs GPU (GTX 980 Ti) to a 6 TFLOPs GPU (R9 390X): they have to tie.

However, you are right; at this moment it is only a single data point.

Ashes of the Singularity is sponsored by AMD... and so is Hitman... Talk about objectivity... Do you want me to post some benchmarks of Nvidia GameWorks-enabled games and how they perform on AMD cards?
 