"Another rumour - Nvidia had a monster quarter on increased GPU sales, stock up 15%."

OK, I'll start a rumor too.
Nvidia has a Titan that is powered by a perpetual motion machine, with a TDP of 0.
Starting rumors is fun!
LOL. Tim Cook had been saying, "it's my way or the highway."

I gotta say....
MacVidCards is doing excellent advocacy for the disappointed long-term Mac Pro folks.
Geeze, can't Apple come up with something "insanely great"?
Something better than a closed-system tube with 3-year-old GPUs and CPUs called a Mac Pro.
I am one of the folks who would happily purchase a new Mac Pro if Apple would "think different".
"Apple will update it when there are parts worth upgrading to."

The parts arrived a year ago (E5-x6xx v3). Apple didn't update.
"are those Thunderbolt 3 compatible? If not, not worth the update."

Yes. T-Bolt 3 is a PCIe part that you can bolt onto just about anything. It doesn't have or need special CPUs.
There is another very consistent rumour: Apple, disappointed by the nMP sales, will buy Cray.
Much of the "low hanging fruit" has already been paralyzed.It seems it's time to push with drivers and multi-thread/GPGPU in software developing, to see the big numbers come back...
Yes, Cray computer still alive and well, don't sound much since HPC market is not as glamorous or provide news as often to be noticed.That's a name I've not heard since... re-reading Jurassic Park, actually. Didn't realize they were still around.
Much of the "low hanging fruit" has already been paralyzed.
Some programs cannot be made parallel, and others can exploit some parallelism but are limited by serial segments (see https://en.wikipedia.org/wiki/Amdahl's_law).
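For reference, Amdahl's law puts a number on that cap: if a fraction P of the work can be parallelized across N processors, the overall speedup is

S(N) = 1 / ((1 - P) + P/N)

so even a program that is 90% parallel tops out at a 10x speedup, no matter how many cores you add.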
Stuff that runs on GPUs tends to be very highly parallelizable. askunk is right. There have been limitations in drivers that were slowing down the ability to use the GPU that way; Metal, Vulkan, and DirectX 12 are all trying to optimize away those known issues.
GPGPU is still relatively new in the industry, and graphics drivers hadn't really caught up until now. Stuff like GLSL made a good start, but there were still fundamental issues with GPUs and threads.
A lot of software has not yet been optimized to get around these performance issues, which is why DirectX 12 benchmarks tend to come with a warning that they reflect only an initial set of optimizations.
It is much easier to optimize software for hardware than the other way around.
What you have inside a GPU are thousands of mini-CPUs, each solving a small portion of an algorithm while waiting for results from the other CPUs teamed on the task. As soon as a mini-CPU is free, it loads the next piece of data to process and passes its results on to the next one; when the last CPU in the chain finishes, the output data goes back to host system memory.
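To make that concrete, here is a minimal CUDA sketch of that picture. It is simplified: each thread works on its own slice independently rather than passing results down a chain, and the kernel, names and sizes are made up for illustration.

[CODE]
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each of the thousands of GPU threads handles one small piece of
// the input; when they are all done, the output is copied back to
// host (system) memory.
__global__ void scale(const float *in, float *out, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                      // grid may be bigger than the data
        out[i] = in[i] * factor;
}

int main()
{
    const int n = 1 << 20;          // ~1 million elements
    const size_t bytes = n * sizeof(float);

    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) h_in[i] = (float)i;

    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // 4096 blocks of 256 threads: the "thousands of mini-CPUs".
    scale<<<(n + 255) / 256, 256>>>(d_in, d_out, 2.0f, n);

    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);
    printf("out[42] = %.1f\n", h_out[42]);   // prints 84.0

    cudaFree(d_in); cudaFree(d_out);
    free(h_in); free(h_out);
    return 0;
}
[/CODE]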
"It is much easier to optimize software for hardware than the other way around."

However, one of the most spectacular leaps in performance in the last two decades came when the P6 arrived, with the ability for the hardware to dynamically recompile and optimize the software.
I vaguely remember studying predictive gates that anticipate the result and use statistical probability to speed up the process. Nevertheless, they might not be a game changer.
If anyone wants to see the real value of Nvidia hardware in comparison to AMD hardware, without any bottlenecks, there are new benchmarks of Ashes of the Singularity, with a complete analysis.
Also, if anyone questions how Asynchronous Compute affects performance, there is a page on that as well.
http://www.anandtech.com/show/10067/ashes-of-the-singularity-revisited-beta
Have a good read. It looks like, in the end, the only thing that makes an Nvidia GPU "better" than AMD is CUDA. The question is how much longer that will bring benefit for customers...
Edit: One more benchmark to consider on the topic: http://www.nordichardware.se/Grafik...12-och-multi-gpu/Prestandatester.html#content
For Apple and the next Mac Pro, I think it will be better to stay with AMD cards.
Edit2: http://www.tomshardware.de/ashes-of...rectx-12-dx12-gaming,testberichte-242049.html
http://www.computerbase.de/2016-02/ashes-of-the-singularity-directx-12-amd-nvidia/2/
R9 390X ties with GTX 980 Ti. Unbelievable.
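Since Asynchronous Compute keeps coming up in these benchmarks, a quick illustration of the idea: instead of running workloads strictly one after another, independent work is allowed to share the GPU. Here is a minimal CUDA sketch of the concept (using CUDA streams rather than the DX12 API; the kernels are made-up stand-ins for graphics and compute work):

[CODE]
#include <cstdio>
#include <cuda_runtime.h>

// Two independent kernels standing in for "graphics" and "compute" work.
__global__ void kernelA(float *a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] = a[i] * 2.0f + 1.0f;
}

__global__ void kernelB(float *b, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] = b[i] * 0.5f;
}

int main()
{
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    // Work submitted to different streams may overlap on the GPU
    // when execution units would otherwise sit idle, which is the
    // point of asynchronous compute.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    kernelA<<<(n + 255) / 256, 256, 0, s1>>>(a, n);
    kernelB<<<(n + 255) / 256, 256, 0, s2>>>(b, n);

    cudaDeviceSynchronize();  // wait for both streams to finish

    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a);
    cudaFree(b);
    printf("both kernels done\n");
    return 0;
}
[/CODE]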
"That's a single game... Not a trend. I know that you like to fish for a single thing to push your agenda, but can you at least wait until there is more than one unfinished and unoptimized video game before declaring victory?"

I am not pushing my agenda. That is only your opinion. Ask developers about this; they will say EXACTLY the same thing as I do, because I am repeating only what they are saying.
Ashes, and GPU behavior in that scenario, is only one manifestation of the whole situation.
"It's a single data point, nothing more."

Hitman is coming, and from what has already been hinted at on forums, it will show exactly the same thing...
Why? Because AMD GPUs were underutilized in the DX11 scenario. DX12 lifts all of the bottlenecks for all current GPU architectures. Imagine a situation where you compare a 6 TFLOPs GPU (GTX 980 Ti) to a 6 TFLOPs GPU (R9 390X). They have to tie.
However, you are right that at this moment it is only one data point.
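For what it's worth, the 6 TFLOPs figures in that comparison follow from the usual peak-throughput estimate, shaders x 2 FLOPs per clock (fused multiply-add) x clock speed, and both cards happen to have 2816 shaders (clocks below are the commonly quoted reference/boost values):

[CODE]
GTX 980 Ti: 2816 shaders x 2 FLOPs x 1075 MHz (boost) ~ 6.05 TFLOPs
R9 390X:    2816 shaders x 2 FLOPs x 1050 MHz         ~ 5.91 TFLOPs
[/CODE]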