Newbie? I've been here for years; sure, I may not have logged on in a while.
A few months ago, talking to Apple support, I mentioned my EE background (electrical engineering with an emphasis in micro-photolithography: making CPUs, components, and analog devices in Boston; they make the audio chips for the Apollo UAD). What I don't get is: where is all the outrage?
Anyway, the delays were due to Apple trying to make the FPU faster through the GPU. It never happened, and that's why we have the 14/18-core; Intel will be pissed.
That's it: the all-in-one cable solution that Apple likes.
http://arstechnica.co.uk/gaming/201...-win-for-amd-and-disappointment-for-nvidia/1/
This is even funnier. In DirectX 12 the R9 290X performs almost as fast as the GTX 980 Ti. That is because both cards have roughly the same compute power, and yes, it is intended to work this way because of the design of DirectX 12.
The effect is astonishing.
Edit: and this effect says a lot about why Apple went the AMD route for the GPUs in their Macs.
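For what it's worth, here is a back-of-the-envelope sketch of the "roughly the same compute power" point. The core counts and clocks below are approximate reference-card specs, not measured figures, and are only there to illustrate the comparison:

```swift
// Peak FP32 throughput estimate: shader cores x 2 FLOPs per clock (FMA) x clock speed.
// Figures are approximate reference-card specs, used only to illustrate the point.
func teraflops(cores: Double, clockGHz: Double) -> Double {
    return cores * 2.0 * clockGHz / 1000.0
}

let r9_290X  = teraflops(cores: 2816, clockGHz: 1.0)    // ~5.6 TFLOPS
let gtx980Ti = teraflops(cores: 2816, clockGHz: 1.075)  // ~6.1 TFLOPS at boost clock
print(r9_290X, gtx980Ti)
```

Both land within roughly 10% of each other, which is the sense in which the two cards have comparable raw compute.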
It is completely absurd to say that Apple went the AMD route for GPU cards in Macs because of a Microsoft Windows gaming API. Especially since it wasn't even out when Apple made these decisions over the last few years.
You won't see a speed increase due to Metal in a lot of cases.
You'll be lucky if existing things run in Metal at all.
I said that because DirectX 12 is based on Mantle, as is every other modern API, Metal included. Metal is not a direct rip-off of Mantle the way Vulkan is, but there is a lot of Mantle's "how it works" in it.

My eyes are rolling so hard they just flew out of their sockets.
It's pretty clear that Apple went full hog on OpenCL with their own software in a big way, and at the time AMD was substantially better with OpenCL support than Nvidia was (and maybe still is--I don't know). There may have been other large factors too, such as sweetheart pricing, but that's private and we'll never know about it.
That's sweet.
(I've never used or seen USB-C.)
What's the actual connection like, the plug etc.?
Is it sturdier than Mini DisplayPort?
Or more like USB?
DX12 is not based on Mantle. There has been talk for many years about reducing overhead and coding closer to the metal, because that was one reason consoles were better at squeezing performance out of a GPU than a desktop OS. The last desktop API that did something close to that was 3dfx Glide.
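To make "coding closer to the metal" concrete: in the Mantle/DX12/Metal model, the application builds pipeline state up front, records its own command buffers, and submits them in bulk, so the driver is not validating and translating every single call. Here is a minimal Metal compute sketch of that submission model; the kernel, buffer size, and dispatch count are made up purely for the example:

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!   // default GPU; assumes a Metal-capable Mac
let queue  = device.makeCommandQueue()!

// Trivial placeholder kernel, compiled at runtime so the example is self-contained.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void add_one(device float *data [[buffer(0)]],
                    uint id [[thread_position_in_grid]]) {
    data[id] += 1.0;
}
"""
let library  = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "add_one")!)

let count  = 1 << 20
let buffer = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                               options: .storageModeShared)!

// The low-overhead part: record a batch of dispatches into ONE command buffer
// that the app owns, then hand the whole thing to the GPU in a single commit.
let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeeComputeCommandEncoder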
AMD said:
"The Mantle SDK also remains available to partners who register in this co-development and evaluation program. However, if you are a developer interested in Mantle "1.0" functionality, we suggest that you focus your attention on DirectX® 12 or GLnext."
https://community.amd.com/community/gaming/blog/2015/05/12/on-apis-and-the-future-of-mantle
In plain English, that means that Mantle 1.0 is in DirectX 12.
Microsoft started designing DirectX 12 in 2010, and developers have had access to the code from the start. One engineer from EA DICE came up with the idea of lowering the overhead on GPUs. He went to Intel, Nvidia, Khronos. Nobody cared. Then he went to AMD. Even they did not care at first glimpse of the idea, but they started to experiment, and it turned out to be such a good idea that they made an API out of it. Then Microsoft saw the potential and decided to implement it in DirectX 12 as the low-level base, building the rest of the library on top of it. AMD gave it to everybody: Intel, Khronos, Apple, Imagination, Google, ARM. Everybody. Only one brand refused to use it and to optimize for it: Nvidia. I said many months ago that Mantle would be used for professional applications, though I had no idea how that would be executed. Metal shows it.
About the DirectX performance: both companies have had access to the game's code for over a year. AMD has far fewer resources, and yet DirectX 12 performs better on their hardware than DirectX 11. On the Nvidia side it's a completely different story, and it's not because the software is immature; it was over a year of work.
Nvidia underperforms because their hardware is not optimized for this low-level reduction in overhead. Preemption is where their hardware excels, but simultaneous graphics plus asynchronous compute? No. That is exactly why we see a performance regression in DirectX 12: Maxwell GPUs simply cannot handle asynchronous compute, which from now on will be the basis of every application.
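For anyone wondering what "asynchronous compute" looks like from the API side: compute work goes on its own queue and is committed without a CPU-side wait, so the GPU is free to overlap it with graphics work if the hardware can. A rough, self-contained Metal-style sketch of the submission pattern; the kernel and sizes are placeholders, and whether anything actually overlaps is up to the GPU and driver, which is exactly what is being argued about here:

```swift
import Metal

let device       = MTLCreateSystemDefaultDevice()!   // assumes a Metal-capable Mac
let renderQueue  = device.makeCommandQueue()!         // would carry the frame's draw calls
let computeQueue = device.makeCommandQueue()!         // carries independent compute work

// Placeholder kernel so the sketch compiles on its own.
let source = """
#include <metal_stdlib>
using namespace metal;
kernel void scale(device float *data [[buffer(0)]],
                  uint id [[thread_position_in_grid]]) {
    data[id] *= 0.5;
}
"""
let library  = try! device.makeLibrary(source: source, options: nil)
let pipeline = try! device.makeComputePipelineState(
    function: library.makeFunction(name: "scale")!)
let count    = 65536
let buffer   = device.makeBuffer(length: count * MemoryLayout<Float>.stride,
                                 options: .storageModeShared)!

// Compute work is encoded on its own queue...
let computeCB = computeQueue.makeCommandBuffer()!
let encoder   = computeCB.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.setBuffer(buffer, offset: 0, index: 0)
encoder.dispatchThreadgroups(MTLSize(width: count / 256, height: 1, depth: 1),
                             threadsPerThreadgroup: MTLSize(width: 256, height: 1, depth: 1))
encoder.endEncoding()

// ...while the render queue gets its own command buffer (a real app would encode draws here).
let renderCB = renderQueue.makeCommandBuffer()!

// Commit both with no CPU-side wait between them. Whether they overlap on the GPU
// is decided by the hardware and driver, which is the crux of the Maxwell argument.
computeCB.commit()
renderCB.commit()
```

GCN parts expose dedicated asynchronous compute engines for exactly this kind of submission; the benchmark regressions above are being read as Maxwell handling the same pattern far less gracefully.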
I'm not arguing with anyone here, just sharing what has been told to me.
At this rate AMD will have to sell the graphics division, which is a shame. Maybe Apple is positioning itself to buy all the patents.
http://www.techspot.com/news/61832-amd-market-share-continues-collapse.html
http://vr-zone.com/articles/nvidia-sold-over-80-percent-of-desktop-gpus-in-q2-2015/97502.html