Newbie? I've been here for years; I just may not have logged on in a while.

A few months ago, talking to Apple support, I mentioned my EE background (electrical engineering, with an emphasis in micro-photolithography: making CPUs, components, and analog devices in Boston; they make the audio chips for Apollo UAD). What I don't get is: where is all the outrage?

Anyway, the delays were due to Apple trying to make the FPU faster through the GPU. It never happened; that's why we now have the 14/18-core parts. Intel will be pissed.


Vocal diarrhea.
 
[Image: Thunderbolt-3-Intel.jpg]


That's it: the all-in-one cable solution that Apple likes.
 
You can ignore that 'DirectX 12' benchmark article and that game. It's a beta. There's no point even comparing DX12 titles until there are enough titles with mature DX12 code. That won't be until the end of 2016.
 
Check the MacBook.
Smaller connector, lower Z.
Reversible.
Calling it USB brings more confusion to the table.

Too bad Intel didn't comment on general availability of TB3, and provided too few details on the Skylake roadmap, especially Xeons :)
 
http://arstechnica.co.uk/gaming/201...-win-for-amd-and-disappointment-for-nvidia/1/

This is even funnier. In DirectX 12 the R9 290X performs almost as fast as a GTX 980 Ti. That is because both cards have roughly the same compute power. And yes, that is intended to work this way, because of the design of DirectX 12.

The effect is astonishing.

Edit: And this effect says a lot about why Apple went the AMD route for the GPU cards in Macs.

My eyes are rolling so hard they just flew out of their sockets.

It is completely absurd to say that Apple went the AMD route for GPU cards in Macs because of a Microsoft Windows gaming API. Especially since it wasn't even out when Apple made these decisions over the last few years.

It's pretty clear that Apple went full hog on OpenCL with their own software in a big way, and at the time AMD was substantially better with OpenCL support than Nvidia was (and maybe still is--I don't know). There may have been other large factors too, such as sweetheart pricing, but that's private and we'll never know about it.
 
It is completely absurd to say that Apple went the AMD route for GPU cards in Macs because of a Microsoft Windows gaming API. Especially since it wasn't even out when Apple made these decisions over the last few years.

You have to remember his arm-flapping that DirectX 12 3D, Mantle, Metal, and Vulkan are all really the "same thing". Are they 100% different? No. All 100% the same? Again, no.

There are some aspects common to all of them that AMD (via Mantle) has woven into its hardware incrementally better than some others. There's more than a good chance that AMD was pitching Mantle concepts to Apple. It's extremely likely that Apple knew of the desire for something like Vulkan (via sitting on the standards committee and the research work being done on the OpenGL "almost zero overhead" command queue). If Apple got wind of the DirectX 12 3D work as well, then they would know that multiple GPU vendors were heading in this general solution-space direction (with glNext, eventually Vulkan, and DirectX 12 both driving the GPU vendors). Some kind of solution here was coming, and Apple's proprietary Metal could just follow the wave.

But those common, widespread drivers (glNext and DX12 3D) also mean that trying to turn this into an AMD-fanboy-versus-Nvidia-fanboy issue is goofy. Over time there isn't going to be a huge gap between the two, as both vendors follow along into this solution space.
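To make the "almost zero overhead" point above concrete, here's a toy cost model (not any real API, and the numbers are made up) contrasting the classic immediate-mode style, where the driver re-validates state on every draw call, with the record-once/submit-once command-buffer model that Mantle, DX12, Vulkan, and Metal all share:

```python
# Toy model of driver overhead: immediate mode pays a validation pass
# on every draw call; a command buffer records cheap commands and pays
# validation once at submit time. All costs are illustrative only.

VALIDATE_COST = 10   # "cost units" for one state-validation pass
DRAW_COST = 1        # cost of encoding one draw command

def immediate_mode(draws: int) -> int:
    # Classic GL/DX11 style: validation happens inside every call.
    return draws * (VALIDATE_COST + DRAW_COST)

def command_buffer(draws: int) -> int:
    # Mantle/DX12/Vulkan/Metal style: record up front, validate once
    # when the whole buffer is submitted to the queue.
    return draws * DRAW_COST + VALIDATE_COST

print(immediate_mode(1000))   # 11000
print(command_buffer(1000))   # 1010
```

The CPU-side cost per draw collapses from constant-plus-validation to nearly constant, which is exactly the overhead these APIs were designed to remove.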
 
You won't see a speed increase due to Metal in a lot of cases.

In a lot of cases where Metal is called? Or because Metal isn't being called very much by legacy (and recently released) Mac applications?


You'll be lucky if existing things run in Metal at all.

All Apple needs to do is hook foundation library services into it to make it run.

For the set of apps that minimize Apple libraries and compose their own "direct to OpenGL" graphics layer... then yeah: if they don't call Metal, they won't show an improvement. But that is going to be a certain subset of applications (granted, probably with higher-than-normal representation among Mac Pro users, but a subset nonetheless).
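A sketch of the point being argued here, with entirely hypothetical names: an app that draws through a system framework picks up a new backend for free when the OS vendor rewires the framework, while an app hard-coded against the low-level API sees nothing.

```python
# Hypothetical sketch: a pluggable framework backend vs. a direct
# OpenGL app. Class and function names are invented for illustration.

class OpenGLBackend:
    name = "OpenGL"

class MetalBackend:
    name = "Metal"

class SystemFramework:
    backend = OpenGLBackend()          # the OS vendor can swap this out

    def draw(self) -> str:
        return f"drawn via {self.backend.name}"

def framework_app() -> str:
    # Inherits whatever backend the OS ships.
    return SystemFramework().draw()

def direct_gl_app() -> str:
    # Talks to the low-level API directly; sees no change.
    return "drawn via OpenGL"

SystemFramework.backend = MetalBackend()   # the "OS update"
print(framework_app())   # drawn via Metal
print(direct_gl_app())   # drawn via OpenGL
```

That is the "hook foundation library services into it" argument in miniature: only the apps that bypass the framework are left out.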
 
My eyes are rolling so hard they just flew out of their sockets.

It is completely absurd to say that Apple went the AMD route for GPU cards in Macs because of a Microsoft Windows gaming API. Especially since it wasn't even out when Apple made these decisions over the last few years.

It's pretty clear that Apple went full hog on OpenCL with their own software in a big way, and at the time AMD was substantially better with OpenCL support than Nvidia was (and maybe still is--I don't know). There may have been other large factors too, such as sweetheart pricing, but that's private and we'll never know about it.
I said that because DirectX 12 is based on Mantle, as is every other modern API, as is Metal. It is not a direct rip-off of Mantle, like Vulkan is, but there is much of "how it works" in Metal.

What I meant in that post was that if you use Mantle and lower the CPU overhead, you get much higher performance on AMD GPUs. That was the point of that post.

Deconstruct60: From what I know, Metal is a custom form of Mantle, designed to work with both OpenCL and OpenGL. Unfortunately it opens the door to making it hard to port directly from one platform to another, even if they share the same architecture. It may be really hard to see simultaneous releases of applications on Xbox One, PS4, PC, and Mac. That is the only danger here.
 
Custom form in that it's going to ship?
Has there been any word as to when Metal as a project was planned or started? Just curious about the timing. Apple does look forward several years with its technology; had they known about this since before or around 2012, which is the current Metal cutoff point?
 
DX12 is not based on Mantle. There has always been talk about reducing overhead and coding closer to the metal for many years because it was one reason consoles were better at squeezing out performance from a GPU than a desktop OS. The last desktop API that did something close to that was 3DFX Glide.
 
that's sweet.

(never used/seen usb-c)
what's the actual connector like? the plug, etc.
is it sturdier than Mini DisplayPort?
more like USB?

In my limited experience, it's probably better than mDP in that it's less likely to fall out (an issue I have never experienced, but apparently some people rant about it, especially with their TB accessories). The downside is that it's USB with the pins on an easy-to-deform piece in the middle, so it's likely to be a more fragile port (although whether that fragility ends up mattering is a question that won't be answered for a few years).

It's still going to be a bit weird to have a bunch of ports that look identical but only have some shared functions and different speeds, though.
 
DX12 is not based on Mantle. There has always been talk about reducing overhead and coding closer to the metal for many years because it was one reason consoles were better at squeezing out performance from a GPU than a desktop OS. The last desktop API that did something close to that was 3DFX Glide.
https://community.amd.com/community/gaming/blog/2015/05/12/on-apis-and-the-future-of-mantle
AMD said:
The Mantle SDK also remains available to partners who register in this co-development and evaluation program. However, if you are a developer interested in Mantle "1.0" functionality, we suggest that you focus your attention on DirectX® 12 or GLnext.
In plain English, that means Mantle 1.0 is in DirectX 12.
Microsoft started designing DirectX 12 in 2010, and developers have had access to the code from the start. One engineer from EA DICE came up with the idea of lowering the overhead on GPUs. He went to Intel, Nvidia, and Khronos; nobody cared. Then he went to AMD. Even they didn't care at first glance, but they started to experiment, and it was such a good idea that they made an API out of it. Then Microsoft saw the potential and decided to implement it in DirectX 12 as a low-level base, building the rest of the library on top of that functional base. AMD gave it to everybody: Intel, Khronos, Apple, Imagination, Google, ARM. Everybody. Only one brand refused to use it and optimize for it: Nvidia. I said many months ago that Mantle would be used for professional applications, though I had no idea how that would be executed. Metal shows it.

About the DirectX performance: both companies have had access to the game's code for over a year. AMD has far fewer resources, and yet DirectX 12 performs better on their hardware than DirectX 11 does. On the Nvidia side it's a completely different story, and it's not because the software is immature; it was over a year of work.

Nvidia underperforms because their hardware is not optimized for low-level overhead reduction. Preemption is where their hardware excels, but simultaneous asynchronous compute? No. That is exactly why we see a regression in performance under DirectX 12: Maxwell GPUs simply cannot handle asynchronous compute, which from now on will be the base of every application.
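A hedged illustration of why the asynchronous-compute claim above would matter for frame times: if a GPU can run its graphics and compute queues concurrently, the shorter workload hides behind the longer one; if it serializes them, it pays for both. The millisecond figures are made up for the example.

```python
# Toy frame-time model for asynchronous compute. "Overlapped" assumes
# ideal concurrency between the graphics and compute queues;
# "serialized" drains one queue after the other. Numbers illustrative.

def serialized(graphics_ms: float, compute_ms: float) -> float:
    # Queues executed back to back: total is the sum.
    return graphics_ms + compute_ms

def overlapped(graphics_ms: float, compute_ms: float) -> float:
    # Ideal async compute: the shorter queue is fully hidden.
    return max(graphics_ms, compute_ms)

print(serialized(12.0, 4.0))  # 16.0
print(overlapped(12.0, 4.0))  # 12.0
```

Under this model, hardware that overlaps the queues gains nothing when there's no spare compute work, but wins whenever a game interleaves compute jobs with rendering, which matches the pattern the benchmark discussion describes.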

I'm not arguing with anyone here, just sharing what has been told to me.
 

I'll read that blog after work, but AMD and even the Wikipedia article on Mantle contradict the idea that Mantle is 'in' DX12 or that DX12 is derived from Mantle. The wiki even cites them stating that they were working to make it 'compatible with DX12'.
 
At this rate AMD will have to sell the graphics division, which is a shame. Maybe Apple is positioning itself to buy all the patents.

http://www.techspot.com/news/61832-amd-market-share-continues-collapse.html

http://vr-zone.com/articles/nvidia-sold-over-80-percent-of-desktop-gpus-in-q2-2015/97502.html

Tripe! Nonsense!

There's a factual error in the first few paragraphs:

"Instead, it was just another reskin of the 300 series video cards."

It was in fact a reskin of a 200 series video card.

No respect for accuracy.

And they didn't even mention that the space heater division has IMPROVED market share. Gas and oil sellers in northern states have picketed Apple stores due to the 5K iMac taking away their business.

And that AMD cheerleader will be refuting all of this in 3...2....1...
 
  • Like
Reactions: tuxon86
Wasn't that Q2 data, without even taking the 300 series and Fiji chips into account, because they weren't sold in that quarter?
 