Source? The only thing I've seen about this recently is on an AMD marketing slide, which is clearly heavily biased (i.e. AMD wants game developers to start using more FP16 so they have an advantage when compared with consumer Pascal cards).
So, apparently there isn't much more than that marketing slide.

See https://www.pcper.com/news/Graphics...tein-2-Optimized-AMD-Vega-FP16-Shader-Support for some more (very little more) info.

FP16 seems to be something from the future, not something that's generally useful today.
Would be surprised if it's a bug-free driver at this stage.
There is never any "bug-free" anything at any time. The best you can hope for is that there aren't any critical bugs that affect your workflow or porn viewing.
 
RX Vega cards work in macOS High Sierra. They also have native eGPU support.

 
More cryptocurrencies are adopting the Proof of Stake model. No mining involved.

Right now NEO is hot. It used to be called Antshares. It's China's first blockchain, developed with Alibaba, and it's been called the Chinese Ethereum.

If you had invested $1,000 in it a few months ago, you would be very comfortable right now. But even if you had invested $1,000 last week, you would have made $6,000 back already.
 
Could be because they expect demand for it to be high. If it's as good at mining as rumored, it may be impossible to get until everyone gets over these cryptocurrencies.

Maybe five fools will spend money on mining hardware. If you see people running around telling you to buy new hardware for mining ETH, they work for GPU companies and are playing everyone for a sucka.
 
I know it's hard for Nvidia users to accept that you can pay less for hardware you use for machine learning, but it looks like a boom will happen in this market in the coming months because of affordable hardware. Show me a GPU from previous years that had 21 TFLOPs of FP16 and cost $400.

DX12 and Vulkan games will also use a lot of FP16. It will start to become the most important metric for those games and for VR applications using Vulkan and DX12 (mostly Vulkan, because it can be used everywhere).

Maybe you are not aware of how important FP16 is rapidly becoming? Nvidia was marketing it just one year ago. Now the goalposts have moved, because Nvidia said so? I thought you were a professional working with machine learning, so you should know all of this.

That is why I posted that the RX Vega 56, with 21 TFLOPs of FP16 and a 210 W TDP, will have the best performance/watt/price ratio in the coming months for the FP16 market.
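For reference, here is a rough Python sketch of where that 21 TFLOPs figure comes from, assuming the commonly quoted Vega 56 specs (3584 stream processors, ~1.47 GHz boost clock) and Vega's 2:1 packed-FP16 rate:

# Back-of-the-envelope check of the 21 TFLOPs FP16 figure.
stream_processors = 3584      # RX Vega 56 shader count
boost_clock_hz = 1.471e9      # ~1471 MHz boost clock
ops_per_fma = 2               # a fused multiply-add counts as two FLOPs

fp32_tflops = stream_processors * boost_clock_hz * ops_per_fma / 1e12
fp16_tflops = fp32_tflops * 2  # packed math doubles FP16 throughput

print(f"FP32: ~{fp32_tflops:.1f} TFLOPs")  # ~10.5
print(f"FP16: ~{fp16_tflops:.1f} TFLOPs")  # ~21.1

Consumer Pascal cards run FP16 arithmetic at only a small fraction of their FP32 rate, which is the comparison AMD's slide is leaning on.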

P.S. How much would it cost in total to pair the Tensor SoC you specified with a $400 GPU that has 21 TFLOPs of FP16, and what performance could you get out of that combo? ;)

Will it be cheaper than buying a GV100 for the same end result? ;)
Is there any more to this than wishful thinking based on the AMD marketing slide?

I am definitely into ML with CUDA, so I was surprised to hear your claim that FP16 was important for gaming. (And as an ML professional, it should be excusable that I haven't heard the very recent stories about ATI pushing FP16 for gaming, since it's only been a few months since FP16 consumer cards began to trickle onto the market.)
Probably pretty good. AMD has been demoing its ability to handle 8K video when connected to an SSD.
Really? When no current Mac supports a Vega card? Is AMD demoing FCPX on Hackintoshes?
 
Is there any more to this than wishful thinking based on the AMD marketing slide?

I am definitely into ML with CUDA, so I was surprised to hear your claim that FP16 was important for gaming. (And as an ML professional, it should be excusable that I haven't heard the very recent stories about ATI pushing FP16 for gaming, since it's only been a few months since FP16 consumer cards began to trickle onto the market.)

FP16 is less accurate than FP32, so it's bad for folding proteins and solving math equations, but great for games where you shoot people in the face? I mean, I guess shooting people doesn't require the accuracy of other mathematical operations, but isn't AMD just saying Vega is super fast at half precision?
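To make the accuracy point concrete, here's a minimal Python/NumPy sketch of what half precision gives up: roughly 3 decimal digits of mantissa and a maximum finite value of 65504, versus roughly 7 digits and a huge range for FP32.

import numpy as np

# FP16 keeps a 10-bit mantissa (~3 decimal digits); FP32 keeps 23 bits (~7 digits).
print(np.float32(3.14159265))    # 3.1415927
print(np.float16(3.14159265))    # 3.14 (nearest representable half-precision value)

# The largest finite FP16 value is tiny compared to FP32's range.
print(np.finfo(np.float16).max)  # 65504.0

# Small increments simply vanish: 1e-4 is below half a ULP at 1.0 in FP16.
print(np.float16(1.0) + np.float16(1e-4))  # 1.0

That trade-off is why FP16 is fine for shading and neural-network inference, where a little per-pixel or per-weight noise is tolerable, and a poor fit for protein folding or anything that accumulates error over many steps.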

Really? When no current Mac supports a Vega card? Is AMD demoing FCPX on Hackintoshes?

I know, right? 8K playback has nothing really to do with Final Cut Pro X; playing a video back without stutter or dropped frames isn't the same as a program using that file to edit or apply effects. A fast card makes Final Cut Pro X faster, but 8K playback of a file isn't 8K realtime in Final Cut Pro X. Also, Final Cut Pro X isn't reflective of how fast AMD is, but of how well a company (Apple) can write proper drivers and software to use a GPU's full potential.

Go demo an Autodesk Flame system, a fully specced DaVinci Resolve system, or a FilmLight Baselight. Realtime 4K with multiple color effects, all because the software is properly written to use the GPUs to their full potential; it is all software. Baselight 4, a realtime 4K grading system back in 2009/2010, used consumer gaming cards to do real 4K color grading... in 2009. They were using off-the-shelf hardware to create a system that cost $200k, all because they wrote software that used all of the hardware. They also split a 4K signal between multiple GPUs.

Adobe, for example, is a very hacky company; their software covers too much ground, from web development to graphics to editorial... Without a cohesive methodology for using a computer's hardware to its fullest, both CPU and GPU, we end up with some very underutilized hardware. Adobe Premiere lags in a lot of areas not because of the hardware, but because Adobe can't keep up with the coding.

Final Cut Pro X is fast because Apple is trying to use 90% to 100% of the CPU and GPU available; they only have a few hardware configurations to worry about, so in a lot of ways they have a clear advantage. I would say Adobe doesn't have enough full-time Premiere or After Effects coders to keep up with all the hardware advancements coming from Intel, Nvidia, and now AMD.

Final Cut Pro X isn't faster because AMD makes great hardware, but because Apple makes better software. If Vega dominates in Final Cut Pro X, it is because of Apple, not AMD.
 
Really? When no current Mac supports a Vega card? Is AMD demoing FCPX on Hackintoshes?

You don't have to be a jerk here. Obviously AMD wouldn't be demoing on unreleased Apple hardware. Apple is really good at getting the most out of their hardware with their software. If AMD can add support for editing 8K video with realtime effects in Premiere, then I'm sure Apple can add it to Final Cut Pro.

Actually, High Sierra supports Vega in external enclosures.
 

Final Cut Pro X is fast because Apple is trying to use 90% to 100% of the CPU and GPU available; they only have a few hardware configurations to worry about, so in a lot of ways they have a clear advantage. I would say Adobe doesn't have enough full-time Premiere or After Effects coders to keep up with all the hardware advancements coming from Intel, Nvidia, and now AMD.

On this forum I thoroughly benchmarked a 4K workflow and render in Premiere on a cMP. The macOS and Windows versions were tested on the same machine.

The rendering was 4x faster on Windows. This confused us. Further testing revealed that GPU-based encode/decode is sometimes crippled and sometimes non-existent outside Final Cut. It's not properly available to third-party apps.

Apple isn't changing that until people protest. In High Sierra, for example, they have crippled HEVC decode in the GPU drivers and instead use the CPU's hardware decoder, which only exists from Kaby Lake onward. Boot into Windows via Boot Camp and every hardware capability is available to every app. There's no discrimination or cheating.
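A quick way to sanity-check this kind of thing yourself is to see which hardware decoders are actually reachable from a third-party tool on a given OS/driver combination. A minimal Python sketch, assuming an ffmpeg build with VideoToolbox support and a hypothetical clip.mov test file:

import subprocess

# List the hardware accelerators this ffmpeg build / OS exposes
# (on macOS you'd expect "videotoolbox"; under Boot Camp, things like "dxva2" or "cuda").
accels = subprocess.run(["ffmpeg", "-hide_banner", "-hwaccels"],
                        capture_output=True, text=True).stdout
print(accels)

# Decode-only benchmark: if the stream really goes to the hardware decoder,
# this runs much faster and with far less CPU load than a software decode.
subprocess.run(["ffmpeg", "-hide_banner", "-benchmark",
                "-hwaccel", "videotoolbox",
                "-i", "clip.mov", "-f", "null", "-"])

Comparing the reported speed and CPU load under macOS versus Boot Camp is a crude but effective way to see whether hardware decode is really exposed to third-party apps.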

That's why it's important that Apple doesn't control the drivers or use proprietary parts to render our machines almost useless and force us to keep buying the latest.
 
The only cMP-capable one is the Vega 56 XL (without science). I'm reading some conflicting information, however. Some say 210 W, some say 285 W. Some say dual 8-pin, some say an 8-pin + a 6-pin.

It would be nice once that's solidified, because that's a 10.5 TFLOPs card. If it performs similarly to a GTX 1070 in real-world gaming, however, then I must say the GTX 1070 is still the better card/deal.

Even the liquid-cooled RX Vega 64 is theoretically possible in the cMP with a simple PSU mod.
So all of these cards' performance numbers are relevant to cMP users.
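For anyone who wants to check the math, here's a rough power-budget sketch in Python, assuming the usual cMP figures (75 W from the PCIe slot plus two 6-pin auxiliary feeds at 75 W each) and the commonly quoted board-power ratings for the Vega cards:

# Rough cMP power-budget check. The 75 W figures are the nominal ratings;
# the backplane can often deliver a bit more, which is why PSU mods work.
slot_w = 75
aux_w = 2 * 75                 # two 6-pin auxiliary feeds from the backplane
budget_w = slot_w + aux_w      # 225 W without any modification

cards = {
    "RX Vega 56 (air)": 210,
    "RX Vega 64 (air)": 295,
    "RX Vega 64 (liquid)": 345,
}

for name, board_power in cards.items():
    if board_power <= budget_w:
        print(f"{name}: {board_power} W -> within the stock {budget_w} W budget")
    else:
        print(f"{name}: {board_power} W -> ~{board_power - budget_w} W over; needs a PSU mod")

On these numbers only the Vega 56 fits the stock budget, which matches the point above.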
 
Final Cut Pro X is fast because Apple is trying to use 90% to 100% of the CPU and GPU available; they only have a few hardware configurations to worry about, so in a lot of ways they have a clear advantage. I would say Adobe doesn't have enough full-time Premiere or After Effects coders to keep up with all the hardware advancements coming from Intel, Nvidia, and now AMD.

Get the OpenGL Driver Monitor from Apple's developer tools, and then enable "GPU Core Utilization" on an NVIDIA card (or the new "Device Utilization" or whatever it's called on any vendor) and run the BruceX benchmark. Last time I checked, which granted was a year or two ago, GPU utilization was nowhere near 100% on my GeForce GTX TITAN X (GM200, i.e. Maxwell). Apple might have tuned it to run better on the low-end AMD GPUs that they ship, but a real high-end GPU is going to sit there and twiddle its thumbs for more than 50% of the time, based on what I've seen. My theory is that they're doing all their video decode/encode on the CPU using QuickSync (if available) and transferring raw video frames across the PCIe bus, rather than using the GPU's video decode/encode hardware and leaving everything in the GPU's memory.
 