Even if the video stream is 30 Hz, it's from a camera filming a monitor, right? So the image here probably represents 1/60 of a second.
It was a screenshot, here.

According to an AnandTech journalist, the stream was 30 Hz: https://forum.beyond3d.com/posts/1983197/

To stir the ants' nest further: https://www.reddit.com/r/Amd/comments/6edd3h/amd_rx_vega_computex_pray_demo_fps/

P.S. The dimensions for the Phanteks Evolv Shift X linked on the previous page:
[image: phanteks-1b.jpg]
 
Why do you think it struggled?

Tearing? Screen tearing appears not only when the display's refresh rate is higher than the framerate the GPUs generate, but also when it is lower.


Nobody has any information about the framerate of those GPUs.

[image: lFKALdd_d.jpg]

A 30 Hz stream sampling at least 4 frames in a single refresh. Which means roughly a 120 FPS lock, without Vsync.
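As a rough sanity check on that arithmetic, here is a minimal sketch, assuming the only inputs are the stream's refresh rate and the number of distinct frame bands visible in one captured refresh (the numbers are from the post above; band counts read off compressed video are approximate at best):

```python
# Back-of-the-envelope tear counting: if one captured refresh of the
# stream shows N distinct frame bands, the GPUs delivered roughly N
# frames during that refresh interval, so FPS ~= stream_hz * N.

def fps_from_bands(stream_hz: float, bands_per_refresh: float) -> float:
    """Estimated game framerate from frame bands visible per refresh."""
    return stream_hz * bands_per_refresh

print(fps_from_bands(30, 4))  # 4 bands in a 30 Hz refresh -> ~120 FPS
```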

What AMD demoed there was not overall performance, but the fact that Threadripper, with its 64 PCIe lanes, does not bottleneck either of the GPUs.

That's fair. I mean, a web stream is a really hard way to determine performance, especially since they didn't include an FPS counter. I bet this demo was more to prove Threadripper can drive multiple GPUs than it was to show how fast Vega is.
 
Disappointing but not surprising. Vega 11 has been basically absent from the rumor mill. No appearances in drivers, no shipping manifests, no rumored specs, etc.
It has already been in drivers ;). Quite some time ago. Not a specific DeviceID, but Vega 11 has already been spotted in drivers.
 
What chip is AMD using for the Pro WX 2100? A Polaris with a 64-bit bus?

WX 2100 TDP < 35W
WX 3100 TDP < 50W

From the specs it looks to be a kneecapped derivative of the RX 550.

WX 3100 --> Polaris 12 4 GB, uses the full 128-bit memory bus
WX 2100 --> Polaris 12 2 GB, uses just half of the memory bus (64-bit)

The drop in TDP is mostly memory. The GPU compute clocks are exactly the same, so the quoted max TFLOPs is identical, which is mostly hand-waving: with half the memory bandwidth, the gap between the WX 2100's paper specs and reality is going to be noticeably wider.
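To make that concrete, here is a minimal sketch of the paper math. The 512 shaders, ~1.2 GHz clock, and 7 Gbps GDDR5 figures are my assumptions for Polaris 12, not official specs:

```python
# Peak FP32 = shaders * 2 ops per clock (FMA) * clock. Identical clocks
# mean identical paper TFLOPs. Bandwidth = (bus width / 8) * data rate,
# so halving the bus halves the bandwidth.

def peak_tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

def bandwidth_gbs(bus_bits: int, data_rate_gbps: float) -> float:
    return bus_bits / 8 * data_rate_gbps

print(peak_tflops(512, 1.2))    # ~1.23 TFLOPs, same for WX 3100 and WX 2100
print(bandwidth_gbs(128, 7.0))  # WX 3100: ~112 GB/s
print(bandwidth_gbs(64, 7.0))   # WX 2100: ~56 GB/s, half the bandwidth
```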
Does HBM cache mean an APU SoC with CPU, GPU and HBM(2)?

HBM cache just means using HBM2. Instead of pointing at it as the 'VRAM' and saying it is the only video memory (as on the Fury series), they use the HBM2 memory as a large "L3" cache over flat memory-mapped main RAM (and/or an mmapped persistent file on a NAND drive). As long as the GPU stays in the right zone of virtual memory, it is all local; if you have a large data set, you can map it in chunks as you need them (letting the memory management unit do a decent chunk of the work).

It is like Intel's MCDRAM in Xeon Phi.

http://www.anandtech.com/show/9794/a-few-notes-on-intels-knights-landing-and-mcdram-modes-from-sc15
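As a loose software analogy (not Vega's actual mechanism, just the same virtual-memory idea), memory-mapping a file larger than RAM works the same way: map everything into the address space, touch only a window, and let the OS fault in just the pages you use. A minimal Python sketch, with `dataset.bin` as a hypothetical large file:

```python
import mmap
import os

# Map a file that may be far larger than physical RAM, then touch only
# a small window of it. The MMU/OS pages in just the touched pages,
# much as the HBCC keeps only the GPU's working set resident in HBM2.
with open("dataset.bin", "rb") as f:  # hypothetical large data set
    size = os.fstat(f.fileno()).st_size
    with mmap.mmap(f.fileno(), length=0, access=mmap.ACCESS_READ) as m:
        # Only this 1 MiB window is actually faulted into RAM.
        window = m[size // 2 : size // 2 + 2**20]
        print(len(window))
```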
 
Does HBM cache mean an APU SoC with CPU, GPU and HBM(2)?
The HBM cache's role is to change people's perception of the role of memory. The High Bandwidth Cache Controller's job is to page and map up to 512 TB of data in your system, regardless of where it lives: SSD, system RAM, network storage, other GPUs. The HBCC has access to all of it and can deliver the data to the GPU where it is needed, when it is needed. There is no need to store the whole data set in GPU memory anymore. It is Unified Memory from HSA 2.0 at the hardware level.
 
The HBM cache's role is to change people's perception of the role of memory. The High Bandwidth Cache Controller's job is to page and map up to 512 TB of data in your system, regardless of where it lives: SSD, system RAM, network storage, other GPUs. The HBCC has access to all of it and can deliver the data to the GPU where it is needed, when it is needed. There is no need to store the whole data set in GPU memory anymore. It is Unified Memory from HSA 2.0 at the hardware level.

There is still a need for very fast memory sitting close to the GPU. HBM2 operates at around 500 GB/s, whereas a high-end SSD manages about 3.5 GB/s.
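A quick worked comparison, using those two figures and a hypothetical 4 GB working set, shows why the fast local tier still matters:

```python
# Time to move a 4 GB working set at each tier's throughput,
# using the rough figures from the post above.
working_set_gb = 4.0
for name, gb_per_s in [("HBM2", 500.0), ("NVMe SSD", 3.5)]:
    print(f"{name}: {working_set_gb / gb_per_s * 1000:.0f} ms")
# HBM2: ~8 ms; NVMe SSD: ~1143 ms. The cache tier hides the slow tiers.
```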
 
The HBM cache's role is to change people's perception of the role of memory. The High Bandwidth Cache Controller's job is to page and map up to 512 TB of data in your system, regardless of where it lives: SSD, system RAM, network storage, other GPUs. The HBCC has access to all of it and can deliver the data to the GPU where it is needed, when it is needed. There is no need to store the whole data set in GPU memory anymore. It is Unified Memory from HSA 2.0 at the hardware level.
Yes, that's why I asked, because there are restrictions at the CPU level... and that's why an APU would be ideal. Anyhoo, it'll make new applications possible for a workstation.
 
If you remember my post asking about a potential partnership between AMD and ASRock for an upcoming Vega product...

It is all about the next generation of small form factor: Mini-STX. The GPU is integrated into the motherboard. Expect an avalanche of products in the upcoming months. The desktop PC is changing: it is moving from custom builds to new, smaller and more efficient form factors (that is actually the future).
 
It seems that if the RX 570 and RX 580 are hard to find, it might be the miners buying them.
 
Just doing my journo duty. If you remember the Computex presentation of Threadripper + Vega and the terribleness of it, someone has deciphered that in some frames the Vega tandem was getting 162 FPS, based on 2.7 tears per refresh cycle.

What does this mean in the context of the competition?
Supposedly, at the same settings, a single GTX 1080 Ti at a ~2 GHz core clock averages 59 FPS, with dips to 52-53 FPS. Depending on how the Vega GPUs scale, a single GPU would get 81 FPS in the same scene.

It can be hype from AMD fans. It can also be quite accurate. We will see at the end of June, if someone pays AMD the price they ask for the Frontier Edition and tests it in games.
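For what it's worth, the arithmetic behind that decipherment is simple. The 60 Hz display refresh is my assumption (the 162 FPS figure only works out at 60 Hz); on average, each frame flip during scanout produces one visible tear, so tears per cycle approximates frames per cycle:

```python
# 2.7 tears per refresh cycle ~= 2.7 frames delivered per refresh.
refresh_hz = 60.0        # assumed display refresh rate
tears_per_cycle = 2.7
gpus = 2

total_fps = refresh_hz * tears_per_cycle
print(total_fps)         # ~162 FPS for the Vega tandem
print(total_fps / gpus)  # ~81 FPS per GPU, assuming perfect CrossFire scaling
```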
 
Just doing my journo duty. If you remember the Computex presentation of Threadripper + Vega and the terribleness of it, someone has deciphered that in some frames the Vega tandem was getting 162 FPS, based on 2.7 tears per refresh cycle.

What does this mean in the context of the competition?
Supposedly, at the same settings, a single GTX 1080 Ti at a ~2 GHz core clock averages 59 FPS, with dips to 52-53 FPS. Depending on how the Vega GPUs scale, a single GPU would get 81 FPS in the same scene.

It can be hype from AMD fans. It can also be quite accurate. We will see at the end of June, if someone pays AMD the price they ask for the Frontier Edition and tests it in games.

Please, if they were getting 162 FPS, they would have shown it with an FPS counter and zoomed in hundreds of times to make sure we actually saw it. ;)
 
Basically, what the iMac Pro got is GP100-level performance in FP16, but with lower power consumption. I think I will get more information on the GPUs in the upcoming weeks.
 