
A bit of "Monday morning quarterbacking," but at least back to the topic of the Mac Pro: it is puzzling why Apple didn't prepare a "plan B" about 1-1.5 years ago that could have used these as replacements for the GPUs in the current Mac Pro. It wouldn't have been an "omg it is better than sex" product, but it would have been something Apple could limp along with for another 1-2 years while they figure out what they really want to do.

In 2012, Apple 'bumped' the CPUs just to show that they had something (even if it was only new firmware and a slightly different order from Intel). Offering nothing but price cuts suggests they weren't working on anything, not even entry-level GPUs for the new machine. It would be extremely strange if something like the RX 570-580 wasn't at the entry-level end of what they were working on up until it failed. I get how the top-end GPU offerings failed, but how on Earth did the entry and mid-level ones all fail too? Seriously.

A D510 (RX 570) and D610 (RX 580) wouldn't be a huge hassle, thermally, to put into the current system. Actually, they would probably see fewer failures, because these chips are designed to run in the thermal envelope the Mac Pro provides and are a pretty close thermal match to the E5 v2 limits.

[ Moving the Xeon E5 to v4 would require CPU board and backplane board (with PCH) changes, which propagate more widely and get more expensive. But two more GPU cards? That isn't a huge leap. ]

Those cards would not have solved all of the problems outlined in the roundtable sessions, but they certainly would have addressed Apple's major problem: folks don't believe they "care" or that they are doing anything substantive beyond cashing checks. No plan B, no action... that really doesn't get solved by going back to hide in a hole for another year or two.


As for AMD's marketing, you can tell from the photos that they are primarily pitching these cards at folks with 3-4 year old hardware (not RX 400 or much of anything from the last two years; just older cards and low-end iGPUs). That's kind of the point in the Mac Pro context, though: it is an older system from that timeframe. Not sure the AMD move of shifting RX 4x5 parts to RX 5xx branding is going to work so well. They are going to get widely 'flamed' for it, and that will probably get in the way of their advertising in more than a few places. It is in the "doing something" range, but they had already announced an xx5 update path... just use it. More smoke and mirrors isn't likely to work too well.
 
{0x1002, 0x6860, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6861, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6862, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6863, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x6867, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x686c, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
{0x1002, 0x687f, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VEGA10},
The full device ID list for Vega 10 GPUs. This is only for Vega 10.
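Purely as an illustration of how a driver-style ID table like the one above gets used, here is a minimal C sketch that checks whether a vendor/device pair falls in the Vega 10 list. The struct, enum, and helper names are invented for the example; the real amdgpu driver uses its own struct pci_device_id plumbing, so treat this as a sketch, not driver code.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical, simplified ID-table entry for the example. */
struct gpu_pci_id {
    uint16_t vendor;
    uint16_t device;
    int      chip;
};

enum { CHIP_UNKNOWN = 0, CHIP_VEGA10 = 1 };

/* Vega 10 device IDs copied from the list quoted above (vendor 0x1002 = AMD). */
static const struct gpu_pci_id vega10_ids[] = {
    {0x1002, 0x6860, CHIP_VEGA10},
    {0x1002, 0x6861, CHIP_VEGA10},
    {0x1002, 0x6862, CHIP_VEGA10},
    {0x1002, 0x6863, CHIP_VEGA10},
    {0x1002, 0x6867, CHIP_VEGA10},
    {0x1002, 0x686c, CHIP_VEGA10},
    {0x1002, 0x687f, CHIP_VEGA10},
};

/* Return true if the vendor/device pair matches a Vega 10 entry. */
static bool is_vega10(uint16_t vendor, uint16_t device)
{
    for (size_t i = 0; i < sizeof(vega10_ids) / sizeof(vega10_ids[0]); i++) {
        if (vega10_ids[i].vendor == vendor && vega10_ids[i].device == device)
            return true;
    }
    return false;
}

int main(void)
{
    /* 0x687f is the ID that shows up later in the thread as 687F:C1. */
    printf("1002:687f -> %s\n", is_vega10(0x1002, 0x687f) ? "Vega 10" : "not Vega 10");
    printf("1002:67df -> %s\n", is_vega10(0x1002, 0x67df) ? "Vega 10" : "not Vega 10");
    return 0;
}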
 

Who is this guy? Insightful analysis? Chuckle.

"AMD ...regain its decade-old platform leadership. .." -- This makes it sound like AMD has been 'king of the mountain' for most of the last 10 years, instead of shooting itself in the foot for most of the last 10 years.

"The ongoing correction undoubtedly offers an excellent investing opportunity in the stock for those who missed the rally. ..." -- A correction is a movement back to a rational, quantitatively justified price point. No, that by itself isn't a strong buy signal. Buying a corrected (or "overly corrected") stock, yes. AMD can still screw things up; the risk isn't high, but it isn't exactly low either.

"AMD will launch its Vega-based GPUs in a few weeks. " ... That is kind of funny, because all the real rumblings with pictures point to the RX 500 series... which is a somewhat overzealous rebadge. (Remember how AMD could still screw things up?)

" will outcompete even Nvidia's (NASDAQ:NVDA) most powerful graphics card for gaming, which is its Pascal-based GTX 1080 Ti ... " Huh, I guess this person's crystal ball didn't illuminate the new Titan Xp that would arrive well before Vega. AMD would do well just to match it. This is almost the same unduly elevated expectations talk as two years ago with Fury and HBM1. If Vega throws 8GB of HBM2 at the top-end card, where is the price point going to end up? Is the 4GB VRAM/cache gap with the Xp going to make a difference?

Vega should make AMD more competitive. Outcompete? There is highly limited evidence for that, and Nvidia has yet to drop its HBM2 solution onto the gaming market.

" ... the fact is that without high-end memory support, it's impossible to deliver VR-based fully immersive gaming performance. HBM2-based Vega will have the potential to remove this bottleneck. ..."

AMD tagging HBM2 as 'cache memory' in the Vega slides strongly suggests that they aren't going to get into a toe-to-toe HBM2-versus-GDDR5X capacity match. Some mix of main RAM and this cache means there is software they have to tune correctly to get gaming dominance, which points to early gaming results probably falling short of potential (since tuning is likely incomplete).

AMD is a decent buy if they would just stop shooting themselves in the foot on a regular basis. A hype machine they don't need; smooth, mostly defect-free execution they do. Vega would do well to stick with "better than Polaris," "better than Fury," and good enough for whatever price target they are shooting at (so that AMD is showing regular progress and good value).
 
dec, I genuinely suggest you do your research. You have made a few mistakes in your "analysis" of Vega, based on uninformed opinions. There is already quite a nice amount of information available, especially from game developers who have had Vega in their hands.

Do your research and come back; I am curious about your thoughts and opinions on Vega.
 

Dec's analysis is all based on what AMD has publicly stated about Vega. If you think it is incorrect, provide reputable sources that contradict what he has said.
 
https://forum.beyond3d.com/posts/1973875/

Quote from the link.

Just wanted to clarify that I meant AMD GCN2 (consoles) vs Nvidia's latest (Maxwell/Pascal). AMD PC GPUs have also improved since GCN2.

Improvements for general performance:
- GCN3 introduced delta color compression. Including ability to sample/load compressed textures without decompress step.
- GCN3 improved geometry tessellation performance
- GCN4 improved geometry performance in general (including fast strips, primitive discard, etc).
- GCN4 improved delta color compression.
- GCN4 added instruction prefetch (reduces pipeline latency, again helps with geom bottleneck).
- GCN4 improved async compute scheduling (GPU side)

GCN5 (Vega) adds these general performance improvements:
- L2 cache includes L2 ROP cache (L1 ROP caches under L2). Don't need to flush caches between pixel shader passes.
- Tiled rasterizer. Reduces overdraw, bandwidth and makes ROPs more efficient in general.
- Improved geometry pipeline (including proper load balancing, up to 2x higher peak throughput)
- General purpose memory paging system

(I didn't list features that don't bring performance improvements without programmer intervention)

All of these improvements mean that GCN5 should run general purpose pixel/vertex shader code much better than GCN2. GCN5 has most of the same tricks that are seen in modern Nvidia GPUs. There are nice compute improvements as well, but they need special programmer support (DPP, SDWA, FP16). We will see the real impact of these improvements when DX12 SM 6.0 becomes available. Doom is already using these features with Vulkan, resulting in nice gains.

This is a game developer who works for Ubisoft. If you ever wanted to ask anyone anything about GCN architecture, I think the only people who know more are actual AMD engineers. In previous years he was at EA DICE, and he currently works for Ubi.


My analysis, based on what we know about Vega and Pascal GPUs:

The general-purpose memory paging system works in a way that is pretty much revolutionary. You have 512 TB of address space indexed at the hardware level, for full HSA 2.0 unified-memory compatibility without any software-level abstraction. Let's take a theoretical approach and say we have a 3072-core GCN GPU with 4 GB of HBM2 at 512 GB/s of bandwidth.

Current memory systems store both used and unused data in the GPU's memory, because in general GPUs do not have enough horsepower to handle all of it at any particular time. Vega changes this approach. Tile-based rasterization, a next-generation pixel engine connected to the L2 cache, and massively improved geometry performance all increase the throughput of the GPU. What matters then is feeding the GPU with data. GDDR5 memory cannot deliver enough bandwidth to feed those cores at a reasonable power cost, and neither can GDDR5X. The Titan Xp's memory chips alone consume around 50W, and the whole memory subsystem peaks around 75W because of the number of memory controllers; averages are lower thanks to memory compression, tile-based rasterization, and ROPs connected to the L2 cache. HBM2 stacks draw about 8W, and the whole memory subsystem should peak around 15W, while you still get the benefit of tile-based rasterization and ROPs connected to the L2 cache rather than to the memory controller, etc.

What does this memory model actually do? The framebuffer is smaller than a large pool of GDDR5/X, but the data in it is immediately available to the GPU, and larger portions of it can be worked on at any given time. Think of the data resident in HBM2 as non-volatile, and the data merely indexed elsewhere in the system as volatile (the memory controller has access to system RAM, SSDs, HDDs, and even network storage). You save even more memory power by not holding unused data. The resident working set is small enough not to exceed PCIe bandwidth, so data can be delivered where it's needed, when it's needed. It's all done in hardware, without any software abstraction.
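To make the paging idea concrete, here is a rough C sketch of demand paging into a small fast pool backed by a much larger address space, with pages pulled in on first touch. It is only a toy model of the concept described above, not how AMD's HBCC actually works; every name and size in it is made up.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy model: a tiny "fast pool" of page slots in front of a large
 * backing store (standing in for system RAM / SSD / network storage).
 * All sizes and names are invented for illustration only. */

#define PAGE_SIZE   4096
#define HBM_SLOTS   4            /* pretend the on-package memory holds 4 pages */
#define TOTAL_PAGES 64           /* pretend the full data set is 64 pages */

static uint8_t backing_store[TOTAL_PAGES][PAGE_SIZE]; /* the big, slow pool  */
static uint8_t hbm[HBM_SLOTS][PAGE_SIZE];             /* the small, fast pool */

static int  slot_to_page[HBM_SLOTS];  /* which logical page each slot holds */
static int  next_victim = 0;          /* trivial round-robin replacement    */
static long misses = 0;

static void paging_init(void)
{
    for (int s = 0; s < HBM_SLOTS; s++)
        slot_to_page[s] = -1;         /* all slots start empty */
}

/* Return a pointer to the fast copy of `page`, faulting it in if needed. */
static uint8_t *touch_page(int page)
{
    for (int s = 0; s < HBM_SLOTS; s++)
        if (slot_to_page[s] == page)
            return hbm[s];            /* hit: already resident */

    /* miss: evict the round-robin victim and copy the page in */
    int s = next_victim;
    next_victim = (next_victim + 1) % HBM_SLOTS;
    memcpy(hbm[s], backing_store[page], PAGE_SIZE);
    slot_to_page[s] = page;
    misses++;
    return hbm[s];
}

int main(void)
{
    paging_init();
    /* Touch a working set of pages; only pages actually used occupy the fast pool. */
    int accesses[] = {0, 1, 2, 0, 1, 3, 40, 0, 40, 2};
    for (size_t i = 0; i < sizeof(accesses) / sizeof(accesses[0]); i++)
        touch_page(accesses[i])[0] ^= 1;   /* pretend to do some work */

    printf("accesses: %zu, faults into the fast pool: %ld\n",
           sizeof(accesses) / sizeof(accesses[0]), misses);
    return 0;
}

The point of the toy: only pages that actually get touched ever occupy the small fast pool, and the residency and replacement bookkeeping happens inside touch_page() rather than in the application, which is the "no software abstraction" idea in miniature.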

I'm sure people can explain all of this much more clearly and in more detail.

P.S. If anyone is interested: RX Vega was demoed by AMD in January and averaged 72 FPS in Doom at 4K under Vulkan. How does that stack up against the GTX 1080 Ti with the latest drivers?

doom_3840_2160.png

The device ID for this GPU is 687F:C1, and it is supposedly clocked at 1.2 GHz.

The thing is this: there is currently a huge rumor mill saying that this is NOT the top-of-the-line GPU from AMD, and that the top-of-the-line RX Vega runs at 1.5 GHz, with around a 275W TDP and liquid cooling. My previous rumors, about the core counts of the GTX 1080 Ti and Titan Xp, were correct, so this one can also be correct, because they actually come from retail lines.
 


All I see for your source is a random forum post speculating on features whose performance impact we don't know. You can list lots of impressive features, but it's all just words on a spec sheet until we have cards in hand and can directly compare them to the competition. And no, an AMD-run benchmark at a trade show is not a valid comparison point.

You cite rumors saying that the top-of-the-line Vega chip will need water cooling; this sounds a lot like the Fury X, which needed a liquid cooler just to be competitive with the 980 Ti. If Vega needs water cooling to be competitive with the 1080 Ti, despite being a larger GPU with more expensive memory, then AMD's GPU division is in trouble.
 
It's not speculation. I think you do not know who sebbbi is. This is not a random guy on a random forum.

So let me repeat: he is a game developer who works for Ubisoft and previously worked for many years at EA DICE. The Beyond3D forum is considered the most knowledgeable forum when it comes to understanding GPU architectures, because it is populated by game devs, professionals, and tech enthusiasts. I do not have an account there, because I am not knowledgeable enough to even be there, but I do read it to gain knowledge and understanding. The point of the post was not a comparison to Nvidia, but to understand what the HBCC does with HBM2.
 
More propaganda from the usual suspect...
 
A bit of "Monday morning quarterbacking," but at least back to the topic of the Mac Pro: it is puzzling why Apple didn't prepare a "plan B" about 1-1.5 years ago that could have used these as replacements for the GPUs in the current Mac Pro. It wouldn't have been an "omg it is better than sex" product, but it would have been something Apple could limp along with for another 1-2 years while they figure out what they really want to do.

In 2012, Apple 'bumped' the CPUs just to show that they had something (even if it was only new firmware and a slightly different order from Intel). Offering nothing but price cuts suggests they weren't working on anything, not even entry-level GPUs for the new machine.

If you're referring to the 5,1 model, I believe the only reason there was a 2012 refresh with 'newer' CPUs using the same socket was that the 2010 model's Nehalem CPUs were about to be discontinued by Intel. 2012 was considered a speed bump at the time, but Apple were forced to act by something out of their control.

The 6,1 model not being updated in 3 years is more mystifying. The entire industry was moving towards a single powerful GPU rather than trying to spread the workload evenly across 2 GPUs - a solution that was always the preserve of specially written software for maximum benefit. Throwing more performance into a single CPU/GPU combination is something every programmer has access to. Don't forget also that the 6,1 heatsink was triangular, so GPU heat output would have to match the CPU heat output too; presumably this ruled out cooler-running GPUs such as the RX Polaris series.

There's also a report around that the firmware was tied heavily into the existing GPUs, meaning serious engineering work every time a new GPU was to be added.

All in all, a deeply flawed offering.

And finally, we have the evolution of the Metal API, which replaces OpenCL, the big thing that Apple was trying to push when they launched the 6,1. Crucially, Metal is billed as Apple's own version of DirectX, so they are at least taking control of their own graphics destiny rather than relying on ancient implementations of OpenGL or OpenCL.

My thinking at the time of the 6,1's release was: why two relatively low-powered GPUs, which needed software written to optimise for that niche, when so many folks needed one powerful one (or integrated graphics as a base model)? Apple didn't have anything like CrossFire or SLI support at the OS level, so the niche would be very small indeed, even if there were a reason to support 2 GPUs for an extra 20% performance over 1. I wasn't aware of the heat design flaw or the later firmware issues either.

External GPUs are also an option for laptop users on USB-C, but buying an enclosure that costs double the price of a beefy GPU just to plug it into a box is stupid for price-conscious users wanting a Mac desktop that could have simply had a PCIe x16 slot to fit a classic 'Mac Edition' GPU.

The simple solution would be to properly swallow their pride - create a silent-running box housing a motherboard that could take Mac Edition PCIe cards, throw in lashings of USB-C ports, but leave the graphics to Nvidia or AMD with acceptable drivers. Surely that wouldn't take over a year to do, but it probably wouldn't be seen as innovating on a configuration that Mac users have been asking for since the 5,1 cheese grater got killed off.

Budgeted cleverly, it's the xMac that buyers of the top-end Mac mini were always looking for: the people who were unable to justify buying the base-level 2013 Mac Pro, the ones who begrudgingly had to buy top-end 5K iMacs.

Just make sure the cooling solution can deal with an Nvidia 1080 Ti or an AMD Vega, and Apple can set about designing a silent case around a single GPU and maybe a set of E3-class Xeon CPUs to keep the hackintoshers at bay, all the while getting Nvidia or AMD to create Mac Editions of their hardware built to fit the slot (custom size if need be) and also helping prevent people from flashing Mac ROMs onto a variety of non-standard hardware.

What I think Apple will be taking their time over is making a small case that is as silent or quiet as the 2013 Mac Pro while continuing to fit on the desktop, and they may find themselves in competition with a lot of vendors if that's all they are aiming for. It's also Apple, so we should expect more innovation than that - just don't let it involve dousing modules in mineral oil or something equally exotic :rolleyes:.

If they wait long enough, there will be a proliferation of cores on potential 2018 Intel CPUs in response to AMD Ryzen, while AMD's Vega GPUs will be available.

Pro users might be thrilled if they make it a standard rack-mountable size or offer a configuration with a second E5-class Xeon CPU.
 
I have a problem with the R6. I don't understand how an APU with 6 graphics cores can have 8 ROPs; I would expect 6.

I think the ROPs are not part of the CUs; some info on the web seems to be incorrect.
 
If bringing understanding is propaganda, then yes, I am guilty of spreading propaganda on this forum :).

It's funny that you say I am pushing propaganda, because you, mr usual suspect, are pushing propaganda about me personally ;).
 
Yeah, yeah, keep dancing...
 
OK, play time is over. Come and bring some knowledge to this thread; then we can discuss whether I am spreading propaganda. You know what you are doing right now? Trolling. Trolling is forbidden by the forum rules, and you deserve a ban.

You push the narrative that I am spreading AMD propaganda on this forum. Even when I post things about architectures, you respond with posts like this. Which one of us is bringing less to this forum? If you do not agree, or believe I am spreading propaganda, why don't you bring anything that contradicts my words? ANYTHING! Yet you never do, and you accuse me of doing PR for AMD.

Why don't you do something useful: if you do not agree with what I am providing, provide something that contradicts what I wrote in the post you quoted in the first place.

Until you come up with anything worth discussing, you will be considered a troll.
 
You're quite triggered now...
I don't have to do a thing to debunk your silly PR cut-and-paste job, since others have been doing it since you first started posting here. We get it, you love AMD, really we get it.
But this is the Mac Pro section of the MacRumors forum, not the AMD PR subforum.
 
Who is doing this? You and Aiden?

Have you read any of sebbbi's posts, to claim that this is my PR stunt? Have you read that post at all?

If not, then do not claim that it is. You are trolling. Maybe you do not see it, but this is simply trolling, a violation of the forum rules. Try to invalidate my posts more convincingly. Let's discuss. But you do not want to discuss, you want to troll. Maybe you just hate AMD so much that my posts offend you? Or maybe you hate me so much that my posts offend you? Just because you are offended does not mean you are right, my friend.
 
Koyoot, AMD (and you) tried to convince everyone that Polaris 10 would return AMD to competitiveness in performance per watt. As someone who likes AMD GPUs because I feel they generally offer a good compromise between graphics and compute performance, I want AMD to succeed. However, when I see charts like this (which aggregate graphics performance, including Doom and other DX12 and Vulkan titles):

perfwatt_2560_1440.png

It says to me that I am not going to believe any AMD hype until they release a product that shows up on the bottom half of this chart. It would also help if that product had enthusiast-class performance. Until that time comes, I am not going to believe any list of features or hype coming from you or AMD. Vega may bring AMD back, but all we have at this point are staged demos and some architecture sneak peeks.

It's nothing personal, but I think anyone who has followed GPUs over the last few years has been disappointed by AMD's offerings, especially when it comes to performance and efficiency. So when you come in here and claim that yet another AMD GPU is going to be the best thing ever and that Nvidia has terrible architectures and whatnot, you are setting yourself up for some criticism.
 
Show me where I claimed that an AMD GPU will be the best thing ever. Don't you see there is a problem? LATELY people are reading WAY too much into my posts compared to what I actually write. It happened in the GTX 1080 thread: when I said that the GTX 1080 will be bottlenecked by the CPUs in the MP 5,1, people jumped to the conclusion that I was attacking Nvidia! And now Tuxon claims that I am doing PR. He did not even read that post fully, he did not understand the point of the post, he did not read any posts from sebbbi (who provides A LOT of information on both Nvidia and AMD GPU architectures), and he f****** claims that I am doing PR for AMD. That is laughable trolling.

All I did was compare what AMD provided to benchmarks we have seen on other sites (AMD actually used the same Doom settings that TechPowerUp uses in their reviews, which can be seen in the video on the Vega architecture posted by LinusTechTips, from I think January). I brought a quote from a game developer who has had his hands on the Vega architecture, and I also gave my thoughts on how the memory system will work, based on talking with people who have much more understanding of the matter than I do. How can this be hyping the hardware?

Even if I knew how Vega will perform, I promised that I would not say anything about it. Nobody has so far correctly guessed the performance of the Vega architecture, and that is the best thing that has happened for this architecture.

If it is a letdown, nobody will actually be surprised. If it is a golden architecture, everybody will be surprised. Which shows what kind of appeal AMD has as a brand.
 
It happened in the GTX 1080 thread: when I said that the GTX 1080 will be bottlenecked by the CPUs in the MP 5,1

And this overly general statement is just not true, at least not in 4K gaming, as I showed you: https://forums.macrumors.com/threads/webdriver-for-gtx-1080-1070.1979778/page-10 There is just a negligible difference between the two CPUs. Of course the GPU is the bottleneck in 4K gaming. But when the GPU is the bottleneck, it doesn't matter whether it is driven by a Mac Pro W3690 or an i7-7700K. So the Mac Pro can still be used well.
 
You used GPUs that will not be bottlenecked by the CPUs in 4K. The GTX 1080 will be bottlenecked by the CPUs; it has much more power. The GTX 1070 and GTX 980 Ti will not be bottlenecked by the CPUs in 4K gaming. The GTX 1080 will be. Test it for yourself.
 

I will do that when I have mine. ;)
 

A GTX 1070 is not much slower than a GTX 1080, so it's not clear to me why you think one will be fine and the other won't be fine.

You really can't make blanket statements like "a GTX 1080 will be bottlenecked by the CPU in a cMP, period," because it really depends on the application, resolution, and settings. Most games will drive the GPU hard enough at 4K that even the slow CPUs in a cMP will be able to keep up, and thus the game will still be GPU limited. If you're running at a lower resolution, then you are likely going to be CPU limited in games. However, there are plenty of pro app workloads (DaVinci Resolve, etc.) that will be completely GPU limited on a GTX 1080, even on a cMP. BareFeats ran a bunch of these, for reference:

http://barefeats.com/cmp_pascal.html

Plenty of GPU-limited benchmarks using a cMP and various high-end Pascal cards.
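One way to see why the answer depends on resolution is the usual back-of-the-envelope model: the delivered frame rate is roughly the minimum of the CPU-limited and GPU-limited rates, and the GPU-limited rate falls as the pixel count grows. Here is a hedged C sketch of that model with made-up numbers (none of them measured on real hardware).

#include <stdio.h>

/* Crude bottleneck model: frame rate is limited by whichever of the CPU
 * or GPU takes longer per frame. The GPU cost is assumed to scale with
 * pixel count; all numbers below are invented for illustration. */

static double min2(double a, double b) { return a < b ? a : b; }

/* cpu_fps: frames per second the CPU can prepare.
 * gpu_fps_1080p: GPU throughput at 1920x1080 for this workload.
 * pixels: output resolution in pixels. */
static double delivered_fps(double cpu_fps, double gpu_fps_1080p, double pixels)
{
    const double base_pixels = 1920.0 * 1080.0;
    double gpu_fps = gpu_fps_1080p * (base_pixels / pixels); /* naive scaling */
    return min2(cpu_fps, gpu_fps);
}

int main(void)
{
    const double old_xeon_cpu_fps = 90.0;   /* hypothetical cMP CPU limit      */
    const double modern_cpu_fps   = 160.0;  /* hypothetical i7-class CPU limit */
    const double gtx1080_1080p    = 240.0;  /* hypothetical GPU limit at 1080p */

    const double px_1080p = 1920.0 * 1080.0;
    const double px_4k    = 3840.0 * 2160.0;

    printf("1080p: cMP %.0f fps vs modern CPU %.0f fps\n",
           delivered_fps(old_xeon_cpu_fps, gtx1080_1080p, px_1080p),
           delivered_fps(modern_cpu_fps,   gtx1080_1080p, px_1080p));
    printf("4K:    cMP %.0f fps vs modern CPU %.0f fps\n",
           delivered_fps(old_xeon_cpu_fps, gtx1080_1080p, px_4k),
           delivered_fps(modern_cpu_fps,   gtx1080_1080p, px_4k));
    return 0;
}

With these invented numbers both CPUs land on the same GPU-limited figure at 4K, while at 1080p the older CPU becomes the limit - the same general pattern the GPU-limited results above suggest.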
 