The more I dig through the AMD webpages for their workstation cards, the madder I get.

I REALLY like the WX series - marrying 3 or 4 of those cards with the AMD ProRender Engine is really exciting, especially since they aren't as power hungry. I could build the equivalent of a Vega but wouldn't have to do anything like a Pixlas mod to power it.
 
RTX 2080 and RTX 2080 Ti details attached. Pixlas mod required. No Mac driver released.

- RTX 2080: 215 W TDP (225 W FE), 6 + 8 pin, 2944 CUDA cores, 8 GB GDDR6
- RTX 2080 Ti: 250 W TDP (260 W FE), 8 + 8 pin, 4352 CUDA cores, 11 GB GDDR6

[Attached: RTX 2080.png, RTX 2080 Ti.png]

Base / boost clocks:

- 2080 Ti: 1350 MHz / 1545 MHz
- 1080 Ti: 1480 MHz / 1797 MHz
- 2080: 1515 MHz / 1710 MHz
- 1080: 1607 MHz / 1785 MHz
 
I still don't know how the ray-tracing parts will work in normal apps; it may be really fun. What will they do?
Will we start seeing apps - not data-center stuff or high-end 3D renderers - that can do something I want to do with the extra acceleration?
 
What is your point?

The iMac Pro is old; for when it came out, this was OK. The 1080 has 8GB, the 1080 Ti has 11GB; both are fine. This is the next generation of GPUs, though, so a bump would have been nice: the 2080 to 11GB and the 2080 Ti to maybe 16GB.

11-16GB is a sweet spot for "quickly" trying a few things on a local machine. 8GB is usually not enough for neural networks in computer vision applications. The heavy lifting is done on clusters anyway.
 

The RTX 2080 is a gaming card. If you want a professional/prosumer card for computer vision or other deep learning tasks, then get a Quadro or Titan instead (those will all come with much more memory, as mentioned above).
 
Power-wise, the RTX 2070 looks like the sweet spot for the cMP - it's the only card in the lineup with a single 8-pin.
 
Based on what I'm seeing so far, the GTX 1080 FE likely will be my last NVIDIA GPU for the MacPro5,1. As much as I'd love to jump on the 2XXX series, it will likely not happen until a MacPro7,1 is available or I'm fully on Windows.
 
I’d be surprised if we could ever use one of these new Nvidia cards in macOS. On the bright side (for us consumers) this release may push prices of AMD cards further down.
 
They had "Mac" as one of the chat usernames in the ultra cheesy ad that gave away the branding, etc. I see no reason for Nvidia to suddenly halt MacOS support.

Even if the Vega 64 came down to $300 this year, it's still too power hungry! It's also fishy that Nvidia hasn't done any direct comparisons to the outgoing Pascal cards. Just based on the TFLOP numbers, I'm not expecting a huge performance increase for a bunch of scenarios (password cracking, for example), and we're actually paying for a bunch of tech we don't need.
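
For what it's worth, the TFLOP numbers everyone quotes are just peak theoretical throughput: 2 FLOPs (one fused multiply-add) per CUDA core per clock. Here's a quick sketch of that arithmetic using the published core counts and boost clocks (nominal figures; GPU Boost runs higher in practice, and none of this maps cleanly to real-world performance):

#include <cstdio>

// Peak FP32 throughput = 2 FLOPs (one FMA) x CUDA cores x clock (GHz) = GFLOPS.
static double peak_tflops(int cuda_cores, double boost_ghz) {
    return 2.0 * cuda_cores * boost_ghz / 1000.0;
}

int main() {
    std::printf("RTX 2080 Ti: ~%.1f TFLOPS\n", peak_tflops(4352, 1.545)); // ~13.4
    std::printf("GTX 1080 Ti: ~%.1f TFLOPS\n", peak_tflops(3584, 1.582)); // ~11.3
    std::printf("RTX 2080:    ~%.1f TFLOPS\n", peak_tflops(2944, 1.710)); // ~10.1
    std::printf("GTX 1080:    ~%.1f TFLOPS\n", peak_tflops(2560, 1.733)); // ~8.9
    return 0;
}

On paper that's only around 15-20% over the equivalent Pascal card, which is exactly why the missing head-to-head comparisons stand out.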

Maybe Intel will fill this niche with affordable, no frills compute GPUs.
 
Even if the Vega 64 came down to $300 this year, it's still too power hungry!


Not only that, but I wasn't aware that the AMD Vega cards can't use all of their shaders at once. No wonder they underperform compared to Nvidia cards. Well, according to this guy that's the case; I'm not sure if it's true. Makes you wonder whether the 580 is unable to use all of its shaders either:

https://www.reddit.com/r/hardware/comments/98c0ey/what_makes_a_good_gpu/

"middle_twix



Depends on the architecture. For instance, a 1080ti has 3584 cuda cores and a Vega 64 has 4096, but the 1080ti outperforms the Vega. Even at the same clocks, the 1080ti outperforms it.

Drivers and features of the silicon have a lot to do with this, Vega is a much, much bigger card than a 1080ti but underperforms. This is partly due to how much of the transistor budget in Vega is taken up by features, such as RPM and primitive shader discard. Those features arent used by many games, and is a big technology that AMD was betting on that never took off. One thing Nvidia has going for them is how widely adopted their architectures are, and therefore most engines and non opensource APIs are coded with a longer CUDA based pipeline in mind. Also, their cards have more ROPS per CU, which is why their geometry engine is much, much more advanced than AMD's GCN architectures, which traditionally have many more CUs than Nvidia, but the pipeline is shorter and are much smaller. This is also why AMD cards are better at compute, because any paralleled task can take advantage of the numerous SPs. And also why its hard to code games for AMD cards, because you cant use all the pipelines without heavy optimization. Vega, for instance, cant keep all of its stream processors filled. So a 1080ti can use all of its CUDA cores in a game, while the Vega might only use 3800ish of its cores. This is also why the slightly cut down flagship AMD cards usually perform just as well as the full card. This has been a thing since the 7950 and 7970, a 7950 performs within 10% of a 7970, but the 7950 has almost 300 less stream processors.

That is a current gen example. For gaming, a long pipeline and many ROPs make a card "good". Driver support plus game implementation is also very important. On paper, a Vega 64 should destroy a 1080 ti. Absolutely slaughter it. But in real performance, because lack of optimization, the GCN scheduler, and all of the underused features that has plagued GCN for a long time, the 1080 ti slaughters it.

Off topic, this is also a reason why AMD cards are often referred to as "fine wine", because there is much room for driver optimization and over the years the AMD driver team has been able to squeeze more and more performance out of their GCN cards.

So if we took brands and specific architectures out of this, looking at a gpu to see if its good or not entails many things, for many different use cases.

For gaming, a more advanced geometry engine (Low SP/CUDA to ROP ratio) and high clock speeds help. Also good game engine support is a major deciding factor. If games were optimized for more SPs and TMUs, then of course go with the trend as it will net you more performance.

For compute, look for lots of CUDA/SPs cores. Memory bandwidth is also very important for this, especially for AI and raytracing.

I would look more into the architecture of the GPU itself than its specs. In Kepler vs Tahiti, Tahiti won on paper and in performance. Tahiti had 500 more cores than Keplar and had about the same clock speed potential. Driver support was also pretty good on both cards and Tahiti didnt have many special features that took up lots of the CU, so across the board at launch, they were neck and neck. 7 years later, the Tahiti smokes it. Vega vs Pascal, on paper Vega murders it, but in practice Pascal wins, for reasons as I stated above.

Hope this is what you were looking for. I had to condense it a bit but hopefully that got my point across lol."
 

Good read. That matches what we've observed over the past few years with AMD vs. Nvidia when comparing TFLOPS numbers to actual FPS in games.

Where I get lost is the long vs. short pipelines and how that relates to game optimization, or how a game like PUBG or BF1 interacts with those differing pipeline lengths...

I feel like the issue is not what makes a gaming card "good," but what makes a game good. Meaning, better software will inevitably get more out of the hardware.

But, to go back to pipeline lengths, I don't know why AMD can't come up with a solution to make their shorter pipelines perform like a long pipeline, so they would be in parallel (pun intended) with Nvidia in terms of performance. Why are they trying to shift game-industry paradigms instead of tweaking their ****? Something like Primitive Shader Discard isn't really a feature if a game can't use it without being rewritten or re-coded...

To go back to the Mac: the word "Turing" sounds similar to "Tuning," or in-tune (iTunes?)... so maybe Nvidia is able to tune things to really be in tune, somehow, with minimal or no re-coding of current software, and it just works (if that makes sense)?

We don't know yet...

Questions come to mind, like: can Metal use the RT cores or Tensor cores for FCPX (without re-coding)?

PS: AMD being compared to fine wine is BS to me, since a video game is not wine - you don't buy a game and wait to play it 5 years later when it's "fine"...
 
Mac drivers might happen, but this thread is turning into...

[image]
(Honestly though, the 2080 is beyond the Mac Pro power budget, so unless Nvidia aims to ship this as an eGPU card, that's going to make Mac support less likely.)
 

The 2019 Mac Pro is not even here yet. We don't know if it will have PCIe slots or custom GPUs like in the 2013 tcMP...

But the older Mac Pro 5,1 should be able to power an RTX 2080 Ti with some modding... the TDP of the RTX 2080 Ti is 260W for the factory-OC'd FE edition...

That TDP is in the same neighborhood as the older GTX 1080 Ti's...

What's changed is that the 2080 and 2070 TDPs have gone up from Pascal, to 215W and 185W, respectively...
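
To put rough numbers on the "some modding" part, here's the budget arithmetic as I understand it, using the PCIe spec ratings and Nvidia's listed FE TDPs (exact board limits are debated, so treat this as ballpark):

#include <cstdio>

int main() {
    // What a stock MacPro5,1 can feed a single GPU, going by the PCIe spec ratings.
    const int slot_w       = 75;                       // x16 slot
    const int aux_6pin_w   = 75;                       // each mini 6-pin aux connector
    const int stock_budget = slot_w + 2 * aux_6pin_w;  // 225 W

    // Nvidia's listed Founders Edition TDPs (reference boards are ~10 W lower).
    const int rtx2080ti = 260, rtx2080 = 225, rtx2070 = 185;

    std::printf("Stock budget: %d W\n", stock_budget);
    std::printf("RTX 2080 Ti:  %d W -> over budget, Pixlas territory\n", rtx2080ti);
    std::printf("RTX 2080:     %d W -> right at the ceiling\n", rtx2080);
    std::printf("RTX 2070:     %d W -> fits, hence the single 8-pin\n", rtx2070);
    return 0;
}

Which lines up with the earlier point about the RTX 2070 being the power-wise sweet spot for the cMP.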
 
Just wanted to chime in and agree with anything roughly related to:

- studios jumping ship if there's no Nvidia/CUDA support for the Mac Pro
- no alternative to CUDA; GPU rendering isn't the future, it's NOW, and OpenCL won't catch up for years (if at all)
- the Mac "pro" environment will soon enough just be for those developing apps
- been a Mac user since the 90s
- how sad is that
- etc
- bye

PS: I'd better add that the 5,1 can run 3x 2080 Ti using external power. This is what most studios have used as a band-aid in the interim. The power argument isn't really valid.
 
It's also fishy that Nvidia hasn't done any direct comparisons to the outgoing Pascal cards.

Exactly! I'm waiting for 1:1 benchmarks. Of course their real-time ray tracing is faster on the newer cards - the older ones don't support it. Their demo at Gamescom was a joke: non-RTX was all hard shadows in games, and the off-screen stuff could have been solved without RTX as well, at least to some degree. Of course RTX is better; the question is whether it will be supported by more than a handful of developers. And what performance hit will RTX have on other aspects (frame rate, etc.)? The stuff at Gamescom seemed to have major drops in frame rate when they activated RTX. All of this is mostly focused on games.

Tensor cores can come in handy, replacing a single multiply-accumulate per cycle with a 4x4 matrix multiply-accumulate. I don't buy that AMD is slower in general; there's stuff AMD can do better (ask all those miners). It's just that as soon as AMD has to output pixels, it suffers. That's something they could fix with Navi (I don't have high hopes for it, though). It seems like we'll get further deep-learning support with Navi as well. And if the price is right, there's always CrossFire.
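
To make the Tensor-core point concrete: on the CUDA side (since CUDA 9), the WMMA intrinsics expose them as a warp-wide fused matrix multiply-accumulate, D = A*B + C. A minimal toy sketch, assuming an sm_70+ card - the API works in 16x16x16 fragments, which the hardware internally chews through as those small matrix MAC ops:

#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One warp computes C(16x16, fp32) += A(16x16, fp16) * B(16x16, fp16) on the Tensor Cores.
__global__ void tile_mma(const half* A, const half* B, float* C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);             // start the accumulator at zero
    wmma::load_matrix_sync(a_frag, A, 16);      // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(acc, a_frag, b_frag, acc);   // the fused matrix multiply-accumulate
    wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
}

// Launch with one warp per tile, e.g.: tile_mma<<<1, 32>>>(dA, dB, dC);

Whether Metal ever exposes anything equivalent is exactly the open question raised earlier in the thread.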

I'd be fine with an eGPU solution for the Mac that "just works", so it would be nice to get support for it. If not, and if benchmarks show the 2080 Ti isn't slower than a 1080 Ti on "regular" tasks, I'll probably pick up a 2080 Ti for my Linux box (not the FE, though).
 

Or, instead of CUDA rendering on externally powered cards, use the AMD ProRender Engine.

It will not only see multiple GPUs as one, it will also harness your CPU alongside the GPUs.

When I get my RX 580, I'll just shift my RX 560 down a slot - the render engine will use both GPUs and the CPU.
 
Yay! Back to 1080p gaming that can't sustain 60 fps!

https://www.pcgamesn.com/nvidia-rtx-2080-ti-hands-on

"So the performance we were seeing from the cards and the RTX versions of Shadow of the Tomb Raider and Battlefield 5 aren’t necessarily 100% indicative of the performance that you might see when you get your thousand dollars worth of graphics card home and try and ray trace the fun out of your favourite new games.

But still, it’s tough not to be a little concerned when the ultra-expensive, ultra-enthusiast RTX 2080 Ti isn’t able to hit 60fps at 1080p in Shadow of the Tomb Raider. We weren’t able to see what settings the game was running at as the options screens were cut down in the build we were capturing, but GeForce Experience was capturing at the game resolution and the RTX footage we have is 1080p.

With the FPS counter on in GFE we could see the game batting between 33fps and 48fps as standard throughout our playthrough and that highlights just how intensive real-time ray tracing can be on the new GeForce hardware. We were playing on an external, day-time level, though with lots of moving parts and lots of intersecting shadows."
 

There are a lot of ways to parse the new benchmarks and the overall direction of the 20x0 series, but I'd boil my response to this kind of criticism down to: what else did you expect?

First of all, the cards do seem to offer respectably higher TFLOPS than Pascal for good old rasterization, at a still-acceptable price point. Recall, for instance, that it chewed through that Unreal demo, which wasn't using ray tracing. Yes, the 2080 Ti is an enthusiast card, but it's always been an enthusiast card.

But more importantly, we need to address the ray-tracing thing head on. Did people seriously expect anyone to release a first-generation ray-tracing GPU that didn't suffer a significant performance loss relative to rasterization? Especially given that developers have had mere months to learn the new optimization paths for the technology.

As I see it, Nvidia is taking the best option it has: using its prohibitive market lead to invest in up-and-coming tech that it knows is going to take a few GPU cycles to gain momentum. Would people seriously rather they abandoned investing in new technology just to pursue the diminishing returns of ultra-ultra-ultra-high-definition rasterization? Because, frankly, I'm far more interested in a GPU that can push 4K ray tracing in a few generations than one that can push 16K rasterization. The latter is just a pointless cul-de-sac. And that's before we even discuss the wildcard that is the AI/deep-learning component.

Whoever takes the lead in real time raytracing is going to own the next 10 years of consumer GPU production. End of story. Hell, they might even score a next-gen console contract out of it - or at least the generation after that, which is widely predicted to be cloud-based and might dovetail with Nvidia's serverside plans.

So again, what else did you actually expect them to do? We don't need more pixels (and investing in an 8K monitor is a lot more expensive than paying $100 more for a ray-tracing module on your GPU anyway, so that's a far more niche market to cater to), and we really don't need more frames either, unless we're talking VR, which is equally niche. And even if you do want those things, these cards are still the best at providing them anyway.
 
I agree, the AMD cards are beyond a joke in comparison. Just look at the switch made in the iMac: instead of the power of a 2080, it has the equivalent of a 2050/2060, and AMD doesn't look like it's going to catch up anytime soon.
Fortunately, I have heard that Apple has been planning Nvidia support for the next Mac Pro. And now, with this kind of performance gap, they would look ridiculous if they didn't.
 
Nice circle jerk going around in this thread.

If your workflow requires more than the old Mac Pro can handle, why haven't you transitioned yet? The nMP came out in 2013, nearly 5 years ago. You are losing money and time. Why are you trying so hard to stay in the box when it clearly doesn't fit you any longer?

If you want CUDA, go buy the NVIDIA DGX Station - it's a steal right now at $49,900 (25% off) - or go HP.

Apple wants to leverage Metal 2 for everything, not CUDA or OpenCL. Unfortunately, that requires a rewrite from every major software house out there. It takes time.
 