Nice circle jerk going around in this thread.

If your workflow requires more than the old Mac Pro can handle why haven't you transitioned yet? The nMP came out in 2013, nearly 5 years ago. You are losing money and time. Why are you trying so hard to stay in the box when it clearly doesn't fit you any longer?

If you want CUDA go buy the NVIDIA DGX Station, it is a steal right now at $49,900 (25% off) or go HP.

Apple wants to leverage Metal 2 for everything, not CUDA or OpenCL. Unfortunately, that requires every major software house out there to rewrite their code. It takes time.
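To give a rough sense of what that rewrite means even for a trivial kernel, here's a sketch of the same vector add written once in OpenCL C and once in the Metal Shading Language (held as Python strings purely for side-by-side comparison; the kernel name and buffer layout are just illustrative):

```python
# Illustrative only: the same trivial vector-add kernel written in
# OpenCL C and in the Metal Shading Language, held here as Python
# strings for side-by-side comparison. Names and layout are made up.

opencl_kernel = """
__kernel void vec_add(__global const float* a,
                      __global const float* b,
                      __global float* out)
{
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
"""

metal_kernel = """
#include <metal_stdlib>
using namespace metal;

kernel void vec_add(device const float* a [[buffer(0)]],
                    device const float* b [[buffer(1)]],
                    device float* out     [[buffer(2)]],
                    uint i [[thread_position_in_grid]])
{
    out[i] = a[i] + b[i];
}
"""

# Even a one-liner changes its source language, qualifiers and the entire
# host API around it; multiply that by the hundreds of kernels in a real
# renderer and the transition time starts to make sense.
print(opencl_kernel)
print(metal_kernel)
```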

Having been in the pro market for about fifteen years, I just don't think the pro workflow today is what it was a decade or more ago, and it feels as if Apple's whole approach to the market is painting them into a corner. Fifteen years ago, the gap between pro and consumer hardware (and software) was far more clear-cut, dedicated GPUs barely existed, and a closed-garden approach like Apple's was a clever way to capture an entire industry. It was worth investing in bespoke and in-house technology for those optimization gains - both for Apple and its audience.

But here's what's happened since then: Apple's monopoly on "must have" pro software suites has weakened, their R&D into growing pro markets is virtually non-existent, third party middleware is ascendant (where once you only needed Photoshop, you now need Substance Painter, ZBrush, and on and on...) and consumer-level GPUs have become spectacularly cost efficient and surprisingly good at handling certain pro workloads. Ten years ago you literally needed a Mac Pro to do pro level 2D work; nowadays it barely matters. 3D has seen an order of magnitude increase in use by pro users, but Apple offers zero hardware or software advantage there either.

So people can sneer at the thought of sticking a filthy consumer card in their machine, but the fact is the bang for your buck that you can get nowadays from those cards even in a pro environment pretty much brute forces its way past Apple's alternatives. And claims like "if your workflow can't use an old Mac Pro you shouldn't be using a Mac Pro at all" are exactly why Apple's position in the market is flagging. It's kind of like death by a thousand cuts: people insist "Macs aren't for 3D", "Macs aren't for game development", "Macs aren't for AI/deep learning"... but after a certain point you realize that there's virtually nothing left in the pro market to even cater to other than the stagnant/shrinking 2D scene.

Would the Pro line - indeed the Mac line generally - be as anemic now if a decade ago they'd actually tried to push themselves into those emerging markets, and embraced some superior third party tech that they stand no chance of catching up to? Would Final Cut Pro even run 5% slower than it does now as a result? I doubt it.
 
Apple wants to leverage Metal 2 for everything, not CUDA or OpenCL. Unfortunately, that requires every major software house out there to rewrite their code. It takes time.

With the AMD ProRender Engine, there are already a number of plug-ins ready to go, from Blender to Maya to Solidworks to Cinema 4D, and others.

I don't know about the rest of you, but I like the idea of a render engine that will use either team green or team red for my rendering needs.
 
So people can sneer at the thought of sticking a filthy consumer card in their machine, but the fact is the bang for your buck that you can get nowadays from those cards even in a pro environment pretty much brute forces its way past Apple's alternatives. And claims like "if your workflow can't use an old Mac Pro you shouldn't be using a Mac Pro at all" are exactly why Apple's position in the market is flagging. It's kind of like death by a thousand cuts: people insist "Macs aren't for 3D", "Macs aren't for game development", "Macs aren't for AI/deep learning"... but after a certain point you realize that there's virtually nothing left in the pro market to even cater to other than the stagnant/shrinking 2D scene.

Would the Pro line - indeed the Mac line generally - be as anemic now if a decade ago they'd actually tried to push themselves into those emerging markets, and embraced some superior third party tech that they stand no chance of catching up to? Would Final Cut Pro even run 5% slower than it does now as a result? I doubt it.

Not to engage in needless discussion but my point was that if the Apple eco-system wasn't working for us Pros, then why invest in it?

What better way to show Apple that they have entirely missed the market segment by simply NOT buying Apple.

It would be so trivial to update the innards of the old Mac Pro that it is almost insulting to us, the users.
 
It would be so trivial to update the innards of the old Mac Pro that it is almost insulting to us, the users.

It is insulting. It's frustrating that, as a "Pro" user, I'm shopping for another 2010 cMP for work because the 2017 MacBook "Pro" they provided struggles with exporting 200-300 MB files from Photoshop to JPEG.

They completely missed the mark when they brought the trash can out. The reason we're all here circle jerking is that, as "Pro" users, we'll know exactly whether Apple is listening to its "Pro" users with the 7.1. And we want to continue to use the Mac line, just not one that is almost 10 years old, for our current pro workflow.

Personally, as a professional designer, the 7.1 needs to have PCIe slots and USB 3.0. And why shouldn’t I be able to choose which $1,000 graphics card I want in it? Being limited to inferior graphics cards in the 7.1 will turn me into a PC user, 'cause you know, RTX OR BUST!

p.s. newbie, but I've been lurking and learning from you guys here for about 2 years.
 
First benchmarks from Nvidia. DLSS looks interesting.

"For game after game, Turing delivers big performance gains (see chart, above). Those numbers get even bigger with deep learning super-sampling, or DLSS, unveiled Monday.

DLSS takes advantage of our Tensor Cores’ ability to use AI. In this case it’s used to develop a neural network that teaches itself how to render a game. It smooths the edges of rendered objects and increases performance."

https://blogs.nvidia.com/blog/2018/08/22/geforce-rtx-60-fps-4k-hdr-games/?ncid=so-red-tg-56997

[Image: Turing vs. Pascal benchmark chart from NVIDIA's Editor's Day, Aug 22]
 
Not to engage in needless discussion but my point was that if the Apple eco-system wasn't working for us Pros, then why invest in it?

What better way to show Apple that they have entirely missed the market segment by simply NOT buying Apple.

It would be so trivial to update the innards of the old Mac Pro that it is almost insulting to us, the users.

You're right, I suppose I was just approaching it from the perspective taken by some people that if you object to the Mac's direction then you're not its target market. Which is like saying if the wheels on your cart have rusted up, you must be using the wrong horse. I felt the primacy of that issue subsumed your question and answered it indirectly.

But here's a more direct answer to your question:
  • I still use Macs for a lot of mid-level work and personal use, and prefer their OS all things being equal. As such, an all-Apple ecosystem would still be ideal
  • I'm still a captive audience as far as a couple of pieces of software go
  • The Pro line is bad and getting worse, but still not necessarily "unusable", especially if you own a ton of legacy hardware and software already
  • Windows has its own fair share of problems, which is why I still want Apple to get its act together
  • Call me shallow, but I'm hugely fond of their industrial design. Put simply, their beauty is a factor too
 
The new GPUs look interesting, several benchmark leaks the past few days. Will probably wait until Black Friday and then grab a 2080 Ti for home use. Some graphics and basic ML. Still not happy that the Ti didn't get a memory bump, but that's how it is. Won't buy a Quadro, they're just not worth it. I'm not running 24/7, but even if I did, I could rent a VM in the cloud, run it all year long and it would still be cheaper than buying one. I think I'd have to get 5+ years out of it to make it a good investment, at which point it'll be outdated. Might be worth it if NVIDIA gives a nice discount for high-volume orders, maybe 50+ or 100+ GPUs, but that's not for home use, and even our research cluster at the university won't need that many cards.
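For what it's worth, the back-of-the-envelope math I'm basing that on looks like this; the prices and hours are rough assumptions, not quotes:

```python
# Back-of-the-envelope break-even for buying a workstation GPU vs. renting
# a cloud GPU instance. All numbers are assumptions for illustration only;
# plug in your own prices and usage.

quadro_price_usd = 6300.0       # assumed up-front price of a Quadro-class card
cloud_rate_usd_per_hour = 1.00  # assumed hourly rate for a comparable cloud GPU
hours_per_year = 1200           # assumed usage: a few hours a day, not 24/7

yearly_cloud_cost = cloud_rate_usd_per_hour * hours_per_year
break_even_years = quadro_price_usd / yearly_cloud_cost

print(f"Cloud cost per year:      ${yearly_cloud_cost:,.0f}")
print(f"Break-even vs. buying in: {break_even_years:.1f} years")
# With these assumptions the card only pays for itself after ~5 years,
# by which point it will likely be outdated.
```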
 
Just don't buy a reference card. It's not just the glue, so many screws!

At least they have a boring look this time, so no loss.
 
With the AMD ProRender Engine, there are already a number of plug-ins ready to go, from Blender to Maya to Solidworks to Cinema 4D, and others.

I don't know about the rest of you, but I like the idea of a render engine that will use either team green or team red for my rendering needs.

I agree, the best thing would be a "vendor-free" render engine, but the real world isn't perfect. Yes, you have AMD ProRender, which looks promising, but it's far from the best renderers today. Octane, Redshift, Arnold, Vray, Maxwell... are based on CUDA, and those are the industry standard. And the missing option for Nvidia cards makes the Mac today a very limited platform for serious 3D work.
 
I agree, the best thing would be a "vendor-free" render engine, but the real world isn't perfect. Yes, you have AMD ProRender, which looks promising, but it's far from the best renderers today. Octane, Redshift, Arnold, Vray, Maxwell... are based on CUDA, and those are the industry standard. And the missing option for Nvidia cards makes the Mac today a very limited platform for serious 3D work.

No 10-bit color output on GeForce cards on Windows or Mac. So serious 3D and color work for true HDR content is a problem on GeForce. Nvidia thinks we are stupid enough to buy a Quadro just to get 10-bit output enabled. No thanks.
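A quick bit of arithmetic shows why the jump from 8-bit to 10-bit matters for gradient-heavy color/HDR work (the 3840 px ramp width is just an example):

```python
# Steps per channel at 8-bit vs. 10-bit, and how wide each band ends up in
# a full-width black-to-white ramp. The 3840 px width is just an example.

display_width_px = 3840

for bits in (8, 10):
    steps = 2 ** bits                   # tonal steps per channel
    band_px = display_width_px / steps  # width of each band in the ramp
    print(f"{bits}-bit: {steps} steps/channel, ~{band_px:.1f} px per band")

# 8-bit:  256 steps  -> ~15 px bands (visible banding in smooth gradients)
# 10-bit: 1024 steps -> ~3.8 px bands (far harder to see)
```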
 
Not only that, but I wasn't aware that the AMD Vega cards cannot use all of their shaders at once. No wonder they underperform compared to Nvidia cards. Well, according to this guy that's the case, not sure if it's true. Makes you wonder if the 580 is unable to use all of its shaders either:

https://www.reddit.com/r/hardware/comments/98c0ey/what_makes_a_good_gpu/

"middle_twix

Depends on the architecture. For instance, a 1080ti has 3584 cuda cores and a Vega 64 has 4096, but the 1080ti outperforms the Vega. Even at the same clocks, the 1080ti outperforms it.

Drivers and features of the silicon have a lot to do with this. Vega is a much, much bigger card than a 1080ti but underperforms. This is partly due to how much of the transistor budget in Vega is taken up by features, such as RPM and primitive shader discard. Those features aren't used by many games, and it's a big technology that AMD was betting on that never took off. One thing Nvidia has going for them is how widely adopted their architectures are, and therefore most engines and non-opensource APIs are coded with a longer CUDA-based pipeline in mind. Also, their cards have more ROPs per CU, which is why their geometry engine is much, much more advanced than AMD's GCN architectures, which traditionally have many more CUs than Nvidia, but the pipeline is shorter and they are much smaller. This is also why AMD cards are better at compute, because any parallel task can take advantage of the numerous SPs. And also why it's hard to code games for AMD cards, because you can't use all the pipelines without heavy optimization. Vega, for instance, can't keep all of its stream processors filled. So a 1080ti can use all of its CUDA cores in a game, while the Vega might only use 3800ish of its cores. This is also why the slightly cut down flagship AMD cards usually perform just as well as the full card. This has been a thing since the 7950 and 7970: a 7950 performs within 10% of a 7970, but the 7950 has almost 300 fewer stream processors.

That is a current gen example. For gaming, a long pipeline and many ROPs make a card "good". Driver support plus game implementation is also very important. On paper, a Vega 64 should destroy a 1080 ti. Absolutely slaughter it. But in real performance, because of a lack of optimization, the GCN scheduler, and all of the underused features that have plagued GCN for a long time, the 1080 ti slaughters it.

Off topic, this is also a reason why AMD cards are often referred to as "fine wine", because there is much room for driver optimization and over the years the AMD driver team has been able to squeeze more and more performance out of their GCN cards.

So if we took brands and specific architectures out of this, looking at a gpu to see if its good or not entails many things, for many different use cases.

For gaming, a more advanced geometry engine (Low SP/CUDA to ROP ratio) and high clock speeds help. Also good game engine support is a major deciding factor. If games were optimized for more SPs and TMUs, then of course go with the trend as it will net you more performance.

For compute, look for lots of CUDA/SPs cores. Memory bandwidth is also very important for this, especially for AI and raytracing.

I would look more into the architecture of the GPU itself than its specs. In Kepler vs Tahiti, Tahiti won on paper and in performance. Tahiti had 500 more cores than Kepler and had about the same clock speed potential. Driver support was also pretty good on both cards and Tahiti didn't have many special features that took up lots of the CU, so across the board at launch, they were neck and neck. 7 years later, the Tahiti smokes it. Vega vs Pascal, on paper Vega murders it, but in practice Pascal wins, for the reasons I stated above.

Hope this is what you were looking for. I had to condense it a bit but hopefully that got my point across lol."
Did you spot that this was a discussion about gaming performance?

In professional and compute workloads Vega has no problem keeping those 4096 cores fed with work, and in properly optimized software it is just as fast as a GTX 1080 Ti.
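To put rough numbers on that, here's the peak-FP32 arithmetic implied by the quote above; the clock speeds are approximate boost clocks and the "3800-ish" figure is just the Reddit poster's estimate:

```python
# Rough peak-FP32 arithmetic behind the argument above. Clock speeds are
# approximate boost clocks, and the "3800-ish busy cores" figure is just
# the quoted Reddit poster's estimate, reused here for illustration.

def peak_fp32_tflops(cores: int, clock_ghz: float) -> float:
    # 2 FLOPs per core per cycle (fused multiply-add)
    return 2 * cores * clock_ghz / 1000.0

gtx_1080_ti  = peak_fp32_tflops(3584, 1.58)  # ~11.3 TFLOPS
vega_64_full = peak_fp32_tflops(4096, 1.55)  # ~12.7 TFLOPS on paper
vega_64_game = peak_fp32_tflops(3800, 1.55)  # ~11.8 TFLOPS if only ~3800 SPs stay busy

print(f"GTX 1080 Ti peak:         {gtx_1080_ti:.1f} TFLOPS")
print(f"Vega 64 peak (all SPs):   {vega_64_full:.1f} TFLOPS")
print(f"Vega 64 (~3800 SPs busy): {vega_64_game:.1f} TFLOPS")
# Compute workloads that keep all 4096 SPs fed see the paper advantage;
# poorly optimized games don't.
```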
With the AMD ProRender Engine, there are already a number of plug-ins ready to go, from Blender to Maya to Solidworks to Cinema 4D, and others.

I don't know about the rest of you, but I like the idea of a render engine that will use either team green or team red for my rendering needs.
You came to the wrong forum to post this ;).

Remember, this is a forum that says AMD GPUs are useless and a joke for professional applications, while looking at gaming benchmarks.
 
Can you elaborate on this? What applications are affected by the "very low compute performance" compared with Vega?
If you write FP32 or FP64 OpenCL programs you could be affected. If you use off the shelf libraries the impact will vary.

FP32 is not really bad on NVIDIA. The 2080 Ti tops Vega.
 
Remember, this is a forum that says AMD GPUs are useless and a joke for professional applications, while looking at gaming benchmarks.
I've said this before: in some cases AMD is faster than NVIDIA, but let's put them roughly on the same level for some professional applications. The problem that remains is, if I have to choose between two similarly performing cards, but one gives me much better gaming performance, which one would I choose?


If you write FP32 or FP64 OpenCL programs you could be affected. If you use off the shelf libraries the impact will vary.
And here's another problem: how are we going to write those nice OpenCL programs with OpenCL being deprecated in Mojave? I can't get around CUDA in my research area; some stuff works with OpenCL, but not everything. Metal 2 only is going to be problematic. Again, for some stuff this might work, but how do I scale it to clusters for real number crunching? I guess they'll have to bring back the Xserve with major GPU support then. :D
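As a concrete example of the kind of code that's affected, here's a minimal FP32 OpenCL kernel (a simple saxpy) driven from Python with PyOpenCL; this assumes PyOpenCL and a working OpenCL driver are installed, and it's exactly this sort of program that has no obvious future on the Mac once OpenCL goes away:

```python
# A minimal FP32 OpenCL program (saxpy: out = alpha * x + y) driven from
# Python via PyOpenCL. Assumes PyOpenCL and a working OpenCL driver are
# installed; on macOS this still runs today, but OpenCL is deprecated.

import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

n = 1_000_000
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)

mf = cl.mem_flags
x_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=x)
y_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=y)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, x.nbytes)

program = cl.Program(ctx, """
__kernel void saxpy(const float alpha,
                    __global const float *x,
                    __global const float *y,
                    __global float *out)
{
    int i = get_global_id(0);
    out[i] = alpha * x[i] + y[i];
}
""").build()

program.saxpy(queue, x.shape, None, np.float32(2.0), x_buf, y_buf, out_buf)

out = np.empty_like(x)
cl.enqueue_copy(queue, out, out_buf)
print(np.allclose(out, 2.0 * x + y))  # True if the kernel ran correctly
```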
 
If you write FP32 or FP64 OpenCL programs you could be affected. If you use off the shelf libraries the impact will vary.

FP32 is not really bad on NVIDIA. The 2080 Ti tops Vega.

That doesn't jibe with what you said though, you claimed Turing had "very low non-NN compute performance". How can a GPU with "very low compute performance compared with Vega" beat Vega? AMD's architectures have always done well on synthetic FLOPS-maximizing tests because they do tend to have more raw horsepower. However, this often doesn't translate into real-world performance, even in compute applications.
 
That doesn't jibe with what you said though, you claimed Turing had "very low non-NN compute performance". How can a GPU with "very low compute performance compared with Vega" beat Vega? AMD's architectures have always done well on synthetic FLOPS-maximizing tests because they do tend to have more raw horsepower. However, this often doesn't translate into real-world performance, even in compute applications.
Miners prefer AMD GPUs precisely because of compute performance and price.

Quite often one sees that a comparable AMD card offers about double the FP64 power in the consumer space.
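Rough peak numbers bear that out, assuming approximate boost clocks and the usual consumer FP64 rate caps (1/16 of FP32 on Vega, 1/32 on GeForce):

```python
# Rough peak FP64 comparison in the consumer space. Approximate boost
# clocks and the usual consumer FP64:FP32 rate caps are assumed here
# (1/16 on Vega, 1/32 on GeForce); check vendor specs for exact figures.

cards = {
    # name: (shader cores, approx. boost clock in GHz, FP64 rate vs. FP32)
    "Vega 64":     (4096, 1.55, 1 / 16),
    "GTX 1080 Ti": (3584, 1.58, 1 / 32),
    "RTX 2080 Ti": (4352, 1.55, 1 / 32),
}

for name, (cores, clock_ghz, fp64_rate) in cards.items():
    fp32_tflops = 2 * cores * clock_ghz / 1000.0
    fp64_tflops = fp32_tflops * fp64_rate
    print(f"{name:>12}: ~{fp32_tflops:4.1f} TFLOPS FP32, ~{fp64_tflops:.2f} TFLOPS FP64")

# Vega 64 lands around 0.8 TFLOPS FP64 vs. roughly 0.35-0.42 on the two
# GeForce cards, i.e. about double, which matches the claim above.
```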
 