
the8thark

macrumors 601
Apr 18, 2011
4,628
1,735
Never admitting that the DLSS titles are being upscaled.
I thought that was the whole idea of DLSS: to be a better and more efficient way to upscale than checkerboarding and other methods.

I also found this research paper titled:
"Comparing upscaling algorithms from HD to ultra HD by evaluating preference of experience"
So for those interested in the science of video upscaling, this research paper is for you. It's a few years old now, but the science in it is still good.
 

diamond.g

macrumors G4
Mar 20, 2007
11,435
2,659
OBX
I thought that was the whole idea of DLSS: to be a better and more efficient way to upscale than checkerboarding and other methods.

I also found this research paper titled:
"Comparing upscaling algorithms from HD to ultra HD by evaluating preference of experience"
So for those interested in the science of video upscaling, this research paper is for you. It's a few years old now, but the science in it is still good.
It comes across weird though. Either use DLSS for every title, or drop all the titles that can't play in native 8k (or in this case drop the DLSS titles).
 
  • Like
Reactions: the8thark

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
It comes across weird though. Either use DLSS for every title, or drop all the titles that can't play in native 8k (or in this case drop the DLSS titles).
I think the paper has a different focus: it's about classical upscaling. DLSS is a little different. When DLSS came out, my first thought was that it had to use some form of GAN and/or convolutional autoencoder. Nvidia has since published that it is indeed based on an AE. I don't think the exact details are published anywhere, but I could be wrong. From what I know, developers have to provide certain data to Nvidia, who then run the games (or data?) on their clusters to train the AE that is then embedded into the drivers for the game. So technically it's probably more a reconstruction of a high-res original (learned on an Nvidia server) from a low-res sample on a consumer PC than "upscaling". The nice thing is the use of their tensor cores, which are perfect for convolutions and make it very efficient. Last time I checked (long ago, IIRC Titan V/V100), we were talking about 1 multiply-accumulate per clock on a regular GPU core vs. one 4x4 matrix multiply-accumulate per clock on a tensor core. Not sure about the current gen, please correct me if I'm wrong.
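Just to make the idea concrete, here's a toy sketch of what a convolutional AE doing low-res-to-high-res reconstruction could look like in PyTorch. To be clear, this is purely illustrative: the layer sizes, the 2x upscale factor and the training setup are my own guesses, not anything Nvidia has published about DLSS.

Code:
import torch
import torch.nn as nn

# Toy convolutional autoencoder for 2x upscaling -- purely illustrative,
# NOT Nvidia's actual DLSS network (those details aren't public).
class ToyUpscaleAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the low-res frame into a feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct the frame at 2x the spatial resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
        )

    def forward(self, low_res):
        return self.decoder(self.encoder(low_res))

# Training would pair low-res renders with high-res "ground truth" frames
# (the part that supposedly happens on Nvidia's clusters); inference on the
# consumer GPU is then just a cheap forward pass of convolutions.
model = ToyUpscaleAE()
low_res = torch.rand(1, 3, 540, 960)    # a 960x540 frame, NCHW layout
high_res = model(low_res)               # -> shape (1, 3, 1080, 1920)
print(high_res.shape)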

Looking at the Nvidia slides, I'd be more interested in whether the games running at 7680x4320 without DLSS are actually rendering native 8K, or rather a lower resolution that is then upscaled with a "simple" algorithm like those listed in the paper. The numbers for games like Apex Legends, Destiny 2, Forza 4 or Rainbow 6 are very impressive if it's native 8K, given the 2080 Ti couldn't even do >60 fps in 4K for most games. Then again, we don't know what settings were used for the 8K numbers and whether those are max fps, average, etc. RAM could be an issue too, especially with high-res textures. We should see some benchmarks soon.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
It's about performance per watt. AMD cards were criticized as being hot and loud because they were drawing more power while also being slower than competing Nvidia cards.

Again, what makes Nvidia different from Intel is performance per watt. Quoting directly from their press website: "The GeForce RTX 30 Series, NVIDIA’s second-generation RTX GPUs, deliver up to 2x the performance and 1.9x the power efficiency over previous-generation GPUs."
Performance per watt is exactly the reason I'm not impressed by the performance of the new RTX series. Many seem to buy NVidia's marketing hook, line, and sinker without a skeptical eye. It's silly, especially when the same voices often crop up ready to call out any other company's marketing B.S.

Intel has been struggling for years now to improve the efficiency of their CPUs and therefore has been able to bring only modest performance upgrades with each new generation of processors, while at the same time substantially raising the TDP under full load.
An increase of over 100 watts isn't significant? Maxwell to Pascal delivered more performance at a lower TDP; in fact, that's what used to happen with every new gen. Somehow, with the last few gens of CPUs and GPUs, we've been trained to accept "well, it has more performance, so it has to have a higher power draw".
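To put rough numbers on it (these are placeholder figures for illustration, not benchmarks): a ~40% performance jump that comes with a ~100 W higher TDP can leave performance per watt basically flat.

Code:
# Back-of-the-envelope perf/watt check. The numbers are placeholders
# chosen for illustration, not measured benchmark results.
old_perf, old_tdp = 1.00, 250.0   # baseline card: normalized performance, watts
new_perf, new_tdp = 1.40, 350.0   # hypothetical card: ~40% faster, ~100 W more

perf_per_watt_gain = (new_perf / new_tdp) / (old_perf / old_tdp)
print(f"perf/watt change: {perf_per_watt_gain:.2f}x")   # 1.40 * 250/350 = 1.00x, i.e. no efficiency gain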

The MSRP of the Nvidia 3070 is actually $499. And it genuinely looks like a good deal considering the level of performance declared by Nvidia.
Mea culpa, it's only a 40% markup over Pascal.

[Attached screenshot: Screen Shot 2020-09-07 at 3.13.47 AM.png]


Bargain of the century.
 

jinnyman

macrumors 6502a
Sep 2, 2011
762
671
Lincolnshire, IL
I mean, if you are considering the RTX 3090 or 3080 line and you are only building one or two machines, you are in the target demographic where nobody really cares about power consumption. That changes when you go large scale, like a render-farm business, where performance per watt is one of the critical factors for ROI.

I'd rather see a much bigger improvement in performance and worry about the cooling solution than get a 1080 Ti to 2080 Ti level of increment. Nvidia doesn't seem to fully utilize the benefit of the fab change, but, you know, at least the performance improvement seems substantial compared to the 1000-to-2000-series change.

For example, in the PC communities I go to, where people care about hobby rendering and gaming, everyone is excited. For these people, high power consumption is not nearly as much of a factor as the performance gain. Two 3090s running in tandem is a dream setup for them.

The story would have been different if power consumption had gone way up with little performance boost, but this time it hasn't.

I kind of see why some are frustrated. If they had to run dual Nvidia 3090s, with all that noise, in their much more expensive Mac Pros, I'd be really frustrated too.
 
Last edited:
  • Like
Reactions: 2Stepfan

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
I mean, if you are considering the RTX 3090 or 3080 line and you are only building one or two machines, you are in the target demographic where nobody really cares about power consumption.
If you don't care about power consumption that's your prerogative.

My point revolves around these cards being "impressive." They're really not, unless you buy into NVidia's marketing hype (which you should not; marketing hype, regardless of company, is all fluff). Independent benchmarks are leaking now https://videocardz.com/newz/nvidia-geforce-rtx-3080-synthetic-and-gaming-performance-leaked and while they show a generational lift of about 40% (which is NOT bad, but average), it comes with a huge wattage increase, which makes these cards, in my opinion, unimpressive.

Yet many in the tech community bought the hype hook, line, and sinker, and as such hailed these cards as some sort of miracle bestowed upon us by NVidia and its prophet Jensen. And without looking at previous generations of cards, even ones as recent as 2017, they called the prices "low" when they're higher than they've ever been.

People need to stop huffing the marketing glue, and look at things for what they are.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
My point revolves around these cards being "impressive." They're really not, unless you buy into NVidia's marketing hype (which you should not; marketing hype, regardless of company, is all fluff). Independent benchmarks are leaking now https://videocardz.com/newz/nvidia-geforce-rtx-3080-synthetic-and-gaming-performance-leaked and while they show a generational lift of about 40% (which is NOT bad, but average), it comes with a huge wattage increase, which makes these cards, in my opinion, unimpressive.

From the technical standpoint, Nvidia GPUs are relatively straightforward parallel number-crunchers that are not very exciting. They are probably the least sophisticated GPUs of the bunch (save maybe for raytracing hardware). But I suppose that simplicity is not the worst design principle when it comes to achieving performance. By radically simplifying their hardware and focusing on ALUs, they can deliver higher performance.
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
From the technical standpoint, Nvidia GPUs are relatively straightforward parallel number-crunchers that are not very exciting. They are probably the least sophisticated GPUs of the bunch (save maybe for raytracing hardware). But I suppose that simplicity is not the worst design principle when it comes to achieving performance. By radically simplifying their hardware and focusing on ALUs, they can deliver higher performance.
I feel like I should clarify: I don't hate NVidia's GPUs, or even this gen. They are pushing the envelope in many ways, like raytracing (which I've warmed up to since a year ago) and their new "broadcast" tech. They haven't been like Intel, improving performance only a tiny bit year over year; they've consistently offered 30%-40% performance uplifts each gen. There's no question that NVidia makes good GPUs (unless it's Thermi).

Which is good.

However, I do have a pet peeve about the cult of personality that has formed around them. That, and the acceptance of price increases each gen, which people should've said "enough" to years ago.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
Independent benchmarks are leaking now https://videocardz.com/newz/nvidia-geforce-rtx-3080-synthetic-and-gaming-performance-leaked and while they show a generational lift of about 40% (which is NOT bad, but average), it comes with a huge wattage increase, which makes these cards, in my opinion, unimpressive.
I wouldn't say unimpressive; the price is certainly impressive. However, the performance claims are misleading. While there is an improvement for games, the new cards still can't run every game at 4K max settings above 60 fps; they still have drops to lower fps. So the claim of 8K gaming is a little misleading. Sure, it will work, and it requires DLSS, but it won't work for every game at usable framerates, and people might still choose to play at lower resolution / less detail, just like with the 20x0-series cards. I was really looking forward to 4k60 across the board for the one or two games I play every year, since I play them on a TV.
 

jinnyman

macrumors 6502a
Sep 2, 2011
762
671
Lincolnshire, IL
If you don't care about power consumption that's your prerogative.

My point revolves around these cards being "impressive." They're really not, unless you buy into NVidia's marketing hype (which you should not; marketing hype, regardless of company, is all fluff). Independent benchmarks are leaking now https://videocardz.com/newz/nvidia-geforce-rtx-3080-synthetic-and-gaming-performance-leaked and while they show a generational lift of about 40% (which is NOT bad, but average), it comes with a huge wattage increase, which makes these cards, in my opinion, unimpressive.

Yet many in the tech community bought the hype hook, line, and sinker, and as such hailed these cards as some sort of miracle bestowed upon us by NVidia and its prophet Jensen. And without looking at previous generations of cards, even ones as recent as 2017, they called the prices "low" when they're higher than they've ever been.

People need to stop huffing the marketing glue, and look at things for what they are.
Well, I kind of agree with what you are trying to say. Don't get me wrong, but talking about buying into marketing hype in an "Apple" community? Well... we all buy into Apple's, don't we?
 

JMacHack

Suspended
Mar 16, 2017
1,965
2,424
Well, I kind of agree with what you are trying to say. Don't get me wrong, but talking about buying into marketing hype in an "Apple" community? Well... we all buy into Apple's, don't we?
Personally, I try to keep my expectations in check. In the case of Apple, however, we do see quite a bit of negative or skeptical feedback (see several threads on MacRumors, even), which is good for any company. Apple at the very least has about as many detractors as loyal fans; it sort of balances out, I'd say. We need those "devil's advocates."

I saw none of that with NVidia's release last week, and it put me off. The skepticism and detraction weren't there. It was pretty much all like the YouTube thumbnail I posted above.

I wouldn't say unimpressive; the price is certainly impressive. However, the performance claims are misleading. While there is an improvement for games, the new cards still can't run every game at 4K max settings above 60 fps; they still have drops to lower fps. So the claim of 8K gaming is a little misleading. Sure, it will work, and it requires DLSS, but it won't work for every game at usable framerates, and people might still choose to play at lower resolution / less detail, just like with the 20x0-series cards. I was really looking forward to 4k60 across the board for the one or two games I play every year, since I play them on a TV.
That's also what I guessed. The performance claims were misleading, which we should expect for any product release. Though the cards aren't duds, we (consumers in general) should be ready to look at them with a very skeptical eye.
 

psingh01

macrumors 68000
Apr 19, 2004
1,586
629
These cards are good news if you want to build a PC for gaming, or maybe stick one in an eGPU for use with Boot Camp if you have an Intel Mac. I don't expect Mac gaming to improve with the change to Apple silicon anyway, so it's an apples-to-oranges decision here.
 

diamond.g

macrumors G4
Mar 20, 2007
11,435
2,659
OBX
I feel like I should clarify: I don't hate NVidia's GPUs, or even this gen. They are pushing the envelope in many ways, like raytracing (which I've warmed up to since a year ago) and their new "broadcast" tech. They haven't been like Intel, improving performance only a tiny bit year over year; they've consistently offered 30%-40% performance uplifts each gen. There's no question that NVidia makes good GPUs (unless it's Thermi).

Which is good.

However, I do have a pet peeve about the cult of personality that has formed around them. That, and the acceptance of price increases each gen, which people should've said "enough" to years ago.
Nvidia hasn't had good competition until recently to force them to lower their prices. Some would argue that they still don't have any competition. I mean, I am kind of baffled that folks think getting 2080 Ti performance for half the price is still too expensive.
 

jerwin

Suspended
Jun 13, 2015
2,895
4,652
The performance uplift is there, but you won't be enjoying it as much since mGPU support is limited to a handful of games. And then there is also the money issue. The cheapest RX 5600 XT and RX 5700 sell for $275 and $295, respectively. In total, you're looking at a $570 investment. You can pick up a custom Nvidia GeForce RTX 2070 Super for $500 and likely get the same amount of performance at around half the power consumption.

 

jerryk

macrumors 604
Nov 3, 2011
7,421
4,208
SF Bay Area
I think the paper has a different focus: it's about classical upscaling. DLSS is a little different. When DLSS came out, my first thought was that it had to use some form of GAN and/or convolutional autoencoder. Nvidia has since published that it is indeed based on an AE. I don't think the exact details are published anywhere, but I could be wrong. From what I know, developers have to provide certain data to Nvidia, who then run the games (or data?) on their clusters to train the AE that is then embedded into the drivers for the game. So technically it's probably more a reconstruction of a high-res original (learned on an Nvidia server) from a low-res sample on a consumer PC than "upscaling". The nice thing is the use of their tensor cores, which are perfect for convolutions and make it very efficient. Last time I checked (long ago, IIRC Titan V/V100), we were talking about 1 multiply-accumulate per clock on a regular GPU core vs. one 4x4 matrix multiply-accumulate per clock on a tensor core. Not sure about the current gen, please correct me if I'm wrong.

Looking at the Nvidia slides, I'd be more interested in whether the games running at 7680x4320 without DLSS are actually rendering native 8K, or rather a lower resolution that is then upscaled with a "simple" algorithm like those listed in the paper. The numbers for games like Apex Legends, Destiny 2, Forza 4 or Rainbow 6 are very impressive if it's native 8K, given the 2080 Ti couldn't even do >60 fps in 4K for most games. Then again, we don't know what settings were used for the 8K numbers and whether those are max fps, average, etc. RAM could be an issue too, especially with high-res textures. We should see some benchmarks soon.

Are they training the autoencoder to recognize certain assets in the game/data, such as characters versus backgrounds?
 

diamond.g

macrumors G4
Mar 20, 2007
11,435
2,659
OBX
Are they training the autoencoder to recognize certain assets in the game/data, such as characters versus backgrounds?
I am not sure there is just one autoencoder. It seems like they have at least two: one for denoising and the other for upsampling.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
These cards are good news if you want to build a PC for gaming, or maybe stick one in an eGPU for use with Boot Camp if you have an Intel Mac. I don't expect Mac gaming to improve with the change to Apple silicon anyway, so it's an apples-to-oranges decision here.
I don't think that's what this is about. It's pretty clear that we can't use Nvidia cards in the macOS environment; that ship has sailed. The big question is, where do we go? While AMD is currently in Apple hardware, the support is largely limited to the Apple software ecosystem, focused on Metal and Apple's libraries, without much for OpenCL, ROCm and the like.
So what's going to happen next, and where will we go?

Or to put it in other words: I need Nvidia/CUDA for my research. I can still do some things on my Macs, using clusters for the serious number crunching, and I can also still run some graphical simulations on the Macs. Will it stay this way, get better, or get worse, and if it gets worse, by how much? Am I looking at the point where I could use a potential 16" iPad Pro for research and writing, presenting, and publishing, and do all the processing on a cloud-based solution? Will iOS get its own file system and full mouse/keyboard support and replace an MBP/desktop because those two solutions won't allow me to do what I need due to hardware/software restrictions?

To be fair, I don't see the hardware as a problem; Apple will deal with it. The question is, will things like Tensorflow, PyTorch, etc. eventually be available for Apple GPUs, or support specific functions of their SoC to speed things up?

Are they training the autoencoder to recognize certain assets in the game/data, such as characters versus backgrounds?
Doubtful, as that's not what an AE is for, unless you combine it with a CNN or a hybrid approach with, say, an SVM.
My guess is it's used for the reconstruction of high-res assets from low-res samples. That would explain how they can go from 1440p to 8K, but it leaves the question of how exactly they deal with temporal distortions.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
To be fair, I don't see the hardware as a problem; Apple will deal with it. The question is, will things like Tensorflow, PyTorch, etc. eventually be available for Apple GPUs, or support specific functions of their SoC to speed things up?

Tensorflow was notably absent from the list of open source frameworks Apple mentioned during WWDC, so it doesn't really look good. I hope that the next version of CoreML will add the missing functionality that would allow one to implement a Tensorflow backend on Apple hardware. It seems to me that so far they have been focusing on consumer usage of ML and completely disregarding the scientific community... which is sad, to say the least.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
Tensorflow was notably absent from the list of open source frameworks Apple mentioned during WWDC, so it doesn't really look good. I hope that the next version of CoreML will add the missing functionality that would allow one to implement a Tensorflow backend on Apple hardware. It seems to me that so far they have been focusing on consumer usage of ML and completely disregarding the scientific community... which is sad, to say the least.
Not sure, but did Apple mention GPU support for open source frameworks? I can't imagine it's difficult to get Tensorflow running on an Apple Silicon CPU. After all, I've been running Tensorflow and PyTorch on ARM for ages.
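For what it's worth, the CPU fallback is trivial and needs nothing Apple-specific. A typical PyTorch script (just a generic sketch, not tied to any announced Apple support) already does something like:

Code:
import torch

# Pick a CUDA GPU if one is available, otherwise fall back to the CPU --
# which is exactly what happens today on a Mac without Nvidia hardware.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Running on: {device}")

x = torch.rand(2048, 2048, device=device)
y = x @ x          # plain matmul: works fine on the CPU, just much slower than on a GPU
print(y.shape)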

The point about consumer ML from Apple is spot on. They develop what they need on the fly. For those who want to perform their own research or develop a specific product, Apple's offerings are, with a few exceptions, useless. And while I'd love to create or port a framework suitable for macOS and the hardware, I simply don't have the time for it.
 

leman

macrumors Core
Oct 14, 2008
19,517
19,664
Not sure, but did Apple mention GPU support for open source frameworks? I can't imagine it's difficult to get Tensorflow running on an Apple Silicon CPU. After all, I've been running Tensorflow and PyTorch on ARM for ages.

CPU — anytime. But you really want GPU support. Not to mention that Apple has all these fancy ML accelerators; it would be nice if those could be used for Tensorflow.

The point about consumer ML from Apple is spot on. They develop what they need on the fly. For those who want to perform their own research or develop a specific product, Apple's offerings are, with a few exceptions, useless. And while I'd love to create or port a framework suitable for macOS and the hardware, I simply don't have the time for it.

The quality of their Accelerate libraries (and BLAS implementation) is excellent, however. It's just that CoreML is not as full-featured as one would wish. I am not an ML expert, so I can't really comment on exactly what is missing.
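For what it's worth, there's a quick way to see which BLAS a given Python/numpy build is actually using; on macOS it may or may not be Accelerate, depending on how numpy was compiled:

Code:
import numpy as np

# Show which BLAS/LAPACK libraries this numpy build was linked against;
# on macOS this can be Apple's Accelerate framework, depending on the build.
np.show_config()

# A large matrix multiply goes through that BLAS, so it doubles as a rough speed check.
a = np.random.rand(2000, 2000)
b = np.random.rand(2000, 2000)
c = a @ b
print(c.shape)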
 

psingh01

macrumors 68000
Apr 19, 2004
1,586
629
Or to put it in other words: I need Nvidia/CUDA for my research. I can still do some things on my Macs, using clusters for the serious number crunching, and I can also still run some graphical simulations on the Macs. Will it stay this way, get better, or get worse, and if it gets worse, by how much? Am I looking at the point where I could use a potential 16" iPad Pro for research and writing, presenting, and publishing, and do all the processing on a cloud-based solution? Will iOS get its own file system and full mouse/keyboard support and replace an MBP/desktop because those two solutions won't allow me to do what I need due to hardware/software restrictions?

Well, my comment was for the OP, whom I assumed was interested in the GPU for gaming, as that is what the average person would want it for (although it wasn't clear in the post).

Still, I think it is similar to your situation. If someone wants to game, get the Nvidia card and use it with Windows (either as an eGPU/Boot Camp setup or a custom PC) for gaming. No point in getting a Mac for native gaming.

In your situation you need it for your research and you don’t need a Mac. Get what you need. Writing, presenting, publishing....just as easily done on a PC as a Mac. Why jump through hoops just to use a Mac for these tasks? The tool should work for you, not the other way around.

You can always get both if you want to spend the money. I've done that too. The last time I upgraded computers, I went from an MBP to an Alienware for gaming plus an MBA for regular stuff. But if you have to choose one or the other, the choice should be what you need instead of what you want.
 

Erehy Dobon

Suspended
Feb 16, 2018
2,161
2,017
No service
Nope, just some guy who doesn't understand Nvidia's total business.

The HPC/technical computing/data center segment is massive and growing much faster than the consumer gaming graphics segment.

But many technologists can't see the forest for the trees.
 

GrumpyCoder

macrumors 68020
Nov 15, 2016
2,126
2,706
In your situation you need it for your research and you don’t need a Mac. Get what you need. Writing, presenting, publishing....just as easily done on a PC as a Mac. Why jump through hoops just to use a Mac for these tasks? The tool should work for you, not the other way around.
Yes and no. I don't really want to give up the Apple ecosystem. Everything is well integrated: macOS, iOS and the Watch. This is a personal thing for me. Of course I could switch; I simply don't want to. My research workflow is optimised as well: DevonThink, Scrivener, LaTeX and Bookends. There are alternatives, none of which I personally think are as well thought out as these.

And yes, I am using multiple systems: serious number crunching on clusters with Nvidia V100s, a Xeon Platinum and an RTX 8000 under the desk, and a few other cards like a Titan RTX and a Radeon VII to try things. I guess I'm just curious where Apple will go GPU-wise and whether they can compete with Nvidia. After all, the Google TPUs are fantastic and much, much cheaper than what Nvidia offers.
 