DLSS 2.0 is “temporal anti-aliasing on steroids”. TAA works by jiggling the camera a tiny amount, less than a pixel, every frame. If nothing on screen is moving and the camera’s not moving, you can blend the last dozen or so frames together, and the result looks high-resolution with smooth edges without doing any extra rendering work. If the camera moves, you can blend from “where the camera used to be pointing” and get most of the same benefit. If objects in the scene are moving, you can use the information on “where things used to be” (it’s a graphics engine, we know where things used to be) and blend the same way. If everything’s moving quickly it doesn’t work, but in that case you won’t notice a few rough edges anyway. Good quality and basically “free” (you were rendering the old frames anyway), especially compared to other ways of doing anti-aliasing.
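A minimal sketch of that accumulate-and-reproject idea, in Python with numpy. The function name, the 0.9 blend factor, and the nearest-pixel reprojection are all illustrative assumptions on my part, not any engine’s actual implementation:

```python
import numpy as np

def taa_resolve(history, current, motion_vectors, blend=0.9):
    """Blend the current jittered frame with the reprojected history.

    history:        (H, W, 3) accumulated result from previous frames
    current:        (H, W, 3) this frame, rendered with a sub-pixel jitter
    motion_vectors: (H, W, 2) per-pixel (x, y) offsets saying where each
                    pixel "used to be" last frame (the engine knows this)
    """
    h, w, _ = current.shape
    ys, xs = np.mgrid[0:h, 0:w]

    # Reproject: look up where each pixel was last frame.
    prev_x = np.clip((xs - motion_vectors[..., 0]).round().astype(int), 0, w - 1)
    prev_y = np.clip((ys - motion_vectors[..., 1]).round().astype(int), 0, h - 1)
    reprojected = history[prev_y, prev_x]

    # Exponential blend: mostly history, a little of the new frame.
    # Over a dozen or so frames this converges toward a supersampled result.
    return blend * reprojected + (1.0 - blend) * current
```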
Nvidia have a honking big supercomputer that renders “perfect” very-high-resolution reference frames, and then trains a network over untold billions of examples to learn “the perfect camera jiggle”, “the perfect amount of blending”, “the perfect motion reconstruction” - whatever gets the correct result out of lower-quality frames. It’s not just an upscaler: it has a lot of extra information - frame history and scene geometry like motion vectors and depth - to work from, and can sometimes produce more accurate results than rendering at native resolution would. Getting to the optimal settings is absolute shitloads of work, but the output is pretty tiny - several thousand matrix operations per frame - which is why it’s cheap enough to apply on every frame. So yeah, not big enough to worry about.
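To make “learning the perfect amount of blending” concrete, here’s a toy version: one scalar blend weight fitted by gradient descent against a high-res ground-truth frame. Everything here (the synthetic data, the learning rate, the squared-error loss) is an illustrative assumption; the real network does the same thing with millions of weights and far richer inputs:

```python
import numpy as np

rng = np.random.default_rng(0)
ground_truth = rng.random((64, 64, 3))  # stand-in for a "perfect" high-res frame
history = ground_truth + rng.normal(0, 0.05, ground_truth.shape)  # clean accumulated past
current = ground_truth + rng.normal(0, 0.20, ground_truth.shape)  # noisy new frame

w = 0.5  # blend weight to learn
for step in range(200):
    out = w * history + (1 - w) * current
    # d/dw of the mean squared error against the reference frame
    grad = 2 * ((out - ground_truth) * (history - current)).mean()
    w -= 0.5 * grad

print(f"learned blend weight: {w:.2f}")  # ends up around 0.94, favouring the cleaner history
```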
There’s a big fraction of AAA games that use Unreal Engine and aim for photorealism, so if you’ve trained it up on that, boom, you’re covered in most cases. Indie games with indie engines tend not to be so demanding, so they don’t need DLSS, and you don’t need to tune it for them.