The point and substance of an argument are made with more precise and nuanced words, not by using fewer of them poorly. There is no point and substance to deliberately trying to portray this as generative AI, which a lot of the comments are trying to do.
For example, you’ve said nothing and have absolutely not made the point and substance of your problem with DLSS 5 clear, while I actually have. People would have to take wild guesses to try to get to “the point and substance” of your issue with it.
Perhaps what you are actually referring to is the tendency for people to justify lying and throwing shade at a thing if they hate what it's associated with enough. That's just throwing sloppy shade, to me. Judging by the downvotes, and the correlation between this tendency and them, I suspect this might apply here instead.
I've actually just been corrected: Jensen did refer to this as employing some form of generative AI. It's also different enough from what I generally thought of as AI slop, and from my issues with it, that it could also be said that I am a supporter of generative AI now. I am surprised by the application of the label, but it does prove me wrong.
Fine then. Make it clear how it is not appropriate to label this generative AI. That's the basis of your claim that everyone else is being sloppy. Back it up with more than just your own declaration.
Even here you’ve not backed up your beliefs or statements with anything beyond restating your original point.
To anyone just glancing at the promo before and after image, this appears to just be applying image generating AI toolchain tech to the preexisting frame generation. There is at least some amount of responsibility on nvidia for using an image that gives off this look.
Pretending that a reasonable conclusion that a large amount of people are drawing simply isn’t reasonable, and that it is for reasons entirely self-evident, is just masturbation.
I don't think it's possible to convince anyone with a closed mind, but sure.
This is doing the same thing as here: https://www.youtube.com/watch?v=DKCyk3CeUFY
It is not changing geometry or shapes.
It is changing lighting.
It is changing material properties.
There is no "image generating AI toolchain tech" involved. There is no image generation happening.
To quote the literal title of a previous post, “Nvidia’s DLSS 5 AI-infused tech transforms pixels with photorealistic lighting and materials” - but it does not transform geometry. I know this because rather than live in my assumptions, I dared look up more information about it instead of tucking in my presumptions at the end of my comment.
It does involve AI (just like previous DLSS has), and just like previous versions, it looks at color and motion vectors. Its outputs are lighting and material properties, "applying a mask". It can be criticized, but for different reasons. It seems to create an uncanny valley effect worse than generative AI would in actual usage, precisely because it is not changing geometry or shapes, not "image generating".
It can be confirmed by looking at the examples. I urge you to do the same, but I don't have a lot of hope. MAGA exists because of confirmation bias, and it does not have exclusivity on it. While you're wrong and being an asshole about it, thanks for at least making some effort of an explanation this time.
I've actually just been corrected: Jensen did refer to this as employing some form of generative AI. It's also different enough from what I generally thought of as AI slop, and from my issues with it, that it could also be said that I am a supporter of generative AI now. I am surprised by the application of the label for the aforementioned reasons, but it does prove me wrong.
I've not been an asshole here. You've consistently talked down to everyone calling this slop over some minor technicality in terminology that you've still failed to back up or expand on beyond linking to the same video a second time.
You also have really zeroed in on some claims that I’ve literally never heard anyone make:
It is not changing geometry. It is changing lighting. It is changing material properties.
No one has said shit about geometry, lighting, or materials because that is not the level at which DLSS operates. Both in previous versions and in this latest version.
It's not what anyone thinks is going on here, and it calls into question your own understanding of all this that you've now insisted upon it twice. It's not making lighting and material changes. You're confusing it with raytracing, which is often turned on and off in graphics presets alongside DLSS because of the intense resource usage, but which is not part of DLSS. Go download a mod for finer-grained graphics settings controls in Cyberpunk 2077 and that much will be made clear.
There are plenty of tools people can use to get an idea of how any game's rendering pipeline works, such as Special K, as shouted out by the video you linked. Personally I like Reshade for getting a look at render passes, output targets, buffers, etc.
DLSS operates on a completed "flat" render output/buffer. As far as I'm aware, it has no knowledge of geometry, materials, or shaders unless the devs are really doing wacky shit and have a direct line to Nvidia devs. Maybe they're passing it the depth and normal buffers as well as the flat render output. That opens a lot of options (see Marty's RTGI shader), but it is demonstrably still just working with slightly more than what gets slapped on the screen as a flat raster image.
It can do edge detection and movement detection by comparing a number of the previous input frames, using the types of techniques used in video compression to detect and handle movement, as the end of your video makes small mention of.
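For what it's worth, the motion-vector reprojection idea being described can be sketched roughly like this. This is my own toy illustration, not NVIDIA's code; the array layout and the (dy, dx) vector convention are assumptions:

```python
import numpy as np

def reproject(prev_frame, motion_vectors):
    """Warp the previous frame toward the current one using per-pixel
    motion vectors (dy, dx), the way temporal upscalers reuse history
    samples. Nearest-pixel warp; real pipelines also mask disocclusions."""
    h, w = prev_frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Follow each pixel's motion vector back to where it came from
    # in the previous frame, clamping at the image borders.
    src_y = np.clip(ys - motion_vectors[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs - motion_vectors[..., 1], 0, w - 1).astype(int)
    return prev_frame[src_y, src_x]

# A 4x4 frame whose bright column shifted one pixel right between frames:
prev = np.zeros((4, 4))
prev[:, 1] = 1.0
mv = np.zeros((4, 4, 2))
mv[..., 1] = 1.0  # every pixel moved +1 in x
warped = reproject(prev, mv)  # the bright column is now at x == 2
```

The point of the sketch is only that the history samples are fetched, not invented; everything in `warped` is a pixel that already existed in `prev`.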
Usually it's used on the output of the 3D render pipeline before the flat HUD elements are slapped on top. Apparently a lot of the games the guy who made the video tested didn't separate out the HUD layer, or maybe it had something to do with his previous methodology. I'm not watching multiple of his videos to check, and I find it kind of hilarious that someone would think they were some voice of knowledge on how this stuff works if they put in the kind of effort they indicated they had for their previous videos without using Special K.
I had already watched the video you linked. I’ve now watched it twice to ensure I didn’t miss anything.
It's some guy playing with the features in Special K that allow you to utilize DLSS at arbitrary upscaling ratios while allowing HUD elements to render at the viewport resolution. It has nothing to do with the underlying tech or how DLSS works, beyond showing that the defaults in most games could be better tuned.
He has a short bit talking about older anti-aliasing tech, then says that DLSS is an advancement without actually getting into how it works.
In all 18 minutes, there is hardly 60 seconds discussing the actual tech, and it literally uses the term generation.
So to be clear, since you seem to be highly mistaken about this: DLSS uses image generation technology along with some very fancy edge detection to attempt to fill in gaps and generate extra details that are not present in the original image.
It is not rendering only the needed sections at higher resolution or anything along those lines, but I can see how someone may think that was implied by your video.
So again, now that I hopefully have shown you that I do in fact know more than a decent bit about how DLSS works, and you still have not provided more to back up your point beyond a video of some guy fucking around with Special K and going “whoa cool”…
What part of DLSS generating image data that does not exist in a lower resolution source image and using it to fill in what would otherwise be repeated pixels in a traditionally upscaled (nearest neighbor, bilinear, trilinear, etc) image… how is that not generative?
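To make the "repeated pixels" point concrete, here is a minimal sketch (my own toy example, not any vendor's code) of nearest-neighbor upscaling, the kind of traditional upscaling being contrasted with DLSS:

```python
import numpy as np

def nearest_neighbor_upscale(img, factor):
    """Classic non-generative upscaling: every output pixel is a straight
    copy of some input pixel, so no new image data appears."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

src = np.array([[0.0, 1.0],
                [1.0, 0.0]])
up = nearest_neighbor_upscale(src, 2)
# Each source pixel becomes a 2x2 block of identical copies; the set of
# pixel values in the output is exactly the set in the input. Any upscaler
# whose output contains values outside that set had to produce them from
# something other than simple duplication.
```

Bilinear and trilinear filtering blend between existing samples instead of duplicating them, but the same argument applies: the contested question is whether synthesizing detail beyond what interpolation can produce counts as "generative".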
Edit:
Would it kill you to not double the length of your goddamn comment after posting it?
I’ve got better things to do at this point than continue this, but at a glance I see that you took Nvidia’s news post’s wording as gospel.
Edit again:
It’s clear now, you got hung up on some misleading marketing wording in one of the headlines. You even admit it uses AI to generate additional image data. Stop being condescending.
Confirmation bias and closed-mindedness it is.
https://developer.nvidia.com/blog/nvidia-dlss-4-5-delivers-super-resolution-upgrades-and-new-dynamic-multi-frame-generation/
(DLSS 4.5)
DLSS 4 introduced a transformer model architecture with NVIDIA GeForce RTX 50 Series GPUs. That enabled a leap in image quality over our previous convolutional neural network. The second-gen transformer model for DLSS 4.5 Super Resolution uses 5x more compute and is trained on an expanded data set, so it has greater context awareness of every scene and more intelligent use of pixel sampling and motion vectors.
“It AlTeRs ThE fINaL iMaGe So It GeNeRaTeS iMaGe DaTa” at this point. I don’t think you are even bothering to check just how many things you could call image generation at that point.
you took Nvidia’s news post’s wording as gospel.
“ThE dEvElOpEr Is LyInG!”
NVIDIA might be many things, in marketing particularly so, but in this particular blog it is not. Then again, it’s like what I said:
Perhaps what you are actually referring to is the tendency for people to justify lying and throwing shade about a thing if they hate the thing associated with it enough.
Ergo, nothing NVIDIA says can be trusted now. If you were going to be this reductive, I'm not sure why you didn't open with this. It's a clear win from your perspective, but I don't think there's any hope of a shared reality between us. It's all a lie by big corpo, after all.
It's funny how you complain about me not providing more links while calling the most direct ones lies. All that would have accomplished is subjecting a creator to the same sort of shade you are trying to throw at me. After all, if the primary source of information is lying, those reporting on it are just spreading lies.
Not gonna subject other people to downvotes and harassment from assholes; they get enough of that already. I'm afraid you'll just have to disingenuously act as if you can't perform searches yourself, or as if the sources don't exist.
I was already pretty certain nothing I said could convince you, but it's going to be so funny when, in a few months, this take becomes obviously bad. I like to type and edit, sue me, although it's also funny how quickly you decided to participate in the endeavor too. Call it a chance to disengage.
It's just tragic how, despite having the capacity to know better, some people fool themselves. This is not image generation, buddy, and that's what AI slop typically refers to. The term AI long precedes the term AI slop.
Sorry, gonna have a wonderful day.
I call it how I see it: close-minded because of how set you are on arguing against something that seems rather evident; an asshole because you downvote first and don't provide explanations without an ordeal of an interaction that immediately begins with belittling me with false claims (there was plenty of backing up that you skipped over with your downvotes across the threads); and compared to MAGA because they are such an evident example of people stuck in their own bubble through extreme confirmation bias and closed-mindedness. I could be more respectful, but were you?
I think at this point in time, we have to come up with a term for these sorts of threads: circlejerk slop. Guys, stop making generative AI look good; as bad as it is, I'd choose it any day of the week over these circlejerk hallucinations. Do not expect them to carry across time and place.
I've actually just been corrected: Jensen did refer to this as employing some form of generative AI, and presumably he did not lie in this instance. It's also different enough from what I generally thought of as AI slop, and from my issues with it, that it could also be said that I am a supporter of generative AI now. I am surprised by the application of the label, but it does prove me wrong.
My guy, you literally linked some guy fucking around in Special K as supposedly an explanation of the tech, you misread a marketing headline as being technically descriptive, and you yourself even admit that it uses AI to generate, which is the common usage nowadays for the label of slop.
I definitely appreciate being called close minded, an asshole, and compared to MAGA for not agreeing with your personal stick up your ass about what you think is proper terminology though.
Have a rotten day.
Please refrain from spreading misinformation and toxic trolling. We do not condone this kind of behaviour on our instance.
Is relying on the NVIDIA release and developer blogs as the primary source the misinformation? Because that's what I'm relying on as my basis. If not, could you clear it up: what is the misinformation I'm spreading?
Is the misinformation present here? - https://lemmy.ca/post/61897561/22253720
Or here? - https://lemmy.ca/post/61900649/22251320
I just want to make sure what you consider misinformation, because it might be something I consider fact, and it might just be a controversial opinion under the circumstances. If it is something I consider fact, I'm not going to argue it, but you are going to have to tell me what it actually is so that I can censor myself.
In regards to my attitude, I'll be nicer and just suffer the downvotes for relying on primary sources. It's partly my fault, since I already suspected the conversation would not be fruitful, given the downvotes and the initial premise.
Is it limited to this particular community, or to the account?
From your own link:
NVIDIA ACE is a suite of AI technologies—spanning models, developer tools, and on-device inference—that's designed to help middleware and game developers build knowledgeable, actionable, and conversational in-game characters. The NVIDIA Nemotron Nano 9B V2 model is now available through NVIDIA ACE as an In-game Inferencing (IGI) SDK plugin. It simplifies the integration within your gaming pipeline and optimizes simultaneous AI inference and graphics processing for accelerated game performance.
and on NVIDIA's own DLSS 5 announcement page:
“Twenty-five years after NVIDIA invented the programmable shader, we are reinventing computer graphics once again,” said Jensen Huang, founder and CEO of NVIDIA. “DLSS 5 is the GPT moment for graphics — blending handcrafted rendering with generative AI to deliver a dramatic leap in visual realism while preserving the control artists need for creative expression.”
NVIDIA’s entire page is AI summaries and AI xyz tool, sdk, etc. Clearly they’re marketing this and are not hiding it.
Again, please take some time to reconsider before making condescending, trollish posts. This kind of behavior is not tolerated on our instance.
Final warning. If you continue to act in bad faith, you will be banned.
I stand corrected; I would not have referred to the process being described as generative AI, and neither did the sources I watched. Thank you for the clarification. I wasn't acting in bad faith, which unfortunately means I will have to take greater care, as something I did not intend seems to be happening, and under a rather broad definition at that. I'll lay off the topic. Thank you for the rather direct rebuttal; I would like to think I would have accepted it had it been offered elsewhere in the thread.
Feel free to bring up any other concerns, especially if your warning extends to any behavior beyond the current topic.