I hate Nvidia and think this demo (mostly) looks like shit but these hyperbolic reactions are making me feel like the crazy one. I know it’s janky and running on 2 cards but it’s wild that it’s happening in real time and IMO it’s really interesting tech. There are so many cool ways this could be applied beyond hyper realism
You’re not crazy, you’re just reading a topic associated with AI, so it’s full of bots, their misinformation and outrage, and the people who are influenced by them.
Like all of these threads, we get insane bad-faith ‘arguments’, misinformation, and heavy vote manipulation.
There are certainly valid criticisms of DLSS: it creates visual artifacts, it’s often used by games as a crutch for performance, and in the case of DLSS 5 the overall effect is weird, as you’ve said. I agree with a lot of the complaints, and I’ll probably enable DLSS 5 once and then go back to native… but a lot of comments here are just ridiculous, so you’re not alone there :P.
I read some more about it, and it looks like developers have a lot of granular control. Not just the overall % applied, but settings per object type. So they can max it out for faces, use 50% for water, 25% for foliage, etc.
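Just to sketch what that per-object control might look like (none of these names come from any actual NVIDIA API; the class names and weights are made up to illustrate the idea of blending the filtered output per object class):

```python
# Hypothetical per-object-class filter strengths, mirroring the example above:
# full strength for faces, half for water, a quarter for foliage.
FILTER_STRENGTH = {
    "face": 1.00,
    "water": 0.50,
    "foliage": 0.25,
}

def blend(native: float, filtered: float, object_class: str) -> float:
    """Linearly blend native and AI-filtered values by the class weight.

    Unknown classes fall back to 0.0, i.e. the native render is untouched.
    """
    w = FILTER_STRENGTH.get(object_class, 0.0)
    return (1.0 - w) * native + w * filtered
```

So a face pixel would take the filtered result entirely, water would be a 50/50 mix, and anything unlisted would stay native.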
There are some legitimately awesome use cases for this especially if they let developers train their own models. I didn’t play Death Stranding but I know they’ve got detailed face scans of Norman Reedus…imagine if the Norman filter got applied to his character in-game.
If it’s that controllable, that’s pretty cool. I could see it being useful for things that are normally expensive (like ray-traced shadows on grass) but that don’t really matter if they’re altered a bit. Being able to exclude faces or important set pieces would be a big plus.
Not that it matters much for me, my next card will likely be AMD for Linux reasons.