Uh huh…
will all be under our artists’ control
Literally impossible. The entire point of the tech is that it's autonomous, that it can “improve” things moment by moment. That is, by definition, outside of their control. Also, it had better be fucking optional, because only the 1% are playing games with dual 5090s. These fuckers are so out of touch.
Also,
This is a very early look
Motherfucker, you say this is releasing within the year. How is this “very early”? It should be in the polishing up stages by any reasonable, professional timeline.
I’ve now read more about this and developers DO have a ton of control. They can choose what parts of the image to apply it to and with what intensity. So I guess it’s not literally impossible.
I don’t see why it wouldn’t work the same way as shaders. There’s just no way a developer making a 3d puzzle game would be forced to have it enabled
You don’t understand, once DLSS 5 is released into the wild then nobody will have a choice. It’s basically Skynet, the end of the world, Snow Crash, a breach in the Black Wall.
It will install itself the moment a person searches for Godot tutorials and nobody can ever disable it. It would be LITERALLY IMPOSSIBLE (didn’t you see that they said ‘literally’?!) for an artist to control.
/s
I hate Nvidia and think this demo (mostly) looks like shit but these hyperbolic reactions are making me feel like the crazy one. I know it’s janky and running on 2 cards but it’s wild that it’s happening in real time and IMO it’s really interesting tech. There are so many cool ways this could be applied beyond hyper realism
You’re not crazy, you’re just reading a topic associated with AI and so it’s full of bots, their misinformation and outrage, and the idiots that are influenced by them.
Like all of these threads, we get these insane bad faith ‘arguments’, misinformation and heavy vote manipulation.
There are certainly valid criticisms of DLSS. It creates visual artifacts, it’s often used by games as a crutch to claw back performance, and in the case of DLSS 5 the overall effect is weird, as you’ve said. I agree with a lot of the complaints and I’ll probably enable DLSS 5 once and then go back to native… but I think that a lot of comments here are just ridiculous, so you’re not alone there :P.
I read some more about it and it looks like developers have a lot of granular control. Not just % applied but options per object type. So they can max it out for faces, 50% for water, 25% for foliage, etc.
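To make the idea concrete, here's a minimal sketch of what that kind of per-object-type control could look like. To be clear, every name here is invented for illustration; this is not NVIDIA's actual API, just the shape of the feature as described above.

```python
# Hypothetical sketch of per-object-type intensity control.
# All names here are made up for illustration; not NVIDIA's real API.
from dataclasses import dataclass, field

@dataclass
class EnhanceConfig:
    # Intensity per object category: 0.0 = effect off, 1.0 = full effect.
    intensities: dict = field(default_factory=dict)

    def set_intensity(self, category: str, value: float) -> None:
        # Clamp so a bad value can't push the effect out of range.
        self.intensities[category] = max(0.0, min(1.0, value))

    def intensity_for(self, category: str) -> float:
        # Categories the developer never configured default to disabled,
        # so the effect stays opt-in per object type.
        return self.intensities.get(category, 0.0)

# "Max it out for faces, 50% for water, 25% for foliage":
cfg = EnhanceConfig()
cfg.set_intensity("faces", 1.0)
cfg.set_intensity("water", 0.5)
cfg.set_intensity("foliage", 0.25)
print(cfg.intensity_for("faces"))  # → 1.0
print(cfg.intensity_for("ui"))     # → 0.0 (never configured, so off)
```

The point of the defaults-to-zero lookup is that nothing gets the effect unless the developer explicitly opts that object type in, which is the opposite of the "forced on everyone" scenario above.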
There are some legitimately awesome use cases for this especially if they let developers train their own models. I didn’t play Death Stranding but I know they’ve got detailed face scans of Norman Reedus…imagine if the Norman filter got applied to his character in-game.
If it’s that controllable that’s pretty cool. I could see it being useful to do things that are normally expensive (like raytracing shadows on grass) but which don’t really matter if they’re altered a bit. Being able to exclude faces or important set pieces would be a big plus.
Not that it matters much for me, my next card will likely be AMD for Linux reasons.
Motherfucker, you say this is releasing within the year. How is this “very early”? It should be in the polishing up stages by any reasonable, professional timeline.
Don’t worry, they’ll speed up the dev and QA time with AI.
Claiming something is still in progress and that major changes can happen before release is a classic tech industry public relations game, and too many “influencers” take it at face value.
It’s infuriating.
It was a matter of time before AI slop came to video games. But the solution is rather easy: just don’t buy those games.

Didn’t Todd outright say ‘this is the way I would have wanted Starfield to look’? I swear that was said in the Digital Foundry video.
This is damage control until we can get it in our hands, hoping we forget we don’t like the slop they’re feeding their money pigs.
Time to…

I think it’s a testament to DLSS 5 that people are calling it AI slop and can’t seem to recognize that the geometry/shape aren’t being modified, just lighting and material effects. Reminds me of this: https://www.youtube.com/watch?v=DKCyk3CeUFY
There is a lot of valid criticism of AI slop, and there’s even a lot of criticism of DLSS multi frame generation, but people who misuse the same term for everything just strip it of whatever meaning and credibility it had. For example, this technology wasn’t even trained on IP theft, for a change!
The term slop is essentially meaningless.
It’s like people that use ‘woke’ as an insult: it applies to everything they don’t like, despite nobody having a clear definition of what it actually means.
To me, slop is the mass-produced articles/videos created by generative AI, not ‘everything that is done with machine learning’.
Simply calling everything AI ‘slop’ is meaningless virtue signaling, like using ‘woke’.
It’s a single shot, picked to showcase the technology. Even here the ear’s outline is messed up.
But more importantly, the replacement material/texture is wrong. It’s way too bright and sharp. It’s no more realistic than the original, it simply has different drawbacks, and frankly it looks jarringly out of place. It also fucks up the eyes’ tracking and the water ripples on the ground.