Uh huh…

  • Ech@lemmy.ca · 17 hours ago (edited)

    will all be under our artists’ control

    Literally impossible. The entire point of the tech is that it’s autonomous, that it can “improve” things moment by moment. That is, by definition, outside of their control. Also, it better be fucking optional, because only the 1% are playing games with dual 5090s. These fuckers are so out of touch.

    Also,

    This is a very early look

    Motherfucker, you say this is releasing within the year. How is this “very early”? It should be in the polishing up stages by any reasonable, professional timeline.

    • glimse@lemmy.world · 4 hours ago

      I’ve now read more about this and developers DO have a ton of control. They can choose which parts of the image to apply it to, and with what intensity. So I guess it’s not literally impossible.

    • glimse@lemmy.world · 8 hours ago

      I don’t see why it wouldn’t work the same way as shaders. There’s just no way a developer making a 3D puzzle game would be forced to have it enabled.

      • FauxLiving@lemmy.world · 7 hours ago

        You don’t understand, once DLSS 5 is released into the wild then nobody will have a choice. It’s basically Skynet, the end of the world, Snow Crash, a breach in the Black Wall.

        It will install itself the moment a person searches for Godot tutorials and nobody can ever disable it. It would be LITERALLY IMPOSSIBLE (didn’t you see that they said ‘literally’?!) for an artist to control.

        /s

        • glimse@lemmy.world · 7 hours ago

          I hate Nvidia and think this demo (mostly) looks like shit, but these hyperbolic reactions are making me feel like the crazy one. I know it’s janky and running on two cards, but it’s wild that it’s happening in real time, and IMO it’s really interesting tech. There are so many cool ways this could be applied beyond hyper-realism.

          • FauxLiving@lemmy.world · 4 hours ago

            You’re not crazy, you’re just reading a topic associated with AI and so it’s full of bots, their misinformation and outrage, and the idiots that are influenced by them.

            Like in all of these threads, we get insane bad-faith “arguments”, misinformation, and heavy vote manipulation.

            There are certainly valid criticisms of DLSS. It creates visual artifacts, it’s often used by games as a crutch for performance, and in the case of DLSS 5 the overall effect is weird, as you’ve said. I agree with a lot of the complaints and I’ll probably enable DLSS 5 once and then go back to native… but I think a lot of the comments here are just ridiculous, so you’re not alone there :P

            • glimse@lemmy.world · 4 hours ago

              I read some more about it and it looks like developers have a lot of granular control. Not just % applied but options per object type. So they can max it out for faces, 50% for water, 25% for foliage, etc.

              There are some legitimately awesome use cases for this, especially if they let developers train their own models. I didn’t play Death Stranding, but I know they’ve got detailed face scans of Norman Reedus… imagine if the Norman filter got applied to his character in-game.

              • FauxLiving@lemmy.world · 4 hours ago

                If it’s that controllable that’s pretty cool. I could see it being useful to do things that are normally expensive (like raytracing shadows on grass) but which don’t really matter if they’re altered a bit. Being able to exclude faces or important set pieces would be a big plus.

                Not that it matters much for me, my next card will likely be AMD for Linux reasons.

    • Uruanna@lemmy.world · 11 hours ago

      Motherfucker, you say this is releasing within the year. How is this “very early”? It should be in the polishing up stages by any reasonable, professional timeline.

      Don’t worry, they’ll speed up the dev and QA time with AI.

    • paraphrand@lemmy.world · 16 hours ago

      Claiming something is still in progress and that major changes can happen before release is a classic tech-industry public-relations game, and too many “influencers” take it at face value.