• Noxy@pawb.social
      6 days ago

      how does one even know and verify what an LLM’s “sources” are? wouldn’t it just vomit out whatever response and then find “sources” that happen to match its stupid output after the fact?

      • Pennomi@lemmy.world
        6 days ago

        Precisely my point. But if it is correct and can link to an authoritative source (e.g. a news article), that is relatively easy to verify.

        How much you can trust a news article is still up for debate.

      • Pennomi@lemmy.world
        6 days ago

        Agreed, I’m more worried about people blindly trusting AI than I am about this particular situation.