• stoly@lemmy.world · 8 hours ago

    I love how everyone is so desperate to make Gabe out to be a terrible person.

  • commander@lemmy.world · 11 hours ago (edited)

    We're acting like people in the art community weren't hyped up about AI until it started generating images. Before ChatGPT, it was all about automating coding/IT and other jobs that aren't considered art. Back then it was all about how everyone could pursue their passions. The only people not excited were the transportation employees and factory workers who had been told by the general public how excited it was to replace them.

    • loonsun@sh.itjust.works · 9 hours ago

      As a social scientist, pre-ChatGPT NLP was like opening up a whole new world of possibilities. We could finally analyze, at scale, one of the richest sources of behavioural data in an empirical, statistically driven manner.

      Now, even as I continue doing NLP research toward those goals, I can't bring myself to ever defend these tools. If they disappeared tomorrow, we'd lose a tool, but we'd prevent so much undue suffering.

    • brucethemoose@lemmy.world · 14 hours ago (edited)

      The writing was on the wall for years. I remember memes about Altman in machine learning forums/chatrooms circa 2020, and especially 2021.

      Nothing’s changed. Anyone in the space who actually looked at what he was doing, knew. Yet the bulk of the public (and investors) lapped the Tech Bro stuff up.

      • mojofrododojo@lemmy.world · 9 hours ago

        Aaron Swartz said Altman was a sociopath years before AI was a gleam in anyone’s eye.

        The technologies with the worst potential outcomes will always be pioneered by people with no ethical or moral hangups getting in the way.

        • bitjunkie@lemmy.world · 8 hours ago

          Which unfortunately are the same techs that will be elevated by our present economic structure, precisely because those traits are what enable them to make (or grift) a shitload of money.

  • PonyOfWar@pawb.social · 19 hours ago

    Obligatory reminder that billionaires are not our friends. But also, donating to AI research in 2018 is quite a different matter than if he had done so in recent years. Most people in tech were somewhere between neutral and enthusiastic towards machine learning back then and few foresaw the monster it would become. Doubt he’s as enthusiastic nowadays, considering what it did to Valve’s hardware ambitions.

    • thingsiplay@lemmy.ml · 18 hours ago

      Obligatory reminder that billionaires are not our friends.

      Why does this even come up?

      • TrickDacy@lemmy.world · 17 hours ago

        Because a lot of people equate “some are less harmful than others” with “I fucking love this guy and think he’s a harmless saint!”

      • FauxLiving@lemmy.world · 17 hours ago

        If you can mentally separate the technology from the capitalist orgy around trying to shoehorn LLMs into every possible thing, he’s not wrong.

        The technology has promise, but the reality of what it can be useful for is completely overshadowed by the hype frenzy declaring the end of all knowledge workers and creatives.

        LLMs are significantly better at translation than anything we’ve been able to design, for instance. But that’s not flashy, it doesn’t generate seed funding or lure investors so it’s largely not what people think of when they hear “AI”.

        • pulsewidth@lemmy.world · 13 hours ago

          Nah, sorry. If Gabe looked at the LLM mess of the last 5+ years and is still pumping it as ‘ermagerd, this is technology that rivals the importance of the internet, or computers themselves’, he is cooked on marketing hype.

          It’s still crap.

          Its most promising commercial application among paid models (coding) still produces code more slowly than professional coders do, when actually measured in studies.

          The only goals it's hit are making a few jerks wealthier, moving the wealth-inequality needle further toward the billionaires, and setting us up for the next global financial crisis, which we'll all be bailing them out of and suffering decades-long global recessions through.

          I reckon it'll hit in 2027; that's looking like when the money guys will finally be completely out of wiggle room and there'll be no more cash for the cash fire.

  • Rentlar@lemmy.ca · 17 hours ago (edited)

    At that time it was still more of a research project than an “it's going to take over everything” hype and FUD machine.

    His opinions on AI today seem more enthusiastic than I would be, but well clear of the delusional level of AI-boosters.