I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.

They send me documents they “put together” that are clearly ChatGPT generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.

I feel like I am living in a nightmare.

  • Clay_pidgin@sh.itjust.works
    1 day ago

    Our devs are implementing some ML for anomaly detection, which seems promising.
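(For readers unfamiliar with the idea: ML anomaly detection can be as simple as flagging points that sit far from the rest of the data. This is a minimal illustrative sketch, not the actual system the devs are building; the function name and the 2.5-sigma threshold are made up for the example.)

```python
# Minimal z-score anomaly detector: flag values more than `threshold`
# standard deviations from the mean. With small samples the maximum
# possible z-score is bounded (~sqrt(n-1)), so 2.5 is used instead of
# the textbook 3.
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return indices of values whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant series: nothing can be anomalous
    return [i for i, v in enumerate(values) if abs(v - mean) / stdev > threshold]

data = [10, 11, 9, 10, 12, 10, 11, 95, 10, 9]
print(zscore_anomalies(data))  # → [7], the spike at index 7
```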

    There’s also an LLM with MCP etc. that writes the pull requests and at least some documentation, so I guess our devs like it. The customers LOVE it, but it keeps making shit up and they don’t seem to mind. Stuff like “make a graph of usage on weekdays” and it includes 6 days some weeks. One customer generated a monthly report, the thing made up every scrap of data, and they missed the little note at the bottom where the damn thing said “I can regenerate this report with actual data if it is made available to me”.

    • ragas@lemmy.ml
      13 hours ago

      As someone who has done various kinds of anomaly detection, I can say it always seems promising until it hits real-world data and real-world use cases.

      There are widely recognised papers in this field about exactly this issue.

      • Clay_pidgin@sh.itjust.works
        9 hours ago

        Once an anomaly is defined, I usually find it easier to build a regular alert for it. I guess ML or an LLM would be most useful for finding problems I wasn’t looking for.
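(The “regular alert” approach above can be sketched in a few lines: once you know what the anomaly looks like, a plain threshold check is usually enough. The function name and the 5% limit here are hypothetical, purely for illustration.)

```python
# Rule-based alert: fire when the error rate over a window exceeds a
# fixed limit. No ML needed once the condition is known.
def check_error_rate(errors: int, requests: int, limit: float = 0.05) -> bool:
    """Return True (alert) when errors/requests exceeds the limit."""
    if requests == 0:
        return False  # no traffic, nothing to alert on
    return errors / requests > limit

print(check_error_rate(3, 200))   # 1.5% error rate → False, no alert
print(check_error_rate(30, 200))  # 15% error rate → True, fire the alert
```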