I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.
They send me documents they “put together” that are clearly ChatGPT generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.
I feel like I am living in a nightmare.


As someone who has done various kinds of anomaly detection, I find it always seems promising until it hits real-world data and real-world use cases.
There are some widely recognised papers in this field about exactly this issue.
Once an anomaly is defined, I usually find it easier to build a regular alert for it. I guess ML or an LLM would be most useful to me for finding problems I wasn’t looking for.
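To make that concrete, here’s roughly what I mean by a “regular alert” (a toy sketch; the metric name, window, and threshold are all made up):

    # Once the anomaly is defined ("error rate above 5% for 10
    # consecutive samples"), the alert is just a threshold check.
    from collections import deque

    WINDOW = 10        # number of recent samples to consider
    THRESHOLD = 0.05   # 5% error rate

    recent = deque(maxlen=WINDOW)

    def should_alert(error_rate):
        """Record the latest sample; fire once the whole window is over threshold."""
        recent.append(error_rate)
        return len(recent) == WINDOW and all(r > THRESHOLD for r in recent)

No model, no training data, and it’s trivial to explain to whoever gets paged. The hard part is deciding what counts as an anomaly in the first place, which is exactly where the ML pitch tends to fall apart.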