I have a boss who tells us weekly that everything we do should start with AI. Researching? Ask ChatGPT first. Writing an email or a document? Get ChatGPT to do it.
They send me documents they “put together” that are clearly ChatGPT-generated, with no shame. They tell us that if we aren’t doing these things, our careers will be dead. And their boss is bought into AI just as much, and so on.
I feel like I am living in a nightmare.
My boss loves it and keeps suggesting we try it. Luckily, there isn’t much use for it in our line of work.
Our company is forcing us to do everything with AI. Hell, they developed a “tool” that generates simple apps our customers can use for their enterprise applications, and we’re forced to generate a minimum of 2 a week to “experiment” with the constant new features being added by the dev teams behind it (but we’re basically training it).
The director uses AI spy bots to tell him who read and who didn’t read his emails.
Can’t even commit code to our corporate GitHub unless Copilot gives it the thumbs up, and it’s constantly nitpicking things like how we wrote our comments and asking us to replace pronouns or write them a different way, which I always reply to with “no” because the code is what matters here.
We are told to be positive about AI and to push the use of AI into every facet of our work lives, but I just feel my career as a developer ending because of AI. We’re a bespoke software company, and our customers are starting to ask if they should use AI to build their software instead of paying us, and I then have to spend hours explaining to them why that would be a disaster, given the sheer complexity of what they want to build.
Most if not all executives I talk to are idiots who don’t understand web development, shit some don’t even understand the basics of technology but think they can design an app.
After being a senior dev and writing code for 15 years I’m starting to look at other careers to switch to… Maybe becoming an AI evangelist? I hear companies are desperately looking for them… Lol, what a fucking disaster this shit is becoming.
I am very, very concerned at how widely it is used by my superiors.
We have an AI committee. When ChatGPT went down, I overheard people freaking out about it. When our paid subscription had a glitch, IT sent out emails very quickly to let them know they were working to resolve it ASAP.
It’s a bit upsetting because many of them are using it to basically automate their jobs (write reports & emails). I do a lot of work to ensure that our data is accurate, from manual data entry by a lot of people… and they just toss it into an LLM to convert it into an email… and they make like 30k more than me.
The head of my agency is a gullible rube who is terrified of being “left behind”, and the head of my department is a grown-up with a family and a career who spends his days off sending AI videos and memes into the work chat.
I’ve been called into meetings and told I have to be positive about AI. I’ve been told to stop coding and generate (very important) things with AI.
It’s disheartening. My career is over, because I have no interest in generating mountains of no-intention code rather than putting in the effort to build reliable, good, useful things for our clients. AI dorks can’t fathom human effort and hard work being important.
I’m working to pay off my debts, and then I’m done. I strongly want to get a job that allows me to be offline.
It almost sounds like we’re both in the same company.
it doesn’t exist. but i work for a company that does real work. it doesn’t bullshit.
what does your company do?
I use Excel at work, not in a traditional accounting sense, but my company uses it as an interface with one of our systems I frequently work with.
Rather than tediously search the main Excel sheets that get fed into that system for all of the data fields I have to fill in, I made separate Excel tools that consolidate all of that data, then use macros to put the data into the correct fields on the main sheets for me.
Occasionally I’ll have to add new functionality to that sheet, so I’ll ask AI to write the macro code that does what I need it to do.
Saves me from having to learn obscure VBA programming to perform a function that I do during .0001% of my work time, but that’s about the extent of it. For now.
Of course most of what I do is white collar computer work, so I’m expecting that my current job likely has a two-year-or-less countdown on it before they decide to use AI to replace me.
Surprisingly reasonable?
I was terrified that entering the corporate world would mean being surrounded by people who are obsessed with AI.
Instead like… The higher-ups seem to be bullish on it and how much money it’ll make them (… And I don’t mind because we get bonuses if the corp does well), but even they talk about how “if you just let AI do the job for you, you’ll turn in bad quality work” and “AI just gets you started, don’t rely on it”
We use some machine learning stuff in places, and we have a local chatbot model for searching through internal regulations. I’ve used Copilot to get some raw ideas which I cooked up into something decent later.
It’s been a’ight.
This is the way. I honestly don’t care how the execs think about ai or if they use it themselves, but don’t force its usage on me. I’ve been touching computers since before some of them were born. For me it’s just one extra tool that gets pulled out in very specific scenarios and used for a very short amount of time.
It’s like the electric start on my snowblower - you don’t technically need it, and it won’t do the work for you (so don’t expect it to), but at the right time it can be extremely nice to have.
So far it’s a glorified search engine, which it is mildly competent at. It just speeds up collecting the information I would have gathered anyway, and then I can get to sorting useful from useless faster.
That said, I’ve seen emails from people that were written with AI, and it instantly makes me less likely to take them seriously. Just tell me what the end goal is and we can discuss how to best get there, instead of regurgitating some slop that wouldn’t get us there in the first place!
I feel like giving AI our information on a regular basis is just training AI to do our jobs.
I’m a teacher and we’re constantly encouraged to use Copilot for creating questions, feedback, writing samples, etc.
You can use AI to grade papers. That sure as shit shouldn’t happen.
My subordinate is quite proud of the code AI produces based off his prompts. I don’t use AI personally, but it surely is a tool. Don’t know why one would be proud of work they didn’t do and can’t explain, though. I have to manage the AI use to a “keep it simple” level: use AI if there is a use case, not just because it is there to be used…
Dumbass senior contract person and program managers are all for using copilot and I’ve caught several people using chatgpt as a search engine or at least that’s what they tell me they think it is.
My company is doing small trial runs and trying to get feedback on whether the stuff is helpful. They are obviously pushing things because they are hopeful, but most people report that AI is helpful about 45% of the time. I’m sorry your leadership just dove in head first. That sounds like such a pain.
Sounds like your company is run by people who are a bit more sensible and not driven by hype and fomo.
Hype and FOMO are the main drivers of the Silicon Valley economy! I hate it here.
Our devs are implementing some ML for anomaly detection, which seems promising.
There’s also an LLM with MCP etc. that is writing the pull requests and some documentation at least, so I guess our devs like it. The customers LOVE it, but it keeps making shit up and they don’t mind. Stuff like “make a graph of usage on weekdays” and it includes 6 days some weeks. They generated a monthly report for themselves, and it made up every scrap of data, and the customer missed the little note at the bottom where the damn thing said “I can regenerate this report with actual data if it is made available to me”.
As someone who has done various kinds of anomaly detections, it always seems promising until it hits real world data and real world use cases.
There are some widely recognised papers in this field, just about this issue.
Once an anomaly is defined, I usually find it easier to build a regular alert for it. I guess the ML or LLM would be most useful to me in finding problems that I wasn’t looking for.
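To make the point concrete, a “regular alert” for a defined anomaly can be as simple as a threshold on deviation from a recent baseline. Here’s a minimal Python sketch; the window size, threshold, and sample readings are illustrative assumptions, not from any real system:

```python
def check_alert(history, latest, threshold=3.0, window=20):
    """Fire when the latest point deviates from the recent mean
    by more than `threshold` recent standard deviations."""
    tail = history[-window:]
    mean = sum(tail) / len(tail)
    var = sum((v - mean) ** 2 for v in tail) / len(tail)
    std = var ** 0.5 or 1e-9  # guard against flat data (std == 0)
    return abs(latest - mean) / std > threshold

readings = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.3, 9.7]
print(check_alert(readings, 10.1))  # ordinary point -> False
print(check_alert(readings, 25.0))  # obvious spike  -> True
```

A rule like this is transparent and cheap to run once you know what you’re looking for, which is exactly why ML only earns its keep on the anomalies you couldn’t define in advance.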
I work in IT, and many of the managers are pushing it. Nothing draconian; there are a few true believers, but the general vibe is that everybody is pushing it because they feel they’ll be judged if they don’t.
Two of my coworkers are true believers in the slop; one of them is constantly saying he’s been “consulting with ChatGPT” like it’s an oracle or something. Ironically, he’s the least productive member of the team. It takes him days to do stuff that takes us a few hours.
This is all of tech right now.