• ☆ Yσɠƚԋσʂ ☆@lemmy.mlOP
    2 hours ago

    I mean they want that, but it’s just not happening with the way the tech works right now. Unless you have a deep understanding of the problem you’re having the model solve, you have no way to evaluate whether it solved it correctly. And it’s basically like an evil genie: by default, it will interpret your request in the dumbest way possible. So you get good results when you already know what the shape of the solution should be, and you give the model concrete direction on the approach to take, the algorithms to use, and so on. And that’s why the whole idea of deskilling workers or replacing them with automation isn’t really working out. You’d need genuine artificial intelligence for that, and LLMs aren’t it.
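    To make the “concrete direction” point tangible, here’s a toy sketch. The two prompts and the crude `specificity` heuristic are entirely invented for illustration; the idea is just that the directed prompt pins down the shape of the solution (sort once, then binary search) while the vague one leaves everything to the model:

    ```python
    # Hypothetical example: the same task phrased vaguely vs. with the
    # approach spelled out. Prompts and heuristic are made up.
    vague_prompt = "Make this list lookup faster."

    directed_prompt = (
        "Rewrite the lookup to sort the list once and then use binary "
        "search (bisect.bisect_left) for each query; keep the public "
        "function signature unchanged."
    )

    def specificity(prompt: str) -> int:
        """Crude proxy: count concrete technical terms named in the prompt."""
        terms = ["sort", "binary search", "bisect", "signature"]
        return sum(term in prompt.lower() for term in terms)

    print(specificity(vague_prompt))     # names no concrete technique
    print(specificity(directed_prompt))  # names the algorithm and constraints
    ```

    The vague prompt leaves the “genie” free to pick any interpretation; the directed one only asks the model to fill in mechanical details you could already verify.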