Thinking specifically about AI here: if a process does not give a consistent or predictable output (and cannot reliably replace work done by humans), then can it really be considered “automation”?

  • sbv@sh.itjust.works · 2 months ago

    If it’s “good enough” for the task, yes.

    Many tasks have loose success parameters and an acceptable failure rate. If the automation fits within those, and it simplifies my day, then it’s reasonable automation.

  • TheOubliette@lemmy.ml · 2 months ago

    Automation is just using technology to replace human labor, so yes; the exact mechanism doesn’t change that. “AI” is a buzzword, but LLMs have already replaced human labor in various ways, even though most of the applications are hype/BS. For example, they have certainly taken a bite out of stock images and product graphic design.

    Individual capitalists must seek out automation because reducing labor cost without decreasing productivity means higher profit for them. Capital in aggregate seeks automation because it disciplines labor: it lets you threaten and mistreat labor more easily. In that sense, “AI” serves the same purpose as historical automation even when it fails to substitute for labor as a productive input. Companies can threaten their employees with “AI” that doesn’t work, and they can rebrand firings as layoffs using media discourse that overhypes “AI” on their behalf; it is all part of the PR universe.

  • howrar@lemmy.ca · 2 months ago

    Consider this example:
    You have a road that forks and joins up again. You need to reach the end of this road, and you have a vehicle that takes you there without your input. At the fork, it will flip a coin and take either the left fork or the right fork depending on the result. This agent is therefore stochastic. But no matter what it chooses, it ends up at the same place at the same time. Do you consider this to be automation?
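
    A minimal sketch of that thought experiment in Python (the names and structure are purely illustrative, not from any real system):

    ```python
    import random

    def drive(seed=None):
        """The vehicle flips a coin at the fork (a stochastic choice),
        but both forks rejoin, so it always arrives at the same place
        (a deterministic outcome)."""
        rng = random.Random(seed)
        branch = rng.choice(["left fork", "right fork"])  # the coin flip
        destination = "end of road"  # identical whichever branch is taken
        return branch, destination

    # The branch varies from run to run, but the destination never does.
    print({drive()[0] for _ in range(100)})  # likely {'left fork', 'right fork'}
    print({drive()[1] for _ in range(100)})  # always {'end of road'}
    ```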

  • Cowbee [he/they]@lemmy.ml · 2 months ago

    Depends on the use case. AI isn’t a panacea, and the excessively pro-AI camp is deeply unserious, but there are some cases where it can function fairly well, like stock image creation, which no longer requires backdrops, props, actors, etc. for every random idea. That’s the extent of it, really.

    • patatas@sh.itjust.worksOP · 2 months ago

      I mean, it can’t really do ‘every random idea’ though, right? Any output is limited by the way the system was trained to approximate certain stylistic and aesthetic features of imagery. For example, the banner image here follows stereotypically AI-style texture, lighting, etc. This shows that the system has at least as much control over the result as the user does.

      In other words: it is incredibly easy to spot AI-generated imagery. So if the output is obviously AI, can we really say that the AI generated a “stock image”, or did it generate something different in kind?

      • Cowbee [he/they]@lemmy.ml · 2 months ago

        “Every random idea” means AI can take the place of some stock photos, not all: we don’t need the traditional stock-photo process for every random idea, because AI can replace some of them. As for the quality of the output, that varies from case to case. Further, the idea isn’t to replace human art in general, but to exist alongside it in instances where a human artisanally producing the image isn’t the purpose, only the traditional means to an end. Therefore, it doesn’t actually matter whether we can tell or not; the goal isn’t to deceive, though even that distinction is getting blurrier and blurrier as AI improves.

        Essentially, if an AI image can fulfill the same purpose as a stock image, then creating the stock image through traditional means is just an unnecessary expenditure of effort. We don’t traditionally appreciate stock images for their artistic merit; we value them for a visual function, be it to convey information or otherwise, not because our goal is to appreciate and understand the artistic process a human went through to create them.

        • patatas@sh.itjust.worksOP · 2 months ago

          If you can tell it was produced in a certain way just by how it looks, then it cannot be materially equivalent to the non-AI stock image, no?

          • Cowbee [he/they]@lemmy.ml · 2 months ago

            These are distinct hypotheticals.

            In the first case, the question is: if it is equivalent, does the use-value change? The answer is no.

            In the second case, the question is “if we can tell, does it matter?” And the answer is yes in some cases, no in others. If the reason we want a painting is its artisanal creation, but it turns out it was AI-generated, then it fundamentally cannot satisfy the use of appreciating an image for having been artisanally made. If the reason we want an image is to convey an idea, such that doing so is faster, easier, and higher quality than an amateur sketch, but the image in no way needs to be appreciated for its artisanal creation, then it does not matter whether we can tell or not.

            Another way of looking at it is a mass-produced chair versus a hand-crafted one. If I want a chair that lets me sit, then it doesn’t matter which chair I have; both satisfy the same need, so they are equivalent. If I have a specific vision and desire the chair as an artisanal object, say, one created in a historical way, then the two cannot be equivalent use-values for me.

            • patatas@sh.itjust.worksOP · 2 months ago

              This argument strikes me as a tautology: “if we don’t care whether it’s different, then it doesn’t matter to us.”

              But that ship has sailed. We do care.

              We care because the use of AI says something about our view of ourselves as human beings. We care because these systems represent a new serfdom in so many ways. We care because AI is flooding our information environment with slop and enabling fascism.

              And I don’t believe it’s possible for us to go back to a state of not-caring about whether or not something is AI-generated. Like it or not, ideas and symbols matter.

              • Cowbee [he/they]@lemmy.ml · 2 months ago

                “We” in this moment is you, right now. If the end product is the same, then it is the same. If the process is the use-value, then it matters; if not, it doesn’t.

                Ideas and symbols matter, sure, but not because of any metaphysical value you ascribe to them; they matter because of the ideas they convey.

                • patatas@sh.itjust.worksOP · 2 months ago

                  First you said “it doesn’t matter if we can tell or not”, which I responded to.

                  So I’m confused by your reply here.