previous lemmy acct: @[email protected] see also: @[email protected]
This is a crisp answer, nice one.
sigh
You said it doesn’t matter if we can tell how something was made.
This conversation is over. Thanks.
Ah, to be fair, I did misinterpret your previous statement.
But no, I am arguing that we are not able to ignore knowledge of the production process. Nothing mystical about that.
That’s not a rejection of what I said, so I assume you agree.
No, I’m saying we can no longer meaningfully separate the product and the process.
I am saying that we can no longer meaningfully separate the two things.
First you said “it doesn’t matter if we can tell or not”, which I responded to.
So I’m confused by your reply here.
This argument strikes me as a tautology. “If we don’t care if it’s different, then it doesn’t matter to us”.
But that ship has sailed. We do care.
We care because the use of AI says something about our view of ourselves as human beings. We care because these systems represent a new serfdom in so many ways. We care because AI is flooding our information environment with slop and enabling fascism.
And I don’t believe it’s possible for us to go back to a state of not-caring about whether or not something is AI-generated. Like it or not, ideas and symbols matter.
If you can tell it was produced in a certain way by the way it looks, then that means it cannot be materially equivalent to the non-AI stock image, no?
I mean, it can’t really do ‘every random idea’ though, right? Any output is limited by how the system was trained to approximate certain stylistic and aesthetic features of imagery. For example, the banner image here has the stereotypical AI texture, lighting, etc. This shows us that the system has at least as much control as the user.
In other words: it is incredibly easy to spot AI-generated imagery, so if the output is obviously AI, then can we really say that the AI generated a “stock image”, or did it generate something different in kind?
I dunno what to tell you other than that I have been consistently pointing out that AI is a process, not a tool.
If the result of that process is the same wherever it’s introduced, then your model of the world has to be able to account for that.
Want to know how I know that it does?
Because the result is the same over and over and over and over and over again. Every single time!
Look at how great AI is for Colombian students as a way to change their material conditions 🙌 https://restofworld.org/2025/colombia-meta-ai-education/
You think? I dunno, I could totally see Marx getting ChatGPT to generate a quick first draft of Capital, would definitely speed up the process of spreading his ideas … I mean spreading his material
If I made a tool which literally said to you, out loud in a tinny computerised voice, “cognitive effort isn’t worth it, why are you bothering to try”, would it be fair to say it was putting forward the idea that cognitive effort isn’t worth it and why bother trying?
If so, what’s the difference when that statement is implied by the functioning of the AI system?
And that social role is, at least in part, to advance the idea that communication and cognition can be replicated by statistically analyzing an enormous amount of input text, while ignoring the human and social context and conditions that actual communication takes place in. How can that not be considered political?
AI is a process, a way of producing; it is not a tool. The assumptions baked into that process are political in nature.
Literally the only thing AI does is cause its users to cede autonomy. Its only function is to act as a (poor) facsimile of human cognitive processing and resultant output (edit: perhaps more accurate to say its function is to replace decision-making). This isn’t a hammer, it’s a political artifact, as Ali Alkhatib’s essay ‘Defining AI’ describes.
The products of artisanal labour and factory labour might indeed be equivalent in terms of the end product’s use value, but they are not equivalent as far as the worker is concerned; the same loss of autonomy, the same loss of opportunity for thought, problem-solving, learning, and growing: these are part of the problem with capitalist social relations.
I’m trying to say that AI has this social relation baked in, because its entire purpose is to have the user cede autonomy to the system.
A no-brainer. If you’re required to be somewhere as part of your job then you must be paid for that time. Amazing that it’s apparently legal to not do this.