It’s an interesting subject, so I was disappointed when it became clear that this text was written by AI. AI uses the structure “It’s not X, it’s Y” very often, so the list below tells me it’s AI beyond doubt.
I.
This is not speculation. This is not inference from supply chain tightness or price movements or anecdotal reports from frustrated procurement officers. This is the documented operational reality of…
This is not an analysis of a commodity market experiencing temporary tightness. This is a reconnaissance report from the front lines of a new form of economic warfare
This is not the blunt instrument of a traditional export ban. This is a scalpel.
The pattern suggests not reactive retaliation but proactive strategy.
The restrictions announced in 2023, 2024, and 2025 are not isolated policy responses to specific trade disputes. They are nodes in an integrated campaign
Tungsten is not the final escalation. It is another proof of concept.
The question for Western policymakers and corporate strategists and institutional investors is not whether to take this seriously (…) The question is what to do about it
II.
This is not marketing rhetoric from mining promoters or special pleading from industry lobbyists. This is physics.
These properties are not arbitrary. They emerge from tungsten’s electronic structure
This is not an abstract supply chain concern. This is industrial capacity disappearing in real time
III.
This is not a simple on-off switch. It is a tunable instrument with multiple control parameters
(Anything further is for paid subscribers.)
Well that’s annoying.
It can be hard to know how much a writer relied on an LLM or how much revising/editing/proofreading they did before publishing, especially on topics you’re not versed in.
These tells are going to get harder to clock very quickly.
If you want your work to be taken seriously by people who understand the limitations of LLMs, then you should edit for tells (or don’t use one).
People adopt conventions, and these LLMs are making overuse of this phraseology pervasive. The influence is dialectical.
LLM> also don’t use em dashes
You might as well get used to it: LLMs are a tool that’s in wide use, and it’s delusional to think that news sites will not be using them. Personally, I absolutely do not care if the text was formatted by AI, as long as the content is factual.
I am most likely unable to spot every AI-generated article, but when I notice a typical AI giveaway appearing this many times, it bothers me. It’s the same as with English-language errors: I probably couldn’t spot all of them, but when there are so many that I start noticing them, it becomes annoying and undermines the author’s credibility. Especially since this article doesn’t cite any sources.
If it’s partially written by AI, then the writing is either hurried, lazy, or some other bad thing.
At best, it shows it was written under deadline with someone watching the clock.
Real writing > lazy YouTube videos > AI assisted.
The writing style is annoying, though, and it essentially eliminates the authorial voice of every writer. Everyone is churning out the same slop, everything sounds the same, and all difference is being eliminated. Even writers who don’t use the slop machines sound like this, because it’s all they read. It’s only going to get worse; the internet is fucking ruined.
It’s not just AI users, it’s writers.
I read these articles for the content, and news writing was terrible long before LLMs. At least this way it’s written closer to a summary that you can scan through easily. You’ll be glad to know that people are already working on stuff like this, so LLM-generated content is going to read very much like traditional human-written content before long. https://muratcankoylan.com/projects/gertrude-stein-style-training/
Fine-tuning costs money, which means they aren’t going to do it. I fully expect they’ll settle for slop (they already have), and so will everyone else. You might as well get used to it. Everything gets worse forever and nothing ever gets better.
LoRAs are actually really cheap and fast to make. The article I linked explains how it literally took two bucks to do. I don’t really think anything is getting worse forever. Things are just changing, and that’s the one constant in the world.
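To give a sense of why it’s so cheap: a LoRA freezes the base model and trains only small low-rank adapter matrices bolted onto a few layers. Here’s a minimal sketch using Hugging Face’s peft library; the base model (gpt2) and the hyperparameters are illustrative choices on my part, not details from the linked article:

    # Minimal LoRA sketch: freeze the base model, train only small
    # low-rank adapter matrices injected into the attention projection.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model

    config = LoraConfig(
        r=8,                        # rank of the adapter matrices
        lora_alpha=16,              # scaling applied to the adapter output
        target_modules=["c_attn"],  # GPT-2's fused attention projection
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(base, config)
    model.print_trainable_parameters()
    # prints something like: trainable params: ~295K || all params: ~125M || trainable%: ~0.24

Training a quarter of a percent of the weights is why a style adapter can fit into a couple of dollars of rented GPU time rather than a data-center budget.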
And it was still something like 30% detectable as AI. That tells me that every article will still read as samey, even if it’s different enough to fool a tool that was trained on the current trends. Authorial voice is lost, replaced by the model’s voice.
It was only when they trained on authors specifically, which cost $81, that it dropped down to 3%. They won’t do that.
Give it a year and we’ll see. These things are improving at an incredible pace, and costs continue to come down as well. Things that required a data center just a year ago can now be done on a laptop.