*12 Dec, 2025*
The internet is now flooded with AI slop. So much of the content I see on social media is transparently generated by ChatGPT. LinkedIn is the worst offender. It reminds me that there are people who cannot even write a complete sentence; they’ve effectively been replaced by a bot. This should make them miserable, yet they seem thrilled with the fake views and attention they’re getting. They don’t mind appearing smart without doing the work required to become smart.
We often blame AI for degrading writing standards. Patterns like “not this… but that,” excessive em dashes, vague abstractions, and formulaic phrasing have made AI-generated prose easy to spot. But before rushing to condemn AI for flooding the internet with slop, we should ask how AI produced this slop in the first place. These models had to be trained; they learned writing from their training data. AI is trained on our collective output—so when we hate the slop, we are, in effect, hating ourselves.
We are also implying that the very patterns we now notice in AI were somehow absent in our own writing. I read widely, and the patterns we blame AI for already exist in older books in almost identical form: the comparative clause, the em dash, the long wandering sentence, the fondness for abstractions. They were always there. We simply never paid attention until a chatbot reflected them back at us in exaggerated form.
The difference is that earlier these patterns were concealed within the larger architecture of good writing. AI, trained on these patterns, assumed they constituted the essence of good writing and began applying them indiscriminately, stripping them of context and subtlety. Now that the patterns stand naked before us, we resent them because they look absurd. And soon, I’m sure, we will start noticing them more in older books, realizing that in writing, context is everything.
Dostoevsky’s writing feels strange, distinctive, and powerful when we read it. But imagine if everyone started writing like Dostoevsky in every context—in everyday communication, or worse, in the service of vapid motivational posts on LinkedIn. We would naturally recoil. The problem is not the pattern itself but its indiscriminate overuse. AI has simply accelerated this overuse to an intolerable degree.
Good writing is good thinking. AI models, despite their vast training, have learned only what humans once considered good writing, without acquiring the discernment needed to determine when a style should or should not be used. That judgment cannot be trained into a model; it belongs to the writer alone. This is why prompt engineering is so glorified—AI needs us to specify what we want because it cannot intuit the deeper structure of thought behind prose. We can identify good writing when we see it, but we still struggle to articulate what makes it good.
In conclusion, when someone relies heavily on AI for their writing, it is reasonable to assume they know nothing about writing. Even for curation, one must know what good writing looks like. And if someone can truly recognize good writing, why would they need AI to write for them? It is a paradox that collapses in on itself.