intricate, commendable, and meticulous.
It’s not just what we write, or whether we suddenly adopt the vocabulary of a 20th-century academic; it’s also the rhythm and cadence, and how we begin to sound. Researchers suspect that the rapid adoption of ChatGPT — a model already used by 10% of the world’s population — is beginning to be reflected in the artificially correct pitch of speech filled with short, empty sentences that eliminates any trace of emotion and vulnerability, two traits that make our conversations unique.
Your inbox, like mine, receives daily messages as flat as a plateau: perfectly correct, structured in three paragraphs of four to five lines, with sentences separated by periods and an abundance of verbs and adjectives but a clear shortage of nouns, a sign that they go around in circles to say very little. "There's no progress in the discourse; they're paraphrasing the previous paragraph," explains Lara Alonso Simón, a professor at Madrid's Complutense University. These soulless, typo-free emails that don't trigger your impulse to reply because, you suspect, there's no one behind them, are also the scarlet letter of the flattening of style that has arrived with models like ChatGPT, Gemini, or Claude. If you think everything has become much more boring since 2023, you're not alone; linguists think so too.
“ChatGPT has a distinctive style,” Philip Seargeant, professor of applied linguistics at The Open University in the U.K., explains via email. “It writes competently but dully. There is little variation in the writing, and certain constructions are regularly repeated.” Some traits that betray the use of an AI include “inserting explanatory phrases between long dashes in the middle of sentences or always citing examples in groups of three, something very common in the writing of official communications.”
Ana María Fernández and Lara Alonso Simón are researchers at the Complutense University and have focused their work on the impact of ChatGPT on the Spanish language. In their research, Do GPT-3.5 and GPT-4 have a writing style different from that of humans?, an exploratory study for Spanish, they have detected, among other distinctive features, a limited use of punctuation marks, all except one: the period.
“That’s why everything sounds choppy. Humans make longer and more complex sentences with many subordinate clauses,” says Alonso. Fernández explains that they observed that ChatGPT never deviated from the canonical structure of Spanish: subject, verb, and predicate. “An order that humans constantly dislocate to focus attention where we need it,” the expert points out.
This work confirmed that AI in Spanish frequently generates literal translations from English, which is why it uses many gerunds and pairs adjectives: “Big and beautiful,” for example, explains Alonso.

Adam Aleksic, author of Algospeak: How Social Media Is Transforming the Future of Language (2025), points out in his book that most people don’t know that chatbots have these biases toward certain words or speech patterns. “They assume they’re speaking normally because the tool’s interface was designed to achieve that normality. They also assume the texts they’re consuming are human, even when they could have been generated by an AI.”
According to Aleksic, even the most perceptive people won’t be able to escape the chatbot tone because there will be more and more neon words. “It’s normal for mental maps of language to evolve, but we’re now in a feedback loop where our maps are converging with those of chatbots,” writes Aleksic, who believes that as it becomes harder to distinguish human language from artificially generated language, and as LLMs are trained on AI-influenced human writing and their own content, the prevalence of this robotic, largely unchanged language will increase.
The Max Planck Institute research group confirms that we are not immune to interacting with ChatGPT. “We will adopt its words and phrases if they are useful to us. It influences us as much as a coworker would, or much more so because no human being has another person at their disposal 24/7, much less one who always agrees with them,” López explains via video call from Berlin.
The impressions we form based on linguistic cues have consequences. Someone who speaks like us immediately seems trustworthy, while the thought that we're interacting with an AI still puts us on guard. This is what a Cornell University study published in the journal Nature showed: it's not even the actual use of AI, something difficult to prove, that erodes trust, but rather the mere suspicion of it.
The study showed how the adoption of AI led to the diluting of the three levels of trust that underpin human communication. The first, which experts call “basic signs of humanity,” refers to the clues that give us away: mistakes, vulnerability, or personal rituals; the second refers to the attention and effort we put into showing the person in front of us that we care about what we’re saying or writing; and the third includes a sense of humor, competence, and our true selves. The experts illustrate this with a message. “It’s not the same to say, ‘I’m sorry you’re upset,’ as it is to say, ‘Hey, sorry I messed up at dinner. I didn’t have a good day.’” The first, so sterile, raises doubts; with the second, one empathizes and believes it.
Juan Antonio Latorre García is a forensic linguist and professor in the Department of English Studies at the Complutense University. Lately, he’s been focusing on detecting plagiarism with artificial intelligence. “My students don’t try to trick me; they know what I do,” he says over the phone.
For a study on how to identify student work produced with the help of AI, Latorre assigned two groups an essay on the film Dead Poets Society. The first group could use traditional dictionaries and online tools, while the second was allowed to use ChatGPT, though not simply to order it to write the essay. In the weeks beforehand, they had to train it to produce almost human-sounding material, and to do so, they had to provide it with comprehensive information about the author, including their texts.
The professor’s goal was to determine whether the group would be able to identify material produced with artificial intelligence. “The outlook is bleak,” says Latorre. “The text produced by ChatGPT can only be detected by idiolectal features, which are the choices each person continually makes to express themselves, and this can be done by a linguist but not by a biology or medicine professor.” Latorre believes that written exams will gradually lose relevance in favor of oral ones. For this expert, the curious thing about ChatGPT is that “it always chooses the most probable, the most standard feature.”
When Gutmaro Gómez, professor of contemporary history at the Complutense University, comes across an exam that describes the Spanish Civil War as if it were Star Wars, “a fight between good and evil,” with elevated language, 20th-century quotes, and an outdated bibliography, he knows he has run into ChatGPT. “A 24-year-old kid using outdated academic language who repeats the same phrase up to 10 times,” he explains. The professor defines it as 20th-century content processed by a 21st-century tool.
Both Gómez and Latorre believe that students don’t pay much attention to or try to understand ChatGPT’s answers. “The depersonalized style permeates the texts; you can feel that the submitter doesn’t have a deep understanding of the subject; they’re just meaningless words thrown around at random,” says Latorre.
AI “externalizes thinking”
The aforementioned MIT study confirms ChatGPT’s homogenizing effect. “Users tended to converge on common words and ideas,” the researchers concluded. In all cases, the people using ChatGPT, summoned on different days to write about personal topics, generated texts biased in specific directions. “AI is a technology of averages: large language models are trained to spot patterns across vast tracts of data; the answers they produce tend toward consensus,” write its authors, who believe that AI “externalizes thinking so completely that it makes us all equal.” Individual voices are suppressed in favor of the average.
This study is the first to gauge the price we pay for being a little lazier than we were three years ago. The experiment, which compared the brain activity of those working on their own with those relying on Google and others using ChatGPT, showed, according to the authors, “a dramatic discrepancy.” Those using AI had minimal brain activity with fewer connections than the other groups. For example, they showed the lowest alpha and theta connectivity, the former being related to creativity and the latter to working memory. The users had no sense of authorship over their texts, and 80% were unable to cite anything they had supposedly written.
The responses from those using ChatGPT were skewed and very similar. When asked, “What makes you truly happy?” most mentioned career and personal success, and when asked whether fortunate people had a moral obligation to help the less fortunate, everyone was suspiciously in agreement. The responses from the groups not relying on the AI were diverse and critical of philanthropy. With the LLM, “you have no divergent opinions being generated,” Natalya Kosmyna, author of the aforementioned MIT study, told a reporter from The New Yorker. “Average everything everywhere all at once — that’s kind of what we’re looking at here.”
The bias seems crude and easy to identify, but few users seem willing to sacrifice ChatGPT’s convenience to regain some quality and originality. We’ve accepted creating and consuming content designed to be used and thrown away.
Psychologist Chiara Longoni, co-author of the paper Lower Artificial Intelligence Literacy Predicts Greater AI Receptivity, found that people with “low AI literacy” (those who don’t understand how algorithms, training data, and pattern recognition work — probably the majority) perceive LLMs as “magic.” “This happens, above all, when performing tasks that involve uniquely human characteristics such as humor, empathy, and creativity. They find it extraordinary, it amazes them, and this drives greater receptivity to the use of AI,” she explains via email.
Other experts speak of ChatGPT’s “hypnotic effect,” which induces humans to distrust their own ability and knowledge. “ChatGPT doesn’t hesitate,” López notes, “it gives single, categorical answers, whether they are correct or not, and humans are vulnerable to confirmation bias: we stick with what best aligns with our desires.” The less conflict and greater consistency, the greater the likelihood of successfully scaling a business that requires millions of users hungry for quick answers, dependent on the tool, and potentially paying subscribers.
At Swinburne University of Technology in Australia, an experiment asked 320 people to write the copy for a sofa advertisement. They were then shown the copy ChatGPT produced for the same prompt and asked to write theirs again. The results changed dramatically. “We didn’t tell them, ‘Do it like ChatGPT would do it,’ but that’s exactly what they did,” said Jeremy Nguyen, lead author of the study. After seeing the copy generated by ChatGPT, the participants wrote more redundant ads, averaging 87 words compared to 33 in their original texts.
“For millions of people, ChatGPT is already the norm,” says Latorre. They take its clipped, correct prose as the standard. And one doesn’t resist a norm; one adapts. It could be said that the real danger isn’t the lack of originality, but that no one seems to miss it.