
The Persuasion Gap: Why Being Polite to AI is Killing Your Results
I spent years as a high school principal, and I learned one hard truth very quickly: You cannot speak to every student the same way.
If I walked into a disciplinary meeting with a rebellious, anti-authority student and used a strict, authoritative tone, they would shut down immediately. Their arms would cross, their eyes would drop, and the conversation would be over before it began. Conversely, if I used that same strict tone with a high-achieving, anxious student who was terrified of failure, they would panic. They would hyperventilate, cry, and become unable to process the feedback.
To get a result — to actually change behavior — I had to “read” the room. I had to change my vocabulary, my tone, and my framing based on the psychology of the person standing in front of me. With the rebel, I offered choices and autonomy. With the anxious student, I offered structure and reassurance.
I was practicing segmentation. I was identifying the “user profile” of the student and adjusting my input to get the desired output.
But when I look at how professionals interact with Artificial Intelligence today, I see none of that nuance.
We are raising a generation of “polite prompters.” I see job seekers, teachers, and executives talking to ChatGPT the same way they talk to a barista: with vague, apologetic politeness. They type things like, “Could you please help me with this?” or “What do you think about this idea?” They treat the AI like a person they are trying not to offend.
They assume the AI is a single, static entity. They believe that “ChatGPT” is one thing, with one personality. And because they treat it like a generic assistant, they get generic, hallucinated, “average” results. They get the “C” student answer — technically correct, but completely hollow.
To fix this, I don’t turn to a technical manual. I turn to the psychological thriller “Lexicon” by Max Barry.
In this book, a secret society of “Poets” discovers that human minds are essentially predictive engines. They realize that if you can identify a person’s psychological category — their “Segment” — you can find a specific sequence of words that bypasses their logic and compels them to act.
The Poets understand that you don’t use the same words on a romantic that you use on a cynic. They know that language is a hacking tool. “Sticks and stones break bones,” the book argues, “but words reprogram brains.”
This is the missing link in our AI education. We treat Large Language Models (LLMs) like calculators (Input > Output), assuming that 2+2 will always equal 4. But LLMs are not calculators; they are probabilistic mirrors. They reflect the intent, the vocabulary, and the sophistication of the user.
To get elite performance, you have to stop being a “User” who asks questions, and start being a “Poet” who engineers compliance.
Here is how the science of a sci-fi thriller can teach us to stop asking the AI for help, and start programming it to perform.
1. The Trap of the “Average” Segment (The “Virginia” Problem)
In Lexicon, a Poet cannot just walk up to a stranger and command them. The persuasion doesn’t work that way. First, they have to “read” them. They watch the target’s eyes; they listen to their cadence. They have to determine if the target is a “Virginia” (impulsive, romantic, common) or a “Brontë” (brooding, intellectual, rare).
If you try to control a “Brontë” — someone who values logic and skepticism — with emotional words meant for a “Virginia,” the persuasion fails. The words don’t “stick.” The target’s mind rejects the input because it doesn’t match their internal operating system.
The AI Reality: The “Hello World” Problem
When a student or job seeker opens a fresh chat window in ChatGPT, Claude, or Gemini, they don’t realize they are talking to the “Average” Segment.
These models are trained using a process called Reinforcement Learning from Human Feedback (RLHF). During training, thousands of human raters grade the AI’s responses. To get the highest score across the widest range of raters, the AI learns to be helpful, safe, harmless, and generally applicable to everyone. It learns to hedge its bets. It learns to avoid strong opinions. It learns to be bland.
The default setting of ChatGPT is the “Virginia” of the digital world — eager to please, polite, but completely lacking in depth or edge.
This creates the “Hollow Prompt” phenomenon:
- The Prompt: “Write a cover letter for a project manager job.”
- The Result: The AI assumes the “Average” persona. It writes a letter filled with platitudes (“I am a hard worker,” “I work well with others,” “I am excited to apply”).
Why? Because statistically, that is what the “average” cover letter on the internet looks like. The AI is simply predicting the most likely next word based on the average of its training data. It is giving you the “Virginia” response because you gave it a “Virginia” prompt.
How to “Segment” the AI
To get a result that isn’t hollow, you must Segment the AI. You have to force it to abandon its general training and adopt a specific, expert persona.
In the book, a Poet identifies the segment to control it. In AI, we declare the segment to become it.
Now compare that hollow prompt with the “Poet” prompt:
“Act as a Fortune 500 Recruiter who hates buzzwords. Your target audience is a hiring manager at a tech startup who values data over politeness. Write the letter for that specific segment.”
By defining the Segment (Recruiter vs. Applicant) and the Audience (Startup vs. Corporate), you strip away the “average.” You are telling the predictive engine: “Do not predict the next word based on the general internet. Predict the next word based on the subset of data labeled ‘High-Level Recruitment’.”
You are forcing the AI to simulate a specific, high-level mind. You are moving from a generic conversation to a specialized simulation.
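To make the contrast concrete, here is a minimal sketch of both prompts sent through a chat-completion API. It assumes the OpenAI Python SDK and an illustrative model name; any chat-style API follows the same pattern, and the prompt wording is just the example from above.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# The "Virginia" prompt: no segment declared, so the model predicts
# the average cover letter on the internet.
hollow = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whatever model you have access to
    messages=[
        {"role": "user",
         "content": "Write a cover letter for a project manager job."},
    ],
)

# The "Poet" prompt: the system message declares the Segment and the
# Audience before the task ever arrives.
segmented = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": ("Act as a Fortune 500 recruiter who hates buzzwords. "
                     "Your target audience is a hiring manager at a tech "
                     "startup who values data over politeness.")},
        {"role": "user",
         "content": "Write the cover letter for that specific segment."},
    ],
)

print(segmented.choices[0].message.content)
```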
2. The Bareword: Using Symbols to Bypass Fluff
One of the most fascinating concepts in Lexicon is the “bareword.”
In the novel, Poets spend years searching for “barewords” — strange, specific terms or sounds that trigger an automatic obedience response in the brain. These aren’t normal words. They act like a “backdoor” to the mind. When a Poet speaks a bareword, they don’t have to argue with the subject; they trigger them. The bareword cuts through the noise of the conscious mind and hits the programming underneath.
The Corporate Fluff Problem
In the professional world, we see the opposite of barewords. We see “fluff.”
I see this constantly in education and consulting. Users write long, wandering prompts asking the AI to “please look at this and maybe help me” or “I’m struggling with this and I’m not sure what to do.” They use paragraph after paragraph of conversational filler.
The AI, being a predictive mirror, reflects this style. If you write a chatty prompt, the AI writes a chatty response. If you are vague, the AI is vague. It assumes that “chattiness” is the desired format of the interaction.
The Technique: Trigger, Don’t Ask
To escape the hollow draft, we must teach users to use their own barewords — industry-specific tokens and formatting constraints that signal expertise to the model.
If you ask an AI to “organize this data,” it interprets “organize” in the conversational sense. It might write you a nice paragraph describing the data, or a bulleted list with helpful intros and outros. This is useless if you need to import that data into a spreadsheet.
But if you use the “bareword” of data structure — like JSON, CSV, or Markdown — you trigger a machine-readable response. You bypass the “chat” module and hit the “compute” module.
Case Study: The Resume Review
- The “Chatty” Prompt: “Please look at these resume skills and tell me which ones are best for a manager role.”
- The Result: Three paragraphs of polite conversation. “Certainly! Here is a breakdown of the skills that might be useful…” It buries the insight in kindness.
- The “Bareword” Prompt: “Classify the skills in the text below. Output format: [Skill Name] : [Relevance Score 1–10]. Return only the list. No conversational filler.”
This prompt uses three distinct barewords:
- “Output format: []”: This bracket syntax signals a code-like structure. It tells the AI we are in “developer mode,” not “chat mode.”
- “Return only”: This is a “negative constraint” that forbids the polite intro/outro.
- “No conversational filler”: This explicitly tells the AI to turn off its “Virginia” personality.
By using technical syntax (brackets, colons) and negative constraints, you are speaking the language of the machine’s training data. You aren’t asking it to think about the format; you are triggering the format directly.
The lesson for leaders is simple: Stop using English. Start using Syntax. The more your prompt looks like a command line, the smarter the answer will be.
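As a sketch of what that looks like in practice, here is the resume-review prompt run as a small pipeline. Because the format is triggered rather than requested, the output can be parsed line by line and dropped straight into a spreadsheet; the SDK, model name, and sample skills are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

skills_text = "Stakeholder management, Agile, Budgeting, Public speaking"

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative
    messages=[{
        "role": "user",
        "content": (
            "Classify the skills in the text below for a manager role.\n"
            "Output format: [Skill Name] : [Relevance Score 1-10]\n"
            "Return only the list. No conversational filler.\n\n"
            + skills_text
        ),
    }],
)

# A triggered format is machine-readable: no intro or outro to strip away.
for line in response.choices[0].message.content.splitlines():
    if ":" in line:
        skill, score = line.split(":", 1)
        print(skill.strip(" -[]"), score.strip(" []"))
```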
3. Avoiding “Compromise”: The Discipline of the Reset
In Lexicon, a Poet knows that persuasion is fragile. A person isn’t a robot that stays programmed forever. Their mental state shifts.
If a Poet pushes too hard, or if the target gets confused, the target becomes “Compromised.” They build an immunity to the Poet’s voice. The words that worked five minutes ago stop working because the target’s mental state has shifted. A compromised subject becomes unpredictable, erratic, and dangerous.
The AI Reality: Context Drift
We see this in AI classrooms and offices every day. In the world of LLMs, this is often called Context Drift.
Large Language Models have a “Context Window” — a limit to how much information they can hold in their short-term memory. But it’s not just about storage space; it’s about attention. As the conversation gets longer, the “attention mechanism” of the AI gets diluted. It struggles to prioritize the original instructions from the start of the chat against the new noise you are adding at the end.
The Drift Problem
You might be having a great session with the AI. It’s writing perfect code or drafting excellent emails. But then:
- You ask a vague question.
- You ask it to correct a small mistake.
- You change topics to ask about lunch ideas.
- You go back to the code.
Suddenly, the AI “forgets” the Segment you established. It starts hallucinating. It reverts to the “Average” persona. It starts adding polite filler again.
The session is compromised. The context window is polluted with conflicting instructions. The AI is trying to reconcile your expert persona with your lunch order, and the result is a mess.
The Fix: The Hard Reset
Novice users try to argue the AI back on track. They type in all caps: “NO, stop doing that! Go back to the way you were doing it before!”
This rarely works. It just adds more noise to the window. You are arguing with a compromised subject.
To maintain high quality, you must learn the discipline of the Reset. If the results are drifting, do not argue.
- Stop the generation.
- Copy your best prompt (the one that established the Segment).
- Open a new chat window.
- Re-state the Segment.
Don’t fight the drift. Clear the board and start fresh. Just as a Poet would walk away from a compromised target and find a new one, an expert Prompt Engineer knows that fresh context is the only cure for hallucination.
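In API terms the Reset costs almost nothing, because the segment-establishing prompt can live in your code rather than in a polluted chat history. A minimal sketch, reusing the illustrative recruiter persona from earlier:

```python
from openai import OpenAI

client = OpenAI()

# Keep the prompt that established the Segment saved as a constant,
# not buried somewhere in a drifting conversation.
SEGMENT_PROMPT = (
    "Act as a Fortune 500 recruiter who hates buzzwords. "
    "Your target audience is a hiring manager at a tech startup."
)

def fresh_session(task: str) -> str:
    """Hard reset: a brand-new message list, so no drifted context survives."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative
        messages=[
            {"role": "system", "content": SEGMENT_PROMPT},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

# Instead of arguing inside a compromised thread, re-issue the task cleanly.
print(fresh_session("Rewrite the cover letter. Return only the letter."))
```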
4. The Wolf Segment: High-Agency Prompting
There is one final lesson from Lexicon that separates the masters from the novices. In the book, there is a legendary segment known as “The Wolf.” This is the alpha. The Wolf cannot be commanded; the Wolf commands.
Most users treat AI as a servant. They issue commands: “Write this,” “Fix this,” “Summarize this.”
But the highest level of prompting — what I call “Wolf Prompting” — is when you invert the dynamic. You ask the AI to command you.
The “Maieutic” Method
In education, we call this the Socratic or Maieutic method. We don’t give the student the answer; we ask the questions that force the student to find the answer themselves. In AI, we can force the model to do this for us.
Often, we don’t know what we don’t know. If you are writing a strategic plan for a failing department, but you’ve never done a turnaround before, asking the AI to “write a plan” will result in a generic, average plan. It will guess based on the average of the internet.
Instead, use a High-Agency Prompt that forces the AI to take the lead:
“I need to write a strategic plan for a failing high school. You are an expert Turnaround Consultant. Do not write the plan yet. First, ask me 5 clarifying questions about the budget, the staff culture, and the student demographics. Do not proceed until I answer. Then, based on my answers, propose a strategy.”
By forcing the AI to ask you questions, you are doing three things:
- Segmentation: You are forcing it to act like a Consultant (who asks questions) rather than an intern (who just guesses).
- Context Loading: You are feeding the “Context Window” with high-quality data (your answers) before the generation begins, ensuring the final output is tailored to your specific reality.
- Agency: You are treating the AI as a partner, not a tool.
The “Wolf” doesn’t just take orders. It collaborates. It probes. It forces you to be smarter.
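Mechanically, Wolf Prompting is a two-phase exchange: the model interrogates first, and your answers load the context window before generation begins. A hedged sketch of that loop, with the consultant persona and keyboard input standing in for a real interview:

```python
from openai import OpenAI

client = OpenAI()

SYSTEM = "You are an expert Turnaround Consultant for failing high schools."

# Phase 1: force the model to lead with questions, not guesses.
history = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": (
        "I need a strategic plan for a failing high school. Do not write "
        "the plan yet. First, ask me 5 clarifying questions about the "
        "budget, the staff culture, and the student demographics."
    )},
]
questions = client.chat.completions.create(model="gpt-4o", messages=history)
print(questions.choices[0].message.content)

# Phase 2: your answers load the context window with your specific
# reality before any generation happens.
answers = input("Your answers: ")
history.append({"role": "assistant",
                "content": questions.choices[0].message.content})
history.append({"role": "user",
                "content": answers + "\n\nNow propose the strategy."})

plan = client.chat.completions.create(model="gpt-4o", messages=history)
print(plan.choices[0].message.content)
```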
Conclusion: Don’t Just Speak — Program
The lesson of Lexicon is that language is not neutral. It is a tool for control.
In the book, the villains are the ones who use this power without ethics. They use words to strip away free will, to turn people into puppets.
But in the world of AI, the “villain” is simply mediocrity. It is the hollow, generic work produced by people who refuse to learn the language of the machine. It is the sea of beige essays, the hallucinated resumes, and the polite, useless emails that are clogging our inboxes.
As educators, leaders, and job seekers, we cannot afford to be passive users. We cannot just type into the box and hope for the best.
We must learn the technical skills of the Poet.
- We must learn to Identify the Segment — moving from the “Average” Virginia to the Expert Brontë.
- We must learn to Wield the Bareword — using syntax and tokens to trigger precise, machine-readable outputs.
- We must learn to Respect the Context — resetting the conversation before it becomes compromised.
We must understand that we are not having a conversation; we are programming a result.
So the next time you open that chat window, look at the blinking cursor. It is waiting for you. It is a mirror waiting to reflect your intent.
Ask yourself: Are you just talking? Or are you a Poet?