I have been telling people to use AI as a thinking partner, not a replacement for thinking. To let it interview them, surface ideas, enhance their output rather than generate it. I still believe that. But I have come to realise it is not enough.
Even when you use AI thoughtfully, even when you treat it as a collaborator rather than an answer machine, you are still absorbing something you did not choose.
The Wrong Debate
We are having the wrong conversations about AI. Everyone debates whether it will take jobs, whether it hallucinates too much, whether it is truly intelligent. Almost nobody is asking the question that will define the next decade: whose values are baked into these models, and what happens when we outsource our thinking to them?
A senior technical leader said something to me recently that I have not been able to shake: “We think about sovereignty when we look at Chinese models. We worry about using them. But we do not think about it when using US models. We need to understand what is going into our models because otherwise we are outsourcing our brains, our decision-making, our cultural responses to a model. Whatever goes into the training data defines the culture.”
That asymmetry reveals our blind spot. We instinctively question AI that comes from cultures different to our own. We do not question AI that reflects assumptions we already hold. Which means we absorb those assumptions without noticing.
Baked In
Every AI model is trained on data that reflects specific cultural assumptions about what counts as good communication, professional behaviour, appropriate decision-making, correct priorities, and ethical choices. These are cultural artifacts, not universal truths. But AI presents them as if they are universal.
Ask an AI model about work-life balance and you will get answers shaped by US professional norms. Ask about organisational hierarchy and you will get answers reflecting Silicon Valley flat-org assumptions. Ask about management problems and you will get a worldview about what leadership should look like, how conflicts should be resolved, what good performance means. That worldview was baked in during training.
The training data for major models comes predominantly from English-language Western sources. The values embedded in responses reflect liberal US cultural assumptions. The “correct” answers to ambiguous questions align with specific cultural viewpoints. That is fine if you are aware of it and comfortable with it. It is a problem if you are not aware of it at all.
Data vs Values
Data sovereignty is a problem we understand. We have dealt with it for years: where data is stored, who has access to it, which jurisdictions apply. We can audit it. We can require that data stays in specific countries. We can see where the servers are.
Values sovereignty is different and harder. You cannot audit the cultural assumptions embedded in model weights. You cannot require that a model “thinks” in culturally neutral ways, because there is no such thing as culturally neutral. Every choice about what to include in the training data, what to filter out, how to weight different sources, and which outputs to reinforce embeds specific values.
When a UK company uses US-trained AI for HR decisions, hiring recommendations, and performance reviews, whose cultural values are shaping those decisions? When a Japanese company uses Western AI for strategy recommendations, whose business philosophy is informing that strategy? These are not hypothetical questions. This is happening now, at scale, in organisations that have not thought about it.
Hallucinated Values
We have learned to watch for AI making up citations, inventing statistics, generating false information. We fact-check. We verify. We treat AI outputs with appropriate scepticism when it comes to factual claims.
But we do not apply that same scepticism to values. AI presents culturally specific assumptions as universal truths. It embeds biases we do not notice because they match our own. It creates echo chambers of thought that feel like common sense.
People are using AI to understand historical events filtered through training data biases, to form opinions on current events shaped by the model’s embedded viewpoints, to make personal decisions based on values they did not consciously choose, to define organisational culture reflecting Silicon Valley norms by default.
If we are not careful, this kind of hallucination will leak into our understanding of ourselves, our organisations, and our history. Not through obvious errors we can catch, but through subtle assumptions we never thought to question.
No Clean Fix
There is no clean solution here.
You cannot choose a “neutral” AI because neutral does not exist. You cannot audit values the way you audit data storage. You cannot require AI providers to disclose embedded values because we do not even have good frameworks for measuring what those values are.
Some companies try to “clean” training data by filtering out bias. But that creates new problems: whose version of clean? What gets filtered and why? Who decides what counts as bias versus legitimate cultural difference? You have not removed values from the model. You have just changed whose values are embedded.
The absence of a perfect solution does not mean doing nothing. It means being thoughtful about an imperfect situation.
Model selection is a values decision, not just a cost decision. When you choose which AI models to deploy in your organisation, you are choosing whose assumptions will influence your teams. Treat it with that weight.
Diversify your AI sources. Do not rely on one model family. Use multiple models with different training approaches as a check on each other. If they give you different answers to the same question, that difference is information. It reveals where assumptions are shaping outputs.
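As a minimal sketch of what that cross-check might look like in practice, the snippet below sends the same prompt to two different model families and prints the answers side by side so a reviewer can spot where they diverge. It assumes the OpenAI and Anthropic Python SDKs are installed with API keys set in the environment; the prompt and model names are illustrative placeholders, not recommendations.

```python
# Sketch: ask the same question of two model families and compare the answers.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment;
# model names below are illustrative placeholders.
from openai import OpenAI
import anthropic

PROMPT = (
    "How should a manager handle a team member who openly "
    "disagrees with them in meetings?"
)

def ask_openai(prompt: str) -> str:
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def ask_anthropic(prompt: str) -> str:
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    answers = {"openai": ask_openai(PROMPT), "anthropic": ask_anthropic(PROMPT)}
    for provider, answer in answers.items():
        print(f"--- {provider} ---\n{answer}\n")
    # Where the answers diverge is where embedded assumptions are shaping the
    # output. That divergence is the information you are looking for.
```

The point is not that one answer is right and the other wrong. It is that reading two differently trained models against each other makes the assumptions visible enough to discuss.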
Audit for values, not just accuracy. Just as you would review code for security vulnerabilities, review AI outputs for cultural assumptions that do not match your organisation. Ask whose viewpoint is this representing? What alternatives might exist?
Build internal AI literacy. Teams need to understand that AI is not neutral. It has embedded viewpoints. This awareness alone changes how people interact with AI outputs.
Do not outsource judgment. Use AI to gather information and explore options. Do not use it to make decisions directly. The decision should remain with humans who understand your specific context, culture, and values.
There is no such thing as unbiased AI. Every model reflects the choices made in its creation. The question is not whether your AI has values baked in. It does. The question is whether you know what those values are, whether they align with your own, and whether you are consciously choosing to adopt them.
Most organisations have not asked these questions yet. As AI adoption accelerates, as more decisions get made with AI assistance, and as more thinking gets outsourced to these systems, the organisations that thrive will be the ones that engage with this problem early.
