The truth-identification machine
The ultimate aspiration of a product team is to discover what is true about customers, markets, incentives, behavior, and business value. Everything else (planning, structure, rituals, frameworks) is just scaffolding around this core purpose.
Most product teams say they want to be data-driven. Very few have the discipline to seek out data that proves them wrong.
Falsification as the discipline
Karl Popper (1902–1994) was one of the 20th century’s most influential philosophers of science. His big idea was deceptively simple: Science doesn’t prove things; it disproves things. Knowledge grows by eliminating wrong ideas, not by conclusively verifying right ones.
Popper in brief:
- He believed you can never truly “prove” a theory; you can only fail to disprove it so far.
- He thought the defining feature of a scientific theory is that it can be falsified. A real theory makes predictions that could turn out to be wrong.
- He thought progress comes from the cycle of conjecture (idea) → refutation (test).
If this sounds like the build-measure-learn loop, it’s not — at least as most teams practice it. Build-measure-learn is a process; falsification is a discipline. The difference is that falsification requires you to articulate in advance what evidence would cause you to abandon your hypothesis. Without that, the risk is shipping, then pattern-matching your way back to what you already believed.
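One lightweight way to hold yourself to that standard is to write the hypothesis and its kill condition down before any data comes in. Below is a minimal sketch of what that pre-registration could look like; the claim, metric name, and threshold are invented for illustration and are not from any particular team’s practice.

```python
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Hypothesis:
    """A belief pre-registered together with the evidence that would kill it."""
    claim: str                                         # what we currently believe to be true
    kill_condition: str                                # evidence we agree, in advance, would falsify the claim
    is_falsified: Callable[[Dict[str, float]], bool]   # check applied to observed results


# Hypothetical example: the claim, the metric, and the 0.5% threshold are made up.
checkout_hypothesis = Hypothesis(
    claim="One-click checkout lifts conversion for returning users",
    kill_condition="Conversion lift below 0.5% after two weeks of 50/50 exposure",
    is_falsified=lambda results: results["conversion_lift_pct"] < 0.5,
)

observed = {"conversion_lift_pct": 0.2}
if checkout_hypothesis.is_falsified(observed):
    print(f"Falsified. Pre-agreed kill condition: {checkout_hypothesis.kill_condition}")
```

The code itself is trivial; the discipline is that the kill condition is written before the result exists, so there is nothing left to pattern-match against afterward.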
This isn’t just a process gap; it’s a psychological one. Popper understood something deeper — that people naturally default to confirmation, not falsification. We look for evidence that flatters our beliefs and ignore anything that threatens them. Popper saw falsification as a discipline you must deliberately choose because it cuts directly against instinct, ego, and social identity.
Beyond identifying the truth, a falsification mindset increases velocity and decreases time to value. A sharp understanding of what would disprove your hypothesis lets you design and sequence tests that grant you that knowledge as quickly as possible. Failed tests narrow the realm of possibility, allowing progressively better hypotheses until the underlying truth comes into view.
This concept applies to mental models as much as it does product launches. Every mental model is a set of claims that need to be pressure-tested. Better mental models create the foundation for better hypothesis generation. Product management is hypotheses all the way down.
Thus, you want a team full of people asking themselves “how can I prove myself wrong?”
The sociology of resistance
Falsification is conceptually straightforward, but deeply countercultural. Adopting a falsification mindset challenges instincts, incentives, and identity. Product managers don’t have a theory problem; they have a sociology problem.
Two of the greatest challenges are psychological: confirmation bias and the drive for certainty.
Humans are wired to seek out information that confirms our ideas and avoid information that threatens them. We gravitate to evidence that flatters our initial assumptions because it stabilizes our identity and simplifies our world. We are also profoundly certainty-seeking creatures. Uncertainty registers in the brain as a threat, so we instinctively paper over ambiguity rather than expose it. Falsification threatens both drives.
The very act of attempting to disprove a belief forces you to admit you don’t already know the answer. And if you follow the practice honestly, you will disprove your hypotheses with some regularity — publicly demonstrating that you didn’t know the answer. For many people, this is psychologically destabilizing. It feels like status loss.
The organizational dynamics are even more hostile.
Cultures that reward overconfidence often mistake intellectual humility for weakness. People are extremely status-loss avoidant, so they converge on whatever beliefs feel safest to hold in public, not the beliefs most likely to be true. In cultures where product launches that do not deliver their intended outcome are treated as failures rather than experiments, truth-seeking becomes painful and politically dangerous. The rational move in these environments is to avoid disconfirming evidence, to shield your ideas from reality, and to run ornamental tests — experiments designed to look rigorous while avoiding real risk — instead of consequential ones.
Some organizations go further and create explicit incentive structures that are incompatible with truth-seeking. The purest example is the feature factory, where the goal is not discovering what works but delivering what the executive team already believes will work, regardless of evidence. Even more sophisticated organizations can fall into the same trap through heavyweight process cultures. OKRs are a prime example. Every team knows the score that demonstrates the appropriate level of ambition. People are incredibly creative when the goal is hitting a number or satisfying a process, so creative that the number or the process becomes the target itself. In these cases, the epistemic function of the work evaporates. Falsification dies because the game is no longer about discovering truth; it’s about performing alignment.
This is why falsification must be deliberately protected. Left alone, people default to identity-protection, fear avoidance, and target-chasing. Truth-seeking requires a system that rewards curiosity, rapid updating, and intellectual honesty, and that treats “being wrong” as progress rather than a loss of status.
The following sections describe the organizational practices required for truth-seeking, and the container leaders must build to make those practices not just possible but inevitable.
Action as the path through uncertainty
I once faced a decision with enormous upside and daunting uncertainty. I had identified a change to a company’s offering that, if successful, would have a $100M+ payoff within 2 years. The new offering hinged on people having some flexibility in their schedules. Implementing this change also came with significant risk: it would cost nearly $1M to implement and would take years to unwind if it underperformed, all while racking up millions in program costs. A classic one-way-door product launch.
The upside was so high that the CEO viewed finding a path forward as critical, but other executives were skeptical that incentives could shift behavior to the point where the program would break even. The instinct was to analyze more, but I noticed that further analysis was just rearranging the same information. Forward motion ceased. Analysis could not resolve the core uncertainty: Would customers actually shift?
We broke the impasse by shifting our thinking. Instead of asking “what would need to be true?”, we asked “what evidence would cause us to abandon our efforts?” Once we asked the question that way, the kill condition became clear: if fewer than 3% of visits would shift, the program would not be viable. This became the hypothesis to test. From there, we sought the lowest-lift, lowest-commitment intervention that could provide this data, and arrived at a test using gift cards to incentivize customers to shift behavior. In 2 weeks, and at 2% of the cost of a full rollout, we had our answer: 6% of visits shifted, and the full program was greenlit.
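As an illustration of how mechanical that kill-condition check can be, here is a rough sketch. The 3% threshold and the 6% result come from the story above; the visit counts and the confidence-interval choice are assumptions added purely for the example.

```python
import math

# Kill condition from the pre-registered hypothesis: below a 3% shift rate,
# the program is not viable. The sample size below is a hypothetical number.
KILL_THRESHOLD = 0.03
visits_observed = 5_000   # assumed sample size for illustration
visits_shifted = 300      # 6% of visits shifted, as in the story

rate = visits_shifted / visits_observed
# 95% normal-approximation confidence interval on the observed shift rate
stderr = math.sqrt(rate * (1 - rate) / visits_observed)
lower, upper = rate - 1.96 * stderr, rate + 1.96 * stderr

if lower > KILL_THRESHOLD:
    print(f"Shift rate {rate:.1%} (95% CI {lower:.1%}-{upper:.1%}) clears the 3% kill threshold.")
elif upper < KILL_THRESHOLD:
    print("Kill condition met: abandon the program.")
else:
    print("Inconclusive: extend or redesign the test before committing.")
```

The point is not the statistics; it is that once the kill condition is explicit, a two-week, low-cost probe can answer a question that months of analysis could not.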
We took a high-stakes, irreversible decision, and found a simple, low-cost, reversible action hiding within it.
We can think of decisions as fitting into three categories:
**Type 1: Irreversible, high-consequence decisions.** These are decisions that meaningfully shape the state of the system and cannot be undone without enormous cost. Amazon launching Prime is a canonical example; major contractual commitments that change the structure of a business are another.
These decisions deserve rigor, alignment, and careful sequencing.
**Type 2: Reversible, limited-blast-radius decisions.** These decisions are operational and recoverable. You can revert them quickly if needed. Changes to checkout flows, pricing tests, and personalization algorithms all qualify.
Most product decisions are Type 2. These should be fast.
**Type 3: Information decisions (epistemic actions).** These are not decisions about the business; they are decisions about how to learn. They are taken not because we believe the action is inherently correct, but because acting is the only way to collapse uncertainty. They have three defining characteristics:
- The purpose is not to improve the system; it is to reveal truth about the system.
- They are reversible by design because their purpose is not impact, but information.
- They are the only decision type designed to increase the quality of future decisions.
Examples:
- A two-week elasticity test to determine whether lowering price increases revenue or simply erodes margin.
- A targeted incentive experiment to learn whether customers will shift their behavior.
- A falsification test designed to break a core assumption. For example, testing whether “power users” actually drive referrals or merely correlate with them.
A Type 3 decision is not even a bet; it is a probe.
Analysis paralysis has different causes depending on the type of decision you’re facing.
For Type 2 decisions held to Type 1 standards, it’s a calibration problem. The fix is simply recognition: this is reversible, move faster.
For actual Type 1 decisions, the fix is decomposition. You’re not lowering your standards; you’re finding the Type 3 actions that rule out bad outcomes before committing. The mistake is treating the Type 1 decision as atomic when it’s actually a bundle of uncertainties, some of which can be resolved cheaply.
How much certainty is enough? A useful heuristic: decisions should be made with between 40% and 70% of the needed information. At less than 40%, you’re just guessing. Above 70% and you’ve waited too long.
For Type 2 decisions, 40% is enough. You’re not seeking certainty; you’re seeking velocity. Modern product development makes most changes cheap to ship and quick to reverse, so the cost of delay almost always exceeds the cost of being wrong.
For Type 1 decisions, 70% is the target, but the question isn’t “how do I analyze my way there?”, it’s “what Type 3 action gets me there fastest?”
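To make the taxonomy and the 40-70 heuristic concrete, here is a deliberately simplified sketch. The thresholds mirror the heuristic above; the function names, and the idea that “information level” can be scored as a single number, are assumptions for illustration only.

```python
from enum import Enum


class DecisionType(Enum):
    TYPE_1 = "irreversible, high consequence"
    TYPE_2 = "reversible, limited blast radius"
    TYPE_3 = "information-gathering probe"


def information_target(decision_type: DecisionType) -> float:
    """Rough share of needed information before acting, per the 40-70 heuristic."""
    return {
        DecisionType.TYPE_1: 0.70,  # wait for ~70% of the information you would want
        DecisionType.TYPE_2: 0.40,  # ~40% is enough; the cost of delay exceeds the cost of being wrong
        DecisionType.TYPE_3: 0.0,   # probes exist to generate information, so just run them
    }[decision_type]


def next_move(decision_type: DecisionType, information_level: float) -> str:
    """Suggest a move given how much of the needed information you think you have."""
    if information_level >= information_target(decision_type):
        return "decide now"
    if decision_type is DecisionType.TYPE_1:
        return "decompose: find a Type 3 probe that closes the gap"
    return "ship it; reversal is cheap"


print(next_move(DecisionType.TYPE_1, 0.5))   # decompose: find a Type 3 probe that closes the gap
print(next_move(DecisionType.TYPE_2, 0.45))  # decide now
```

No team needs this as literal code; the value is in forcing the classification and the threshold to be stated out loud rather than argued implicitly.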
The critical mental shift is going from treating uncertainty as a signal you cannot yet act to recognizing it as the signal that you must act. Analysis gives you deduced information: conclusions drawn from existing knowledge. Action gives you induced information: new data that didn’t exist until you moved. This is why action resolves ambiguity more efficiently than analysis. You can’t update your beliefs until you have something to update them from.
A container for truth-seeking
Everything above describes how a PM should think. This mindset is necessary, but it isn’t self-sustaining. Certain environments amplify truth-seeking; others extinguish it. Whether these behaviors take root depends far less on individual PMs than on the incentives and environment leaders design. The first environment leaders must design is their own — how they think about uncertainty, risk, and the portfolio as a whole.
This approach might feel more uncertain at the portfolio level: more bets, more individual failures. It’s best viewed as a certainty-for-certainty trade. You’re giving up the feeling of confidence in any single bet for confidence that your decision-making system works. The PM trades certainty-on-this-decision for velocity. The leader trades certainty-on-any-one-bet for confidence that the portfolio will perform. Same structure, different altitude.
This reframe is what unlocks everything else. If you’re managing the distribution of outcomes rather than each outcome, you can afford to let your PMs move faster, tolerate more individual misses, and treat negative results as information rather than failures.
This reframing naturally raises the question of the people themselves: who thrives in a truth-seeking environment and who struggles.
We’ve laid out a set of traits you want in your product managers: a high tolerance for ambiguity, paired with a high drive for epistemic clarity. A bias toward decisive action to test hypotheses. A falsification mindset to identify dead ends quickly.
Many people will find this bar difficult to meet. The tools themselves (articulating hypotheses, defining decision criteria, practicing falsification) are teachable. The real challenge is putting them into practice. That requires a particular temperament: willingness to be wrong, willingness to update, the ability to tolerate ambiguity without stress, and the instinct to remain curious rather than defensive when launches don’t deliver.
This mindset runs counter to basic psychological drives. Ambiguity is uncomfortable for most people. They identify with their ideas, so disproving an idea can feel like a threat to the self. And above all else, people associate being wrong with status loss — not internally, but socially.
You can teach skills, but temperament is much harder. Identity is nearly unteachable. So what does that imply for you as a product executive?
Your job is to create an environment where the people already wired for this mode of thinking can reveal themselves. Most people will not rewire their identities, but they will absolutely change their behavior when the system rewards certain modes of thought and penalizes others. The goal at the executive level isn’t to force people to think like you, it’s to define the rules of the game.
If a culture rewards overconfidence, bravado, rhetorical dominance, and post-hoc justification, then the people who thrive will be the ones wired for those incentives. Truth-seeking evaporates because the environment punishes it.
If a culture rewards explicit hypotheses, rapid updating, intellectual honesty, and speaking in probabilities instead of absolutes, then people wired for truth-seeking will surface. Those who aren’t will plateau or self-select out.
Only senior leaders can set these incentives.
What a truth-seeking culture needs is surprisingly simple. Leaders must:
- Reward good reasoning, not “being right.” Praise someone for updating quickly, not for winning a debate.
- Make public falsification safe. People need to hear leaders say, “I was wrong; we learned something important.”
- Treat action as information gathering, not as risk-taking. The first question after a launch should be: “What did that teach us?”
- Treat analysis as incomplete without movement. Ask: “What data can we generate?” not “What else can we model?”
- Make hypotheses explicit and transparent.
And above all else, drive the social cost of updating beliefs toward zero. You cannot have truth-seeking if people fear the humiliation of being wrong.
You don’t create truth-seekers; you create the conditions that let truth-seeking flourish and make its alternatives untenable.