Most product teams plan for success, which makes sense until you realize that planning only for success is how companies end up surprised when things go sideways. Smart teams also plan for failure.
I teach premortems in my Stanford product management class, usually in Week 8, right after students have built prototypes and before they pitch to pretend investors. By this point they’re convinced their ideas are brilliant, their prototypes are solid, and investors will obviously see the vision. They need a reality check.
A premortem is simple: imagine your product is dead eighteen months from now, then work backwards to figure out what killed it. That’s it. No fancy framework, no certification required.
Why Your Brain Likes This Better Than Risk Analysis
Research psychologist Gary Klein popularized the premortem, and the research behind it found that "prospective hindsight"—imagining an event has already happened—improves the ability to identify reasons for outcomes by about 30% compared with standard forward-looking risk planning. The reason has everything to do with how our brains actually work.
Our brains are better at explaining the past than predicting the future, so by making the future into a past event, we trick ourselves into clearer thinking. “What could go wrong?” gets vague answers like “technical issues” or “market challenges.” But “it failed—why?” gets specific ones: “we stored passwords in plain text” or “unit economics never worked.”
Consider what happened to Knight Capital Group, which lost $440 million in 45 minutes in 2012 after a botched deployment left obsolete trading code running in production. Their postmortem said "we should have tested better," which is true but not particularly useful. A premortem would have asked "we just lost half a billion in an hour—what happened?" and someone would have said "bad deployment process" and they'd have built safeguards before the code ever went live.
The Four Ways Products Die
When I walk students through premortems, I make them examine four failure categories, and here’s what I’ve learned over years of teaching this: most teams only think about the first one, maybe touch on the second, and completely ignore the third and fourth until it’s too late to do anything about them.
Technical Failures: The Obvious Ones
These kill fast because they’re visible and immediate. Your database gets hacked and customer data leaks all over the internet. Your app crashes during a press launch and suddenly you’re apologizing on Twitter. You can’t ship new features because your code is such spaghetti that every change breaks three other things.
Engineers naturally think about these, which is good, but students consistently underestimate them because their prototypes with fake data and three test users work fine. Scale changes everything, and what works at small scale often falls apart spectacularly when real users and real data hit the system.
Here’s the exercise I give them: Your AI tutoring app just got hacked and 10,000 student records leaked—names, emails, learning struggles, chat logs with the AI tutor, everything. Work backwards through what broke.
Think about authentication and access control, data encryption at rest and in transit, API security, backup and recovery systems, third-party dependencies that could fail, and how the whole thing handles scale and load. Each one of these is a choice you’ll make in the next few months as you build, and each one is a potential point of failure.
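To make one of these choices concrete: the plain-text password failure mentioned earlier is preventable with a few lines of standard-library code. Here's a minimal sketch in Python (illustrative only, not a full auth system; a production app would use a maintained library and store the hash parameters alongside the record):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a salted hash; never store the raw password."""
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

If the database leaks, attackers get salted hashes instead of credentials—the difference between an incident and a catastrophe.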
Market Failures: The Slow Bleed
These are trickier because they look like success at first, which is why they’re so dangerous. You launch, you get users, everyone’s excited, the press coverage is good, and then growth stalls. Or worse—you’re growing but every new customer loses you money, which means you’re essentially paying people to use your product while burning through investor cash.
Bird Scooters raised $275 million and hit a $2.5 billion valuation before filing for bankruptcy in 2023. The unit economics never worked—high costs for scooters that broke constantly, expensive labor for repositioning them, low margins on rides. COVID killed ridership during their growth phase. They over-expanded to 350+ cities, many unprofitable from day one. They faced regulatory backlash and over 100 injury lawsuits. The math didn’t work, but investor pressure to grow kept them expanding anyway.
A premortem might have caught this.
The exercise: Pick any startup that burned through funding fast and figure out what killed them. Think about unit economics—Customer Acquisition Cost (CAC) versus Lifetime Value (LTV)—along with pricing strategy, competition, market timing, customer acquisition channels, and whether the revenue model can actually sustain the business.
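The unit-economics check is arithmetic you can do on a napkin, or in a few lines of Python. This sketch uses hypothetical numbers and a deliberately simplified LTV formula (real models discount future cash flows and track churn by cohort):

```python
def ltv(avg_monthly_revenue: float, gross_margin: float, monthly_churn: float) -> float:
    """Simple lifetime value: margin-adjusted revenue over expected customer lifetime."""
    lifetime_months = 1 / monthly_churn
    return avg_monthly_revenue * gross_margin * lifetime_months

def ltv_cac_ratio(customer_ltv: float, cac: float) -> float:
    """A common rule of thumb wants this ratio around 3x or better."""
    return customer_ltv / cac

# Hypothetical scooter-style economics: $12/month per rider,
# 20% gross margin, 15% monthly churn, $40 to acquire a rider
customer_ltv = ltv(12.0, 0.20, 0.15)
ratio = ltv_cac_ratio(customer_ltv, cac=40.0)
print(f"LTV ${customer_ltv:.2f}, LTV:CAC {ratio:.1f}x")
```

With these placeholder numbers the ratio comes out far below 1x—every new customer destroys value, and growth just accelerates the burn.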
Ethical Failures: The Career Enders
These are the ones that keep me up at night, because they’re about your product working exactly as designed—and causing harm anyway. You optimized for the right metrics, you hit your growth targets, your engagement numbers look fantastic, and then someone points out that what you built is actually hurting people.
Cambridge Analytica didn’t just hurt Facebook’s feelings—it led to a $5 billion FTC fine, accelerated privacy laws like California’s CCPA, and permanently damaged trust with users and regulators. Facebook survived because it’s massive and had the resources to weather the storm. Your startup won’t.
The pattern is depressingly consistent: Build feature → users figure out how to abuse it → media pile-on → regulatory response → you’re toast. And the time between those steps keeps getting shorter.
Nextdoor built crime reporting, which seemed useful until people started using it to racially profile their neighbors. They had to completely rebuild the feature with friction and prompts, but the brand damage was done.
Robinhood gamified trading with confetti animations and made it easy to trade options, which made trading fun and accessible. Young traders lost massive amounts of money, and one died by suicide after the app displayed a misleading negative balance on an options trade. Congressional hearings followed, then lawsuits, and Robinhood ended up removing the confetti and other gamification features that made them special.
TikTok’s algorithm optimizes for engagement, and it turns out depression content and eating disorder content is incredibly engaging to vulnerable teenagers. The algorithm creates rabbit holes that mental health researchers are now studying. Several countries are considering bans.
None of these companies intended harm. All of them caused it anyway. That’s what makes ethical premortems so critical—you need to imagine how your well-intentioned features get weaponized or create unintended consequences before you ship them.
The exercise: Your fitness app is being blamed for contributing to eating disorders in teen girls. There’s a 50,000-signature petition demanding you shut down. Psychologists are quoted in news articles. Apple is threatening to remove you from the App Store. How did this happen?
Work backwards through your features: Was it the calorie tracking that became obsessive? The social comparison features like leaderboards and sharing? Achievement badges for weight loss? The AI coach giving dangerous advice? Community features that enabled pro-ana content? An algorithm that recommended increasingly extreme content? Targeting and marketing to teens without proper guardrails?
Your engagement metrics probably looked fantastic. Users were opening the app constantly, spending tons of time in it, coming back daily. The problem is what they were doing in there, and metrics alone would never tell you that.
Regulatory & Environmental: The Rule Changes
Laws change, and they usually change because you or someone like you did something regulators didn’t like. You build for today’s rules, the rules change, and suddenly you’re operating illegally or uneconomically.
Remember Bird? They also died from regulatory issues on top of their market problems. Cities banned them because scooters cluttered sidewalks and blocked wheelchair access, which is both an accessibility issue and a political problem. Paris banned e-scooter rentals entirely after residents revolted. Bird paid $600K in fines in their home city of Santa Monica. When they filed for bankruptcy, over 300 cities were listed as creditors.
The pattern repeats across industries: Airbnb faces bans in dozens of cities. California tried to reclassify gig workers as employees, threatening the entire business model. Countries keep banning or restricting crypto. Social media platforms face age verification laws, content moderation requirements, and data localization rules that make their global model increasingly difficult.
Environmental backlash is real too, and it’s not going away. Bitcoin mining uses more energy than Argentina. NFT minting got massive pushback for its carbon footprint. If your product is energy-intensive or depends on rare earth minerals extracted under questionable conditions, activists will eventually notice, and Gen Z actually cares about this stuff in a way previous generations didn’t.
How to Actually Do This
Give yourself 20 minutes and work through each category systematically. Don’t skip any of them just because they seem less relevant to your product—that’s exactly the category that will kill you.
Technical/Operational: How does your product break at scale?
Market/Business: Why can’t you acquire customers or make money?
Social/Ethical: How does your product cause harm or get weaponized?
Environmental/Regulatory: What laws change or what externalities catch up with you?
Identify 3-5 highest probability × highest impact failures. These are the ones that could actually kill your company, not the minor annoyances. For each one, write down ONE thing you could do in the next month to reduce the risk. Make it concrete and actionable, not “be more careful” or “monitor the situation.”
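The prioritization step can be sketched as a simple scoring pass. The failure modes and 1–5 scores below are placeholders to show the shape of the exercise; substitute your own:

```python
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    probability: int  # 1 (unlikely) to 5 (almost certain)
    impact: int       # 1 (annoyance) to 5 (company-killing)
    mitigation: str   # ONE concrete action for the next month

    @property
    def score(self) -> int:
        return self.probability * self.impact

modes = [
    FailureMode("Plain-text credential leak", 2, 5, "Adopt salted password hashing"),
    FailureMode("CAC exceeds LTV at scale", 4, 5, "Model unit economics per channel"),
    FailureMode("Server outage during launch", 3, 2, "Add a load test to CI"),
]

# Highest probability x impact first; keep the top 3-5
for mode in sorted(modes, key=lambda m: m.score, reverse=True):
    print(f"{mode.score:2d}  {mode.description}: {mode.mitigation}")
```

The point of writing it down this way is the mitigation field: a failure mode without a concrete next-month action is just worry, not planning.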
Don’t just list obvious stuff like “server might go down” or “competitors exist.” Push yourself. The best premortems surface the thing you haven’t been thinking about, the failure mode that seems unlikely until you really imagine it happening.
Prompts if you’re stuck:
Technical: “Our database has 10 million records and someone runs a bad query.” “Our ML model gets trained on biased data.” “AWS raises prices 300%.”
Market: “Meta/Google launches this next week.” “Our target users never pay for software.” “COVID happens again.”
Ethical: “Journalists write an exposé about harm we’re causing.” “Our users weaponize this to harass people.” “Our algorithm discriminates but we don’t know it yet.”
Regulatory: “What if the EU bans this?” “What if we need licenses we don’t have?” “What if data laws change?”
What This Actually Prevents
Premortems aren’t about becoming paralyzed by fear or killing your team’s momentum with endless what-if scenarios. They’re about informed optimism—you’re still going to launch, you’re still going to take risks, but you’re going to do it with your eyes open to the specific ways things could go wrong.
The best product teams make this a habit. They do premortems before major launches, before entering new markets, before adding risky features. It becomes part of how they work, not a special exercise they pull out occasionally.
When you pitch your product—to investors, to your boss, to your team—include a slide on risks and mitigations. Not because you’re being pessimistic, but because it shows you’ve thought through how things could fail and what you’re doing to prevent it. Smart money backs people who see around corners, who understand that most risks are manageable if you spot them early enough.
The Slides Are Yours
I’ve taught this enough times to have a decent slide deck. It’s yours if you want it.
The exercises work whether you’re teaching a class or running a team meeting. Thirty minutes of imagining failure can save you eighteen months of living it.