AI Broke Interviews
Interviewing has always been a big can of worms in the software industry. For years, big tech has gone with LeetCode-style questions mixed with a few behavioural and system design rounds. Before that, it was brainteasers. I still remember the "How Would You Move Mount Fuji" era. Tech has never really had good interviewing, but the question remains: how do you actually evaluate someone's ability to reason about data structures and algorithms without asking algorithmic questions? I don't know. People say engineers don't need DSA. Perhaps. Nonetheless, it's still taught in CS programs for a reason.
In real work, I’ve used data structure-y thinking maybe a handful of times, but I had more time to think. Literally days. Companies, on the other hand, need to make fast decisions. It’s not perfect. It was never perfect. But it worked well enough. Well, at least, I thought it did.
And then AI detonated the whole thing. Sure, people could technically cheat before. You could have friends feeding you hints or solving the problem with you, but even that was limited. You needed friends who could code. And if your friend was good enough to help you, chances are you weren't completely incompetent yourself. Cheating still filtered for a certain baseline. AI destroyed that filtration layer.
Everyone now has access to perfect code, perfect explanations, perfect system design diagrams, and even perfect behavioural answers. You don't need a network. You don't need experience. You just need a second monitor. Actually, I'm lying. You don't even need that. Check this out.
And we've seen the impact firsthand. Candidates read picture-perfect solutions out loud without understanding a line. Others paste flawless code but miss a comma the model inserted. Some tilt their head and silently read behavioural answers word for word. One person froze when asked something the AI didn't have an answer to: "So… what makes it interesting?"
Meanwhile, companies are panicking. We're hiring for many positions and we're facing this ourselves, from screening all the way through the interviews. I received four different CVs for four different positions from the same person, all legit-looking. I only noticed because the names sounded similar, and people can share a name; it was the identical email address that gave it away. A minor mistake on their part. Google recently announced a return to in-person interviews because too many candidates were using AI.
I don't blame anyone. AI is a tool. But interviews are supposed to measure your problem-solving capacity, not your ability to prompt an LLM. Honestly, you aren't even prompting if you're using these cheating tools.
And right now, the industry has no idea how to separate the two. So this post is really about that mess: how interviewing was already flawed, how AI blew it wide open, and what companies are now doing to rebuild a process that actually measures people again.

The Broken State of Technical Interviews
Technical interviews have been broken for so long that it almost feels intentional. Every few years the industry collectively looks at the mess, shrugs, and then continues using the same process with a slightly different coat of paint. You see posts here and there, some complaining about what a shit show this is, others defending it.
We did make progress. Remember, we started with brainteasers. "How many golf balls can you fit in a Boeing 747?" was a real question. We somehow assumed the ability to estimate the volume of imaginary spheres translated to architecting a distributed system. Then came the LeetCode era, which pretended to be more objective. Instead, it created an entire global training pipeline where people grind patterns like they're prepping for a standardized test. Memorize enough sliding windows and binary search variations and suddenly you're interview-ready, regardless of whether you can debug a production incident at 4 a.m.
Even if we hate these interviews, nobody has a better alternative that scales. Algorithmic questions survive because they compress time. They give companies a fast, somewhat standardized way to compare candidates. They’re flawed. They’re artificial. They don’t map cleanly to daily work. But they fit neatly into a 45-minute slot, and hiring managers need to make decisions. Quickly.
And yes. The pressure is artificial. But the job isn’t pressure-free either. So the industry tolerated this system. It wasn’t good, but it was predictable. It produced enough signal to keep the machine running. If you practiced, you could get in. If you didn’t, you probably wouldn’t. A flawed system, but a stable one. At least until AI showed up and knocked out the last remaining pillar holding it upright.
AI Collapsed Interviews
AI didn't just influence interviews. It detonated the very foundation they were built on, in one go. The old interview system may have been flawed, but it relied on one assumption: the candidate sitting in front of you was the person actually doing the thinking. That assumption is now gone.
Before AI, cheating had a ceiling. You needed another human, time, coordination, and a bit of luck. Most people probably didn't bother. And even when they did, the advantage wasn't overwhelming. Humans are slow. Humans make mistakes. Humans can't instantly produce optimal code. AI is different. AI gives anyone access to expert-level output on demand.
The shift was immediate. Candidates started producing suspiciously perfect solutions with no intermediate steps. Some delivered final code as if they were reading it from a teleprompter. Others stumbled the moment you nudged the problem off-script. Even behavioural answers started sounding engineered, polished in a way no human naturally speaks under pressure. I've never given a perfect answer to any interview question ever. Period.
That's the real issue. AI has made effortless perfection possible. The problem isn't just that people cheat; it's that the line between cheating and genuine skill has blurred to the point of meaninglessness. A strong candidate and a well-prompted model can look identical in a 45-minute remote call.
And once interviewers lose trust in what they’re seeing, the entire format collapses.
Remote interviewing depends on transparency and authenticity, two things AI quietly erases. That's why companies are retreating to physical rooms. Not because they're nostalgic, but because they're trying to recover the one thing remote interviews can no longer guarantee: a real signal from real human reasoning.
The New Interview Behaviours
I think the most striking thing about interviewing today is how candidates are using AI. You start seeing patterns, little behaviours that didn't exist a year ago. Interviewing used to reveal how someone thinks. Now it often reveals how someone is trying to hide how they think.
You notice candidates jumping straight to a final solution, bypassing the messy steps real engineers usually go through. Creating unnecessary variables. Deleting some wrong code. False starts. Half-formed thoughts. Forgetting edge cases. "Wait, let me check something" moments. Natural problem-solving has texture. AI-generated answers don't. Now we see many people arrive at fully formed solutions. How? They also deliver them with a kind of unnatural smoothness that feels more like a script than a thought process.
Then there's the pacing. A human pauses to think. AI-assisted candidates pause to receive a perfect answer. You can mostly feel the rhythm shift. Their eyes drift slightly. You think we don't see that, don't you? They repeat your question back to you, I suspect to buy time so the LLM can respond. Ask them to adjust the problem by 10%, and the fluency vanishes. Ask why they chose an approach, and you get circular reasoning or generic definitions that don't connect to the actual code they just wrote.
Behavioural interviews aren't immune either. Some candidates recite polished stories with perfect structure and perfect morals but zero personality. It's like listening to a TED Talk version of a person rather than the person. The more refined the answer, the less grounded it feels. My favourite example has been the guy who turned off the camera and answered without a single pause. No filler words. Nothing. You're not talking to a person anymore. You're talking to a persona that AI created on the fly.
Unfortunately, none of these alone proves anything. We just add them to our list of red flags, a type of red flag that didn't exist before. But together, they form a pattern interviewers can't ignore. Even then, we can't put our finger on it with 100% certainty. And it puts everyone in a strange position. Interviewers now need to run two separate evaluations at the same time:
- Can this person solve the problem?
- Is this person actually the one solving it?
That second one never used to exist. It's new, and it completely changes the dynamics of interviewing. The process becomes less about assessing skill and more about parsing authenticity. Humans can sense authenticity to a degree, but judging it is something we aren't particularly good at even under ideal conditions.
Returning to In-Person Interviews
The shift back to in-person interviews isn't sentimental. We already run one or two rounds specifically to fight this kind of cheating. Nobody suddenly decided whiteboards are magical again. Nobody missed corporate carpeting or those dry-erase markers that barely work. This isn't nostalgia. It's damage control. We simply don't want to hire people who aren't fit for the role.
Remote interviews were built on one assumption: the interviewer is evaluating you. Not your tools, not your friend, not your second monitor. You. Once AI blurred that line, the entire format lost integrity. We realised we were no longer interviewing candidates; we were interviewing the candidate plus whatever model they were quietly consulting. And while that might sound like a minor detail, it completely breaks the purpose of an interview: evaluating someone's reasoning in real time. So companies did the only thing left: take the process back into a controlled environment. Interviews can still be remote, but you need to be on company premises. We need this for a few reasons.
Real-time cognitive transparency
When someone explains an idea while drawing it, debating it, reworking it, you see the shape of their thinking. You see hesitations, corrections, the actual mechanics of problem-solving. AI can give perfect answers, but it can’t fake the messy human process that leads to them.
Constraints that force authenticity
A physical room eliminates the easy shortcuts: second monitors, silent prompting, overly polished behavioural answers being read from a screen. It’s harder to hide behind a persona when you’re standing at a board, adapting as you go.
A more realistic collaboration signal
Software is built through conversations, arguments, and trade-offs. In-person interviews mimic the energy of real teamwork much more closely. I'm not excited to see anyone's handwriting on a whiteboard, including mine. But we do need the flow of the dialogue.
Reduced noise in the pipeline
Remote interviewing made it cheap to interview massive numbers of people. That inflated standards, which unintentionally rewarded candidates who used AI to hit those inflated standards. Bringing interviews back in person naturally reduces volume and raises the quality of the signal.
Rebalancing the playing field
When the default becomes "everyone can produce flawless output", the only remaining metric is: can you think? That's what companies are trying to measure again. This isn't about punishing candidates. We need to recover a basic truth: you can't evaluate a person if you can't be sure you're actually talking to the person. In-person interviews aren't perfect, and they won't magically fix everything, but for now, they're the closest thing the industry has to a fair playing field.
AI-Resistant Interviewing
Now that interviews can't rely on remote honesty or polished online coding, what can we do? I think we need to come up with something that actually measures human reasoning again. Let's not call it AI-proof; maybe AI-resistant. We shouldn't go back to trapping candidates. We should force a signal that models can't generate on demand. Here's what that future might start to look like:
Explain This Code Instead of Write This Code
Instead of asking candidates to implement an algorithm, we can show them a non-trivial snippet and ask:
- What does this do?
- Why does this work?
- Where could it break?
- How would you improve it?
Yes, AI can write perfect code. AI can’t simulate your prior experience with bad codebases. Understanding is harder to fake than output.
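To make this concrete, here is the kind of snippet I have in mind, a small hypothetical Python example I made up for illustration rather than anything from a real question bank:

```python
def chunk(items, size):
    # Creates `size` references to the *same* iterator, so zip() pulls
    # consecutive elements round-robin from one shared stream.
    return list(zip(*[iter(items)] * size))

print(chunk(range(7), 3))  # [(0, 1, 2), (3, 4, 5)] -- the trailing 6 silently disappears
```

Three lines of code, yet "why does the remainder vanish, and how would you keep it?" quickly separates someone who understands iterators from someone reading a model's explanation back to you.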
Real-Time Architectural Debates
System design is shifting from drawing boxes to defending your choices. We can lean more on conversations like:
- Where is the state?
- What breaks under load?
- What’s the trade-off between consistency and latency here?
- If the business changes direction tomorrow, how do you adapt this?
I hope you can't hand these off to a model in the middle of an interview without the conversation falling apart.
Physical Whiteboards and Live Collaboration
Writing and thinking in physical space forces you to reveal your reasoning. The medium becomes a filter:
- No second monitor.
- No hidden assistants.
- No time to prompt.
Adaptive Questioning
Static question lists are dying. All of them have been solved, or can be, by AI. Maybe we need to do more, such as:
- Throwing in edge cases.
- Changing context.
- Removing a constraint.
- Adding a bad business requirement.
- Asking the candidate to think out loud through the adjustment.
Behavioral Questions Without Scripts
The classic "Tell me about a time…" is becoming less useful because AI can spit out a polished story. I've seen it. Maybe we should ask questions like:
- What was the last thing you broke at work?
- What’s something you changed your mind about recently?
- Tell me something you wish more engineers understood.
With questions like these, we can expect genuine reflection rather than a rehearsed script.
Genuine Debugging
A debugging session reveals more about an engineer than 100 questions:
- How they explore
- How they form hypotheses
- How they react to confusion
- How they narrow down the problem
- How they prioritize
AI can fix code. But it can’t convincingly emulate how you navigate uncertainty.
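As a sketch of what such a session could look like, here's a tiny made-up exercise with one deliberately planted bug; the point isn't the fix, it's watching how someone gets there:

```python
def add_tag(tag, tags=[]):
    # Planted bug for the exercise: the default list is created once, at
    # function definition time, so every call that omits `tags` appends to
    # and returns the same shared list.
    tags.append(tag)
    return tags

print(add_tag("urgent"))   # ['urgent']
print(add_tag("billing"))  # ['urgent', 'billing'] -- state leaks between calls
```

Give the candidate only the symptom ("tags from one request keep showing up on another") and watch whether they reproduce it, form a hypothesis, and verify it, or just wait for something to whisper them the answer.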
The Return of the Two-Way Interview
As trust erodes, companies finally remember something obvious. I know it's an employer's market, but candidates still need to evaluate companies too. Human conversation becomes a competitive advantage again:
- Can this team think clearly?
- Do they collaborate well?
- Do they communicate like adults?
People want to work with people who think.
The emerging theme
I don't necessarily think interviewing will become harder. I think we just need to focus on:
- More reasoning
- More interaction
- More adaptability
- More presence
- Less memorized nonsense
- Less polished AI monologue
In other words, interviewing might become more human again. Ironically because of AI.
The Ethical Dilemma
Now, it's easy to sit on the hiring side and say "don't cheat". But think for a second. It becomes much harder when you're the candidate staring at a system that feels designed to punish anyone who doesn't. If you're an honest candidate, you're not just competing against other engineers anymore. You're competing against:
- people using AI assist while interviewing
- people rehearsing AI-written behavioural stories
You might be brilliant, thoughtful, and competent. Yet you might still lose to someone who bought a shady interview tool. And that pressure builds. It pushes otherwise principled people to ask themselves:
- Am I being stupid for not using the tools everyone else is using?
- Is it even cheating if the job itself lets me use AI every day?
- Why should I handicap myself when the system isn’t fair to begin with?
Your job expects integrity, but the interview rewards optimisation. And yes, hiring teams want honesty. But the bar has been artificially inflated because AI-assisted candidates produce superhuman output. That means genuine candidates often feel forced to operate at a disadvantage just to maintain their ethics. It creates a messed-up paradox:
- If you cheat and get caught, you’re done.
- If you don’t cheat and underperform compared to AI-assisted peers, you’re also done.
Either way, you carry risk; only the flavour differs. And even if you win by cheating, the victory is hollow. Better believe it. You enter a role based on someone else's performance, not yours. That's a liability. Companies eventually notice. Teams eventually feel it. And you feel it most intensely when you're expected to deliver independently. You'll get PIP'd eventually.
The truth is, interviews have always been stressful. AI pushed that stress to the next level. And the industry has given candidates no clear guidance, no consistent rules, and no stable expectations.
That’s why these two questions matter more than ever:
- What should interviews actually measure?
- What does fair even mean in a world where everyone has a machine in their pocket that can outperform them?
What Interviews Should Actually Measure
If AI can outperform most humans on promptable tasks, then interviews need to stop pretending that flawless output equals competence, assuming they haven't already. Perfect answers aren't impressive anymore. They're suspicious. Our metric has to shift from what you produce to how you think. The question isn't "Can you solve this problem?" The question is "Can you solve this problem without a neural network doing the heavy lifting for you?" But more importantly: what does the job actually require?
Because companies often fixate on the wrong qualities. We care about speed, recall, and pattern recognition, when the real job is:
- diagnosing ambiguous issues
- navigating trade-offs
- working with dependencies
- understanding systems
- adapting to constraints
- communicating clearly under pressure
- working with limited information
- maintaining technical judgment over months and years
Those are the skills that differentiate a good engineer from an average one. Great from good. And conveniently, those are also the skills AI can’t fake convincingly in a live conversation.
So a future-proof interview process should measure:
1. Reasoning Under Uncertainty
Because engineering is rarely a perfect algorithmic exercise. It’s a series of half-informed decisions. Watching someone reason through not knowing is far more telling than watching them implement a binary search they memorised.
2. Ability to Explain Thinking
A model can output solutions, but it cannot narrate the experience behind the choices. Humans can. That narration is signal:
- what they’ve struggled with
- what they’ve learned
- where they’re strong
- where they’re blind
- how they approach complexity
3. Adaptability When Constraints Change
Real work is dynamic. A candidate who collapses when the problem shifts is someone who can't function in a real environment. AI-assisted answers crumble instantly under changed assumptions. Humans generally don't, at least not when given a hint.
4. Actual Engineering Judgment
Not pattern matching, not arbitrary optimisation, not textbook theory. It’s judgment. The ability to balance business needs, technical debt, scalability, trade-offs, team constraints, and the cost of being wrong. This is the real craft of engineering.
5. Communication That Isn’t Scripted
AI can produce clean stories. AI cannot reproduce the messy, nuanced, spontaneous way real humans communicate when they're thinking deeply. Interviews should measure how a candidate interacts. They should never be about polished stories.
6. A Sense of Ownership
Do they care? Do they take responsibility? Do they show the maturity to say, "I don't know, but here's how I'd find out"? People with ownership create value. People without it create incidents you don't want.
The theme is simple
If interviews only test for skills AI can replicate, they’re testing the wrong things. The future of interviewing is about raising fidelity. We want to measure the parts of engineering that remain uniquely human.
Back to the Future
Alright, what should the future of interviewing look like? I can somewhat confidently say that it won't be fully remote, although I can see a startup idea where you ship a camera to the candidate's room. Companies aren't going back to 2015, and they're not staying in 2025 either. We're heading toward a hybrid model, and I think it's the only structure that preserves realism and still scales. Remote interviews aren't dead. They're just no longer trusted as the primary signal. Here's what the industry is converging toward:
1. Lightweight Remote Screens
Remote rounds won't disappear, especially for building remote teams. They'll just serve a different purpose:
- basic technical screen
- basic behavioural look
- basic sanity check
Think of them as filtering noise; they won't determine competence. Then the final evaluation happens on-site, where candidates are forced into a mode AI can't easily assist:
- whiteboard collaboration
- debugging in front of someone
- architectural debate
- live problem-solving
- real conversation
The hard stuff moves back into the room.
2. Smaller Candidate Pools, Higher Signal
Remote interviewing lets companies interview absurd numbers of people. Unfortunately, mass layoffs mean there are even more people to interview. It felt efficient, but it created a dangerous side effect: artificially high difficulty. When you have a hundred candidates for one role, you raise the bar until only the top 1% survive. AI shattered that whole model. Companies now realise the volume game rewards:
- cheaters
- prompt optimisers
- test-takers over engineers
So the trend is shifting toward fewer candidates who get deeper attention. Quality over quantity. Human signal over machine signal.
3. Interviews That Prioritise Dialogue Over Performance
The high-fidelity interviews of the future won’t look like gladiator matches. They’ll look more like working sessions:
- Let’s debug this together.
- Walk me through how you’d approach this if the deadline was tomorrow.
- Let’s change the constraints and see where it takes us.
Less theatre. More thinking.
4. Tools Used Transparently, Not Secretly
AI won't be banned. Why would it be? It'll be integrated honestly:
- Here’s a problem. Use AI if you want, but walk me through why you trusted its output.
- Explain what the model got wrong and how you’d correct it.
This is where the industry will eventually settle. Because the future engineer isn’t the one who avoids AI. We should all use it. But we need transparency. Not a silent copilot shoved off-screen.
5. Interviews That Don’t Try to Outgrow AI
AI-proofing interviews is pointless. We all know models get stronger every few months. So instead of fighting the tech, interviews will evolve around it, shifting focus to things models can't impersonate cleanly:
- judgment
- trade-offs
- prioritisation
- lived experience
- collaborative thinking
- emotional maturity
- actual engineering intuition
Those will remain uniquely human for a long time. Maybe I'm kidding myself. I hope not.
The bottom line
We're moving from performance-based interviews to reality-based interviews. The future won't feel easier for candidates; it was never going to. Nonetheless, it will feel more honest. Companies want to see people think, not people prompt. Engineers want to show who they are, not compete with a model sitting on someone's second screen, or their first. If AI broke the old system, it might just force us into building a better one.
In Conclusion
For years, technical interviewing limped along. It wasn’t great, but it was predictable. Companies asked their LeetCode questions, candidates memorised their patterns, and everyone accepted the system as a necessary evil. It wasn’t fair, but it worked well enough to keep hiring moving.
AI ended that era almost overnight. It didn’t just increase cheating. It erased trust. It broke the basic assumption that the person answering was the person thinking. And once that assumption collapsed, the entire remote-interview infrastructure collapsed with it.
We’re now entering a different phase where interviews become more human, not less. Where reasoning matters more than recall. Where conversation matters more than performance. Where companies stop trying to test for things models can do better, and start testing for things only real engineers can do.
This transition won’t be smooth. It won’t be perfect. And it definitely won’t satisfy everyone. But for the first time in a long time, interviewing has a chance to evolve into something that actually measures the things that matter:
- judgment
- clarity
- ownership
- collaboration
- adaptability
- real problem-solving
Not pattern-matching. Not memorized scripts. Not who can prompt the fastest. AI broke the old system. Perhaps it needed breaking anyway. What comes next might finally be better: high-signal, human-centric interviews that reflect how engineering actually works. If we get this right, the future interview won't feel like a battle against a system. It'll feel like a conversation between people who want to build something together.