In the past few months, alongside our normal feature work, I’ve cut our CI test times by 70%, reduced API requests by 30%, and shipped features that had been stuck in "someday" territory for months. This didn’t happen because I suddenly became a better developer. It happened because AI let me explore and tinker with parts of our system I’d never dared touch before.
I work as a frontend developer at Sweden’s largest rail operator. Millions of monthly visitors, multiple teams, high stakes. It’s exciting work. But after joining, I ran into something I didn’t expect.
The Reality of Established Codebases
When I joined, the foundations were already laid. Smart, experienced colleagues had made the architectural decisions, chosen the tech stack, and established the patterns we all followed. State management, routing, data fetching, authentication flows - all set up. CI/CD pipelines were configured. The testing infrastructure was in place.
This wasn’t some pristine, perfectly engineered codebase. Far from it. Like any large system that’s been around for a while, it had technical debt, quirky decisions, and the occasional eyebrow-raising implementation. We’d recently gone through a major system overhaul under heavy deadline pressure. Some things were done thoughtfully. Others were done quickly. That’s just the reality of shipping software at scale.
But that’s not the interesting part.
The real issue: when you join a mature project, you don’t get to make the big decisions. You inherit outcomes, not the process that led to them. You follow conventions, but you rarely get to explore alternatives.
I could work within our state management setup, but I never evaluated why it was chosen or what trade-offs it involved. The CI pipeline existed. It mostly worked. Touching it felt risky - and unnecessary.
There was an unspoken rule, one most developers at established companies will recognize: if it’s not broken, don’t fix it. As senior developers left the project over the years, taking vital context with them, the rest of us became increasingly hesitant to touch anything foundational. Better to work around quirks than risk breaking something you didn’t fully understand.
On top of that, anything that wasn’t directly customer-facing rarely got high priority. Product quite reasonably prioritized visible features. Developer experience improvements? Infrastructure work? They’d happen eventually - but there was always something more urgent.
This is the hidden cost of mature codebases. You become effective at operating within a system - but you stop learning how it actually works. The foundations start to feel untouchable.
Enter AI: Lowering the Cost of Curiosity
I wasn’t new to AI coding tools. I’d tried them back in early 2023 and was genuinely excited. But the reality fell short. Context would get muddled, suggestions would fail in subtle ways, and I spent more time fixing AI-generated code than if I’d written it myself. The hype wore off quickly.
Then, in spring 2025, I tried Claude Code. And this time, something genuinely changed.
What I realized is that AI isn’t primarily about writing code faster. It’s about reducing the cost of exploration.
It gave me leverage to:
- Navigate unfamiliar parts of the codebase
- Ask basic or repetitive questions without friction
- Try ideas quickly and validate them before committing
Features we used to estimate at two or three sprints were suddenly doable in a couple of days - without cutting corners. The code still went through review. It still met our standards. But the friction of figuring things out was dramatically lower.
More importantly, that speed created breathing room - space to explore, experiment, and learn without putting delivery at risk. And that changed how I worked.
From Shipping Features to Learning the System
With the breathing room AI created, I started exploring areas that had always lived in the mental category of “nice to have - when there’s time.” In reality, there’s never time. Until suddenly, there was.
A few examples, all following the same pattern: identify something that felt intimidating or underexplored, experiment safely, verify the outcome, and learn along the way.
Frontend caching. I analyzed our API call patterns and implemented caching strategies that reduced backend requests by about 30%. This meant less load on our backend - making it more robust during heavy traffic days like ticket releases - and lower infrastructure costs. Before AI, this felt risky - too many unknowns, too many places things could go wrong. With AI, I could reason through the system, test assumptions quickly, and iterate until it worked.
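To give a flavor of the idea - this is a simplified sketch rather than our production code, and the cachedGet helper and 60-second TTL are purely illustrative - the core pattern is just memoizing identical GET requests for a short window so repeated calls never reach the backend:

```typescript
// Minimal sketch of request-level caching (illustrative, not production code):
// identical GET requests within a TTL are served from memory instead of
// hitting the backend again. The helper name and 60-second TTL are made up.

type CacheEntry = { data: unknown; expiresAt: number };

const cache = new Map<string, CacheEntry>();
const TTL_MS = 60_000; // hypothetical: how long a cached response stays fresh

export async function cachedGet<T>(url: string): Promise<T> {
  const now = Date.now();
  const hit = cache.get(url);

  // Serve from memory while the entry is still fresh.
  if (hit && hit.expiresAt > now) {
    return hit.data as T;
  }

  // Otherwise fetch, store, and return.
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status}`);
  }
  const data = (await response.json()) as T;
  cache.set(url, { data, expiresAt: now + TTL_MS });
  return data;
}
```

In practice, a data-fetching library’s built-in cache (request deduplication, stale-while-revalidate) does most of the heavy lifting; the win is simply that fewer identical calls ever reach the backend.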
Performance optimizations. I implemented lazy loading for heavier components, both with standard patterns and using requestIdleCallback, a browser API that lets you defer non-urgent work until the main thread is idle. I didn’t become a browser scheduling expert overnight. But I could try approaches, measure results, and ship meaningful improvements without weeks of upfront study.
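As a rough illustration - again a sketch rather than our exact code - deferring non-urgent work with requestIdleCallback looks something like this, with a setTimeout fallback for browsers that don’t support it:

```typescript
// Sketch of deferring non-urgent work until the main thread is idle.
// requestIdleCallback isn't available in every browser (notably Safari),
// so fall back to a short setTimeout. The imported module path below is a
// hypothetical stand-in for whatever heavy component you want to preload.

function whenIdle(task: () => void): void {
  if ('requestIdleCallback' in window) {
    // Run the task during idle time; the timeout guarantees it eventually
    // runs even if the browser never reports an idle period.
    window.requestIdleCallback(() => task(), { timeout: 2000 });
  } else {
    setTimeout(task, 200);
  }
}

// Usage: start loading a heavy, below-the-fold component's code
// without competing with the initial render.
whenIdle(() => {
  void import('./HeavyChart'); // hypothetical module path
});
```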
Testing infrastructure. Our integration test suite had over 850 tests, took more than 20 minutes to run, and was notoriously flaky. Rerunning pipelines multiple times was normal. I implemented test sharding, which cut execution time down to 5–7 minutes and almost completely eliminated flakiness. Previously, the CI configuration felt too critical - and too opaque - to touch. With AI, I could explore it incrementally and safely.
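Conceptually, sharding just means splitting the suite deterministically across parallel CI jobs so each job runs only its slice. Most runners support this out of the box (Playwright and Jest both have a --shard option); the sketch below only illustrates the partitioning idea and isn’t our actual pipeline configuration:

```typescript
// Illustrative sketch of the idea behind test sharding: assign each test
// file to one of N parallel CI jobs in a stable way, so every job runs
// only its subset of the suite. Real runners handle this for you.

import { createHash } from 'node:crypto';

function shardOf(file: string, totalShards: number): number {
  // Hash the file path so the assignment is stable across runs
  // and independent of file ordering.
  const digest = createHash('sha1').update(file).digest();
  return digest.readUInt32BE(0) % totalShards;
}

export function filesForShard(
  allTestFiles: string[],
  shardIndex: number,  // 0-based index of this CI job
  totalShards: number, // e.g. 4 parallel jobs
): string[] {
  return allTestFiles.filter((f) => shardOf(f, totalShards) === shardIndex);
}

// Example: CI job 2 of 4 runs only its quarter of the suite.
// filesForShard(testFiles, 2, 4)
```

Splitting across a handful of parallel jobs is what brings a 20-minute serial run down into the 5–7 minute range: the pipeline is only as slow as its slowest shard.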
“Someday” features. Like most teams, we had ideas that never even became tickets. Improvements you notice while working, but know won’t be prioritized. Things like expanding dialog and sheet views, or letting customers see intermediate stations on their journeys. With AI, I could build a proof of concept in an afternoon, show it to designers, incorporate feedback, and ship within a week. Features that would have lingered indefinitely suddenly became real.
None of this work was assigned to me. It happened because AI lowered the barrier to learning - and made curiosity feel safe again.
What Actually Became the Bottleneck
Here’s the broader insight I didn’t expect: writing code is no longer the slow part.
For years, we’ve treated development time as the primary constraint. Sprints are planned around implementation capacity. Features are scoped based on how long we think coding will take. Technical debt accumulates because “we don’t have time.”
AI changes that math.
I can now build a credible proof of concept for a feature in an afternoon. But getting that feature approved, aligned with design, reviewed by stakeholders, and scheduled into a roadmap? That still takes weeks.
The bottleneck has shifted to everything around the code:
- Decision-making and prioritization
- Design feedback loops
- Cross-team alignment
- Review and release processes
This isn’t a complaint - it’s an observation. The nature of development work is changing. The developers who thrive won’t just be the fastest typists or the deepest specialists. They’ll be the ones who can navigate the human and organizational side of software while using AI to remove technical friction.
An Invitation to Be Curious (and Cautious)
If you’re working in an established codebase and feel like there are parts of the system you’d never dare to touch, I get it. I was there. My suggestion is simple: use AI to lower the cost of trying.
AI doesn’t just help you move faster; it gives you permission to be curious again. Use it to prototype that caching layer you’ve been thinking about, take a swing at fixing those flaky tests, or build a proof of concept for a "nice to have" idea. You don’t need perfect understanding before you start - often it’s enough to try something, verify it works, and learn as you go.
However, curiosity is not a license for recklessness. None of this means AI is magic. It can absolutely produce subtle bugs, reinforce existing architectural mistakes, or give false confidence if used carelessly. While AI allows you to touch parts of the system you’ve been avoiding, it should never replace your judgment - it should amplify it.
The goal is to arrive at informed judgment faster, while still adhering to the fundamentals:
- Don’t experiment blindly with security-critical code.
- Rely heavily on tests, peer reviews, and teammate feedback.
- Verify everything rather than letting "fear of the unknown" turn into "blind trust."
In a field that changes as quickly as ours, the ability to safely explore the unknown is perhaps the most valuable skill of all. Use AI to bridge that gap, but keep your hands on the wheel.
If you have similar experiences, I’m curious to hear about them.