I’m Travis, a staff engineer with 12+ years building data pipelines at various companies. Six months ago, I started building Flywheel - a data pipeline platform for startups. I’m a solo founder, working evenings and weekends.
Progress has been faster than expected. Not because of some secret productivity hack, but because of a combination I didn’t expect: boring software fundamentals paired with AI-assisted development (specifically Claude from Anthropic).
Why Fundamentals Matter More With AI
When I started, I made a deliberate choice to invest in foundations before features:
- Infrastructure as Code (Terraform) - Every piece of GCP infrastructure is codified
- Event-Driven Architecture - Pub/Sub for all async operations, clean separation of concerns
- Service Layer Pattern - Handlers delegate to services, business logic is isolated and testable
- Consistent Conventions - Every domain follows the same structure
This felt slow at first. But here’s the thing: AI tools like Claude are force multipliers for clean codebases.
When your architecture is consistent, Claude can:
- Generate new endpoints that follow your existing patterns
- Write tests that match your testing conventions
- Refactor safely because tests catch regressions
- Understand context faster because the code is organized
In a messy codebase, AI suggestions are often wrong or inconsistent. In a clean codebase, they’re usually right.
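To make "consistent patterns" concrete, here’s roughly the shape I mean - a thin handler that delegates to a service, which owns the business logic and hands async work to Pub/Sub. This is a simplified sketch, not Flywheel’s actual code: the names (SourceService, create_source) are made up, and I’m assuming a Python backend with Google’s standard google-cloud-pubsub client purely for illustration.

```python
# Illustrative only: handler delegates to a service; the service owns business
# logic and publishes an event for async work. Names here are hypothetical.
from google.cloud import pubsub_v1


class SourceService:
    """Business logic lives here, isolated from HTTP concerns and easy to test."""

    def __init__(self, repo, publisher: pubsub_v1.PublisherClient, topic_path: str):
        self.repo = repo
        self.publisher = publisher
        self.topic_path = topic_path

    def create_source(self, name: str, config: dict) -> dict:
        source = self.repo.insert({"name": name, "config": config})
        # Anything slow (validation, first sync) happens via Pub/Sub, not inline.
        self.publisher.publish(
            self.topic_path, b"source.created", source_id=str(source["id"])
        )
        return source


def create_source_handler(request, service: SourceService):
    """Thin handler: parse input, delegate to the service, shape the response."""
    body = request.get_json()
    source = service.create_source(body["name"], body["config"])
    return {"id": source["id"]}, 201
```

The exact shape matters less than the fact that every domain follows it - that’s what gives Claude a template to copy instead of a judgment call to make.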
What I Built
Flywheel is a data pipeline platform designed for early-stage startups:
- Connect sources (databases, APIs, files)
- Transform and normalize data
- Export to warehouses (BigQuery, PostgreSQL, Domo, etc.)
- Real-time monitoring and scheduling
It’s the kind of infrastructure that typically takes a team 12-18 months. I built the core in 6 months, solo, while working a full-time job.
Current stats:
- 4,366 tests (2,471 backend + 1,895 frontend)
- Test suite runs in under a minute
- Merge to deployed: ~7 minutes with full build
On top of the unit tests, I’ve built a solid end-to-end test suite for the backend that exercises the full system. Now I’m working with Claude Code’s built-in Playwright agent to build out a frontend end-to-end suite. The goal: release quickly and confidently. Tests aren’t just about catching bugs - they’re what let me ship fast without second-guessing everything.
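For a sense of what those frontend end-to-end tests look like, here’s a minimal sketch using Playwright’s Python API. The URL, labels, and flow are invented for illustration - this isn’t lifted from Flywheel’s real suite.

```python
# Illustrative only: the shape of a frontend e2e test. Route, selectors, and
# pipeline name are hypothetical.
from playwright.sync_api import sync_playwright


def test_create_pipeline_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("http://localhost:3000/pipelines")
        page.get_by_role("button", name="New pipeline").click()
        page.get_by_label("Name").fill("s3-to-bigquery")
        page.get_by_role("button", name="Create").click()
        # The new pipeline should show up in the list without a full reload.
        assert page.get_by_text("s3-to-bigquery").is_visible()
        browser.close()
```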
I also lean heavily on Claude Code’s built-in agents like code-explorer (for understanding unfamiliar parts of the codebase), code-reviewer (catches issues before I commit), and code-architect (helps plan features that fit existing patterns). These aren’t magic - they work because the codebase is consistent enough for them to understand.
The Claude Workflow (What It Actually Looks Like)
I’m not using AI to replace thinking - I’m using it to break down complexity and maintain velocity.
1. Decompose the idea
I start with a high-level feature idea - like a visual flow graph for pipelines - and work with Claude to break it into manageable chunks.
Real example: A friend’s company needed to sync CSV files from S3 to Domo, with Flywheel handling the column name → ordinal mapping. Sounds simple, but it required building out: S3 source support, Domo destination support, CSV parsing, and ordinal column handling.
I worked with Claude to decompose this into a plan with clear chunks. Each chunk becomes its own focused task.
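For context on the ordinal piece: roughly, you read the CSV header once, map each column name to its position, and then emit rows in whatever order the destination expects. A toy sketch of that idea - not Flywheel’s actual implementation:

```python
# Illustrative only: map CSV header names to ordinal positions so rows can be
# reordered to match a destination schema.
import csv
import io


def rows_in_destination_order(csv_text: str, destination_columns: list[str]):
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)
    # column name -> ordinal (position in the source file)
    ordinals = {name: i for i, name in enumerate(header)}
    for row in reader:
        yield [row[ordinals[col]] for col in destination_columns]


# Example: the source file's column order differs from the destination's.
source = "email,signup_date,user_id\na@x.com,2024-01-02,42\n"
print(list(rows_in_destination_order(source, ["user_id", "email", "signup_date"])))
# [['42', 'a@x.com', '2024-01-02']]
```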
2. Right-size the work
Here’s something I learned the hard way: context windows matter. Go outside them and things get messy.
I use my experience to prioritize what to build first, then have a fresh agent break each chunk into what fits well in one context window. This keeps Claude focused and prevents the drift you get in long sessions.
3. Manual test, then write tests
I test manually first because I want to iterate quickly. Once I’m happy with how something works, then I have Claude write the tests to lock it in.
I also built a /check command - scripts that verify everything: tests passing, docs up to date, linting clean. It’s my safety net between chunks.
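The /check command itself is nothing fancy - think of a single script that runs every verification step and fails loudly if anything is off. Something in this spirit, with placeholder commands rather than Flywheel’s real ones:

```python
#!/usr/bin/env python3
# Illustrative only: a "check everything" script in the spirit of /check.
# The specific commands are placeholders, not Flywheel's actual setup.
import subprocess
import sys

CHECKS = [
    ("backend tests", ["pytest", "-q"]),
    ("lint", ["ruff", "check", "."]),
    ("frontend tests", ["npm", "test"]),
]


def main() -> int:
    for name, cmd in CHECKS:
        print(f"==> {name}")
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {name}")
            return 1
    print("All checks passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```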
4. Simplify at the end
Once a feature is complete and tests are green, I run a code-simplifier pass. Because we’ve built up tests along the way, refactoring is safe. This is where the clean codebase pays off - Claude can refactor confidently.
5. Refactor continuously
Every new feature that touches existing code is an opportunity to clean up. Small refactors. Better names. Clearer abstractions. It feels slow, but it’s what keeps the codebase "AI-friendly" over time. Lazy shortcuts compound into a mess that even AI can’t help you with.
6. Bugs happen (here’s how I handle critical ones)
Do I commit bugs? Absolutely. But when a critical bug escapes that never should have slipped through, I don’t let Claude just fix it. I make it write a test that reproduces the bug first. Then I verify the suggested fix manually.
This turns every escaped bug into a permanent regression test.
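Concretely, the pattern is: reproduce first, fix second. A hypothetical example of what one of those tests looks like - the bug, the function, and the import are invented to show the shape:

```python
# Illustrative only: a regression test written to reproduce an escaped bug
# before fixing it. The bug, module, and function name are hypothetical.
from flywheel.transforms import normalize_column_name  # hypothetical import


def test_normalize_column_name_handles_leading_digits():
    # Bug: column names starting with a digit were passed through unchanged
    # and rejected by the warehouse. This test failed before the fix and now
    # guards against the regression permanently.
    assert normalize_column_name("2024_revenue") == "_2024_revenue"
```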
What Didn’t Work
- Letting AI make architectural decisions - Every time I let Claude "figure out" the structure, I ended up refactoring later. I design, Claude implements.
- Long sessions without fresh context - Agent drift is real. Fresh context windows keep things clean.
- Over-engineering early - Even with AI help, simpler is still better.
Why This Might Matter to You
If you’re building something solo (or with a tiny team), you’ve probably wondered whether AI tools are actually useful or just hype. My take: they’re a multiplier, not a replacement. They multiply whatever you already have - good or bad.
If your codebase is inconsistent, AI gives you inconsistent suggestions. If you don’t know what good looks like, you can’t evaluate what it produces. But if you’ve got experience and a clean foundation, AI lets you move at a pace that wasn’t possible before.
The 12+ years of learning what works (and what doesn’t) in software is what makes Claude useful. Not the other way around.
Key Takeaways
- Invest in fundamentals before features - IaC, event-driven architecture, clean patterns pay dividends
- AI multiplies your existing skills - Experience means I know what good looks like. Claude helps me get there faster.
- Context windows are a feature, not a bug - Fresh agents for each chunk keep work focused
- Tests after manual verification - Confirm it works, then lock it in with tests
- Refactor continuously - Every feature is an opportunity to clean up. This keeps AI effective long-term.
- Bugs become regression tests - Every escaped bug makes the system stronger
What’s Next
Flywheel is in alpha - free to use while I build it out. If you’re an early-stage startup that needs data pipelines without building infrastructure from scratch: flywheeletl.io
Has anyone else found that AI tools work better (or worse) depending on code quality? I’m curious if others have experienced this "fundamentals + AI" multiplier effect.