Claude Code is only as good as the context you give it. I’ve been building out a custom setup with skills, hooks, and MCP servers that’s turned it into a genuine productivity multiplier for our Laravel codebase. Here’s how we teach Claude our patterns, auto-suggest relevant skills on every prompt, and persist context across sessions for multi-day features.
How We Use Claude Code at Geocodio
I’ve been working with Claude Code for a while now, and it’s become a core part of how I develop features at Geocodio. What started as experimenting with an AI coding assistant has evolved into a pretty sophisticated workflow. I wanted to share it on the off chance that it helps you become a more efficient developer, too.
The thing is, Claude Code is only as good as the context you give it. Out of the box it’s impressive, but the real magic happens when you tailor it to your codebase and workflow.
Let me walk you through how we’ve done that.
The CLAUDE.md Foundation: Your Project’s Instruction Manual for Claude
Every Claude Code project starts with a CLAUDE.md file in your project root. Think of it as your project’s instruction manual for Claude.
Ours covers the basics: project structure, common commands, and how to run tests. But it also encodes our development philosophy (which our CTO, Mathias Hansen, wrote about here).
# Development Workflow: Think First, Code Second
## Phase 1: Research & Analysis
**Understand Before Building:**
- Search codebase for similar implementations
- Check composer.json for dependencies
- Map integration points: pipeline stages, jobs, config, test structure
The key insight here is that Claude Code reads this file every time you start a session. So whatever you put in there becomes Claude’s baseline understanding of how your project works.
We use it to enforce patterns like "always search for similar code before implementing something new." This one instruction has been surprisingly impactful.
Consistency beats novelty. When Claude searches for similar implementations first, it discovers the patterns we’ve already established. How we structure API responses. How we handle errors. How we organize database queries. How we implement feature flags.
Following these patterns means new code looks like existing code. Code review goes faster because there are fewer surprises. Solutions work with our infrastructure instead of fighting it. And future you (six months from now) isn’t confused by some one-off implementation that doesn’t match anything else.
It prevents architectural drift. Without this instruction, Claude might implement user authentication differently in every feature. Or create five different patterns for handling API pagination. We want Claude to follow established patterns and make similar choices, not go off inventing new patterns unless we explicitly ask.
Sometimes the search reveals we’ve already solved this. On a recent project, Claude discovered our existing permission checking pattern before implementing a new admin feature. This meant it used the same Policy structure as the rest of the app instead of inventing a new authorization approach. Other times it shows us we solved something adjacent, and adapting that approach beats starting from scratch.
Think of it like telling a new developer "look at how we do X before implementing Y."
One thing I added that’s been surprisingly effective is what I call the "Independent Thought" section:
Going forward, avoid simply agreeing with my points or taking my
conclusions at face value. I want a real intellectual challenge,
not just affirmation.
Question my assumptions. What am I treating as true that might
be questionable?
This single instruction has dramatically improved the quality of our conversations and reduced the sycophancy that LLMs are notorious for. Claude now pushes back when I’m heading down a questionable path instead of just implementing whatever I ask for.
Skills: Teaching Claude Your Development Standards
Skills are where things get interesting. They’re essentially instruction sets you can invoke that give Claude deep expertise in a specific domain.
We’ve created four skills for our PHP/Laravel/Go codebase (a hybrid infrastructure I wrote about here):
- php-development - PHP 8+ type safety and coding standards
- laravel-development - Eloquent patterns, Form Requests, Policies
- phpunit-testing - Our integration-first testing philosophy
- development-planning - Structured workflow for complex features
Here’s a snippet from our PHP development skill:
## Type Safety (Mandatory)
### Always Declare Types
// ✅ CORRECT
public function processPayment(int $userId, float $amount): bool
{
    return true;
}

// ❌ WRONG - Missing types
public function processPayment($userId, $amount)
{
    return true;
}
### PHPDoc Usage - Types Only

**ONLY use PHPDoc for type information that cannot be expressed in PHP syntax:**

// ✅ Use PHPDoc for generics and array shapes
/**
 * @param array<string, mixed> $data
 * @return Collection<int, User>
 */
public function processUsers(array $data): Collection
{
}

// ❌ NEVER include descriptions in PHPDoc
/**
 * Process payment for user
 * @param int $userId The user's ID
 * @return bool Returns true if successful
 */
public function processPayment(int $userId): bool // NO!
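Mechanically, each skill is just a directory under .claude/skills/ containing a SKILL.md file whose frontmatter description tells Claude when to load it. A minimal sketch, assuming the standard SKILL.md layout (the wording below is illustrative, not our actual file):

```markdown
---
name: php-development
description: PHP 8+ type safety and coding standards. Use when writing or modifying PHP code in this repo.
---

## Type Safety (Mandatory)
...
```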
Automatic Skill Activation with Hooks
Here’s where we get a bit clever. Claude Code supports hooks: shell scripts that run at specific points in the workflow.
We built a hook that analyzes every prompt and suggests relevant skills:
// skill-activation-prompt.ts
const matchedSkills: MatchedSkill[] = [];

for (const [skillName, config] of Object.entries(rules.skills)) {
  const triggers = config.promptTriggers;

  // Keyword matching
  if (triggers.keywords?.some(kw => prompt.includes(kw.toLowerCase()))) {
    matchedSkills.push({ name: skillName, matchType: 'keyword', config });
  }

  // Intent pattern matching (regex)
  if (triggers.intentPatterns?.some(pattern =>
    new RegExp(pattern, 'i').test(prompt)
  )) {
    matchedSkills.push({ name: skillName, matchType: 'intent', config });
  }
}
When I type something like "write a test for the user service," the hook detects keywords like "test" and "service" and outputs:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🎯 SKILL ACTIVATION CHECK
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📚 RECOMMENDED SKILLS:
→ phpunit-testing
→ laravel-development
ACTION: Use Skill tool BEFORE responding
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
This ensures Claude always has the right context loaded before writing code. No more generic Laravel code when we have specific patterns we follow.
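For completeness, a hook like this gets registered in the project’s Claude Code settings so it runs on every prompt. Here’s a minimal sketch of the wiring, assuming a UserPromptSubmit hook and that the TypeScript runs under Bun (the file path is illustrative):

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "bun run .claude/hooks/skill-activation-prompt.ts"
          }
        ]
      }
    ]
  }
}
```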
Task Context: Memory Across Sessions
One limitation of Claude Code is that each session starts fresh. For multi-day features, you lose context between sessions.
We solved this with a simple file-based system.
When starting a large task, we create a directory under .claude/context/dev/active/[task-name]/ with three files:
- [task-name]-plan.md - The implementation plan
- [task-name]-context.md - Key files and decisions
- [task-name]-tasks.md - A checklist of work
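There’s nothing clever about the files themselves; they’re plain markdown that Claude writes and then reads back. A purely hypothetical tasks file might look like:

```markdown
# user-import — tasks

- [x] Add CSV upload endpoint with validation
- [x] Queue chunked import job
- [ ] Integration tests for malformed rows
- [ ] Progress reporting in the dashboard
```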
We even have a hook that fires when exiting plan mode. It reminds Claude to create these files:
# check-plan-context.sh
cat >&2 <<'EOF'
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📋 PLAN MODE EXIT REQUIREMENT
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
⚠️ This plan requires context files!
REQUIRED ACTIONS:
1. Create directory:
mkdir -p .claude/context/dev/active/[task-name]/
2. Create three files:
• [task-name]-plan.md
• [task-name]-context.md
• [task-name]-tasks.md
EOF
# Non-zero (blocking) exit so the reminder above is fed back to Claude
exit 2
When I start a new session, I just say "continue working on enterprise-clickhouse-reads" and Claude reads the context files to pick up right where we left off. Magic!
MCP Servers: Connecting to Your Tools
MCP (Model Context Protocol) servers extend what Claude can see and do.
We have three configured:
{
  "mcpServers": {
    "dash-laravel-boost": {
      "command": "docker-compose",
      "args": ["exec", "geocodio", "sh", "-c", "cd dash && php artisan boost:mcp"]
    },
    "api-laravel-boost": {
      "command": "docker-compose",
      "args": ["exec", "geocodio", "sh", "-c", "cd api && php artisan boost:mcp"]
    },
    "api-spx-mcp": {
      "command": "docker-compose",
      "args": ["exec", "geocodio", "sh", "-c", "cd api && php artisan spx-mcp:serve"]
    }
  }
}
Laravel Boost gives Claude access to our database schemas, routes, logs, and documentation search. Instead of me copying schema information, Claude can query it directly. The SPX profiler integration lets Claude analyze performance profiles when we’re optimizing code.
This is pretty powerful. Claude can run tinker commands, query the database (read-only), search Laravel docs for the exact version we’re running, and check the last error in our logs. It’s like giving Claude access to all the tools I use daily.
The Workflow in Practice
So what does this look like day-to-day? Here’s a typical feature implementation:
- I describe what I want to build
- The skill-activation hook suggests relevant skills
- Claude enters plan mode and researches the codebase
- We agree on a plan and Claude creates context files
- Claude implements step-by-step, running tests continuously
- If I end the session, I can resume it later by referencing the task name
The code Claude produces follows our patterns because it has context about our patterns. Tests are integration-first because the skill tells it to prefer integration tests. Files are formatted correctly because Pint runs automatically.
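That last part doesn’t have to be manual, either. One way to get automatic formatting is a PostToolUse hook that runs Pint on changed files whenever Claude edits or writes one; a sketch, assuming Pint is installed at the project root (your matcher and paths may differ):

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "./vendor/bin/pint --dirty"
          }
        ]
      }
    ]
  }
}
```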
The Reality: It’s Not "AI Does It All"
Here’s what the workflow doesn’t show: the iteration loop.
Claude produces good code following our patterns.
But "good" isn’t "perfect," and "following patterns" doesn’t mean "making the right architectural decisions." That’s where one’s judgment as a developer matters most.
And at Geocodio, nothing gets shipped without multiple rounds of human review from myself and other engineers.
The Review Process
After Claude implements something, I’m looking for:
Does this solve the actual problem? Sometimes Claude nails the technical implementation but misses the business logic nuance. Only a human engineer can catch that.
Are there edge cases we missed? Claude writes the happy path really well. But what happens when the API returns a 429? When the user has no permissions? When the dataset is empty? I’m constantly asking "what breaks this?"
Is this the right abstraction? Claude might implement something correctly but in a way that paints us into a corner later. It’s important to think three features ahead: will this pattern scale? Will future-me curse present-me for this design?
Does it pass the tests—and do we have the right tests? Claude runs tests continuously, but I’m reading the test output and thinking critically about it. A passing test suite doesn’t mean the tests are testing the right things. I often end up writing additional tests based on business context and domain expertise that Claude doesn’t have.
The Iteration Loop
A typical exchange looks like this:
Claude: implements feature
Me: "This works, but we’re going to hit N+1 queries when users have multiple addresses. Let’s eager load the relationship."

Claude: adds eager loading
Me: "Better. But we should cache this, because it’s read-heavy and changes rarely. Add a cache layer with a 1-hour TTL."

Claude: implements caching
Me: "Now we need cache invalidation when addresses update. Hook into the model observer we use for other cached relationships."

Claude: adds invalidation
Me: "Perfect. Now add a test that verifies cache invalidation actually works."
That’s four iterations on what started as "working code." Each iteration required me to:
- Spot performance issues before they hit production
- Know our caching patterns
- Remember we have model observers for this
- Think about test coverage for the invisible behavior
Claude can’t do that. Not because the AI isn’t smart enough, but because it doesn’t have the context I have from shipping features, debugging production issues, and working in this codebase over time. No matter how much context Claude Code has on your codebase, it will never have as much use-case context or engineering experience as you (which is equal parts a relief and a frustration).
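To make the loop concrete, here’s roughly where code like that could land after the third iteration (the class names and cache key are illustrative, not our actual code):

```php
<?php

use App\Models\Address;
use App\Models\User;
use Illuminate\Support\Collection;
use Illuminate\Support\Facades\Cache;

class UserAddressService
{
    public const CACHE_KEY = 'users:with-addresses';

    /**
     * @return Collection<int, User>
     */
    public function usersWithAddresses(): Collection
    {
        // Iteration 1: eager load addresses to avoid an N+1 query per user.
        // Iteration 2: read-heavy and rarely changing, so cache with a 1-hour TTL.
        return Cache::remember(self::CACHE_KEY, now()->addHour(), fn () =>
            User::with('addresses')->get()
        );
    }
}

// Iteration 3: invalidate the cache whenever an address changes,
// registered like our other observers, e.g. Address::observe(AddressObserver::class).
class AddressObserver
{
    public function saved(Address $address): void
    {
        Cache::forget(UserAddressService::CACHE_KEY);
    }

    public function deleted(Address $address): void
    {
        Cache::forget(UserAddressService::CACHE_KEY);
    }
}
```

The fourth iteration, the cache-invalidation test, is exactly the kind of thing that only gets written because a human remembered to ask for it.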
The Skill Is Knowing What’s Wrong
People who think AI will replace developers miss this: the hard part isn’t writing code that works, it’s writing code that keeps working.
Claude can write a function that passes tests. I’m the one who knows:
- This will cause problems at scale
- This conflicts with an upcoming feature
- This creates a maintenance burden
- This is harder to understand than it needs to be
- This will confuse future developers
That pattern recognition comes from years of making mistakes, fixing production bugs, and refactoring code I wish I’d written differently the first time.
Claude accelerates my implementation speed dramatically. But it doesn’t replace my judgment about what to implement, how to architect it, or when to push back on its suggestions.
Building out a workflow like this can make Claude an incredibly effective collaborator. But you’re still the senior developer in the room, making the calls that require experience and context that AI doesn’t have.
Despite these shortcomings, Claude Code has genuinely made me a more productive developer. The caveats are real, but so is the velocity.
What I’ve Learned
A few things I’ve picked up along the way:
Context is everything. The more specific guidance you give Claude, the better the output. Generic instructions produce generic code. Our skills are hundreds of lines long because details matter.
Claude isn’t a replacement for judgment. Good code isn’t the same as the right code. I still catch performance issues, missing edge cases, and architectural decisions that would bite us later. The iteration loop is where the real work happens.
Hooks are underrated. The automatic skill suggestion has been a game-changer. Without it, I’d forget to load the right skill half the time. Automate the things you’d otherwise forget.
Persistence matters. The task context system seemed like overkill at first, but being able to resume complex work across sessions is incredibly valuable. Claude picks up exactly where it left off.
If you’re using Claude Code and haven’t customized it yet, start with a solid CLAUDE.md. Document your patterns, your preferences, your workflow. Then consider skills for your most common coding domains. The investment pays off quickly.
I’m pretty stoked on how this has evolved. Now excuse me while I go have Claude implement the next feature on my list.