This is the first post in a series about tightening the loop and improving the output when programming with AI code assist. This year has seen remarkable improvement in the capabilities of AI code-assist tools, but that improvement is often missed by users who don’t know how to get the best out of them. I hope to share some of my experiences and techniques in this series of posts as I, too, continue to learn ways to extract value from these emerging tools.
Ignoring quality, not paying off technical debt, and shipping code the moment it works without considering maintenance have contributed to the big ball of mud since well before AI code-assist tools came around. Now, with vibe coding, we can find ourselves arriving at similar states of unmaintainable code even faster. As an insatiable developer, I want the speed and ease of vibe coding, but I also want to push back against the entropy that comes with it. Today’s models and code-assist tools cannot avoid the big ball of mud unguided, so I’ve been investigating how to steer the vibe towards more maintainable code.
My go-to for code assist is Claude Code with Opus 4.5, so I’ll use terminology that relates to these tools, but the concepts here potentially apply to all of them.
## Enforcing determinism
Our prompts are instructions combined with existing code and forwarded to the LLM to produce new code. They should be thought of as suggestions, not guarantees. How likely the LLM is to follow these suggestions often depends on how relevant and concise the context is. In my experience, prompts that define a change against specific files are more likely to produce the desired outcome. Vague prompts that leave too much to interpretation, and do not narrow down the files to change, often mean more thinking, more file reading, more data in the context window, and more randomness in the output. Context engineering can reduce the likelihood of our suggestions being ignored, but even the smallest CLAUDE.md is still often overlooked, so we need to find ways to enforce our expectations on the results.
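To illustrate, compare a vague prompt with one scoped to specific files (the file names here are hypothetical):

```text
Vague:    Clean up the error handling.

Specific: In src/api/client.ts, wrap the fetch call in a retry with
          backoff, and log failures using the existing logger in
          src/log.ts. Do not modify any other files.
```

The second prompt narrows the files to read and change, leaving less to interpretation and less randomness in the output.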
One way is to have Claude Code pass the control of execution to us. Consider the following slash command:
```markdown
---
description: Commit only relevant files from the session
---

Review the git status and create a commit with only the files that are relevant to the current session. Write a clear, concise commit message that describes what changed and why. Do not reference Claude or any AI assistance in the commit message.

Use a message that is:

- 40 characters or less
- Does not reference Claude
```
When I first wrote this command, I found that Sonnet 4.5 would sometimes write longer messages or include the Claude co-author email, requiring me to reset the commit and try again. Rather than perform this check manually, let’s write a script to do it:
```ts
import { execSync } from "node:child_process";

const MAX_MESSAGE_LENGTH = 40;

export function commit(message: string): void {
  if (message.toLowerCase().includes("claude")) {
    console.error("Error: Commit message must not reference Claude");
    process.exit(1);
  }

  if (message.length > MAX_MESSAGE_LENGTH) {
    console.error(
      `Error: Commit message must be ${MAX_MESSAGE_LENGTH} characters or less (current: ${message.length})`
    );
    process.exit(1);
  }

  try {
    execSync(`git commit -m "${message.replace(/"/g, '\\"')}"`, {
      stdio: "inherit",
    });
    process.exit(0);
  } catch (_error) {
    process.exit(1);
  }
}
```
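If the same rules are needed in more places, the checks can be factored into a pure helper that is unit-testable without invoking git. A sketch of that refactor, where `validateMessage` is a hypothetical helper and not part of the tool above:

```typescript
const MAX_MESSAGE_LENGTH = 40;

// Hypothetical helper: returns an error string when the message breaks a
// rule, or null when it is valid. Keeping validation pure makes it easy to
// unit-test separately from the git call.
export function validateMessage(message: string): string | null {
  if (message.toLowerCase().includes("claude")) {
    return "Commit message must not reference Claude";
  }
  if (message.length > MAX_MESSAGE_LENGTH) {
    return `Commit message must be ${MAX_MESSAGE_LENGTH} characters or less (current: ${message.length})`;
  }
  return null;
}
```

The `commit` function could then call `validateMessage` and exit with an error only when it returns a non-null result.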
Now we update the slash command to call our script instead of using git directly. In my case, I put this command and others in a CLI tool I wrote so I can share the commands across projects. The updated slash command references this CLI tool:
```markdown
Review the git status and create a commit with only the files that are relevant to the current session. Write a clear, concise commit message that describes what changed and why. Do not reference Claude or any AI assistance in the commit message.

Use `assist commit` with a message that is:

- 40 characters or less
- Does not reference Claude

Instead of using git commit directly, use: `assist commit "your message"`
```
Now, whenever a commit message is too long or contains a forbidden word, the script exits with an error. Claude Code detects the failure and automatically retries with a corrected message - no manual intervention required.
## Reaping the rewards
On its own, this is a trivial example of using our own code to enforce rules that, as mere prompts, we might otherwise have seen broken. However, I immediately began to miss this command in other projects, thanks to both its versatility and the reduced cognitive load it provides over many commits a day. I use this command in the following ways:

- calling it once I’ve reviewed that a change is correct, after which I immediately call `/clear`
- calling it in another `claude` instance that made changes to different files. So far it’s very good at only committing the files that were changed in its own session.
- calling it in a new session where there are already changes, including directly from the command line via `claude /commit`. In this case it usually commits all files, but sometimes it is able to see related changes and commits only those files, with a second call to `/commit` committing the rest.
I write code across a number of projects and a number of development machines. Iterating on the natural-language prompts and the deterministic scripts, and keeping the latest version of both across all projects and machines, led me to put them in this CLI tool. Indeed, most of the commits on that project are the result of this commit command. In my next post, I’ll share how I’m using the verify command of this tool to further steer the vibe towards more maintainable code.