Jan 01, 2026
Pretty much every technology has had its good parts and its bad parts. Vibe coding may be new to the game, but it shares more than a few fundamental characteristics with others that came before it. From an AI skeptic turned cautiously-optimistic vibe-coding engineer, here are a few specific engineering tasks that can be accelerated using AI.
LLM-assisted programming (as with all other things AI) is clearly polarizing, but whether you love it or hate it, you won’t be able to ignore it for long. After having been a skeptic of automatically-generated code for a while, I begrudgingly gave it a try about a year ago. There’s a lot of how-to discussion online about vibe coding for non-engineers, but not much for engineers. (“Vibe engineering”?)
But… hear me out
You’ll find a lot of threads on social media about managers forcing engineers to use LLMs. That tends to have the opposite of the intended effect: nobody likes being dictated to, especially about their own areas of expertise, and especially by people with less perceived expertise in those exact areas.
The output of an LLM may not be perfect, but think of it as taking public transit to a faraway destination instead of walking all the way. You get on the bus, you end up in the general vicinity of where you wanted to go, then you get off the bus and walk the last few steps yourself.
If you don’t walk that last bit, you won’t end up exactly where you wanted to go. But if you insist on walking all the way, it’ll take you a lot longer to get there.
Conversely, if you keep taking one wrong bus after another, you’ll get completely lost. Don’t just vibe-code without checking everything the LLM generates.
At the end of the day, treat it as the tool it is. Use it for what it does well. And skip using it where it doesn’t add value.
This article is an attempt to catalog my experiences of what I found it to be actually good at. It’s also a path to slowly getting used to LLM-driven engineering, one small step at a time.
This is authored from the perspective of an engineer who knows how to code and is mostly using an LLM to speed things up. If that describes you, great! If you are not familiar with coding or are just beginning to learn, I strongly encourage you to spend the time to understand the language and framework of your choice. Because in the long run, that will get you much farther than an LLM ever can.
My setup
Just so you can contextualize this, here’s my setup. The precise set of tools I used is not critical; the broader discussion applies well no matter what tools you’re using.
For personal projects, I’ve been using GitHub Copilot with Gemini 3 Pro as well as Claude Sonnet/Opus 4.5 in VS Code and in JetBrains IDEs (mostly Android Studio, but also IntelliJ IDEA). Since most of my projects are solo side projects, there’s no code review involved.
At work, I’ve used various internal tools that all rely on models from the Gemini family and integrate with Google’s internal code authoring & code review tools.
Start small
If you’re just getting started, it might feel overwhelming to trust your full workflow to an LLM. The best approach is to start small, with low-risk, low-impact assistance, before you move on to more hands-off Agent Mode tasks.
Generate commit messages
I’ve found automating commit messages to be an excellent way to get your feet wet. You’re still writing all the code, but by letting the LLM draft the commit messages, you can see for yourself how well it understands what you’re trying to do (or not). That may (or may not) eventually convince you to let go and let it drive deeper, more involved tasks on its own.
An unexpected change in my behavior: after delegating commit messages to an LLM, I found myself creating smaller commits. Previously, I often updated multiple dependencies in one go, grouped into a single commit (or a few). Now I typically create one commit per dependency, and the LLM-generated commit message includes exactly what changed and the version each dependency was updated to. Doing this manually would not have been the best use of my time, but with LLMs, the effort falls below some reasonable threshold, so it has become worthwhile.
(Why not dependabot? Because I like to test each change before committing it, even for minor semver updates.)
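The workflow above is easy to wire up yourself. Here’s a minimal sketch in Python; `ask_llm()` is a hypothetical stand-in for whatever model integration you use (Copilot, a CLI tool, an internal service), so only the diff-gathering and prompt-building parts are concrete:

```python
import subprocess

def staged_diff() -> str:
    """Return the diff of the currently staged changes."""
    return subprocess.run(
        ["git", "diff", "--staged"],
        capture_output=True, text=True, check=True,
    ).stdout

def build_commit_prompt(diff: str) -> str:
    """Assemble the prompt sent to the model. Keeping it explicit makes
    the style (subject line, dependency versions, etc.) easy to tweak."""
    return (
        "Write a concise git commit message (subject line plus a short "
        "body) for the following staged diff. Mention each updated "
        "dependency and its new version explicitly.\n\n" + diff
    )

# ask_llm() is hypothetical and not defined here:
# message = ask_llm(build_commit_prompt(staged_diff()))
# subprocess.run(["git", "commit", "-m", message], check=True)
```

Hooking something like this into git’s `prepare-commit-msg` hook means you start every commit with a draft to edit rather than a blank message.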
Automate boilerplate code
One clear win I found for vibe coding was generating boilerplate: config files, scaffolding, that kind of thing. An LLM is, by its very nature, good at pattern recognition and pattern repetition, so boilerplate is something it can produce quickly.
(Of course if you can eliminate boilerplate entirely, that should be the first thing you do.)
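To make “boilerplate” concrete: this is the flavor of scaffolding I now describe in a sentence rather than type out by hand. All the names here are illustrative; the point is that it’s repetitive, pattern-heavy, and trivial to specify:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Typical CLI scaffolding: tedious to write, easy to describe.
    parser = argparse.ArgumentParser(description="Resize images in a folder.")
    parser.add_argument("input_dir", help="directory containing the images")
    parser.add_argument("-o", "--output-dir", default="out",
                        help="where resized images are written")
    parser.add_argument("--width", type=int, default=800,
                        help="target width in pixels")
    parser.add_argument("-v", "--verbose", action="store_true")
    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```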
Generate extremely simple code snippets
When I know the exact logic I want for a particular function, vibe coding is like a bicycle for the mind. It’s the kind of thing you’d say to an intern, who comes back a few hours later with code that does what you wanted. Maybe not in the exact way you’d have done it, but that’s where you come in.
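For example, a spec as tight as “merge overlapping half-open intervals and return them sorted” is exactly the kind of function I’d hand off. A sketch of what typically comes back:

```python
def merge_intervals(intervals):
    """Merge overlapping [start, end) intervals; return them sorted."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1]:
            # Overlaps (or touches) the previous interval: extend it.
            merged[-1][1] = max(merged[-1][1], end)
        else:
            merged.append([start, end])
    return [tuple(iv) for iv in merged]
```

Maybe not how you’d have written it yourself, but close enough that reviewing and adjusting it is cheap.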
Automated migration from deprecated code
I begrudgingly deal with Gradle for Android development, but the tool is generally a nightmare, with constantly changing APIs. Every time I update to the latest Gradle version, a bunch of things are deprecated and a bunch of things stop working. Now I just paste the error message into an LLM and have it fix things. It generally works.
Extract structured data from screenshots
I have also found a good use case for personal (non-programming) tasks.
While Tesseract & others are high-quality tools of the pre-LLM era, LLMs have more than caught up, and are able to extract information in a far more structured way than plain OCR-based tools ever did.
You can paste a screenshot into an LLM as context, ask it to extract information, act on it, and paste the processed results into your source code or a data file. All of this is much quicker than massaging the same data by hand over multiple steps.
Getting screenshots of (HTML) tables on the Web into a spreadsheet is made much easier by passing them through an LLM. You would think that pre-LLM tools would also be good at this kind of structured yet mechanical copy/paste and format conversion, but LLMs are miles ahead.
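The receiving end of that workflow is trivial to automate, too. Assuming you’ve asked the model to return the table as JSON (the `rows` key here is an illustrative convention, not anything standard), turning its reply into tab-separated text that pastes cleanly into a spreadsheet is a few lines:

```python
import csv
import io
import json

def llm_table_to_tsv(llm_reply: str) -> str:
    """Convert a reply of the form {"rows": [[...], ...]} into
    tab-separated text suitable for pasting into a spreadsheet."""
    rows = json.loads(llm_reply)["rows"]
    buf = io.StringIO()
    csv.writer(buf, delimiter="\t", lineterminator="\n").writerows(rows)
    return buf.getvalue()
```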
Continue on to the next part
(This got too long for a single post, so I split it up into two.)