*22 Dec, 2025*
I currently work on a managed database-as-a-service platform for MySQL. Our codebase is extensive, with thousands of lines spread across many files covering the functionality of a typical database service: CRUD workflows for Azure resources, backup/restore logic, high availability (primary/standby setups), telemetry, and so on.
Over the last six months, I’ve integrated GitHub Copilot into my VS Code workflow. This post is about my experience with AI coding assistants in a complex infrastructure environment and how they can help with daily tasks. I do not intend to make claims about whether AI is the "best thing since sliced bread" or if it will replace us.
Managing Build Dependencies
I’m usually the last person to ride a "pop culture wave," so I ignored Copilot at first. One day, I was working on integrating a library that had a dependency on the latest version of an Azure authentication library. Updating the version caused a cascade of compilation errors across the repository. Since I wanted to save my brain cells for other tasks, I opened the Copilot chat, selected the Claude Sonnet agent, and provided the errors.
In "Agent Mode," the assistant analyzed the build files and implemented a fix: it isolated the updated dependency to my component while retaining the older version for the rest of the repository. The code compiled successfully.
Lessons:
- Pattern-based solving: Managing nested dependencies is tedious for humans but pattern-based for AI coding assistants. If you can provide the error output, the assistant can often find the path of least resistance faster than a manual search.
Writing Tests
A few weeks later, I needed to add unit tests for a new feature. I didn’t want to hunt for the correct directory or boilerplate, so I asked the assistant to generate them.
The assistant went into a hallucination loop. It modified build files and added dozens of tests, but the build failed. It tried to self-correct and failed again. I eventually stepped in, identified that it was placing files in the wrong directory, and guided it: "Can you add the file in src/abc and look at existing tests?" Once I provided that constraint, it wrote the tests correctly. I asked it to generate a code coverage report and saw that the tests had 86% coverage, which met the PR merge policy.
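The corrected output looked roughly like the sketch below. The path, class, and type names are hypothetical stand-ins (our real tests live under the src/abc directory I pointed it to), but it shows the shape that finally worked: mirror an existing test in the same directory and keep each case small.

```kotlin
// src/abc/BackupScheduleTest.kt  (hypothetical path and names)
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

// Minimal stand-in type so the sketch is self-contained.
data class BackupSchedule(val retentionDays: Int = 7)

class BackupScheduleTest {

    @Test
    fun `retention defaults to seven days when unset`() {
        // Mirrors the structure of the existing tests in the same directory:
        // arrange, act, assert, nothing else.
        val schedule = BackupSchedule()
        assertEquals(7, schedule.retentionDays)
    }

    @Test
    fun `explicit retention is preserved`() {
        val schedule = BackupSchedule(retentionDays = 35)
        assertEquals(35, schedule.retentionDays)
    }
}
```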
End-to-end (E2E) tests were similar. Initially, the assistant generated hundreds of lines of complex scenarios that failed due to "Resource Not Found" exceptions. I had to reset the prompt: "Write one simple test first that just creates the resource and verifies it exists." Once I had a working test, I manually duplicated the boilerplate to cover the other scenarios.
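The "one simple test first" I ended up with looked something like this sketch. The client interface is a hypothetical stand-in for whatever management client an E2E harness provides; here it is backed by an in-memory fake so the snippet runs on its own, whereas the real test would call the service.

```kotlin
import org.junit.jupiter.api.Assertions.assertTrue
import org.junit.jupiter.api.Test

// Hypothetical management-client surface, just enough for the sketch.
interface ServerClient {
    fun createServer(name: String)
    fun serverExists(name: String): Boolean
    fun deleteServer(name: String)
}

// In-memory fake so the sketch compiles and runs standalone; a real E2E test
// would use the service's actual client here.
class FakeServerClient : ServerClient {
    private val servers = mutableSetOf<String>()
    override fun createServer(name: String) { servers += name }
    override fun serverExists(name: String) = name in servers
    override fun deleteServer(name: String) { servers -= name }
}

class CreateServerSmokeTest {

    private val client: ServerClient = FakeServerClient()

    @Test
    fun `create server and verify it exists`() {
        val name = "e2e-smoke-${System.currentTimeMillis()}"
        client.createServer(name)                     // provision the resource
        try {
            assertTrue(client.serverExists(name))     // verify before adding more scenarios
        } finally {
            client.deleteServer(name)                 // always clean up
        }
    }
}
```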
Lessons:
- Start small: AI coding assistants often try to do too much at once. Start with a "Hello World" test and iterate.
- Stay in the driver’s seat: They can confidently lie or iterate into a corner. Provide a reference file and keep your instructions granular.
Explaining Code
Our High Availability (HA) logic is complex. The code is scattered across many files and components for health detection, failovers, and primary/standby promotions.
I asked the assistant to explain the orchestration workflow. It provided a structured breakdown: an introduction, a list of key components, and their interactions, along with code pointers for each function. I asked follow-up questions like, "How long until an unhealthy instance is detected?" and it added those explanations to the answer. Finally, I asked it to save the answer to a text file so I could share it with the team.
Lessons:
- Knowledge Retrieval: Code explanation is perhaps the strongest use case for AI coding assistants. Ask specific follow-up questions and have the answers folded into the explanation to further refine the output.
Refactoring and Documentation
I am a firm believer that "one method should do one thing." When I encountered a monolithic class with a single method containing all the functionality, I asked the assistant: "Can you refactor this method for better readability? The order and functionality must remain the same."
It performed almost perfectly, breaking the method into smaller, logical methods. I noticed a pattern, though: it tended to favor certain styles, like nesting methods or grouping all return conditions.
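To make the pattern concrete, here is a hypothetical before/after in the spirit of what it produced; the names, SKUs, and domain are invented for the sketch. The order of operations and the behaviour stay the same, only the structure changes.

```kotlin
// Hypothetical names throughout; the point is the shape of the refactor.
data class CreateRequest(val name: String, val skuName: String)

sealed class Result {
    data class Invalid(val reason: String) : Result()
    data class Success(val serverId: String) : Result()
}

private val supportedSkus = setOf("Standard_B1ms", "Standard_D2ds_v4")

// Before: one method validates, provisions, and records telemetry in sequence.
fun handleCreateRequest(request: CreateRequest): Result {
    if (request.name.isBlank()) return Result.Invalid("name is required")
    if (request.skuName !in supportedSkus) return Result.Invalid("unsupported SKU")
    val serverId = "server-${request.name}"           // stand-in for provisioning
    println("telemetry: server_created $serverId")    // stand-in for telemetry
    return Result.Success(serverId)
}

// After: the same steps, in the same order, each in a single-purpose method.
fun handleCreateRequestRefactored(request: CreateRequest): Result {
    validate(request)?.let { return it }
    val serverId = provision(request)
    recordCreation(serverId)
    return Result.Success(serverId)
}

private fun validate(request: CreateRequest): Result.Invalid? = when {
    request.name.isBlank() -> Result.Invalid("name is required")
    request.skuName !in supportedSkus -> Result.Invalid("unsupported SKU")
    else -> null
}

private fun provision(request: CreateRequest): String = "server-${request.name}"

private fun recordCreation(serverId: String) {
    println("telemetry: server_created $serverId")
}
```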
On another occasion, I wanted to improve the documentation for a new hire. I asked: "We have different guides for different components in this directory. Can you add a guide for a new hire that should be the entry point that references all these other guides?"
It did a marvelous job of writing a guide with an intro, steps, and estimated time for each step while referencing the other guides.
Lessons:
- Refactoring Philosophy: Refactoring is a task these assistants handle remarkably well. You might need to advise them on the philosophy you want them to follow while refactoring.
- Documentation is Home Ground: Documentation is where AI coding assistants thrive. Always get assistance while creating, improving, and maintaining docs.
Final Thoughts
I’ve come to view AI coding assistants as the "first responder" for my technical questions. Previously, I would have reached out to a colleague or the original code author immediately. Now, I consult the assistant first.
While it doesn’t eliminate the need for human expertise, it has significantly reduced the "interrupt cost" for my team. I still write the main feature code myself and only ask the assistant to improve the code or to write tests.