Having spent a considerable chunk of my life (15+ years) in software engineering, I can tell you the professional landscape was, for a long time, defined by a predictable constancy. You knew the rules, the rhythms, and the limits.
Then came the new millennium’s equivalent of 3D television: AI. Initially, it was mostly a buzzword, slapped onto product offerings to add an undeserved gravitas that amounted to very little in practice. That phase was, thankfully, short-lived. Our reality changed, and for engineers, the ground continues to shift beneath our feet.
My tiered adoption, which started a couple of years ago, followed a familiar path. The initial move was simple: replacing the endless, soul-crushing scouring of Google and Stack Overflow with a more direct query to an AI chat.
As any engineer who’s taken that first tentative step can attest, it is far from flawless. I’ve yet to meet a peer who hasn’t received the infamous, completely unwarranted “You’re absolutely right” response, often immediately following a confident presentation of pure, grade-A nonsense. That’s when you’re elbow-deep in the chat-quality equivalent of the :thisisfine: meme: AI hallucinations and “dumb” context, all wrapped up in a blanket of unnervingly absolute certainty that is an open invitation to technical debt.
But credit where it’s due, even in those early, chaotic days, AI was a useful shortcut. It drastically reduced the time spent on the inane and often fruitless internet spelunking required to solve some obscure, legacy-code problem. That was my AI origin story.
I soon progressed to what I’d call the Goldilocks Zone of AI usage: leveraging it to efficiently debug, explain unfamiliar logic, or generate proofs-of-concept. Feeding it documentation and a repo, then asking it to analyze or augment, generally worked remarkably well.
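For the curious, here’s a minimal sketch of what that “feed it a repo” step can look like in practice: a small script that bundles documentation and source files into a single blob of context for pasting into a chat, along with your question. The paths, extensions, and size cap are hypothetical; every project draws those lines differently.

```python
# Rough sketch: collect docs and source files into one text blob you can
# paste into an AI chat. Paths, extensions, and the size cap are illustrative.
from pathlib import Path

REPO = Path(".")                      # root of the repo you want analyzed
EXTENSIONS = {".py", ".md", ".toml"}  # file types worth including
MAX_CHARS = 100_000                   # crude guard against blowing the context window

def build_context(repo: Path) -> str:
    """Concatenate relevant files, each prefixed with its path."""
    chunks = []
    total = 0
    for path in sorted(repo.rglob("*")):
        if path.suffix not in EXTENSIONS or not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        if total + len(text) > MAX_CHARS:
            break  # stop before the prompt gets unmanageably large
        chunks.append(f"### {path}\n{text}")
        total += len(text)
    return "\n\n".join(chunks)

if __name__ == "__main__":
    prompt = (
        "Here is my repository. Explain what the payment module does "
        "and suggest a proof-of-concept for adding retry logic.\n\n"
        + build_context(REPO)
    )
    Path("prompt.txt").write_text(prompt)
    print(f"Wrote {len(prompt)} characters to prompt.txt")
```

Crude, yes. But a one-off script like this was often the difference between the model reasoning about my actual code and the model confidently inventing a codebase I didn’t have.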
The boost to my productivity and my capacity to contribute was undeniable. It became a daily accelerator. Crucially, though, it wasn’t writing everything for me. My hands were still on the keys. It simply lowered the barrier to entry for new languages and tooling by a frankly ridiculous amount, allowing me to build upon what it generated. In terms of efficiency and efficacy, this period was the closest I’ve ever gotten to a ‘chef’s kiss’ in a sprint cycle.
And that brings us to today, January 2026. If you’ve stuck with me this far, you deserve a medal and a quiet place to sit down, because the next evolution is here: Agent-driven AI.
Tools like Claude Code, Codex, and Copilot are no longer novelties; they are front-and-center, with Claude being the current golden child. At LiveLink, we’ve been exploring these for a while, and last week I took the plunge, tasking Claude Code with building a greenfield solution. A generous test, as AI tends to excel where there’s no brownfield baggage to slow it down.
Allowing the agent to take the proverbial wheel in VSCode is, in all honesty, intimidating. Impressive, certainly, but fundamentally intimidating. And this is with relatively basic, single-agent usage. Transition to orchestrating multiple, cooperative agents on structured pieces of work, and that’s where the mind-blowing (and wallet-draining) potential lies.
It still isn’t perfect. As others have noted, the pendulum swing between “smart” and “dumb” context is a constant source of amusement and frustration. I appreciate that Claude Code allows for interjection, the ability to steer the agent back on course rather than just letting it YOLO the entire thing and ending up with a 404 in production.
Debugging is faster, feedback cycles are tighter, yet even here, you run into the occasional, inefficient “Groundhog Day” loop where the agent iterates over the same flawed approach.
Social media is, of course, littered with doomsday prophecies about how AI has rendered engineers redundant. That we’re months away from being entirely replaced. I’d be lying if I said I didn’t have a little existential crisis myself when the noise became too loud.
But look at it logically: it’s just a tool. It is not a replacement for the Engineer’s role. It’s an accelerator. The critical thinking, the architectural oversight, the systems experience, the product consideration, and the expertise. All these human components still hold tremendous and meaningful value, because without them, these tools are simply going to churn out high-velocity slop.
I don’t see this as the end of engineering. I don’t think it will render us irrelevant. But it has, and will continue to, fundamentally change the way we work.
And this (finally, you might say) brings us to the kicker. The quiet compromise that no one seems to be talking about:
Agent-driven AI took the fun out of engineering.
I am more productive than ever before. I can tackle years’ worth of accumulated side-project ideas that never saw the light of day because of the time/effort/knowledge barrier. The world is my digital oyster, and that is truly awe-inspiring.
But in this world of agent-driven development, my hands are no longer typing out the solutions. I’ve been promoted to a pseudo-role of Architect and Perpetual Code Reviewer. It’s one big, never-ending pull request. And reviewing, for many of us, is one of the least enjoyable parts of this whole glorious mess.
That might be the reality in the near future. Thankfully, we aren’t completely there yet, which gives us time to come to terms with this shift. Our roles are safe, but the core function of the job is transforming.
Perhaps there’ll be a niche carved out for “Artisanal Hand-Typed Logic” in the future, with AI code being the equivalent of highly processed food. Unlikely, but an engineer can dream.
Compromises are rarely enjoyable. They simply are. But the quiet compromises that no one mentions, like the trade-off between pure velocity and the simple satisfaction of solving a problem with your own hands, are the ones that bother me the most. We’ve been so concerned about whether the robots would take our jobs that we didn’t stop to consider what the reality would look like if they didn’t: how the work itself would change.
It’s not all negative. We adjust. We evolve. We adapt. Just like the tooling we use and the tech behind it. It just hits a little closer to home this time around…
I can’t say I’m fully on board with the idea. Not yet.
Are you?