This is a submission for the Google AI Agents Writing Challenge: Learning Reflections
I found this writing challenge the day before the deadline. One question: what can you actually learn about AI agents in two days?
To save time, I used NotebookLM for the first time and fed the course materials into it. I didn’t fully grasp everything. I didn’t finish the hands-on labs. Some ideas are still vague. Yet in just two days, my understanding of AI agents and the future role of engineers became clearer.
This post isn’t a technical deep dive. It’s a reflection on what I learned and what I’m still thinking about.
What Even Is an AI Agent?
Before this course, I couldn’t clearly explain what an AI agent actually is. I used to think agents were basically smarter chatbots. My mental image was vague: a powerful model, some clever prompts, a bunch of files, and glue code holding it all together. The AI field has been moving so fast that I’ve felt overwhelmed trying to keep up with what’s new versus what’s actually different.
What clicked for me was seeing the agent framed as a system, not just a model. You need a model for reasoning, tools for acting, and orchestration logic for deciding when and how to use them. The model is the brain, but without hands (tools) and a plan (orchestration), it’s just thinking out loud.
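To make that framing concrete, here is a minimal sketch of the loop I now picture. Everything in it (the `model.decide` call, the tool set, the step limit) is invented for illustration, not taken from the course materials:

```python
# A minimal agent loop. The model, its decide() method, and the tools
# are all hypothetical; the structure (reason, act, observe) is the point.

def run_agent(task: str, model, tools: dict, max_steps: int = 5) -> str:
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        # Reason: the model looks at the task and everything so far.
        decision = model.decide(history, tool_names=list(tools))

        # Orchestrate: decide whether to answer or to act.
        if decision.is_final_answer:
            return decision.text

        # Act: call the chosen tool and feed the result back in.
        result = tools[decision.tool_name](**decision.arguments)
        history.append(f"{decision.tool_name} -> {result}")

    return "Step limit reached without a final answer."
```

Take away any one of the three pieces and it stops being an agent: no model and nothing reasons, no tools and nothing acts, no loop and nothing decides.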
Context and Contracts
One concept that stayed with me was “mise en place”: the cooking principle of preparing your ingredients before you start. Instead of endlessly tweaking prompts, the course emphasized asking different questions. What information do we give the agent? When do we give it? What do we intentionally leave out? The quality of an agent’s output depends more on the context humans provide than on clever wording.
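A rough sketch of what that preparation might look like in code, with every name here made up; the point is that the real work happens before the model ever runs:

```python
# “Mise en place” for an agent: prepare the context deliberately
# instead of endlessly tweaking the prompt. All names here are
# hypothetical; retriever stands in for whatever search you use.

def build_context(task: str, user_goal: str, documents: list, retriever) -> str:
    # What do we give the agent? Only material relevant to this task.
    relevant = retriever.top_k(task, documents, k=3)

    # What do we leave out? Raw history, unrelated files, private fields.
    return "\n\n".join([
        f"User goal: {user_goal}",
        "Relevant material:\n" + "\n".join(d.summary for d in relevant),
        f"Task: {task}",
    ])
```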
The Model Context Protocol (MCP) section reinforced this. Standardizing how agents interact with tools sounds dry at first, but tools are how agents act. Someone still has to define the contracts, schemas, and boundaries.
Just like we needed REST to make APIs interoperable, we need protocols like MCP to make agent-tool interactions predictable and safe. Execution can be automated. The rules governing it cannot. We’re responsible for both the input and the outcome.
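For illustration, a tool contract in the spirit of MCP might look something like this. The fields follow the general shape of MCP tool definitions (a name, a description, a JSON Schema for inputs), but this is a hand-written sketch, not the actual protocol types:

```python
# A tool contract in the spirit of MCP: the schema is the boundary.
# This is a hand-written sketch, not the actual protocol types.

search_orders_tool = {
    "name": "search_orders",
    "description": "Look up a customer's recent orders by email address.",
    "inputSchema": {  # JSON Schema: the contract the agent must honor
        "type": "object",
        "properties": {
            "email": {"type": "string", "format": "email"},
            "limit": {"type": "integer", "minimum": 1, "maximum": 20},
        },
        "required": ["email"],
    },
}

# Someone still has to decide these boundaries: which fields exist,
# which values are legal, and what the tool is allowed to touch.
```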
Trust, Quality, and the Role of Engineers
I have worked in roles where quality, clarity, and trust matter, whether that is in software, documentation, or education. Seeing those same concerns reappear in agent systems made the topic feel less abstract and more personal.
The most memorable part of the discussion was about agent quality and AgentOps. Traditional software testing assumes the same input always gives the same output; AI agents don’t work that way, so you can’t test an agent’s judgment with a simple pass or fail. What stuck with me was the idea that “the trajectory is the truth”: we have to evaluate not only the final answer but also the steps the agent took, the tools it chose, and how it handled errors. That has completely changed how I view trusting AI systems. It’s easy to build a quick demo agent, but building one that people can truly rely on is much harder.
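Here is a toy version of what evaluating a trajectory might look like. The step format and the two checks are my own invention, not the course’s AgentOps tooling:

```python
# Evaluating a trajectory instead of a single output. The step format
# and the checks below are invented for illustration.

def evaluate_trajectory(steps: list[dict]) -> dict:
    allowed_tools = {"search_docs", "summarize"}
    report = {"used_allowed_tools": True, "recovered_from_errors": True}

    for step in steps:
        # Did the agent stay inside its tool boundaries?
        if step["tool"] not in allowed_tools:
            report["used_allowed_tools"] = False
        # When a tool call failed, did the agent retry or adapt
        # instead of silently passing the error along?
        if step.get("error") and not step.get("retried"):
            report["recovered_from_errors"] = False

    return report

# A pass/fail check on the final answer alone would miss both questions.
steps = [
    {"tool": "search_docs", "error": None},
    {"tool": "summarize", "error": "timeout", "retried": True},
]
print(evaluate_trajectory(steps))
# {'used_allowed_tools': True, 'recovered_from_errors': True}
```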
This made me think about my own role. Technology keeps evolving, but the fundamentals stay the same: understanding architecture, ensuring quality, and maintaining stability. If anything, there’s more to do now, not less. I can’t compete with AI on raw intelligence, and I won’t outperform every developer either. But that’s not the point. The point is to learn how to use AI effectively, treating it as both a valuable tool and a partner.
Final Thought
For a course I joined with 48 hours left, I came away with new questions, several project ideas, and a clearer sense of where human judgment still matters. That feels like a meaningful outcome.
If AI agents are becoming better at doing, then maybe engineers are becoming more responsible for deciding what should be done, under what constraints, and with what safeguards. I don’t think this role is smaller. If anything, it feels heavier and more interesting.
I’m excited to start building my own ideas with AI agents.