NotebookLM has quietly become one of my most-used tools for learning. I use it to break down long documentation, understand unfamiliar concepts, and explore topics without jumping between tabs or losing context. It helps me think through things, not just look them up. Over time, I started wondering what would happen if I used the same approach on something deeply familiar: my own code. Could it help me reason better, or learn something new? That curiosity led me to try an unusual experiment: giving NotebookLM full access to a codebase and treating it like a human teammate. The results were more interesting than I had expected.
Knowing the code, but not knowing the code
I wrote this, but why did I write it?
This is a challenge almost every coder faces. I know this codebase. I wrote it myself. I remember the features, the structure, and most of the decisions that went into it. And yet, every time I come back to it after a few weeks, I feel that familiar friction. I know where things live, but not always why they’re there. I remember what the code does, but not every assumption it depends on.
That’s where many long-running projects end up. The code feels familiar, but it no longer feels fresh. Even small changes start taking more time than they should. I often have to open multiple files just to follow the basic flow of my old projects. Sometimes I pause before changing a function, not because it’s hard to understand, but because I’m unsure how far its impact might reach.
That’s what pushed me to try this experiment. I wasn’t looking to replace my thinking or let an AI make decisions for me. I just wanted a faster way to get back into the code, remember why things were written a certain way, and understand the impact of changes without mentally reloading the entire project each time. To test this properly, I created a small Python project and fed the whole codebase into NotebookLM. I wanted to see how it feels to have the full context available on demand, and whether it could actually reduce that re-learning gap and make working with my own code smoother again.
The experiment to feed my codebase into NotebookLM
Giving the AI full context, not just snippets
For this experiment, I didn’t want to use a half-baked example or paste random code blocks. I wanted something that felt real. So I created a small but complete order processing system in Python with validation logic, pricing, inventory handling, utilities, and basic tests. Nothing fancy, but realistic enough to behave like an actual project.
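To give a sense of what that looked like, here’s a condensed sketch of the core flow. The names are illustrative rather than the project verbatim, and the real version split these pieces across separate modules:

```python
# Condensed, illustrative sketch of the order flow; hypothetical names,
# not the project verbatim.

INVENTORY = {"widget": 10, "gadget": 3}    # in-memory stock
PRICES = {"widget": 4.99, "gadget": 12.50}

def validate_order(order: dict) -> None:
    """Reject unknown items and non-positive quantities."""
    for item, qty in order.items():
        if item not in PRICES:
            raise ValueError(f"unknown item: {item}")
        if qty <= 0:
            raise ValueError(f"invalid quantity for {item}: {qty}")

def price_order(order: dict) -> float:
    """Total price; assumes the order was validated first."""
    return sum(PRICES[item] * qty for item, qty in order.items())

def reserve_stock(order: dict) -> None:
    """Decrement shared stock, failing if there isn't enough."""
    for item, qty in order.items():
        if INVENTORY.get(item, 0) < qty:
            raise RuntimeError(f"insufficient stock for {item}")
        INVENTORY[item] -= qty

def process_order(order: dict) -> float:
    validate_order(order)
    reserve_stock(order)
    return price_order(order)

if __name__ == "__main__":
    print(process_order({"widget": 2}))  # 9.98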
Once the project was ready, I fed the entire codebase into NotebookLM. I uploaded all the .py files as plain text files, along with the README and a simple folder structure overview. The idea was to give NotebookLM the same context a junior developer would get on day one: the code, the structure, and a bit of documentation.
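The folder structure overview itself was nothing more than a plain-text tree along these lines (file names are illustrative, not the exact project):

order-system/
├── README.md
├── orders.py        # order processing entry point
├── validation.py    # input and order checks
├── pricing.py       # price calculation
├── inventory.py     # stock tracking and reservation
├── utils.py         # shared helpers
└── tests/
    └── test_orders.py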
I didn’t explain anything manually or add special prompts upfront. I just uploaded everything and let it read the project end-to-end. From that point on, every question I asked was based on the assumption that it already knew the code, because it actually did.
I call it my best junior developer
Not replacing me, just thinking alongside me
Once the entire codebase was inside NotebookLM, I started treating it like a junior developer who had just joined the project. The questions I asked were the same ones I’d ask a new team member. I’d start with things like, “Pretend you just joined this project. What’s your understanding of the system?” The response included a thoughtful onboarding summary, a walkthrough of the core workflow, and a high-level system overview.
I also tried to use it the way I’d use a junior dev during debugging. I’d say, “Inventory errors are showing up inconsistently. Based on the code, where could this be happening?” Instead of guessing, it walked through the inventory logic, highlighted shared state, and pointed out where things could break.
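In my simplified project, the kind of pattern it pointed at looked roughly like this. This is a hypothetical condensation, not the exact code, but it captures the shared-state issue it described:

```python
# Hypothetical simplification of the pattern NotebookLM flagged:
# shared stock is decremented before later steps that can still fail,
# and nothing rolls the reservation back, so whether an error shows up
# depends on which orders happened to run earlier.

INVENTORY = {"widget": 5}

def process(item: str, qty: int, price: float) -> float:
    if INVENTORY.get(item, 0) < qty:
        raise RuntimeError(f"insufficient stock for {item}")
    INVENTORY[item] -= qty             # shared state mutated early
    if price <= 0:
        raise ValueError("bad price")  # failure here leaks the reservation
    return price * qty
```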
I also asked architectural questions: whether validation, pricing, and inventory were too tightly coupled, and where logic could be extracted safely. It didn’t rush into refactoring; it explained risks first. Even when I asked about missing test cases or edge scenarios, the answers stayed grounded in the actual code. That’s when it clicked for me. NotebookLM wasn’t making decisions for me. It was helping me think, exactly like a good junior developer would.
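To make the coupling question concrete, here’s a minimal sketch of one direction such an extraction could take, again with hypothetical names rather than NotebookLM’s actual output:

```python
# Sketch of the extraction idea: inventory becomes an explicit
# dependency instead of module-level state, so pricing and validation
# can be tested and changed independently. Hypothetical names.

class Inventory:
    def __init__(self, stock: dict[str, int]) -> None:
        self._stock = dict(stock)

    def reserve(self, item: str, qty: int) -> None:
        if self._stock.get(item, 0) < qty:
            raise RuntimeError(f"insufficient stock for {item}")
        self._stock[item] -= qty

def process_order(order: dict[str, int], inventory: Inventory) -> None:
    for item, qty in order.items():
        inventory.reserve(item, qty)
```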
It’s not perfect, but it’s definitely better than I expected
Useful within limits, not a silver bullet
This was very much an experiment, and a manual one at that. I uploaded the code myself, and any changes in the project would need to be reflected in NotebookLM again. That’s clearly not practical for large, fast-moving codebases where things change every day. It’s also not something you’d want to do for private or confidential projects, where sharing code externally isn’t an option.
I went into this as a shot in the dark, without expecting much. But for a small project, the experience was surprisingly decent. Once the code was in, NotebookLM stayed consistent, helpful, and grounded in the actual logic. It felt stable enough to reason with, even if it wasn’t perfect.
Where this really makes sense is with smaller, older projects: side projects, paused work, or codebases that don’t change often. In those cases, feeding the full context once and using it to regain understanding can save a lot of time. It’s not a replacement for good practices, but as an experiment, it worked better than I expected.
Productivity without turning off your thinking
This whole experiment wasn’t about finding a perfect workflow or changing how I code. I simply wanted to see if having the full context available on demand could make working with my own code feel easier.
What it really did was reduce friction. I spent less time reopening old files and more time actually thinking through changes. Instead of re-learning the project every time, I could focus on improving it. NotebookLM helped me reconnect with logic I already knew but had slowly forgotten.
This isn’t about depending on AI or letting it make decisions for you. It’s about using it as a support tool. For smaller, older, or stable projects, it can act like a helpful second brain. Based on this experience, I’ll definitely run more experiments in this direction. Used thoughtfully, this kind of setup supports your thinking rather than replacing it, and that makes it worth exploring further.