Picture this: it’s Thanksgiving week. In the kitchen, I’m basting a turkey. At the kitchen table, I’m helping my 5-year-old draft a high-stakes letter to Santa. And on my laptop? I’m deploying a multi-agent AI system to Google Cloud’s Vertex AI.
If you saw me in the grocery store, you’d see a stay-at-home mom wrestling a 30-pound dog food bag and managing the holiday chaos. You probably wouldn’t guess that I also hold a PhD in Computer Science and an MS in Bioinformatics.
I also just finished the 5-Day AI Agents Intensive Course with Google and Kaggle, and I want to share how I went from "mom mode" to deploying clinical-grade reasoning engines, and why the "gap" in my resume was actually my biggest asset.
Before I get into the story, here is what I built:
- **Framework:** Google Agent Development Kit (ADK)
- **Infrastructure:** Google Cloud Vertex AI
- **Language:** Python
- **Architecture:** Multi-Agent System
In the tech world, there is a pervasive fear that if you step away, you become obsolete. I’ve had the privilege of focusing exclusively on my family recently, but I learned something crucial: you don’t stop being a scientist just because you leave the lab.
I didn’t press pause on my scientific mind; I expanded my dataset. While I stayed current with tech trends, I gained something academia couldn’t give me: deep, ethnographic exposure to real-world stories. I listened to countless women navigate the healthcare system.
I realized that this wasn’t "time off"; it was field research. I took this course not to "catch up," but to get the tooling I needed to solve the problems I had observed.
My capstone project, LUCIA (Language Understanding for Clinical Insight & Analysis), wasn’t born out of frustration. It was born out of a specific scientific observation. Researching women’s health (specifically postpartum, menopause and longevity), I noticed a pattern: women’s physiological symptoms are often complex and mimic mental health conditions like anxiety. Clinicians are brilliant, but they are time-poor. They often lack the tools to quickly parse these nuanced biological signals. I didn’t want to blame the system; I wanted to enrich it. I wanted to build a "scientific second chair" for doctors, patients and researchers.
Initially, I thought I needed one giant model to do everything through comprehensive prompts. But the course taught me the power of separation of concerns. This was my "Aha!" moment. I architected LUCIA as a dual-engine system because a single agent can’t effectively be both an empathetic listener and a cold, hard logician at the same time.
The system processes the patient’s subjective narrative through four distinct stages to create a structured clinical asset.
**Ingest & Map: The Digital Scribe**
Agent 1 (`symptom_mapper`): Acting as a scribe for overloaded clinicians, this agent ingests the user’s narrative and translates emotional history into a structured Review of Systems (ROS).
Action: It updates the `symptomMapping` state in memory (e.g., mapping "brain fog" to neurological clusters), allowing the doctor to skip data entry and focus on diagnosis.
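To make the mechanics concrete, here is a framework-free sketch of what the `symptom_mapper` step produces. In LUCIA the mapping is done by an LLM agent; the keyword table below is a hypothetical, deterministic stand-in that just illustrates how free-text phrases land in ROS clusters and how the shared `symptomMapping` state gets updated.

```python
# Illustrative stand-in for the symptom_mapper agent's output step.
# ROS_CLUSTERS is a toy lookup table, not a clinical vocabulary.
ROS_CLUSTERS = {
    "brain fog": "neurological",
    "heart racing": "cardiovascular",
    "night sweats": "endocrine",
}

def map_symptoms(narrative: str, state: dict) -> dict:
    """File recognized phrases under ROS clusters in the shared state."""
    mapping = state.setdefault("symptomMapping", {})
    for phrase, cluster in ROS_CLUSTERS.items():
        if phrase in narrative.lower():
            mapping.setdefault(cluster, []).append(phrase)
    return state

state = {}
map_symptoms("Lately I have brain fog and night sweats.", state)
```

The real agent replaces the lookup table with model reasoning, but the state contract is the same: downstream agents only ever see the structured `symptomMapping`, never the raw narrative parsing.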
**Audit: Clinical Decision Support**
Agent 2 (`bias_analyzer`): Acting as a non-judgmental "second opinion," this agent audits the narrative for cognitive traps like premature closure or attribute substitution.
Tool (`get_bias_implications`): The agent queries the AXIOM Engine, a validated dictionary of bias implications, to ground its insights in external, controlled facts rather than LLM weights. The AXIOM Engine concept aims to repair the foundation of medical knowledge. While simulated for this capstone, the architectural vision is to bridge research gaps by scanning PubMed via MCP to flag systemic bias in the literature and by collecting patient narratives to fill medical "data voids" with real-world evidence.
Action: It asynchronously updates the `biasAwareness` state, framing potential biases not as errors but as diagnostic pivot points.
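The grounding trick here is simple to sketch: the tool is a lookup into a controlled dictionary, so the agent can only report implications that exist in the validated source. The entries below are illustrative placeholders for the simulated AXIOM Engine, not real clinical content.

```python
# Sketch of the get_bias_implications tool: a lookup into a small,
# validated dictionary (the simulated AXIOM Engine), so the agent's
# claims come from controlled facts rather than model weights.
AXIOM_BIAS_DB = {
    "premature_closure": (
        "Settling on a diagnosis before alternatives are ruled out; "
        "pivot: name at least two differentials before closing."
    ),
    "attribute_substitution": (
        "Answering an easier question (e.g., 'is she stressed?') in place "
        "of the harder one; pivot: restate the original clinical question."
    ),
}

def get_bias_implications(bias_marker: str) -> str:
    """Return the grounded implication for a detected bias marker."""
    return AXIOM_BIAS_DB.get(bias_marker, "No validated entry; defer to clinician.")
```

Because unknown markers return an explicit "no entry" response, the agent fails safe instead of hallucinating an implication.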
**Advocacy: The Patient Prep Engine**
Agent 3 (`advocacy_generator`): Recognizing that a prepared patient is a partner, this agent transforms anxiety into a structured agenda.
Action: Based on the `symptomMapping` and `biasAwareness` states, it generates `structuredAdvocacy`, a list of differential diagnosis requests (e.g., "Given symptoms X and Y, should we check thyroid function?"), to focus the conversation on clinical investigation.
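A minimal sketch of that synthesis step, assuming the two upstream states are already populated. The panel suggestions are hypothetical examples for illustration only, not medical advice; in LUCIA the questions are generated by the agent rather than a lookup.

```python
# Illustrative advocacy_generator step: turn the two upstream states
# into a concrete question agenda for the appointment.
CLUSTER_PANELS = {  # hypothetical cluster-to-panel examples
    "endocrine": "thyroid function (TSH, free T4)",
    "cardiovascular": "a basic cardiac workup",
}

def generate_advocacy(state: dict) -> dict:
    questions = []
    for cluster in state.get("symptomMapping", {}):
        panel = CLUSTER_PANELS.get(cluster)
        if panel:
            questions.append(f"Given my {cluster} symptoms, should we check {panel}?")
    for bias in state.get("biasAwareness", []):
        questions.append(f"Could {bias.replace('_', ' ')} be narrowing the differential?")
    state["structuredAdvocacy"] = questions
    return state

state = {"symptomMapping": {"endocrine": ["night sweats"]},
         "biasAwareness": ["premature_closure"]}
generate_advocacy(state)
```

Note that the bias findings become questions, not accusations, which keeps the output usable in the exam room.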
**Structure: The Clinical Handoff**
Agent 4 (`report_formatter`): This agent compiles the final output into a professional Consultation Brief.
Action: It generates the final report using a standard medical note layout (Subjective -> Assessment -> Plan), ensuring the output is instantly scannable for the provider and positioning the patient and doctor on the same side of the table.
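The formatting step is the most mechanical of the four, so a plain-Python sketch captures it well. This assumes the three states named above; the layout is a simplified stand-in for the brief LUCIA actually emits.

```python
# Sketch of the report_formatter step: compile the shared states into
# a Subjective -> Assessment -> Plan consultation brief.
def format_brief(state: dict) -> str:
    lines = ["CONSULTATION BRIEF", "", "Subjective:"]
    for cluster, phrases in state.get("symptomMapping", {}).items():
        lines.append(f"  {cluster}: {', '.join(phrases)}")
    lines += ["", "Assessment (bias check):"]
    lines += [f"  - {b}" for b in state.get("biasAwareness", ["none flagged"])]
    lines += ["", "Plan (patient questions):"]
    lines += [f"  - {q}" for q in state.get("structuredAdvocacy", [])]
    return "\n".join(lines)

brief = format_brief({
    "symptomMapping": {"endocrine": ["night sweats"]},
    "biasAwareness": ["premature_closure"],
    "structuredAdvocacy": ["Should we check thyroid function?"],
})
```

Keeping the brief plain text means it can be pasted into any EHR note field without conversion.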
I designed a pipeline that operates in two layers: a parallel layer, where `symptom_mapper` extracts patient sensations while `bias_analyzer` audits for specific bias markers, followed by a sequential layer, where `advocacy_generator` and `report_formatter` synthesize these findings into the final Patient Advocacy & Consultation Aid.
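The topology above can be sketched without any framework at all. The stub agents below are placeholders standing in for the real LLM agents, and in the actual build this wiring is expressed through ADK's multi-agent composition rather than raw `asyncio`; the sketch only shows the parallel-then-sequential shape.

```python
import asyncio

# Framework-free sketch of LUCIA's two-layer topology: the analysis
# agents run concurrently on the same narrative, then the synthesis
# agents run in order over the merged state. All agent bodies are stubs.

async def symptom_mapper(narrative: str, state: dict) -> None:
    state["symptomMapping"] = {"endocrine": ["night sweats"]}  # stub

async def bias_analyzer(narrative: str, state: dict) -> None:
    state["biasAwareness"] = ["premature_closure"]  # stub

def advocacy_generator(state: dict) -> None:
    state["structuredAdvocacy"] = ["Should we check thyroid function?"]  # stub

def report_formatter(state: dict) -> None:
    state["report"] = f"Plan: {state['structuredAdvocacy'][0]}"

async def run_lucia(narrative: str) -> dict:
    state: dict = {}
    # Parallel layer: both analyzers see the same narrative.
    await asyncio.gather(symptom_mapper(narrative, state),
                         bias_analyzer(narrative, state))
    # Sequential layer: synthesis depends on the merged state.
    advocacy_generator(state)
    report_formatter(state)
    return state

result = asyncio.run(run_lucia("night sweats and racing thoughts"))
```

The payoff of the split is that the empathetic extraction and the cold bias audit never contaminate each other's context; they only meet in the shared state.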
I didn’t just want LUCIA to chat; I wanted her to reason. To test this, I ran a series of diverse patient scenarios ranging from standard check-ups to complex, ambiguous presentations of hormonal imbalances. LUCIA successfully flagged potential biases in diagnostic logic, ensuring that physical symptoms weren’t prematurely dismissed as psychosomatic. In test cases where a standard symptom checker might suggest "reduce stress," LUCIA was able to suggest specific biomarker panels to rule out physiological root causes.
Building LUCIA proved to me that the barrier to entry for impactful AI is lower than ever. The tools are intuitive enough that you don’t need an enterprise team to build them.
I’m leaving this course with a renewed fire to pursue my passion for women’s mental health research. We now have the tools to ensure science keeps up with the complexity of our biology.
And honestly, if I can build a clinical-grade reasoning engine between school runs and writing letters to and from Santa, just imagine what you can build.
This is a submission for the Google AI Agents Writing Challenge: Learning Reflections