The Problem: AI Noise
Executives don’t need more chatbots; they need high-fidelity intelligence. Most LLM applications today focus on "chatting with data," but for a C-suite leader the conversation itself is friction. They want the answer, the ROI impact, and the proof.
That’s why I built Lighthouse 3 for the Gemini 3 Global Hackathon.
The Architecture: High-Reasoning & Grounding
Lighthouse 3 isn’t just a wrapper; it’s an autonomous research agent built on a "Lean Executive" architecture.
1. The Brain: Gemini 3 Pro
I utilized Gemini 3 Pro Preview with the thinking_level=HIGH configuration. This was the "secret sauce." It allows the model to perform multi-step reasoning to find "Hidden Connections"—linking disparate events like infrastructure pivots to secondary energy market shifts.
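A minimal sketch of that configuration, assuming the google-genai SDK described in the post; the model name and the `thinking_level` value mirror the post, but the helper function is hypothetical and the request shape is an assumption based on the public SDK:

```python
# Hypothetical helper: assemble the deep-reasoning request for one research pass.
# Model name and thinking_level come from the post; everything else is a sketch.
def build_research_config(prompt: str) -> dict:
    return {
        "model": "gemini-3-pro-preview",  # preview model named in the post
        "contents": prompt,
        "config": {
            # HIGH lets the model spend more internal reasoning steps
            # before answering -- the multi-step "Hidden Connections" pass.
            "thinking_config": {"thinking_level": "HIGH"},
        },
    }

request = build_research_config(
    "Link this week's infrastructure pivots to secondary energy-market shifts."
)
# With the SDK installed and an API key set, the actual call would look like:
#   from google import genai
#   client = genai.Client()
#   response = client.models.generate_content(**request)
```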
2. The Grounding Engine
Using the google-genai SDK, I integrated Google Search Grounding. This ensures the agent isn’t answering from stale training data but reacting to market shifts from this morning.
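Grounded responses carry citation metadata alongside the text. The sketch below shows how source links could be pulled out of one; the dict shape follows the Gemini API's documented `grounding_metadata` structure, but this is an assumed illustration, not Lighthouse 3's exact code (the tool itself is enabled by passing the `google_search` tool in the request config):

```python
# Pull "<title> - <uri>" citation strings out of a grounded candidate.
# Field names follow the public Gemini API docs; treat them as assumptions.
def extract_sources(candidate: dict) -> list[str]:
    chunks = candidate.get("grounding_metadata", {}).get("grounding_chunks", [])
    return [
        f"{c['web']['title']} - {c['web']['uri']}"
        for c in chunks
        if "web" in c  # skip non-web grounding chunks
    ]

# Toy response fragment for illustration.
sample = {
    "grounding_metadata": {
        "grounding_chunks": [
            {"web": {"title": "Energy Brief", "uri": "https://example.com/brief"}},
        ]
    }
}
print(extract_sources(sample))  # ['Energy Brief - https://example.com/brief']
```

Surfacing these links in the report is what turns "the agent said so" into "the agent said so, and here is where it looked."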
3. The "Thought Signature"
To solve the "Black Box" problem, I implemented a Thought Signature. Every report includes a transparent log of the agent’s internal reasoning path so executives can trust how the AI arrived at its strategic advice.
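Rendering that reasoning path is straightforward once the steps are collected. A sketch, assuming the agent's steps arrive as plain strings (the function name and the Markdown layout are my assumptions, not the post's):

```python
# Hypothetical renderer: turn the agent's reasoning steps into a
# "Thought Signature" Markdown appendix for the executive report.
def thought_signature(steps: list[str]) -> str:
    lines = ["## Thought Signature", ""]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    return "\n".join(lines)

sig = thought_signature([
    "Scanned overnight filings for infrastructure pivots.",
    "Cross-referenced pivots against energy futures moves.",
])
print(sig)
```

Appending this block to every briefing gives the reader the same audit trail the agent followed.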
🛠 The Engineering Hurdle: Cloud Run & The Missing Files
Infrastructure is just as important as model logic. I deployed the portal using Docker on Google Cloud Run, but I quickly hit a fascinating hurdle: Deployment Data Persistence.
In the serverless environment of Cloud Run, my generated Markdown briefings were initially "vanishing" due to container lifecycle resets and deployment sync issues.
The Fix:
- Absolute Pathing: I replaced relative paths, which resolved inconsistently depending on the container’s working directory, with absolute paths anchored to the application root.
- The .gcloudignore Precision: I engineered a custom .gcloudignore file to filter out virtual-environment bloat while "force-allowing" the mission-critical /reports directory.
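A sketch of what such an ignore file can look like. The exact entries depend on the repo layout, so treat this as illustrative; .gcloudignore follows gitignore syntax, where a leading ! re-includes a path that a broader pattern would otherwise exclude:

```
# .gcloudignore (sketch) - keep the upload lean, keep the reports
.git
__pycache__/
venv/
*.pyc

# Force-allow the mission-critical reports directory
!reports/
```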
"In the cloud, your infrastructure defines the reliability of your intelligence."
Watch the Demo
What’s Next?
Lighthouse 3 is just the beginning. The roadmap includes:
- Dynamic Persistence: Moving from the container filesystem to Google Cloud Storage (GCS) for instant, "no-deploy" updates.
- Enterprise Integration: Direct delivery to encrypted executive Slack channels.
- Long-Term Memory: Tracking strategic predictions over time to create a feedback loop for the model’s own reasoning.
Links
- Live Portal: lighthouse-portal-91439230830.us-central1.run.app
- GitHub Repository: https://github.com/groundhog-21/gemini_3_hackathon
- Devpost Submission: https://devpost.com/software/westmarch-research-house
Thanks for reading!