Every developer I know uses ChatGPT or Claude daily. And every CISO is terrified of that fact — specifically of the compliance violation or customer data breach notification that can follow.
Not because AI is bad — but because it’s too easy to leak sensitive data without realizing it:
- Customer emails
- API keys
- Logs with tokens
- Stack traces with secrets
- HR info
- Employee names / internal IDs
We’ve all pasted something into ChatGPT and thought:
“Wait… should I really be sending this?”
That's why I built PrivacyFirewall — an open-source, local-first privacy shield that blocks sensitive data before it is sent to any AI tool.
👉 GitHub: https://github.com/privacyshield-ai/privacy-firewall
Here's a screenshot of the block modal and the warning banner:
## 🚨 The Problem: AI Prompts Are the New Data Leakage Vector
Traditional DLP tools were built for email, file uploads, and network traffic.
They don’t protect AI prompts.
When you paste something into ChatGPT:

1. It instantly leaves your browser
2. Goes to a third-party server
3. And becomes part of your company's risk surface
Most leaks today aren’t malicious; they’re accidental.
- Developers paste logs
- Support teams paste customer messages
- HR pastes resumes
- Engineers paste configs
Once it’s pasted, it’s gone.
PrivacyFirewall acts before the send button, giving you a chance to stop mistakes. The data never leaves your computer.
## 🔒 What PrivacyFirewall Does
✔ Blocks risky paste events (emails, API keys, credit card patterns, tokens)
✔ Warns as you type when text looks sensitive
✔ Optional AI mode using a tiny local transformer (NER)
✔ Zero cloud calls — everything is offline
✔ Chrome extension + optional local FastAPI agent
✔ Open source under MIT
**This is not cloud DLP.** This is zero-trust, on-device protection.

### Why Local Matters
✅ Compliance-friendly - No data leaves your machine
✅ Zero latency - Instant scanning, no network calls
✅ Works offline - On flights, VPNs, air-gapped systems
✅ No subscription costs - Run it forever, free
## 🧠 How It Works
PrivacyFirewall has two layers:
### 1. Browser Mode (no setup needed)
Works immediately after loading the Chrome extension.
Detects:

- Email addresses
- Phone numbers
- JWT tokens
- AWS keys
- Private key blocks
- Credit card patterns
- IP addresses
- Hash/API keys
This mode requires:
❌ no Python
❌ no downloads
❌ no models
❌ no server
Just load the extension and you get instant protection.
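Lite Mode's detection is plain deterministic regex, so it runs instantly with no model. A minimal Python sketch of the idea (the names and patterns below are illustrative, not the extension's actual rules):

```python
import re

# Illustrative detectors in the spirit of Lite Mode; the real extension
# implements these checks in its content script.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "jwt": re.compile(r"\beyJ[\w-]+\.[\w-]+\.[\w-]+\b"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def scan(text: str) -> list[str]:
    """Return the names of every pattern type found in `text`."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

print(scan("contact john.doe@example.com, key AKIAIOSFODNN7EXAMPLE"))
```

Because every check is a fixed regex, there are no false "maybes" and no latency — a match either fires or it doesn't.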
### 2. Advanced Mode (local NER model)

If you enable the optional backend (a FastAPI server running at 127.0.0.1:8765), PrivacyFirewall uses:

- dslim/bert-base-NER, a compact local transformer
- No internet connection required
- Local inference via Hugging Face `transformers`
This catches:

- People's names
- Organizations
- Locations
- Contextual clues a regex can't detect
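Under the hood this is a standard Hugging Face token-classification pipeline. A rough sketch (the first run downloads the model into the local cache; after that inference is fully offline):

```python
from transformers import pipeline

# dslim/bert-base-NER is fetched once into the local Hugging Face cache;
# subsequent runs need no network access.
ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

def find_entities(text: str) -> list[tuple[str, str]]:
    # Each result carries an entity_group (PER, ORG, LOC, MISC) and the span.
    return [(e["entity_group"], e["word"]) for e in ner(text)]

print(find_entities("Meeting notes from Sarah Thompson at Acme Corp"))
```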
If the engine goes offline, PrivacyFirewall automatically falls back to Lite Mode — so you’re always protected.
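The fallback can be sketched like this. Note that the `/scan` route and the JSON shapes are my assumptions for illustration, not the agent's documented API:

```python
import json
import re
import urllib.request

# Simplified Lite Mode regex, reused when the engine is unreachable.
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def scan_with_fallback(text: str) -> dict:
    """Try the local AI engine first; drop to regex-only Lite Mode if it's down.

    The /scan route and request/response shapes are assumptions for
    illustration; check the repo for the agent's actual API.
    """
    req = urllib.request.Request(
        "http://127.0.0.1:8765/scan",  # hypothetical route on the local agent
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=0.5) as resp:
            return {"mode": "advanced", "findings": json.loads(resp.read())}
    except OSError:  # connection refused, timeout, etc.: engine is offline
        findings = ["email"] if EMAIL_RE.search(text) else []
        return {"mode": "lite", "findings": findings}
```

The short timeout matters: if the engine is down, the user should get the regex verdict almost immediately rather than waiting on a dead socket.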
## 🖥️ Demo
Try pasting any of these into ChatGPT:
`john.doe@example.com`
→ You'll see an "Email Detected" modal.

`AKIAIOSFODNN7EXAMPLE`
→ Blocked immediately as an AWS access key.

`Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c`
→ Caught as a JWT.

"Meeting notes from Sarah Thompson at HR…"
→ In Advanced Mode, the local transformer flags PERSON and warns you.
This all happens locally inside your browser.
## 🚀 Quickstart
### 1. Install the Chrome Extension (Lite Mode)

```shell
git clone https://github.com/privacyshield-ai/privacy-firewall
cd privacy-firewall
```

Load `src/extension` as an unpacked extension in Chrome.
### 2. (Optional) Run the Local AI Engine

```shell
cd src/engine
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
uvicorn main:app --host 127.0.0.1 --port 8765
```

Open ChatGPT → paste something sensitive → get warned.

📖 Full instructions in the repo.
## 🏗️ Tech Stack

- Chrome Manifest V3
- Content scripts + background worker
- FastAPI for the local agent
- Hugging Face transformers
- dslim/bert-base-NER for on-device NER
- Regex engine for deterministic detection
## 🧩 Current Focus / Roadmap

- UI settings panel in the popup
- Custom detection rules
- Support for Slack/Jira/Notion AI
- Firefox support
- Quantized models for speed (faster inference, smaller footprint)
- Packaging the agent into a small desktop app (Windows/Mac/Linux)
- Better redaction instead of blocking
**If you want to help — PRs and ideas are welcome!**
## ❓ Common Questions

**Does this slow down my typing?**
No! Detection runs asynchronously and doesn't block your workflow.

**Can I whitelist certain patterns?**
Not yet, but it's on the roadmap as "Custom detection rules."

**Does it work with Claude/Gemini/other AI tools?**
Yes! It monitors paste events and text input on the sites listed in the extension manifest.
## 🤝 Open to Feedback

I'd especially love feedback from:

- Security engineers
- AI safety folks
- Chrome extension developers
- People who accidentally pasted sensitive data into ChatGPT 👀
## Try It Out 🚀
⭐ **Star the repo:** [https://github.com/privacyshield-ai/privacy-firewall](https://github.com/privacyshield-ai/privacy-firewall)

**Share your feedback** in the issues

**Contribute** if you've got ideas

**Have you ever accidentally pasted something sensitive into an AI tool?**
Let me know in the comments! 👇

Thanks for reading — hope this helps make AI usage a little safer.