I’ve been pretty cavalier about using AI. Once I got used to not fully trusting its truthfulness, I treated it like a teacher whose answers I question and verify. But this past month I’ve been getting more uncomfortable with the answers, especially the ones I can see are digging up little nuggets of personal information I dropped in over the past year:
Prompt Question: “Can I trust tailscale?”

This is something I’ve looked up half a dozen times. I’ve never used it, but I’ve debated its usefulness versus SSH with friends more than once. So when I put in that short prompt, I mostly just wanted to revisit the main talking points of the Tailscale vs. SSH debate I’ve had in my head.
After the main response, it provides a personalized summary and drops in this little nugget of my personal life: that I do work on my parents’ solar-powered off-grid home, which I visit a couple of times a year.
I can’t put my finger on why this bothered me so much. I’m proud of my parents’ house; I’ll tell anyone about it. I’ve certainly mentioned it to ChatGPT, and I definitely used ChatGPT last year when I built a new solar array for my parents. In the picture below you can see me building the new array, with the older panels I built 12 years ago in the back.


So why would it bother me so much? Was it the cognitive dissonance? I’m thinking about Tailscale, and it starts talking, incorrectly, about my parents, whom I miss? Is it that it dug up information about me from a year ago that I’d forgotten, or never really considered that it would remember?
Prompt Question: “What’s a good router for a homelab?”

I mean obviously, I’m on their website, so they have my IP address. But ChatGPT brings up my location like this fairly often; I think it happens any time my prompt mentions a product, which is often enough, since I’ve been curious about how they’ll handle advertising and product placement.
That being said, something about the way it brings up the location again feels off-putting. DuckDuckGo and Google use IP-based location all the time, and that has never bothered me much. But there’s something about the way ChatGPT brings it up, oddly mixing “look up pricing in” with the later “here,” as if it’s here with me. It just gives off bad vibes.

ChatGPT logs everything
Chunks of code I copy-paste into a git repo are like little fingerprints that can always tie that code back to the moment in time when I talked to that instance of OpenAI’s ChatGPT. The little chunks of myself that I type into the background of my prompts tie more of my life to ChatGPT, in ways that it will never forget.
I’m not sure what the answer is yet. Maybe OpenAI will smooth out the awkwardness of how it can always remember, if it wants to, everything you’ve ever typed to it.
Local models
My hope is that open local models will become efficient enough to run on laptops or small home PCs and deliver private AI chat, but that still seems far off for small budgets.