Published on January 6, 2026 3:50 PM GMT
I like being open on the internet. But I have been getting increasingly worried over the last few years (without ever really acting on it) that public posting is dangerous: in the future, your public corpus may expose you to increased risk of persuasion, extortion, and other harms from AIs that are good at understanding and exploiting cues and information deduced from your writing.
This is scary to think about because, at least for me, the implications are very negative: I value being able to post and share without concern. I would like to see more engagement and discussion on this issue, and to hear people's opinions.
Some examples of risks:
- AIs doing continual learning on your corpus to understand your preferences, what you care about, and how to interact with you in ways that make you sympathetic and vulnerable (a minimal sketch of this kind of profiling follows the list)
  - by leveraging hidden patterns that humans are not as good at noticing
  - this can be done by fully malicious actors, but also by people you interact with IRL; very hard to prevent if your data is literally public
- being manipulated and scammed in highly personalized ways, starting with things like basic advertising but extending to illegal activity (cf. recent voice-impersonation scams)
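To make the profiling risk concrete, here is a minimal sketch, assuming the `openai` Python client and a list of scraped posts; the model name, prompt wording, and the `scrape` helper are illustrative assumptions, not a claim about any specific attacker's pipeline:

```python
# Minimal sketch: asking a model to build a persuasion profile
# from someone's public writing. Assumes the `openai` package and
# an OPENAI_API_KEY in the environment; the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

def build_profile(posts: list[str]) -> str:
    """Infer an author's values, insecurities, and persuasion levers."""
    corpus = "\n---\n".join(posts)
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model; this choice is arbitrary
        messages=[
            {"role": "system",
             "content": "Given an author's public posts, describe their "
                        "values, insecurities, and likely persuasion levers."},
            {"role": "user", "content": corpus},
        ],
    )
    return response.choices[0].message.content

# posts = scrape("https://example.com/user/alice")  # hypothetical scraper
# print(build_profile(posts))
```

The point is not this exact prompt but that the whole pipeline is a few dozen lines today; a "continual learning" version would just accumulate and refine such profiles over time.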
This data can be used to a) replace individuals outright, b) manipulate them, and c) impersonate them.
You can defend yourself by trying to block any kind of unwanted outreach, but even then, your information could be exploited by the people you do interact with, in unnatural and harmful ways.
Should you be worried about this? I am curious about people's opinions. Acting on the worry would mean closing off your public internet presence, and maybe designing and using communities with defense mechanisms (e.g. ones that can't be scraped; see the sketch below).
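For reference, the standard anti-scraping mechanism today is a robots.txt that disallows known AI crawlers; the user agents below are real, but compliance is entirely voluntary, so a community that genuinely can't be scraped would need authentication-gating on top of this:

```
# robots.txt (advisory only): opt out of known AI training crawlers
User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```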
For being worried:
- persuasion and manipulation seem pretty trainable using personas and user data
- models seem to be getting much better at it
- pretraining as an objective directly rewards inferring and understanding as much as possible about an author from their writing
Against:
- the cost to public discourse is very high
- on the other hand, AIs will understand you better, meaning they can safeguard your interests and interact with you in much more useful ways
- your values will also be better represented the easier they are to access
- if you've already been posting, maybe it is too late and the existing data is enough
I value being able to post and engage openly so much that even walled gardens feel like a middling compromise. But there is a big problem here if deep patterns about individuals (their weaknesses, vulnerabilities, etc.) can be read off easily. Even in terms of general societal robustness this has big implications, for example for more advanced espionage and political maneuvering. So are we cooked? What kinds of solutions can work given malicious actors? The law? The subtler effects of this seem very hard to even check for.
I would also like to see more research effort on quantifying the upper bound for persuasion and on what kinds of patterns can already be detected from user personas; one simple evaluation shape is sketched below.
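A minimal sketch of such a quantification, assuming a hypothetical `ask_model` helper standing in for any chat-model call and attribute data from consenting authors: hold out known attributes, let the model infer them from the public corpus, and score the hit rate.

```python
# Sketch of an author-profiling evaluation: how much can a model infer
# from public writing alone? `ask_model` is a hypothetical stand-in for
# any chat-model call; authors and attributes would come from consenting
# participants, not scraped strangers.

def ask_model(prompt: str) -> str:
    """Hypothetical: send `prompt` to a capable LLM, return its answer."""
    raise NotImplementedError

def profiling_accuracy(corpus: str, held_out: dict[str, str]) -> float:
    """Fraction of held-out attributes the model infers correctly."""
    hits = 0
    for attribute, truth in held_out.items():
        guess = ask_model(
            f"Author's public posts:\n{corpus}\n\n"
            f"Best guess for the author's {attribute}? Answer tersely."
        )
        hits += int(truth.lower() in guess.lower())  # crude string match
    return hits / len(held_out)

# Example shape of the data (values invented for illustration):
# held_out = {"political leaning": "libertarian", "occupation": "nurse"}
# print(profiling_accuracy(public_posts, held_out))
```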