Guest Essay
If You Tell ChatGPT Your Secrets, Will They Be Kept Safe?
Nov. 10, 2025
Video credit: Gaia Alari
Nils Gilman
Dr. Gilman is a historian who works at the intersection of technology and public policy.
On New Year’s Day, Jonathan Rinderknecht purportedly asked ChatGPT: “Are you at fault if a fire is lift because of your cigarettes,” misspelling the word “lit.” “Yes,” ChatGPT replied. Ten months later, he stands accused of having started a small blaze that authorities say reignited a week later to start the devastating Palisades fire.
Mr. Rinderknecht, who has pleaded not guilty, had previously told the chatbot how “amazing” it had felt to burn a Bible months prior, according to a federal complaint, and had also asked it to create a “dystopian” painting of a crowd of poor people fleeing a forest fire, while a crowd of rich people mocked them behind a gate.
For federal authorities, these interactions with artificial intelligence indicated Mr. Rinderknecht’s pyrotechnic state of mind, as well as his motive and intent to start the fire. Along with GPS data that they say puts him at the scene of the initial blaze, it was enough to arrest him and charge him with several counts, including destruction of property by means of fire.
This disturbing development is a warning for our legal system. As people increasingly turn to A.I. chat tools as confidants, therapists and advisers, we urgently need a new form of legal protection that would safeguard most private communications between people and A.I. chatbots. I call it “A.I. interaction privilege.”
All legal privileges rest on the idea that certain relationships — lawyer and client, doctor and patient, priest and penitent — serve a social good that depends on candor. Without assurance of privacy, people self-censor and society loses the benefits of honesty. Courts have historically been reluctant to create new privileges, except where “confidentiality has to be absolutely essential to the functioning of the relationship,” Greg Mitchell, a University of Virginia law professor, told me. Many users’ engagements with A.I. now reach this threshold.
People speak increasingly freely to A.I. systems, not as diaries but as partners in conversation. That’s because these systems hold conversations that are often indistinguishable from human dialogue. The machine seemingly listens, reasons and provides responses — in some cases not just reflecting but shaping how users think and feel. A.I. systems can draw users out, just as a good lawyer or therapist does. Many people turn to A.I. precisely because they lack a safe and affordable human outlet for taboo or vulnerable thoughts.
This is arguably by design. Just last month, Sam Altman, the chief executive of OpenAI, announced that the next iteration of the company’s ChatGPT platform would “relax” some restrictions on users and allow them to make their ChatGPT “respond in a very humanlike way.”
Allowing the government to access such unfiltered exchanges and treat them as legal confessions would have a massive chilling effect. If every private thought experiment can later be weaponized in court, users of A.I. will censor themselves, undermining some of the most valuable uses of these systems. That would destroy the candid relationship that makes A.I. useful for mental health support and for legal and financial problem-solving, turning a potentially powerful tool for self-discovery and self-representation into a legal liability.
At present, most digital interactions fall under the Third-Party Doctrine, which holds that information voluntarily disclosed to other parties — or stored on a company’s servers — carries “no legitimate expectation of privacy.” This doctrine allows government access to much online behavior (such as Google search histories) without a warrant.
But are A.I. conversations “voluntary disclosures” in this sense? Since many users approach these systems not as search engines but as private counselors, the legal standard should evolve to reflect that expectation of discretion. A.I. companies already hold more intimate data than any therapist or lawyer ever could. Yet they have no clear legal duty to protect it.
A.I. interaction privilege should mirror existing legal privileges in three respects. First, communications with the A.I. for the purpose of seeking counsel or emotional processing should be protected from forced disclosure in court. Users could designate protected sessions through app settings or claim privilege during legal discovery if the context of the conversation supports it. Second, this privilege must incorporate the so-called duty to warn principle, which obliges therapists to report imminent threats of harm. If an A.I. service reasonably believes a user poses an immediate danger to self or others or has already caused harm, disclosure should be not just permitted but required. And third, there must be an exception for crime and fraud. If A.I. is used to plan or execute a crime, it should be discoverable under judicial oversight.
Under this logic, Mr. Rinderknecht’s case reveals both the need and the limits of such protection. His cigarette query, functionally equivalent to an internet search, would not merit privilege. But under A.I. interaction privilege, his confession about Bible burning should be protected. It was neither a plan for a crime nor an imminent threat.
Creating a new privilege follows the law’s pattern of adapting to new forms of trust. The psychotherapist-patient privilege itself was only formally recognized in 1996, when the Supreme Court acknowledged the therapeutic value of confidentiality. The same logic applies to A.I. now: The social benefit of candid interaction outweighs the cost of occasional lost evidence.
To leave these conversations legally unprotected is to invite a regime where citizens must fear that their digital introspection could someday be used against them. Private thought — whether spoken to a lawyer, a therapist or a machine — must remain free from the fear of state intrusion.
Nils Gilman is senior adviser to the Berggruen Institute and a co-author of “Children of a Modest Star: Planetary Thinking for an Age of Crises.”