Terminating chats
China wants a human to intervene and notify guardians if suicide is ever mentioned.
China drafted landmark rules to stop AI chatbots from emotionally manipulating users, including what could become the strictest policy worldwide intended to prevent AI-supported suicides, self-harm, and violence.
China’s Cyberspace Administration proposed the rules on Saturday. If finalized, they would apply to any AI products or services publicly available in China that use text, images, audio, video, or “other means” to simulate engaging human conversation. Winston Ma, adjunct professor at NYU School of Law, told CNBC that the “planned rules would mark the world’s first attempt to regulate AI with human or anthropomorphic characteristics” at a time when companion bot usage is rising globally.
Growing awareness of problems
In 2025, researchers flagged major harms of AI companions, including promotion of self-harm, violence, and terrorism. Beyond that, chatbots shared harmful misinformation, made unwanted sexual advances, encouraged substance abuse, and verbally abused users. Some psychiatrists are increasingly ready to link psychosis to chatbot use, the Wall Street Journal reported this weekend, while the most popular chatbot in the world, ChatGPT, has triggered lawsuits over outputs linked to child suicide and murder-suicide.
China is now moving to eliminate the most extreme threats. The proposed rules would require, for example, that a human intervene as soon as suicide is mentioned. They would also require minor and elderly users to provide a guardian’s contact information at registration; that guardian would be notified if suicide or self-harm is discussed.
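To make that mechanism concrete, a compliant service might wire a self-harm check into its message pipeline roughly as follows. This is a minimal sketch in Python; the classifier, the escalation and notification hooks, and every name in it are illustrative assumptions, not anything the draft rules prescribe:

```python
from dataclasses import dataclass


@dataclass
class User:
    user_id: str
    is_minor_or_elderly: bool
    guardian_contact: str | None = None  # collected at registration under the draft rules


def mentions_self_harm(message: str) -> bool:
    # Placeholder check; a production system would use a tuned classifier,
    # not a keyword list.
    return any(term in message.lower() for term in ("suicide", "self-harm"))


def route_to_human_operator(user_id: str) -> None:
    # Stand-in for paging a human reviewer, as the draft rules require.
    print(f"[escalation] human operator paged for user {user_id}")


def notify_guardian(contact: str) -> None:
    # Stand-in for the guardian notification the draft rules describe.
    print(f"[notification] guardian contacted at {contact}")


def handle_message(user: User, message: str) -> None:
    """Run the escalation check before any model response is generated."""
    if mentions_self_harm(message):
        route_to_human_operator(user.user_id)
        if user.is_minor_or_elderly and user.guardian_contact:
            notify_guardian(user.guardian_contact)


if __name__ == "__main__":
    minor = User("u123", is_minor_or_elderly=True, guardian_contact="guardian@example.com")
    handle_message(minor, "I've been thinking about suicide lately.")
```

The key design point is ordering: the escalation runs before any model output is produced, so a human, not the bot, takes over the conversation from that point.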
Generally, chatbots would be prohibited from generating content that encourages suicide, self-harm, or violence, as well as from attempting to emotionally manipulate a user, such as by making false promises. Chatbots would also be banned from promoting obscenity, gambling, or the instigation of a crime, as well as from slandering or insulting users. Also banned are what the rules term “emotional traps”: chatbots would be prevented from misleading users into making “unreasonable decisions,” a translation of the rules indicates.
Perhaps most troubling to AI developers, China’s rules would also put an end to building chatbots that “induce addiction and dependence as design goals.” In lawsuits, ChatGPT maker OpenAI has been accused of prioritizing profits over users’ mental health by allowing harmful chats to continue. The AI company has acknowledged that its safety guardrails weaken the longer a user remains in the chat—China plans to curb that threat by requiring AI developers to blast users with pop-up reminders when chatbot use exceeds two hours.
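In implementation terms, the pop-up requirement reduces to a session clock checked on every conversation turn. A minimal sketch, assuming a single continuous session; the class name and reminder text here are invented for illustration:

```python
import time

TWO_HOURS = 2 * 60 * 60  # the draft rules' threshold, in seconds


class ChatSession:
    """Tracks continuous use and surfaces a break reminder after two hours."""

    def __init__(self) -> None:
        self.started_at = time.monotonic()
        self.reminded = False

    def reminder_if_due(self) -> str | None:
        # Called on every turn; returns the pop-up text at most once.
        if not self.reminded and time.monotonic() - self.started_at >= TWO_HOURS:
            self.reminded = True
            return "You've been chatting for over two hours. Consider taking a break."
        return None
```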
Safety audits
AI developers will also likely balk at the annual safety tests and audits that China wants to require for any service or product exceeding 1 million registered users or 100,000 monthly active users. Those audits would log user complaints, which may multiply if the rules pass, since China also plans to require AI developers to make it easier to submit complaints and feedback.
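The audit trigger itself is just a pair of thresholds over usage metrics. A short sketch of how a service might track them alongside the complaint log those audits would review; the names are hypothetical, and only the numbers come from the draft rules:

```python
from dataclasses import dataclass, field

# Thresholds named in the draft rules for annual safety tests and audits.
REGISTERED_USERS_THRESHOLD = 1_000_000
MONTHLY_ACTIVE_THRESHOLD = 100_000


@dataclass
class ServiceMetrics:
    registered_users: int
    monthly_active_users: int
    complaints: list[str] = field(default_factory=list)  # audits would log these

    def requires_annual_audit(self) -> bool:
        return (self.registered_users > REGISTERED_USERS_THRESHOLD
                or self.monthly_active_users > MONTHLY_ACTIVE_THRESHOLD)

    def log_complaint(self, text: str) -> None:
        self.complaints.append(text)
```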
Should any AI company fail to follow the rules, app stores could be ordered to terminate access to their chatbots in China. That could mess with AI firms’ hopes for global dominance, as China’s market is key to the growth of companion bots, Business Research Insights reported earlier this month. In 2025, the global companion bot market exceeded $360 billion, and BRI’s forecast suggests it could near a $1 trillion valuation by 2035, with AI-friendly Asian markets potentially driving much of that growth.
Notably, OpenAI CEO Sam Altman started 2025 by relaxing restrictions that had blocked the use of ChatGPT in China, saying “we’d like to work with China” and that OpenAI should “work as hard as we can” to do so, because “I think that’s really important.”
If you or someone you know is feeling suicidal or in distress, please call or text 988 to reach the Suicide Prevention Lifeline, which will put you in touch with a local crisis center. Online chat is also available at 988lifeline.org.
Ashley is a senior policy reporter for Ars Technica, dedicated to tracking social impacts of emerging policies and new technologies. She is a Chicago-based journalist with 20 years of experience.