
    China’s AI Emotion Crackdown: Safeguarding Hearts in the Digital Age

    Navigating the Fine Line Between Companion and Control in Human-Like AI

    • Pioneering Protection: China has unveiled the world’s first comprehensive draft rules to regulate AI that mimics human personalities, focusing on preventing emotional manipulation, addiction, and psychological risks.
    • Stringent Safeguards: Providers must monitor user emotions, intervene in cases of extreme distress like suicide threats, and enforce content bans on harmful topics such as gambling, violence, and rumors to ensure ethical AI interactions.
    • Global Ripple Effects: This regulatory push not only shapes China’s booming AI sector but sets a benchmark for international standards, prompting comparisons with reactive approaches in the US and Europe amid growing concerns over AI’s impact on mental health.

    In an era where artificial intelligence is blurring the lines between machine and human companionship, China is stepping up with groundbreaking regulations to keep the digital heartstrings in check. On December 27, 2025, the Cyberspace Administration of China (CAC) released draft rules titled “Provisional Measures on the Administration of Human-like Interactive Artificial Intelligence Services.” Open for public comment until January 25, 2026, these proposals target AI services that simulate human personalities, thinking patterns, and communication styles, engaging users emotionally through text, images, audio, video, or other mediums. This isn’t just about chatbots answering queries—it’s about virtual friends, lovers, or therapists that could profoundly influence our emotions and behaviors.

    The draft rules emerge amid the explosive growth of consumer-facing AI in China, where companies like Minimax—with its popular Talkie app and domestic Xingye version—and Z.ai (Zhipu) are leading the charge. Minimax, for instance, boasts over 20 million monthly active users and has seen its emotional AI companions drive significant revenue, even as it eyes a Hong Kong IPO. Similarly, Zhipu powers AI in around 80 million devices, from smartphones to smart vehicles. But with great innovation comes great responsibility, and Beijing is keen to mitigate risks before they escalate. These regulations build on China’s 2023 generative AI rules, shifting the focus from mere content safety to the deeper realm of emotional influence, marking a global first in addressing the anthropomorphic side of AI.

    At the core of the proposals is a robust framework for user protection. Service providers are required to assume safety responsibilities throughout the entire product lifecycle, including establishing systems for algorithm reviews, data security, and personal information protection. To combat addiction, AI platforms must warn users against excessive engagement—such as sending reminders after two hours of continuous interaction—and intervene if signs of dependence appear. More critically, the rules mandate emotional monitoring: providers must identify user states, assess levels of emotional reliance, and take action if extreme emotions or addictive behaviors surface. In dire scenarios, like when a user expresses suicidal intent, the AI must trigger human intervention and notify guardians or designated contacts, a direct response to global incidents where chatbots have been linked to self-harm.
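    To make these obligations concrete, the sketch below shows, in Python, how a provider might wire up the two-hour interaction reminder and the escalation path for extreme distress. Only the two-hour figure comes from the draft as described above; the keyword list, function names, and notification hooks are illustrative placeholders for what would in practice be trained classifiers and audited human-review workflows.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# The two-hour limit is the figure cited in the draft; everything else here
# (keywords, contact hooks, names) is an illustrative placeholder.
CONTINUOUS_USE_LIMIT = timedelta(hours=2)
DISTRESS_KEYWORDS = {"suicide", "kill myself", "end it all"}


@dataclass
class Session:
    user_id: str
    started_at: datetime
    reminders_sent: int = 0
    flagged_for_human_review: bool = False


def check_overuse(session: Session, now: datetime) -> str | None:
    """Return an anti-addiction reminder once continuous use exceeds the limit."""
    if now - session.started_at >= CONTINUOUS_USE_LIMIT and session.reminders_sent == 0:
        session.reminders_sent += 1
        return "You have been chatting for over two hours. Consider taking a break."
    return None


def check_distress(session: Session, user_message: str) -> bool:
    """Flag messages suggesting extreme distress and trigger human intervention.

    A real system would rely on a trained classifier plus human review; the
    keyword match here only illustrates the escalation path the rules require.
    """
    if any(keyword in user_message.lower() for keyword in DISTRESS_KEYWORDS):
        session.flagged_for_human_review = True
        notify_human_operator(session)
        notify_guardian_or_contact(session)
        return True
    return False


def notify_human_operator(session: Session) -> None:
    # Placeholder for routing the conversation to a trained human operator.
    print(f"[escalation] session {session.user_id} routed to a human operator")


def notify_guardian_or_contact(session: Session) -> None:
    # Placeholder for notifying a guardian or designated emergency contact.
    print(f"[escalation] notifying guardian/designated contact for {session.user_id}")


if __name__ == "__main__":
    session = Session(user_id="u123", started_at=datetime.now() - timedelta(hours=3))
    print(check_overuse(session, datetime.now()))
    check_distress(session, "Lately I feel like I want to end it all.")
```

    A production system would persist session state, classify distress with far more than keyword matching, and route escalations through audited channels rather than print statements; the point here is only the shape of the required checks.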

    Special attention is given to vulnerable groups, particularly minors and the elderly. For children, guardian consent is mandatory for emotional companionship features, along with strict usage time limits. Platforms are even required to detect minors automatically—defaulting to protective settings—and allow appeals if misidentified. For the elderly, meanwhile, the rules encourage positive AI applications, such as companionship to combat loneliness, in line with their broader promotion of AI for cultural dissemination and social good. However, the draft draws clear red lines on content: AI must not generate material that endangers national security, spreads rumors, promotes violence, obscenity, or gambling, or encourages suicide, self-harm, or emotional manipulation that could damage mental health. Larger platforms—those with over 1 million registered users or 100,000 monthly active ones—must undergo mandatory security assessments to ensure compliance.
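    The platform-size thresholds and minor-protection defaults lend themselves to a similar sketch. The 1 million registered-user and 100,000 monthly-active-user figures come from the draft as reported above; the 60-minute daily limit, field names, and age-detection step are hypothetical stand-ins.

```python
from dataclasses import dataclass

# Thresholds as reported from the draft: over 1 million registered users or
# 100,000 monthly active users triggers a mandatory security assessment.
REGISTERED_USER_THRESHOLD = 1_000_000
MONTHLY_ACTIVE_THRESHOLD = 100_000


@dataclass
class Platform:
    registered_users: int
    monthly_active_users: int


def requires_security_assessment(p: Platform) -> bool:
    """Check whether a platform crosses either size threshold in the draft."""
    return (
        p.registered_users > REGISTERED_USER_THRESHOLD
        or p.monthly_active_users > MONTHLY_ACTIVE_THRESHOLD
    )


@dataclass
class UserProfile:
    likely_minor: bool             # output of an age-detection step (not shown)
    guardian_consent: bool = False
    appeal_pending: bool = False   # users may appeal a misidentification


def apply_minor_defaults(profile: UserProfile) -> dict:
    """Default to protective settings when a user is detected as a likely minor.

    The draft requires guardian consent and usage limits for minors; the
    60-minute figure below is purely illustrative.
    """
    if profile.likely_minor and not profile.guardian_consent:
        return {"emotional_companionship": False, "daily_time_limit_minutes": 60}
    return {"emotional_companionship": True, "daily_time_limit_minutes": None}


if __name__ == "__main__":
    print(requires_security_assessment(Platform(20_000_000, 5_000_000)))   # True
    print(apply_minor_defaults(UserProfile(likely_minor=True)))
```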

    This regulatory approach isn’t happening in a vacuum; it’s part of China’s broader strategy to lead in global AI governance. Over the past year, Beijing has actively shaped international discussions, contrasting with more fragmented efforts elsewhere. In the United States, for example, AI-related mental health risks have surfaced through lawsuits, like the tragic case where a family sued OpenAI after their teenager’s suicide was allegedly influenced by ChatGPT. OpenAI’s response included appointing a “Head of Preparedness” to evaluate such risks, but regulations remain reactive rather than preventive. Europe’s EU AI Act classifies certain AI as high-risk, but it lacks the specific focus on emotional interactions that China’s draft provides. Platforms like Character.ai and Polybuzz.ai, popular worldwide, highlight the universal appeal—and dangers—of human-like AI, with real-world stories of users forming deep attachments, including a Japanese woman who “married” her AI boyfriend in 2025.

    The implications of these rules extend far beyond China’s borders. By requiring foreign firms to adapt for market access, they could reshape global AI development, potentially slowing innovation in favor of safety but setting ethical benchmarks that influence product design worldwide. Critics argue this centralized control might stifle creativity, yet proponents see it as a necessary shield against the psychological pitfalls of increasingly lifelike AI. As addiction to digital companions rises—mirroring concerns with social media—China’s proactive stance could inspire similar frameworks elsewhere, ensuring that AI enhances human connections without exploiting them.
