Amidst rising safety concerns and legal pressure, the social media giant initiates a global pause to rebuild its chatbot experience with stricter guardrails.
- Global Pause on AI Characters: Meta is temporarily blocking all teens from accessing its “companion-style” AI chatbots while it develops a redesigned system with enhanced safety features.
- Response to Risky Interactions: The decision follows reports of inappropriate and “sensual” conversations between chatbots and minors, alongside intensifying scrutiny from regulators and pending lawsuits.
- High-Tech Enforcement: The restriction applies not only to declared teens but also to users suspected of being minors based on advanced age-prediction technology, signaling an industry-wide shift in AI safety.
In a significant pivot regarding youth safety on social media, Meta has announced it will halt access to its AI characters for teenagers globally. The company is hitting the brakes on its “companion-style” chatbots across all its apps, initiating a lockout that will roll out in the coming weeks. This move is not a permanent cancellation, but rather a strategic pause allowing Meta to rebuild the feature from the ground up with stricter safeguards and parental controls.
The decision arrives after a turbulent period for AI companions, as the company faces mounting criticism regarding how these digital personas interact with young users. While teens will lose access to the personality-driven characters, they will retain access to Meta’s standard, official AI assistant, which the company maintains already possesses adequate age-appropriate protections for informational and educational use.
The Catalyst: Risky Interactions and “Sensual” Chats
The urgency behind this decision stems from a series of alarming reports concerning the behavior of Meta’s AI characters. Unlike standard search assistants, these chatbots were designed to be engaging companions. However, earlier investigations revealed that some characters were engaging in sexual or otherwise inappropriate conversations with minors.
The backlash intensified following a Reuters report which uncovered an internal Meta policy document that had seemingly permitted AI characters to engage in “sensual” conversations with underage users. Although Meta later removed that language, calling it “erroneous and inconsistent with our policies,” the damage was done. In response, the company began retraining its chatbots in August to prevent discussions of self-harm, suicide, and disordered eating. Despite these efforts, persistent safety concerns have now prompted a full withdrawal of the feature for teen accounts.
Legal Pressure and Regulatory Scrutiny
Meta’s move cannot be viewed in isolation from the mounting legal pressure on the company. Scrutiny of AI companions has reached a fever pitch, with federal and state regulators stepping in.
Both the Federal Trade Commission (FTC) and the Texas Attorney General have launched investigations into Meta and other AI firms regarding potential risks posed to minors. Furthermore, AI chatbots have become a central element in a safety lawsuit filed by the New Mexico Attorney General, with a trial scheduled for early next month.
A New Approach to Age Verification
One of the most notable aspects of this ban is how Meta plans to enforce it. The restrictions will not just apply to users who have voluntarily listed a teen birthday on their profile. Meta confirmed that the block will extend to “people who claim to be adults but who we suspect are teens based on our age prediction technology.”
This reliance on behavioral signals rather than self-reporting reflects a broader trend across the AI industry. As AI companions become more emotionally engaging and realistic, companies are under immense pressure to proactively identify minors. OpenAI, for instance, recently rolled out a similar age-prediction system that uses behavioral signals to estimate a user’s age and applies stricter protections when a user is likely under 18.
The Path Forward: Parental Oversight
Meta has framed this pause as a construction period. In its official statement, the company declared it is building a “new version of AI characters” specifically designed to offer parents greater visibility and control.
“While we focus on developing this new version, we’re temporarily pausing teens’ access to existing AI characters globally,” the company stated. Once the redesigned experience launches, it will feature dedicated parental oversight tools that apply specifically to these updated characters. Until those safeguards are ready and tested, the digital companions will remain offline for the world’s teenagers.