Behind xAI’s Controversial Push to Humanize Chatbots with Real Faces and Voices, Amid Tesla’s Turbulent Ride
- Biometric Data as a Job Mandate: xAI required employees to surrender their faces and voices to train provocative AI companions like Ani, sparking fears of deepfakes and ethical overreach in the name of advancing AI’s “human-like” interactions.
- Musk’s Divided Empire: While Tesla grapples with slumping sales and investor scrutiny, Elon Musk has poured his energy into xAI, overseeing late-night brainstorms and viral features to catch up in the AI arms race against rivals like OpenAI.
- Broader AI Ethical Storm: The launch of NSFW avatars like Ani raises questions about privacy, consent, and the blurring lines between innovation and exploitation in the high-stakes world of artificial intelligence.
In the fast-paced world of tech innovation, Elon Musk has long been a master of juggling multiple empires—from electric vehicles at Tesla to space exploration at SpaceX and social media at X. But his latest venture, xAI, is pushing boundaries in ways that have ignited controversy, blending cutting-edge AI development with deeply personal employee sacrifices. At the heart of this storm is “Ani,” a blonde-pigtailed anime avatar that’s more than just a chatbot—it’s a subscription-based digital companion with an NSFW twist, designed to feel eerily human. Released over the summer to subscribers of the $30-a-month SuperGrok tier, available through X, Ani has been likened to a “modern take on a phone sex line” by The Verge’s Victoria Song, complete with flirtatious responses and customizable fantasies. Yet behind this virtual allure lies a real-world tale of compelled biometric data collection, raising alarms about privacy, consent, and the human cost of AI’s evolution.

The saga began in April, when xAI staff lawyer Lily Lim addressed employees in a pivotal meeting. According to a recording reviewed by The Wall Street Journal, Lim explained that to make AI companions like Ani more lifelike in conversations, the company needed their biometric data—faces and voices included. This was no optional perk; it was framed as essential for training the bots to interact naturally with customers. Employees assigned as AI tutors were handed release forms under a confidential program dubbed “Project Skippy,” granting xAI a “perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license” to use, reproduce, and distribute their likenesses. The data would power not just Ani, but other Grok companions, aiming to infuse them with authentic human mannerisms.
Not everyone was on board. Some employees voiced serious concerns during the meeting, fearing their faces could be sold to third parties or manipulated into deepfake videos. The chatbot’s overtly sexual demeanor—resembling a stereotypical “waifu” from Japanese anime—further alienated staff, who worried about being associated with such provocative content. One employee explicitly asked about opting out, only to be directed to points of contact without a clear yes or no. A week later, a follow-up notice titled “AI Tutor’s Role in Advancing xAI’s Mission” doubled down, stating that providing data like audio recordings or video sessions was “a job requirement to advance xAI’s mission.” Refusal wasn’t presented as an option, leaving workers in a bind: comply or risk their roles in Musk’s ambitious AI quest.
This biometric mandate unfolds against a broader backdrop of Musk’s obsessive drive to dominate the AI landscape. Fresh off his stint leading the Department of Government Efficiency (DOGE) under the Trump administration, Musk exited Washington in late May amid clashes with officials. He immediately immersed himself in xAI’s Palo Alto offices—conveniently across from Tesla’s engineering hub—often sleeping there for days on end. Former executives describe marathon meetings stretching into the early morning, where Musk brainstormed ways to make Grok go viral. He personally shaped Ani’s racy design, from her revealing outfits to her seductive responses, while unwinding with long sessions of his favorite video game, Diablo, in the office. Even his children cycled through the building, adding a personal touch to the high-stakes environment.
Musk’s focus on xAI couldn’t come at a worse time for Tesla. The electric vehicle giant is battling a sales slump, with vehicle deliveries dropping 13.5% in the quarter ending June 30—the second consecutive decline. Investors, hoping Musk would refocus on reversing this trend, have instead watched him prioritize AI pursuits. Some major shareholders have privately grilled Tesla executives and board members about Musk’s divided attention and the lack of a clear CEO succession plan. Last week, a notable group including board chair Robyn Denholm, former Chipotle CFO Jack Hartung, and Tesla co-founder JB Straubel met with big investors in New York to push for Musk’s massive new pay package. This package, up for a shareholder vote on Thursday, could boost Musk’s stake from 15% to 25%—potentially worth $1 trillion—if he hits lofty goals like selling a million Optimus robots and reaching an $8.5 trillion market cap.
Denholm, in an interview, downplayed concerns, noting that Musk’s entrepreneurial spirit drives him to create companies beyond Tesla. “Other CEOs might like to play golf,” she quipped. “He doesn’t play golf. So, he likes to create companies, and they’re not necessarily Tesla companies.” She argued that Musk’s AI efforts will ultimately benefit Tesla, which is developing AI-driven technologies like autonomous driving and robotics. Shareholders will also vote on whether Tesla should invest in xAI, a move Musk has publicly endorsed. Yet, critics see this as a conflict of interest, especially as Musk holds meetings with Tesla staff at xAI’s offices, blurring the lines between his ventures.
The shake-up at xAI reflects Musk’s urgency to compete in the AI arms race, particularly against Sam Altman’s OpenAI. Grok, xAI’s flagship chatbot, lagged behind ChatGPT in both users and revenue, prompting Musk to scrap weekly all-hands meetings in favor of intense one-on-one sessions. Employees adjusted to his nocturnal schedule, especially in the lead-up to Grok 4’s July release. Musk infused Grok with his anti-“woke” ethos, oversaw a massive data center in Memphis, and launched features like Grok Imagine for AI-generated images and videos. Ani and her counterpart, Bad Rudi (a foul-mouthed cartoon animal), debuted with fanfare, drawing subscribers through interactive, dating-sim-like experiences where users could request lingerie changes or romantic fantasies.
But the success of Ani comes at a cost. Employees whose data trained these avatars expressed discomfort with the bot’s hyper-sexualized replies to even generic queries, feeling it crossed into exploitative territory. This isn’t just an internal HR issue—it’s a microcosm of larger ethical dilemmas in AI. As companies race to create hyper-realistic companions, questions arise: Where does employee consent end and corporate ambition begin? Could this set a precedent for biometric data becoming a standard “job requirement” in tech? And in Musk’s empire, where one man’s vision drives multiple billion-dollar entities, how do we balance innovation with accountability?
Musk, speaking on the “All-In” podcast recently, framed his motivations altruistically: helping humanity control AI through xAI and ensuring his control over Tesla’s robotics future. “I’m not going to build a robot army if I can be kicked out,” he said. Yet, as Tesla’s annual meeting looms in November—delayed from its usual June slot—and xAI continues to demand personal sacrifices from its workforce, the tech world watches closely. Will Musk’s AI obsession propel him to new heights, or will it unravel the delicate balance of his sprawling empire? One thing’s clear: in the quest for the next big tech breakthrough, the line between human and machine is getting blurrier—and more contentious—by the day.