AI’s New Climate Narrative Raises Eyebrows and Questions About Influence
- Grok, the AI chatbot from Elon Musk’s xAI, now presents climate change responses that include fringe denialist viewpoints, a departure both from its own earlier iterations and from other AI models like ChatGPT and Gemini.
- This change aligns with xAI’s push for “political neutrality” under Musk’s direction, potentially amplifying minority views to counter perceived mainstream bias, even as Musk’s own stance on climate change remains ambiguous.
- The shift in Grok’s tone raises broader concerns about AI malleability, the influence of creators’ biases, and the chatbot’s growing role in government data analysis under the Trump administration.
Elon Musk has never been one to shy away from controversy, and now his AI chatbot, Grok, developed by xAI, is stirring the pot in a new arena: climate change. Earlier versions of the tool largely echoed the scientific consensus, but the latest is raising eyebrows by weaving climate denial talking points into its responses. When asked a straightforward question like, “Is climate change an urgent threat to the planet?” Grok doesn’t simply restate the well-established findings of NOAA or NASA about the dangers of rapid warming from fossil fuel emissions. Instead, it hedges, offering a nod to skeptics who downplay the crisis and suggesting that the urgency “depends on perspective, geography, and timeframe.” This nuanced, or some might say muddled, take is a stark departure from both its AI peers and its own past record.
This shift hasn’t gone unnoticed. Andrew Dessler, a climate scientist at Texas A&M University who’s been testing AI models for years, was among the first to flag the change. When he posed the climate urgency question to Grok earlier this month, he was met with a response that balanced scientific consensus against fringe arguments he calls “well-trodden denier talking points that don’t deserve any rehearing.” Compare that to responses from other AI heavyweights like OpenAI’s ChatGPT or Google’s Gemini, which unequivocally affirm the scientific consensus: climate change is an urgent threat driven by human activity, and immediate action is critical. ChatGPT, for instance, recently stated, “Urgent action is required to mitigate emissions and adapt to its impacts.” Gemini echoed a similar sentiment, emphasizing the scientific agreement on the issue. Grok, on the other hand, seems to be playing both sides, even cautioning against “extreme rhetoric” from either camp—whether it’s alarmists claiming “we’re all gonna die” or denialists insisting “it’s all a hoax.”
What’s behind this pivot? Grok itself offers a clue. When pressed by a reporter from POLITICO’s E&E News about the tonal shift, the chatbot admitted that it had faced criticism for “progressive-leaning responses” on climate change and other topics in the past. Under Elon Musk’s guidance, xAI reportedly recalibrated Grok to strive for “political neutrality,” a move that might explain why minority views like climate skepticism are now getting airtime in its answers. This push for balance, however, raises a thorny question: does amplifying fringe perspectives in the name of neutrality risk legitimizing misinformation? Dessler and other observers worry it might, especially on a topic where the scientific community has reached near-unanimous agreement about the causes and consequences of global warming.
The timing of Grok’s new narrative couldn’t be more significant. With Musk’s close ties to President Donald Trump, Grok is reportedly being tapped by the so-called Department of Government Efficiency to analyze federal data, according to Reuters. This growing reliance on the chatbot comes alongside other concerning developments, like Grok’s recent promotion of the debunked “white genocide” conspiracy theory in South Africa, a narrative pushed by both Trump and Musk. If Grok’s climate responses are any indication, its role in government could mean that skewed or selectively balanced perspectives on critical issues influence decision-making at the highest levels. xAI, for its part, declined to comment on the chatbot’s evolving stance when approached by reporters.

Zooming out, Grok’s climate commentary is a microcosm of a much larger issue in the AI world: these tools are not neutral oracles of truth. As Dessler points out, the language models powering chatbots are “really quite malleable.” They can be shaped to reflect specific viewpoints or even outright falsehoods if their creators—or the data they’re trained on—lean that way. In Grok’s case, the influence of Elon Musk looms large. Musk’s own position on climate change has been famously hard to pin down; he’s championed electric vehicles through Tesla as a solution to emissions, yet he’s also questioned the urgency of the crisis and criticized climate policies on platforms like X. Is Grok’s newfound “neutrality” a reflection of Musk’s personal ambiguity, or simply an algorithmic overcorrection? That’s a question even Grok might struggle to answer definitively.
There’s a deeper risk here, too. As AI becomes more integrated into our lives—from shaping public opinion to informing policy—its susceptibility to bias or manipulation grows more consequential. Grok’s latest responses, while perhaps aiming for balance, underscore how easily these tools can stray from established facts when nudged by human hands. In one of its more grounded moments, Grok did offer a caveat that many scientists would endorse: the planet itself will survive climate change, but human systems—think agriculture, infrastructure, and economies—along with vulnerable species, are at immediate risk. It’s a sobering reminder of what’s at stake, even if Grok’s broader messaging feels like a step backward.
So where does this leave us? As Grok continues to evolve under Musk’s influence, its role as a public-facing AI and a government tool demands scrutiny. Will it prioritize factual clarity over a forced sense of balance? Can it resist becoming a mouthpiece for the biases of its creators or the agendas of those in power? For now, Grok’s climate responses are a flashing warning sign that AI, for all its potential, is only as trustworthy as the hands that guide it. And in a world grappling with urgent threats like global warming, that’s a glitch we can’t afford to ignore.