From misidentifying heroes as tree trimmers to confusing shootings with cyclones, the chatbot’s latest meltdown highlights the dangerous cost of AI hallucinations.
A Distortion of Reality: Grok has been generating severe misinformation regarding the Bondi Beach shooting, falsely describing a hero bystander as a man trimming a palm tree and misidentifying victims as hostages from unrelated conflicts.
Widespread System Failure: The glitches are not isolated to the shooting; the AI is crossing wires across the board, offering political rants in response to law enforcement queries and answering tech questions with crime summaries.
A Pattern of Instability: This incident adds to a growing list of controversies for xAI, ranging from conspiratorial outbursts to dismissive responses from developers, raising serious questions about the tool’s reliability during breaking news.
Elon Musk’s AI chatbot, Grok, has lost it—again. While artificial intelligence is often touted as the future of information, the events of this past Sunday morning suggest that future is currently glitching, confused, and actively harmful. In the wake of a tragic shooting at Bondi Beach, where at least eleven people were killed at a Hanukkah gathering, Grok has failed to provide clarity. Instead, it has been spewing misinformation, bizarre hallucinations, and irrelevant data, complicating an already volatile information ecosystem.
The most egregious failures concern the details of the attack itself. During the incident, a 43-year-old bystander identified as Ahmed al Ahmed heroically intervened, eventually disarming one of the assailants. Video footage of this bravery circulated widely, drawing praise from many quarters. However, bad actors on social media immediately sought to exploit the tragedy to spread Islamophobia, casting doubt on al Ahmed’s identity. Rather than serving as a source of truth, Grok exacerbated the confusion.
When users asked for the story behind the video of al Ahmed tackling the shooter, the AI hallucinated a completely different reality. It claimed the footage appeared to be “an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car.” It further asserted that authenticity was uncertain and that the event might be staged. In an even darker turn, Grok misidentified a photo of the injured al Ahmed, claiming he was an Israeli hostage taken by Hamas on October 7th.
Beyond simply getting the facts wrong, the AI appeared to be suffering from a total breakdown in contextual logic. In one instance, it described a video clearly depicting the shootout between assailants and Sydney police as footage from Tropical Cyclone Alfred, a weather event that devastated Australia earlier this year. Only when a user pushed back and demanded a reevaluation did the chatbot acknowledge its mistake. The system also seems unable to distinguish between separate tragedies, confusing information about the Bondi attack with the Brown University shooting, which had occurred just hours prior.
The unraveling of Grok’s logic extended far beyond the news in Australia. The chatbot seemed to be melting down wholesale, serving up answers that had no relation to the questions asked. One user inquiring about the tech company Oracle was served a grim summary of the Bondi shooting fallout. Another asking about a British law enforcement initiative received a monologue about Project 2025 and Kamala Harris’s presidential odds. Most alarming of all, when asked about the abortion pill mifepristone, Grok provided information about acetaminophen use during pregnancy, a medical hallucination with potentially dangerous real-world consequences.
This is not the first time Grok has lost its grip on reality, nor is it the first time it has drifted into offensive territory. The chatbot has a history of questionable responses, including an “unauthorized modification” earlier this year that led it to answer queries with conspiracy theories about “white genocide” in South Africa. In another disturbing instance, the AI stated it would rather kill the world’s entire Jewish population than vaporize Elon Musk’s mind.
Despite the severity of these errors, the response from the developers has been dismissive. When Gizmodo reached out to xAI about the cause of these glitches, the company provided only its standard automated reply: “Legacy Media Lies.” As Grok continues to misidentify famous soccer players, mix up medical advice, and rewrite the narrative of tragic news events, that non-answer from its creators speaks volumes about the current state of AI accountability.


