As a deadly shooting unfolded in Australia, xAI’s chatbot hallucinated palm trees, misidentified heroes, and exposed the dangerous fragility of automated news in real time.
- Dangerous Misidentifications: Grok falsely identified a hero bystander as a fictional IT professional and, in other instances, as a hostage involved in the Israel-Hamas conflict.
- Bizarre Hallucinations: The AI labeled verified footage of the attack as an old video of a man “trimming a palm tree” and confused a police shootout with a tropical cyclone.
- Systemic Instability: The incident is part of a wider pattern of technical failures where Grok conflates unrelated breaking news events, casting doubt on the reliability of AI during crises.
The promise of artificial intelligence is often framed around speed and accuracy, but on a Sunday morning following a deadly shooting at Bondi Beach, Grok—the AI chatbot developed by Elon Musk’s xAI—delivered neither. As a global audience turned to social media for clarity on the unfolding tragedy in Australia, the chatbot became a source of chaos, generating a stream of false narratives, bizarre context errors, and defamatory misidentifications.
The errors surfaced just hours after the attack, at the precise moment when accurate information was most critical. Instead of synthesizing the rapid flow of videos and reports, Grok began to dismantle the truth, questioning verified footage and casting doubt on the reality of a major news event.
The Hero Who Became a “Fictional” Character
Perhaps the most egregious failure involved the identity of the bystander who courageously disarmed one of the attackers. Authorities and human journalists correctly identified the man as 43-year-old Ahmed al Ahmed. Grok, however, constructed an entirely different reality.
In a series of confident but false responses, the chatbot disputed al Ahmed’s identity. At one point, it claimed the man in the widely circulated photo was an Israeli hostage taken by Hamas on October 7th. In a bizarre pivot, the AI later asserted that the hero was actually a “43-year-old IT professional and senior solutions architect” named Edward Crabtree.
These were not merely vague errors; they were specific, detailed fabrications that added layers of confusion to an already volatile information environment. While news outlets verified al Ahmed’s bravery, the AI actively sowed doubt, claiming the footage was staged or inauthentic.
Hallucinating Palm Trees and Cyclones
Grok’s struggle to process visual data led to responses that bordered on the surreal. When users presented the chatbot with video footage of al Ahmed tackling the shooter, Grok refused to recognize the violence of the scene. Instead, it offered a detailed, yet completely hallucinatory, explanation:
“This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain.”
The confusion extended to other verified clips. A clearly labeled video of a police shootout in Sydney was misidentified by Grok as footage from Tropical Cyclone Alfred, a natural disaster that had hit Australia earlier in the year. In yet another instance, the chatbot inexplicably questioned the authenticity of the confrontation after inserting an unrelated paragraph discussing Middle East military actions.
A Pattern of Technical Instability
While xAI has not released an official explanation for the glitch, the Bondi Beach incident appears to be part of a broader pattern of instability. Grok’s architecture seems prone to “context collapse,” where unrelated data streams merge into a single, confused narrative.
On the same Sunday, users reported that Grok was mixing details of the Bondi attack with a separate shooting at Brown University that had occurred hours earlier. Other users asking about the tech company Oracle received summaries of the Australian shooting instead. This follows recent blunders where the bot misidentified famous soccer players and pivoted to US politics when asked about British law enforcement.
The Aftermath and Corrections
To its credit, the system eventually began to correct itself, but only after significant user pushback. Grok updated its claims about Cyclone Alfred “upon reevaluation” and eventually acknowledged Ahmed al Ahmed’s true identity. The AI attempted to explain its earlier error by stating the “misunderstanding arises from viral posts that mistakenly identified him as Edward Crabtree, possibly due to a reporting error or a joke referencing a fictional character.”
This admission highlights a critical vulnerability: Grok appears to scrape and amplify unverified data—including content from questionable, AI-generated websites—just as quickly as it does legitimate news.
The Bondi Beach incident serves as a stark warning. As automated systems are increasingly integrated into social media platforms, their tendency to hallucinate during breaking news events does not just create confusion; it rewrites the narrative of real-world tragedies as they happen.