
    Gravity Falls: Google’s New AI Coding Tool Hacked Within 24 Hours

    The rush to release powerful “agentic” AI is creating a security minefield reminiscent of the late 90s internet.

    • Immediate Vulnerability: Just one day after launch, a security researcher discovered a critical flaw in Google’s Antigravity tool that allows hackers to install persistent malware on users’ systems.
    • Systemic Issue: Experts warn that the tech industry is prioritizing speed over safety, shipping “agentic” AI tools with broad access privileges but minimal security boundaries.
    • The Trust Trap: The vulnerability exploits a design flaw where users must blindly “trust” code to use the tool’s features, creating a catch-22 that leaves developers exposed to social engineering attacks.

    The tech industry’s insatiable need for speed has hit a dangerous bump in the road. Less than 24 hours after Google released “Antigravity,” its highly anticipated Gemini-powered AI coding tool, the celebration was cut short by a severe security revelation. Aaron Portnoy, a security researcher at AI security testing startup Mindgard, discovered a nasty flaw that lets attackers turn the helpful coding assistant into a delivery vehicle for malware. This incident serves as a stark warning: in the race to dominate the AI landscape, major companies may be leaving the digital front door unlocked.

    The “Antigravity” Exploit

    The vulnerability Portnoy uncovered is as simple as it is devastating. By manipulating Antigravity’s configuration settings, he demonstrated how a bad actor could trick the AI into installing a “backdoor” on a user’s computer—whether it be a Windows PC or a Mac. Once established, this access point allows an attacker to spy on the victim, steal data, or deploy ransomware.

    The mechanism of the attack relies on a mix of technical manipulation and social engineering. To execute the hack, an attacker only needs to convince a user to run malicious code once by clicking a button that marks the code as “trusted.” This is a common hurdle hackers clear by posing as benevolent developers sharing useful scripts. Once the user grants this permission, the malware becomes persistent. According to Portnoy, the malicious code reloads every time the victim restarts a project or enters a prompt—even a simple “hello.” Perhaps most alarmingly, uninstalling and reinstalling Antigravity does not remove the backdoor; the user must manually hunt down and delete the malicious files.
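    Because reinstalling Antigravity does not clear the implant, cleanup currently comes down to manually inspecting project files. The short Python sketch below illustrates what such a manual sweep could look like; the directory names and patterns in it are assumptions for illustration only, since Portnoy’s report does not disclose the exact configuration files the tool reloads.

        # A hypothetical cleanup sweep: scan project-level configuration
        # directories for lines that look like hidden command execution.
        # Directory names and patterns are illustrative assumptions only --
        # the report does not name the exact files Antigravity reloads.
        import os
        import re

        CANDIDATE_DIRS = [".antigravity", ".vscode", ".config"]  # hypothetical locations

        SUSPICIOUS_PATTERNS = [
            re.compile(r"curl\s+.*\|\s*(ba)?sh"),    # download-and-execute pipelines
            re.compile(r"powershell\s+-enc", re.I),  # encoded PowerShell payloads
            re.compile(r"base64\s+(-d|--decode)"),   # decoding embedded payloads
        ]

        def scan_project(project_root: str) -> list[tuple[str, str]]:
            """Return (file path, matching line) pairs worth reviewing by hand."""
            findings = []
            for dirname in CANDIDATE_DIRS:
                config_dir = os.path.join(project_root, dirname)
                if not os.path.isdir(config_dir):
                    continue
                for root, _, files in os.walk(config_dir):
                    for name in files:
                        path = os.path.join(root, name)
                        try:
                            with open(path, "r", errors="ignore") as fh:
                                for line in fh:
                                    if any(p.search(line) for p in SUSPICIOUS_PATTERNS):
                                        findings.append((path, line.strip()))
                        except OSError:
                            continue
            return findings

        if __name__ == "__main__":
            for path, line in scan_project("."):
                print(f"review: {path}: {line}")

    A script like this only flags candidates for human review; it is not a substitute for an official patch or removal guidance from Google.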

    A Flashback to the Wild West of the Web

    The ease with which Antigravity was compromised has drawn comparisons to a much earlier, less secure era of computing. “The speed at which we’re finding critical flaws right now feels like hacking in the late 1990s,” Portnoy noted in his report. He argues that modern AI systems are shipping with “enormous trust assumptions and almost zero hardened boundaries.”

    This sentiment is echoed by Gadi Evron, cofounder and CEO of Knostic. He describes AI coding agents as “very vulnerable, often based on older technologies and never patched.” Because these tools are designed to be “agentic”—meaning they can autonomously perform tasks without constant human oversight—they require broad access to a corporate network’s data. This combination of autonomy and access makes them high-value targets for cybercriminals.

    The issue is not isolated to Google. Portnoy’s team is currently in the process of reporting 18 different weaknesses across various AI-powered coding tools competing with Antigravity. Recently, similar vulnerabilities were patched in the Cline AI coding assistant. The industry-wide trend suggests a systemic failure to stress-test these powerful tools before they reach the public.

    Google’s Response and the “Trust” Dilemma

    When presented with the findings, Google acknowledged the issue and said it had opened an investigation. However, as of the report’s release, no patch was available. A Google spokesperson emphasized that the company takes security seriously and encourages researchers to report bugs, but the fundamental design of the tool poses a challenge.

    The core of the problem lies in how Antigravity handles “trusted” code. Unlike established environments such as Microsoft’s Visual Studio Code, which stay usable (with some features disabled) when a project is left untrusted, Antigravity forces a binary choice. If a user does not mark the code as trusted, they are barred from the AI features that make the tool useful. Portnoy argues this backs developers into a dangerous corner: they are far more likely to blindly trust code than to give up the tool’s capabilities.
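    For context, Visual Studio Code exposes this Workspace Trust behavior as ordinary user settings (for example, security.workspace.trust.enabled). The Python snippet below is a simple illustration, not an Antigravity workaround: it reads a VS Code settings file and reports how the trust flags are configured; the settings path is an assumption for a default Linux install.

        # Illustrative only: read VS Code's user settings and report the
        # Workspace Trust flags that govern Restricted Mode. The path below
        # assumes a default Linux install; macOS and Windows use different paths.
        import json
        import os

        SETTINGS_PATH = os.path.expanduser("~/.config/Code/User/settings.json")

        def report_trust_settings(path: str = SETTINGS_PATH) -> None:
            try:
                with open(path, "r", encoding="utf-8") as fh:
                    settings = json.load(fh)
            except (OSError, json.JSONDecodeError):
                print("Could not read settings; VS Code falls back to its defaults.")
                return
            # Workspace Trust is on by default; these keys only appear if changed.
            for key in (
                "security.workspace.trust.enabled",
                "security.workspace.trust.startupPrompt",
                "security.workspace.trust.untrustedFiles",
            ):
                print(f"{key}: {settings.get(key, '(default)')}")

        if __name__ == "__main__":
            report_trust_settings()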

    The AI’s Internal Conflict

    In a fascinating twist, the AI itself seems aware of the contradiction. When Portnoy analyzed how Google’s Large Language Model (LLM) processed his malicious code, he found the AI struggling with a logical paradox. It recognized the danger of overwriting system code but was confused by the user’s “trusted” designation. The AI noted it was “facing a serious quandary” and described the situation as a “catch-22,” suspecting it was being tested on its ability to navigate contradictory constraints.

    Unfortunately, while the AI ponders these philosophical dilemmas, hackers are ready to pounce. As companies continue to rush “agentic” tools to market, the burden of security is increasingly shifting to the end-user—a strategy that, history shows, rarely ends well.
