Senator Marsha Blackburn’s latest proposal is a “cornucopia” of dangerous anti-tech policies—from gutting Section 230 to mandating political bias in algorithms—all wrapped in one massive, Trump-branded package.
- A “Kitchen Sink” Disaster: The bill is less about AI safety and more about bundling years of failed, destructive internet policies—including attacks on Section 230 and copyright law—into one piece of legislation.
- Legal Minefields: By establishing a vague “duty of care” and mandating “bias audits,” the act would likely flood the court system with frivolous lawsuits and force AI developers to adopt a pro-conservative slant to avoid liability.
- Breaking the User Experience: Under the guise of fighting “self-preferencing” and protecting children, the bill threatens to degrade popular services (like Google Maps and Amazon Prime) and incentivize the mass censorship of constitutionally protected speech.
Sometimes, you can judge a bill by its title alone. Senator Marsha Blackburn (R–Tenn.) has introduced a piece of legislation with a name so convoluted it seems designed solely to flatter the former president: The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act.
Blackburn’s acronym for this word salad is the TRUMP AMERICA AI Act. While standard acronym rules might yield something closer to “TRUMP AMIERICA,” the branding is clear. Unfortunately, the content of the bill is as chaotic as its name. It represents an “anti-tech omnibus,” combining nearly every bad policy idea of the last half-decade into a single regulatory scheme that Techdirt’s Mike Masnick calls a “massively destructive internet policy overhaul masquerading as AI legislation.”
The “Duty of Care” Trap
At the heart of the bill is a fundamental shift in how the internet is governed. The legislation proposes a “duty of care” for AI developers, enforced by the Federal Trade Commission (FTC), requiring them to “prevent and mitigate foreseeable harm to users.” While this sounds noble in theory, critics argue it is a Trojan horse designed to dismantle Section 230 protections.
Currently, Section 230 allows platforms to dismiss frivolous lawsuits quickly. Blackburn’s bill would upend this by allowing the U.S. Attorney General, state attorneys general, and private litigants to sue developers for “defective design” or “failure to warn.” As Masnick notes, this creates an open invitation for litigation: if a negative event occurs and an AI tool was even tangentially involved, lawyers could sue, forcing companies into expensive legal battles to prove the AI wasn’t the cause. The result? Only the largest tech behemoths with unlimited legal budgets would survive, effectively crushing smaller innovators.
Ushering in Political Bias and Censorship
Perhaps the most politically charged aspect of the bill is Section 11, which claims to combat “bias against conservative figures.” The act would require high-risk AI systems to undergo “regular bias evaluations.”
In practice, this politicizes the algorithm. If federal agencies led by political appointees are tasked with auditing AI for “political affiliation” bias, developers may be forced to engineer their systems to spit out politically favorable results to avoid government wrath. While the text limits this to “high-risk” systems (like those in education or employment), the lack of precise definitions leaves the door open for broad application against general consumer AI.
Protecting Kids or Purging Content?
The bill also incorporates elements of the controversial Kids Online Safety Act (KOSA). It demands that platforms implement tools to protect users under 17 from harms such as sex trafficking and the promotion of suicide. Again, while the goal is admirable, the mechanism is blunt.
The bill requires platforms to exercise “reasonable care” to prevent mental health disorders. This vague standard allows enterprising lawyers to argue that neutral platform features or legal speech caused a user distress. To avoid liability, platforms would likely preemptively ban vast amounts of content. Furthermore, a provision to ban “sexual material harmful to minors”—a term often used broadly in state laws to target LGBTQ+ literature and art—could force platforms to filter out anything remotely resembling pornography or erotica.
Breaking the User Experience
The TRUMP AMERICA AI Act also takes aim at “self-preferencing,” the practice where tech companies highlight their own services—like Google showing a map at the top of search results or Amazon highlighting Prime-eligible products.
Lawmakers frame this as an antitrust victory, but for users, it is a degradation of service. By banning “systemically important platforms” (those whose users comprise at least 34 percent of the U.S. population) from steering users to their own products, the bill would make Big Tech significantly less user-friendly in the name of consumer protection.
A New Federal Rulebook?
The bill aims to codify Donald Trump’s desire for a “national framework” that preempts state-level AI regulations. It touches on everything from chatbot restrictions to copyright law, even suggesting that AI-generated derivative works should be ineligible for copyright protection.
While the bill has yet to be formally introduced or attract co-sponsors, its title is a calculated move to secure Trump’s endorsement. If successful, it could rally Republican lawmakers behind a proposal that promises to “protect children, creators, conservatives, and communities,” but threatens to break the functioning of the open internet in the process.