The internet’s flagship encyclopedia has officially banned its 260,000 editors from using chatbots to write articles, sparking a broader conversation about the future of reliable information on the web.
- A decisive ban on AI generation: Wikipedia moderators overwhelmingly voted to prohibit the use of Large Language Models (LLMs) like ChatGPT for writing articles, prioritizing human fact-checking to combat the influx of AI-generated “slop.”
- Strict rules with narrow exceptions: While generating new encyclopedic content is strictly forbidden, editors may still use AI for minor copy edits and language translation, provided every change undergoes rigorous human review.
- An existential shift in web traffic: The policy change arrives as the 25-year-old platform faces declining human page views, ironically losing traffic to the very AI platforms that were likely trained on its massive archives.
The internet’s favorite encyclopedia is putting its foot down. In a major crackdown against the tidal wave of so-called “AI slop” flooding the web, Wikipedia has officially banned its army of 260,000 human editors from using artificial intelligence to write articles. The new policy, approved by volunteers at the Wikimedia Foundation’s flagship site, bars the use of Large Language Models (LLMs) like ChatGPT for generating encyclopedic content. Instead, the platform is doubling down on what has kept it afloat for a quarter of a century: a reliance on dedicated human editors for research, writing, and bot detection.
For Wikipedia’s leadership and community, the decision came down to preserving the site’s most foundational principles. AI-generated text often violates the encyclopedia’s strict tenets of verifiability and neutrality. Chatbots are famously prone to “hallucinations”—confidently producing made-up facts, inserting broken links, and citing references that lead absolutely nowhere. To maintain its reputation as the internet’s most trusted information hub, the community realized that unverified machine output had no place in its core articles.
The ban is not a complete rejection of all automated tools; rather, it is a strict boundary on how they can be applied. Editors are still permitted to use AI in highly limited, tightly controlled ways. For instance, AI can be utilized to translate articles from other languages or to suggest minor copy edits. The crucial caveat is that humans must meticulously review every single change, ensuring that no new, unverified information is introduced into the text.
This sweeping policy change is the culmination of months of intense debate among Wikipedia’s moderators, ultimately passing in a decisive 40-to-2 vote. Lebleu, an editor who uses the handle Chaotic Enby on the site and helped author the new guidelines, noted that the restriction has been a long time coming. Speaking to 404 Media, Lebleu explained that the sheer volume of AI-generated articles had become unmanageable for the volunteer workforce. What began as a period of “cautious optimism” about AI tools rapidly soured into “genuine worry” across the community.
Yet even with the new safeguards in place, Wikipedia’s supporters worry that the AI takeover has already crossed a critical threshold. The shifting landscape of internet traffic paints a daunting picture. According to recent data, ChatGPT has already overtaken Wikipedia in monthly visits, and by late 2025, Wikipedia’s human page views had fallen 8% from the previous year. Meanwhile, a recent Futurism report highlighted a staggering 36% increase in ChatGPT users between late 2023 and early 2024, a period in which most other platforms saw only negligible fluctuations. As Chris Beer, a senior data journalist at GWI, noted of the chatbot’s explosive growth: “It’s reaching more of the internet, more quickly, than almost any other platform in history.”
This massive shift in how humanity seeks out information is painfully ironic for the 25-year-old web resource. For decades, Wikipedia has stood as the definitive, crowdsourced pillar of human knowledge—an archive that, most likely, served as the very training ground for the LLMs that currently power ChatGPT and its rivals. Now, Wikipedia finds itself fighting to preserve the integrity of its human-curated data against the very machines it helped educate.
This battle may set a precedent far beyond the pages of Wikipedia. As Lebleu warned, the platform’s decision might merely be the beginning of a larger web-wide reckoning. As anxiety over the AI bubble continues to grow, Wikipedia’s firm stance could trigger a domino effect, empowering communities across the internet to take a stand and decide exactly how, and if, artificial intelligence should be welcomed on their own terms.