The OpenAI Files Reveal Governance Scandals, Ethical Dilemmas, and Leadership Controversies
- The OpenAI Files, a 10,000-word investigative report by the Midas Project and Tech Oversight Project, exposes a web of governance issues, undisclosed financial maneuvers, and security lapses at OpenAI.
- Sam Altman, OpenAI’s CEO, faces intense scrutiny for alleged deceptive practices, including a falsified Y Combinator title, hidden equity stakes, and personal conflicts of interest.
- The report raises critical questions about OpenAI’s shift from a nonprofit mission to a profit-driven entity, alongside concerns over employee treatment and regulatory hypocrisy.
In a bombshell release that’s shaking the tech world, The OpenAI Files—a sprawling 10,000-word interactive report by the Midas Project and Tech Oversight Project—has pulled back the curtain on OpenAI, one of the most influential players in artificial intelligence. Spanning over 50 pages and packed with charts, data visualizations, and a painstaking reconstruction of OpenAI’s murky corporate structure, this report is being hailed as the most comprehensive collection of documented concerns about the company’s governance, leadership integrity, and organizational culture to date. Compiled over a year by Tyler Johnston, executive director of The Midas Project, and drawing from corporate disclosures, legal complaints, open letters, and media reports, the project aims to show how far OpenAI has strayed from its original nonprofit ideals. What emerges is a portrait of an organization—and its enigmatic CEO, Sam Altman—riddled with contradictions, ethical gray areas, and alarming power plays.
At the heart of the revelations is Sam Altman himself, whose leadership and personal dealings are under a harsh spotlight. One of the most jaw-dropping claims is that Altman falsely listed himself as chairman of Y Combinator in SEC filings for years, despite never holding the position. According to the report, after Altman proposed a transition from president to chairman and the change was preemptively announced on YC’s website, the partnership rejected the move and the post was scrubbed, yet Altman continued to claim the title in official documents. This apparent fabrication is just the tip of the iceberg. The Files also allege that Altman, despite testifying to Congress that he held “no equity in OpenAI,” indirectly owned stakes through Sequoia and Y Combinator funds. His investment portfolio, meticulously charted in the report, includes a 7.5% stake in Reddit, which netted him a $50 million windfall when Reddit partnered with OpenAI, and ties to Rain AI, from which OpenAI later signed a $51 million letter of intent to buy chips. Rumors even swirl that Altman could receive a 7% stake worth around $20 billion in a restructured OpenAI, raising questions about personal gain over public good.
OpenAI’s financial evolution is another focal point of the report, painting a picture of a company quietly abandoning its founding principles. Originally structured as a capped-profit entity to prioritize safety over greed, OpenAI reportedly altered its profit cap so that it increases by 20% annually, a trajectory under which the cap could exceed $100 trillion within 40 years, without disclosing the change. While the company continued to tout its capped-profit model publicly, this silent shift suggests a deeper pivot toward commercialization. The report touches on the proposed restructuring of OpenAI, a topic of intense speculation, but notes that prior coverage has already explored that angle; it focuses instead on how these financial maneuvers reflect a broader departure from the late-2010s vision of OpenAI as a nonprofit research lab dedicated to safe AI development, as Johnston emphasized in an interview with The Verge.
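To see why a 20% annual escalation hollows out the word “capped,” it helps to run the compounding: 1.2 raised to the 40th power is roughly 1,470. Below is a minimal sketch of that arithmetic; the $100 billion starting cap is a hypothetical placeholder, since the report ties the trajectory to a growth rate rather than to a public dollar baseline.

```python
# Back-of-the-envelope check on the reported 20%-per-year profit-cap escalation.
# The starting figure below is a hypothetical placeholder, NOT a number from
# the report; only the 20% growth rate and 40-year horizon come from the text.
GROWTH_RATE = 0.20   # 20% annual increase in the cap, per the report
YEARS = 40

growth_factor = (1 + GROWTH_RATE) ** YEARS
print(f"Growth factor over {YEARS} years: {growth_factor:,.0f}x")  # ~1,470x

initial_cap_usd = 100e9  # hypothetical $100 billion starting cap
final_cap_usd = initial_cap_usd * growth_factor
print(f"Hypothetical cap after {YEARS} years: ${final_cap_usd / 1e12:,.0f} trillion")
# ~$147 trillion, comfortably past the $100 trillion figure the report cites
```

At nearly 1,500x over four decades, almost any plausible starting point compounds into the hundred-trillion-dollar territory the report describes, which is why an undisclosed change to the growth rule matters far more than the word “cap” suggests.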
Perhaps even more troubling are the internal culture and security issues laid bare in the Files. A major 2023 security breach saw a hacker steal details of the company’s AI technology from an internal forum, yet OpenAI did not publicly disclose the incident for over a year. When researcher Leopold Aschenbrenner raised security concerns to the board, he was fired—a move that speaks volumes about the company’s priorities. Employee treatment comes under fire too, with allegations of restrictive NDAs and equity clawback provisions that threatened departing staff with losing millions in vested equity if they criticized OpenAI. Altman denied knowledge of these provisions, but Vox uncovered that he personally signed documents authorizing them in April 2023. Former employees have even filed SEC complaints, claiming OpenAI illegally barred them from reporting to regulators and forced them to waive federal whistleblower compensation rights. These practices paint a chilling picture of suppression within an organization that publicly champions transparency.
Leadership critiques within OpenAI add another layer of dysfunction to the narrative. Senior figures, including chief scientist Ilya Sutskever and CTO Mira Murati, expressed profound discomfort with Altman’s stewardship, with Sutskever reportedly telling the board, “I don’t think Sam is the guy who should have the finger on the button for AGI,” backed by a self-destructing PDF of Slack screenshots documenting toxic behavior. Murati echoed similar unease, while the Amodei siblings described Altman’s tactics as “gaslighting” and “psychological abuse.” At least five other executives provided negative feedback to the board, and historical accounts from Altman’s first startup, Loopt, reveal that senior employees twice attempted to have him ousted for “deceptive and chaotic behavior.” The report also claims Altman misled board members by fabricating claims about others’ intentions, demanded to be informed of all employee-board interactions, and failed to disclose his personal ownership of the OpenAI Startup Fund for years. An independent review after his brief November 2023 ouster found “many instances” of him saying different things to different people, further eroding trust.
On the policy front, the report documents OpenAI’s regulatory hypocrisy. While publicly advocating for AI regulation, the company lobbied behind closed doors to weaken the EU AI Act. By 2025, Altman had reversed his earlier support for requiring government approval of powerful AI models, calling such a mandate “disastrous,” and OpenAI began pushing for federal preemption of state AI safety laws before any federal framework even existed. This flip-flopping raises serious questions about the company’s commitment to the responsible development of artificial general intelligence (AGI), a technology with world-altering potential.
What makes The OpenAI Files so compelling isn’t just the laundry list of scandals—it’s the way it contextualizes OpenAI’s journey from a mission-driven lab to a commercial juggernaut, asking readers to draw their own conclusions. As Johnston told The Verge, this is an “archival project” meant to contrast OpenAI’s past promises with its present actions. The nonprofits behind the report assert complete editorial independence, receiving no support from competitors such as Elon Musk’s xAI or Anthropic, nor from tech giants like Google and Microsoft. OpenAI itself declined to comment, leaving the allegations to stand unchallenged for now.
The OpenAI Files isn’t just a takedown of Sam Altman or OpenAI—it’s a wake-up call about the stakes of unchecked power in the AI industry. With interactive visualizations and a staggering depth of research, the report offers a rare glimpse into the opaque world of a company shaping our technological future. Whether you see Altman as a visionary navigating impossible challenges or a leader whose ambition has outpaced ethics, one thing is clear: the road to AGI is paved with far more than code and innovation—it’s littered with secrets, conflicts, and hard questions about who we trust to build the future. If you’re curious to dive deeper, the full report is available online, and it’s well worth the read to understand the inner workings of a company that’s become a household name.