Elon Musk accuses Sam Altman of ‘stealing a charity’ in explosive OpenAI trial testimony
Elon Musk took the stand Tuesday in a packed California courtroom and leveled a charge that cut straight to the core of OpenAI’s origin story. Under oath, he told the jury that Sam Altman “stole a charity.”
The remark came as the opening day of a closely watched trial got underway, one that reached back to 2015, when Musk and a group of researchers and founders launched OpenAI as a nonprofit. The idea at the time was clear: build advanced AI to benefit humanity, not to enrich a small group of investors. In court, Musk said that the promise didn’t hold.
The case traces back to March 2024, when Musk filed suit accusing OpenAI and Altman of “breach of contract” and abandoning the nonprofit mission the company was founded on.
“I co-founded OpenAI to prevent a Terminator outcome — a dystopian future where superintelligent AI escapes human control,” Musk testified. “Sam Altman took that charity and turned it into something else entirely. He stole it.”
Musk, wearing a dark suit and tie, was asked by one of his lawyers what the lawsuit was about when he took the stand.
“It’s actually very simple,” he said. “It’s not okay to steal a charity… If it’s okay to loot a charity, the entire foundation of charitable giving will be destroyed.”
Lawyers introduced early emails and meeting notes to support his claims. In one 2016 message, Musk warned that without strict nonprofit governance, the project would drift toward commercial priorities, BBC reported. He left OpenAI’s board in 2018 after disagreements over its direction. Years later, he launched xAI, positioning it as a counterweight focused on what he calls “maximum truth-seeking” AI.
The lawsuit that could redefine how AI companies are governed
OpenAI has pushed back, arguing in legal filings that its shift to a capped-profit structure was necessary to raise the capital required to compete in the global AI race. In court, its legal team framed the case as something else entirely.
An OpenAI lawyer countered that the lawsuit was motivated by Musk's desire to kneecap a "competitor."
"We're here because Mr. Musk didn't get his way at OpenAI," said OpenAI lawyer William Savitt. "Because he's a competitor, Mr. Musk will do anything to attack OpenAI."
The judge warned both Musk and Altman against using their platforms to attempt to influence the trial, a reminder of how much attention the case is drawing beyond the courtroom.
Musk’s attorney, Steven Molo, urged the nine jurors in Oakland to set aside their views about two of Silicon Valley’s most prominent figures.
“You all took an oath to put personal opinions aside,” he said. “I know you will honor that oath.”
Musk testified that he became more deeply involved in AI as the technology advanced, growing concerned that "the government was not stepping up" to regulate it. That concern intensified, he said, after a 2015 meeting with then-President Barack Obama. From the start, Molo told jurors, Musk believed AI "wasn't a vehicle for people to get rich."
He pointed to Musk’s financial backing of OpenAI during its nonprofit years, noting that his client contributed $38 million over several years.
“Without Elon Musk, there would be no OpenAI. Pure and simple,” said Molo.
At the center of the case is a narrow legal question with wide implications: did OpenAI breach its founding agreements and fiduciary duties when it moved away from its original nonprofit structure? Musk is seeking to force the company to honor its early commitments or unwind parts of its current setup—an outcome that could ripple across the AI industry.
The trial is drawing intense interest from investors, researchers, and policymakers. For founders, it highlights a familiar tension: how to preserve a mission once the capital demands grow. For regulators, it raises a deeper question about whether companies building powerful AI systems can police themselves or whether stricter oversight will follow.
The split between Musk and Altman reflects a broader divide across the AI sector. One camp warns that rapid commercialization without stronger safeguards could introduce serious risks. Another argues that slowing progress hands an advantage to global competitors, including state-backed efforts.
Inside the courtroom, the dispute carries both personal and industry-wide weight. Two former allies now stand on opposite sides of one of the most consequential debates in tech.
Proceedings are expected to run for several weeks, with testimony from former employees and expert witnesses on AI safety and governance. A decision could come by late summer, though appeals are widely expected.
After the morning session, Musk paused briefly in the hallway to take questions from reporters. Asked whether OpenAI could be steered back toward its original mission, he didn’t hedge.
“The only way to fix it is to make them honor the charter they signed. Otherwise, it really was just a charity that got stolen.”
Why it matters
This case puts the founding promise behind OpenAI under scrutiny in a way few disputes ever reach. The outcome could shape how AI companies are structured, how they raise capital, and how they balance mission with growth. It may influence how far regulators go in setting guardrails for systems that are becoming central to economies and governments. For founders building with AI, it’s a reminder that early decisions can have consequences years later—sometimes in a courtroom.