Elon Musk admits xAI distilled OpenAI models as trial reveals last-minute settlement push
The courtroom battle between Elon Musk and Sam Altman is starting to pull back the curtain on how AI models are actually built—and the latest revelation could have far-reaching consequences. During testimony this week, Musk acknowledged that his startup xAI had, to some extent, used outputs from OpenAI’s models to train its own systems, a practice known as model distillation.
The admission came during cross-examination in a trial that has already exposed internal tensions, strategic disagreements, and competing visions for the future of artificial intelligence. Lawsuits between top tech figures are rare for a reason: they tend to surface details that companies usually work hard to keep private. In this case, the spotlight is now on one of the most sensitive areas in AI—how models learn from each other.
“The biggest revelation came on the stand Thursday, when plaintiff Elon Musk admitted to OpenAI’s attorneys that his xAI startup had, to some extent, ‘distilled’ OpenAI’s models,” Semafor reported.
Meanwhile, the courtroom drama nearly didn’t happen. According to a recent court filing first reported by Reuters, Musk reached out to OpenAI President Greg Brockman just days before the trial to explore a possible settlement. The last-minute outreach suggests both sides understood the risks of airing sensitive AI practices in public—something the trial is now doing in real time.
“Elon Musk contacted OpenAI President Greg Brockman to gauge interest in a settlement two days before their high-stakes trial got underway in Oakland federal court,” Reuters reported, citing a new court filing.
Distillation itself is not new. It involves using a larger, more capable model to generate synthetic data, which is then used to train a smaller or more specialized system. The technique has been widely used across the industry to improve efficiency and reduce costs. But as AI becomes more competitive—and more valuable—the line between acceptable practice and intellectual property misuse is becoming harder to define.
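In outline, the technique works roughly as described above: a capable "teacher" model labels synthetic inputs, and a smaller "student" is then trained to imitate those outputs. The following is a minimal, purely illustrative sketch of that idea; the function names and the toy threshold model are hypothetical and do not correspond to any real AI system or API.

```python
# Illustrative sketch of distillation: a teacher labels synthetic data,
# and a student is fit to imitate the teacher's outputs.
# All names here are hypothetical, chosen for the example.
import random

random.seed(0)

def teacher(x: float) -> float:
    """Stand-in for a large, capable model: labels input x."""
    return 1.0 if x > 0.5 else 0.0

# Step 1: generate synthetic training data from the teacher's outputs.
inputs = [random.random() for _ in range(200)]
labels = [teacher(x) for x in inputs]

# Step 2: fit a tiny "student" (a single-threshold classifier) to
# imitate the teacher, by searching candidate thresholds.
def student_error(threshold: float) -> float:
    preds = [1.0 if x > threshold else 0.0 for x in inputs]
    return sum(abs(p - y) for p, y in zip(preds, labels))

best_threshold = min((t / 100 for t in range(101)), key=student_error)
```

The student never sees the teacher's internals, only its outputs on the synthetic inputs; that is what makes the practice cheap, and also what makes it contentious when the teacher belongs to a competitor.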
The issue is already playing out on a global stage. U.S. officials have accused Chinese AI developers of relying on American models for distillation, framing the practice as industrial espionage. Now, similar questions are being raised closer to home, with Musk’s testimony suggesting that even leading U.S. AI companies may be navigating the same gray areas.
Musk’s position adds another layer of complexity. As a co-founder of OpenAI, he has argued in the lawsuit that the organization abandoned its original nonprofit mission and strayed from its founding vision. That backdrop may shape how he views the use of OpenAI’s outputs, though it’s unlikely to settle the broader legal or ethical questions.
What’s becoming clear is that distillation is no longer just a technical shortcut—it’s a strategic and legal flashpoint. As AI systems become more powerful, the methods used to train them are drawing increasing scrutiny from courts, regulators, and competitors alike. And with more companies racing to build advanced models, the pressure to define clear boundaries is only growing.
Why It Matters: Musk’s admission puts model distillation at the center of a high-stakes legal fight, signaling that how AI systems learn may soon be as contested as what they can do.
