
Sam Altman testifies in Elon Musk's OpenAI suit as Oakland trial turns to governance and control

OpenAI's chief executive spent hours on the stand defending the company's nonprofit-to-commercial arc, rebutting claims that insiders hijacked a charity—and sketching a co-founder relationship that fractured over money, structure, and who would steer research.

NewsTenet Technology desk · Published · 8 min read
[Photo: Hands typing on a laptop keyboard.]

Sam Altman spent roughly four hours on the witness stand in Oakland, California, in May 2026, answering questions in Elon Musk's civil suit against OpenAI leadership—a case that has become a public referendum on whether one of the world's most consequential AI developers betrayed a nonprofit charter when it built a parallel for-profit engine (reported).

The trial is not a consumer gadget launch; it is a governance fight with Silicon Valley mythology attached. Musk, an early co-founder who later left the board, alleges that roughly $38 million in donations was entangled with a commercial strategy that, he says, donors were never promised. Altman's testimony, as described in public filings and courtroom reporting, pushes the opposite moral: that OpenAI faced a funding cliff, that co-founder tensions over control and capital were real, and that the organization adapted structurally because the alternative was irrelevance (reported).

What the jury is being asked to decide (and what it cannot)

Civil trials about corporate history often hinge on emails, board minutes, and oral promises that different executives remember differently. Here, the dispute also implicates mission language from 2015—an era when large language models were academic curiosities, not trillion-dollar balance-sheet risks. Altman's narrative thread, as summarized from May 12 testimony, is that Musk stepped back from formal governance in February 2018 after negotiations over for-profit options collapsed, while OpenAI staff worried about retaliation and funding continuity (reported).

Court procedure matters for readers following headlines: reporting from the courthouse notes a nine-person jury whose recommendations may be advisory, with U.S. District Judge Yvonne Gonzalez Rogers positioned to decide the ultimate outcome—an arrangement that can lengthen the story even after testimony ends (reported). That structure means public drama on the stand may not map one-to-one onto a final judgment timeline.

Altman's defense in plain terms: abandonment versus betrayal

Without adopting either side's legal labels, Altman's on-the-record testimony frames Musk as having concluded OpenAI could not compete without transformational capital—and, in a widely quoted line from the stand, that the startup felt "left for dead" once co-founder dynamics soured (reported). Musk's camp, meanwhile, continues to argue that a for-profit subsidiary became the "tail wagging the dog," undermining the nonprofit parent in ways donors never approved—language Musk himself used earlier in the trial month (reported).

Altman also testified that he did not make firm commitments to Musk about OpenAI's eventual corporate shape, a point that matters if the court evaluates contract-like expectations versus handshake politics common in early-stage labs (reported). For AI policy watchers, the fight is a preview of how courts treat dual-entity structures when public benefit rhetoric meets private-market fundraising imperatives.

Cross-examination and credibility: where the trial gets personal

Musk's attorneys cross-examined Altman on trustworthiness and on prior disputes with colleagues, including figures now prominent at rival labs—threads that attempt to cast the CEO as an unreliable narrator of his own company's history (reported). Altman also addressed his abrupt November 2023 removal by the OpenAI board and swift reinstatement, describing shock and anger while disputing the board's public framing about candor (reported).

Those passages matter legally because they feed a pattern argument: if decision-makers lacked transparency internally, a jury might infer similar opacity toward external promises. They also matter commercially because enterprise customers and regulators increasingly read AI vendors through governance lenses—safety disclosures, data usage, and conflict-of-interest controls—not only through model benchmarks.

Theme | Why it is contested | Why non-lawyers should care
Nonprofit mission | Competing narratives about what was promised in 2015–2018 | Sets precedent language for foundation models
Capital intensity | Whether billions-per-year compute needs justify structure shifts | Explains GPU financing arms races
Control | Who could steer strategy—single founder vs. collective board | Shapes alignment incentives inside labs
Donor funds | Whether $38 million in donations maps cleanly to later commerce | Tests philanthropy boundaries in tech
Dual entities | For-profit subsidiaries under 501(c)(3)-style parents | Template other labs may copy—or avoid

What happens after the testimony spotlight fades

Closing arguments were slated for later in the week as testimony wrapped up, after which deliberations could begin—still subject to judicial scheduling and post-trial motions (reported). Regardless of the verdict's shape, the case already feeds three parallel conversations: securities and IPO watchers pricing OpenAI-linked risk, antitrust and competition scholars debating vertical ties between cloud hyperscalers and model labs, and legislators looking for vocabulary to regulate frontier systems without crushing startups.

Even a narrow factfinding win for either side could ripple through term sheets: investors may insist on clearer fiduciary lanes between nonprofit boards and commercial subsidiaries, while employees may push for whistleblower protections when safety and product incentives conflict.

NewsTenet will update this file when a judgment or settlement materially changes the factual baseline; until then, treat on-the-stand quotations and characterizations as trial-stage claims, not settled findings of fact.
