Ex-OpenAI Staff Ask State AGs to Block Company’s For-Profit Conversion

Backed by leading scientists, ten ex-OpenAI workers have petitioned state AGs, arguing the company's for-profit shift jeopardizes its original safety-focused mission.

Scrutiny over OpenAI’s commitment to its founding principles intensified earlier this month following reports that the AI company had dramatically reduced evaluation times for its newest models, compressing processes that once took months into mere days.

The Financial Times, in an April 10 report citing multiple sources, detailed alarm among internal and external testers evaluating the company’s o3 reasoning model, with one calling the accelerated timeline, driven by intense competitive pressure, “reckless” and a “recipe for disaster.”

Adding fuel to these worries, OpenAI updated its internal safety guidelines on April 16, introducing a clause that explicitly allows the company to adjust its own safety requirements based on actions taken by competitors. Critics see the move as codifying the potential for commercial race dynamics to shape safety protocols.

Against this backdrop, a group of ten former OpenAI employees, supported by notable figures including three Nobel laureates and AI pioneers, has now formally petitioned the Attorneys General of California and Delaware to investigate and potentially block OpenAI’s planned reorganization into a for-profit entity.

The petitioners argue this structural change represents a fundamental betrayal of OpenAI’s original nonprofit charter focused on safely developing Artificial General Intelligence (AGI) for humanity’s benefit, and dismantles safeguards intended to prevent profit motives from overriding safety considerations as the company pursues AGI.

The group is asking California AG Rob Bonta and Delaware AG Kathy Jennings, whose states oversee OpenAI’s operations and incorporation respectively, to use their authority over charities to intervene.

Mission vs. Mandate: The Core Conflict

The core argument presented to the AGs is that OpenAI’s founding structure was a deliberate choice to prevent commercial pressures from compromising safety in the pursuit of AGI.

“OpenAI may one day build technology that could get us all killed,” stated former engineer Nisan Stiennon, who worked at OpenAI from 2018 to 2020.

“It is to OpenAI’s credit that it’s controlled by a nonprofit with a duty to humanity. This duty precludes giving up that control.” Another former staffer, Anish Tondwalkar, warned that charter safeguards, like a “stop-and-assist clause” meant to ensure cooperation if another entity neared an AGI breakthrough, “can vanish overnight” under the proposed for-profit model.

“Ultimately, I’m worried about who owns and controls this technology once it’s created,” explained Page Hedley, a former OpenAI policy adviser and signatory, to The Associated Press.

Backers of the petition include Nobel-winning economists Oliver Hart and Joseph Stiglitz, alongside AI pioneers Geoffrey Hinton (who won the 2024 Nobel Prize in physics) and Stuart Russell.

Hinton explicitly distinguished this effort from Elon Musk’s parallel legal challenges, remarking, “I like OpenAI’s mission… and I would like them to execute that mission instead of enriching their investors.”

This marks the second recent appeal to state officials, following an April 9 petition from California nonprofits and labor groups focused on ensuring the proper valuation and handling of OpenAI’s charitable assets during any conversion. Legal analysts suggest the AGs might review such a fundamental change in purpose under nonprofit law, potentially invoking the cy-près doctrine, which governs how charitable missions can be altered.

This pushback follows years of internal tension over OpenAI’s direction, marked by Elon Musk’s 2018 departure, CEO Sam Altman’s temporary ouster in late 2023, and the exit of safety-focused leaders like Jan Leike in May 2024, who stated then that “safety culture and processes have taken a backseat to shiny products.” Advocacy group Public Citizen has also previously urged AG action.

OpenAI formally announced the planned transition to a public benefit corporation (PBC) structure on December 28, 2024, arguing it was essential to secure funding and “become an enduring company.”

OpenAI maintains the structure aligns profit generation with its mission, ensuring the nonprofit arm remains funded. Responding to the latest petition, the company reiterated that “any changes to our existing structure would be in service of ensuring the broader public can benefit from AI,” and “This structure will continue to ensure that as the for-profit succeeds and grows, so too does the nonprofit, enabling us to achieve the mission.” The company also previously noted that at the scale of capital needed, “investors want to back us but… need conventional equity and less structural complexity.”

High Stakes and Financial Entanglements

The pressure to restructure is immense, driven by staggering costs and investor expectations. OpenAI reportedly lost about $5 billion in 2024 and faces escalating compute expenses that could reach $9.5 billion a year by 2026.

This financial reality underpins the company’s pursuit of large investments, culminating in the SoftBank-led tender offer that established a $300 billion valuation around April 1. While this tender offer primarily gave liquidity to existing stakeholders rather than raising new operational funds, it significantly increased investor influence, with SoftBank becoming the largest investor. Critically, accessing the full $40 billion infusion is reportedly dependent on finalizing the PBC conversion by year-end, raising the stakes considerably.

To sustain its operations and fuel AGI development, OpenAI has been diversifying its infrastructure: moving workloads beyond exclusive reliance on Microsoft Azure, securing an $11.9 billion compute deal with CoreWeave (after Microsoft passed on a similar option), partnering on custom chip development, and aligning with the vast Stargate Project infrastructure initiative, which is itself reportedly exploring expansion into Europe. The company also reportedly tied achieving AGI to a $100 billion cumulative profit goal, a metric seemingly designed to reassure major partner Microsoft.

Safety Practices and Legal Clouds

The petitioners’ safety concerns gained traction with the April reports of shortened testing times for models like the recently released o3 and o4-mini. One tester involved with the earlier, longer GPT-4 evaluation said that dangerous flaws only emerged late in that process, adding weight to fears about the compressed schedule.

The former tester raised specific concerns about potentially inadequate “fine-tuning” tests, in which models are trained on specialized datasets to probe for dangerous emergent abilities, and about the practice of evaluating preliminary “checkpoints” rather than the final released models.

OpenAI’s subsequent safety framework update, allowing standards to potentially bend based on competitor actions, did little to assuage these worries, despite the company’s head of safety systems, Johannes Heidecke, asserting, “We have a good balance of how fast we move and how thorough we are.”

This complex situation unfolds amidst ongoing legal battles. Elon Musk’s lawsuit, alleging OpenAI betrayed its founding mission, continues on a fast-tracked timeline, although his attempt to get a preliminary injunction blocking the PBC shift failed in March. OpenAI has since countersued Musk, accusing him of disruptive tactics. Furthermore, a group of former OpenAI employees filed a court brief supporting Musk’s claims on April 11, arguing the profit focus compromises safety.

With AG offices in both Delaware and California already reviewing the situation, this new petition from former insiders adds significant pressure for regulatory scrutiny over OpenAI’s path forward.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.
