Encode, a youth-led nonprofit advocating for responsible AI governance, has filed an amicus brief opposing OpenAI’s transition to a Public Benefit Corporation (PBC).
The legal filing, submitted to the U.S. District Court for the Northern District of California, supports Elon Musk’s ongoing lawsuit and raises significant concerns about the ethical implications of profit-driven artificial general intelligence (AGI) development.
“The courts must intervene to ensure AI development serves the public interest,” said Encode’s president Sneha Revanur of the initiative. Backed by leading AI researchers such as Geoffrey Hinton, the filing underscores the risks of prioritizing investor profits over the safety and welfare of humanity.
“OpenAI was founded as an explicitly safety-focused non-profit and made a variety of safety-related promises in its charter. It received numerous tax and other benefits from its non-profit status. Allowing it to tear all of that up when it becomes inconvenient sends a very bad message to other actors in the ecosystem,” Hinton said in support of Encode’s initiative.
Public Safety vs. Profit: The Core Debate
OpenAI’s latest announcement on December 28, 2024, revealed its plan to transition its for-profit division into a PBC by 2025. This structure aims to attract large-scale investment while maintaining a legally mandated focus on societal benefit. However, Encode’s brief argues that this balance is inherently flawed, especially for an organization committed to developing AGI.
“Control over the development and deployment of AGI is a charitable asset that should not be sold for any price,” Encode stated in its filing.
The brief criticizes OpenAI’s restructuring as a fundamental departure from its nonprofit origins, warning that the shift endangers charter commitments such as the pledge to stop competing with, and instead assist, any safety-aligned project that comes close to building AGI first.
Musk’s Legal Challenge and Microsoft’s Role
Elon Musk’s lawsuit, refiled in August 2024, accuses OpenAI of betraying its nonprofit mission and consolidating power with corporate stakeholders like Microsoft. Musk claims that Microsoft’s $13 billion investment gives it undue influence over OpenAI’s governance, transforming it into a profit-driven subsidiary.
Musk’s legal filing alleges that Microsoft has gained disproportionate influence over OpenAI’s governance, effectively aligning the organization’s priorities with profit-driven objectives, and points to exclusivity agreements that restrict competition and privilege shareholder interests. Encode’s brief echoes these criticisms, emphasizing the incompatibility of private profit motives with public accountability in AGI development.
Microsoft’s role as both a key investor and strategic partner complicates the narrative. While its Azure cloud infrastructure underpins OpenAI’s operations, growing resource disputes and Microsoft’s in-house development of AI models suggest potential friction in their relationship.
Financial Pressures Behind the Restructuring
OpenAI’s restructuring is driven by mounting financial challenges. The organization projects a $5 billion loss for 2024, with cumulative deficits potentially reaching $44 billion by 2028. Annual compute costs alone are expected to climb to $9.5 billion by 2026 as the organization develops increasingly complex AI models.
To address these challenges, OpenAI has introduced new revenue strategies, including the $200-per-month ChatGPT Pro subscription, and has partnered with semiconductor manufacturers such as TSMC and Broadcom to develop custom AI chips. These efforts aim to improve computational efficiency and reduce training costs, but they also underscore the scale of investment required to sustain AGI research.
OpenAI’s CEO Sam Altman framed the transition as essential, stating, “As we enter 2025, we will have to become more than a lab and a startup — we have to become an enduring company.”
Redefining AGI in Financial Terms
Internal documents reveal that OpenAI and Microsoft have tied the contractual definition of AGI to a $100 billion cumulative profit benchmark, a significant departure from traditional definitions emphasizing technological capability. This financial metric aligns with investor priorities but raises ethical questions about the organization’s long-term mission.
This benchmark also solidifies OpenAI’s relationship with Microsoft, which retains exclusive access to OpenAI’s models and infrastructure until the threshold is met. Critics, including Encode, argue that this arrangement exemplifies the risks of concentrating AGI control within corporate interests.
Historical Context: OpenAI’s Evolution
Founded in 2015 as a nonprofit research lab, OpenAI aimed to advance AI technologies for societal benefit without the constraints of financial returns. Early funding came from tech giants like Google and Microsoft, enabling groundbreaking research in robotics, reinforcement learning, and language models.
However, by 2019, escalating costs prompted the organization to adopt a capped-profit model, attracting a $1 billion investment from Microsoft and launching commercial products like ChatGPT. Despite these successes, financial pressures have continued to mount, driving the latest transition to a PBC structure.
Unveiled Emails Reveal Governance Tensions
Recently disclosed internal emails shed light on Musk’s early advocacy for a for-profit structure. In one 2017 exchange, Musk proposed merging OpenAI with Tesla to secure the billions needed to compete with Google. “This needs billions per year immediately, or forget it,” Musk wrote.
These proposals sparked resistance from co-founders including Ilya Sutskever, who argued that absolute control over AGI was incompatible with OpenAI’s mission. The resulting tensions culminated in Musk’s departure from OpenAI’s board in 2018.
Implications for AI Governance and Ethics
The legal battle over OpenAI’s restructuring highlights broader tensions in the AI industry, where organizations grapple with balancing innovation, funding, and ethical accountability. Encode’s filing emphasizes the stakes of ensuring AGI development remains aligned with public welfare rather than investor profit motives.
As the January 14 court hearing approaches, the outcome will shape not only OpenAI’s trajectory but also governance models across the AI sector. Encode’s intervention amplifies the call for transparency and public accountability in one of the most consequential technological debates of the 21st century.