Sam Altman, CEO of OpenAI, has stepped down from the company's Safety and Security Committee. The change is intended to strengthen the committee's independence and mitigate potential conflicts of interest. The committee will now consist only of independent directors, reinforcing its ability to provide effective oversight.
Independent Leadership in Focus
Zico Kolter, a professor at Carnegie Mellon, now chairs the committee. Other members include Quora CEO Adam D'Angelo, retired General Paul Nakasone, and former Sony executive Nicole Seligman. As directors of OpenAI, they are responsible for assessing the safety of OpenAI's major model releases and can delay a launch if safety concerns warrant it.
The committee recently assessed OpenAI's latest AI model, released as o1 and internally codenamed "Strawberry," which OpenAI's own criteria categorize as posing "medium risk." Efforts are underway to implement the group's recommendations, aiming for greater transparency and collaboration with outside entities. OpenAI is also working to align safety and security standards across all product teams.
Committee's Origin and Progression
Founded in May after staff departures over safety issues, the safety and security committee gained additional expertise with the June appointment of Paul Nakasone to the board. OpenAI continues to prioritize safety and accountability as it advances its AI initiatives.
Altman's exit follows questions from five U.S. senators regarding OpenAI's practices. Nearly half of the team dedicated to long-term AI risks has left, with former staff accusing Altman of resisting effective AI regulation in favor of policies that serve OpenAI's business interests. The organization has ramped up its lobbying, spending $800,000 in the first half of 2024 compared with $260,000 in all of the previous year.
Financial Goals and Organizational Adjustments
OpenAI is reportedly seeking to raise over $6.5 billion, which could push the company's valuation beyond $150 billion. To improve its prospects for funding, OpenAI might dismantle its hybrid nonprofit structure, originally intended to cap investor profits and keep the company consistent with its goal of creating AI that benefits society.
Altman's organizational shift occurs amidst global scrutiny regarding OpenAI's investor dealings and compliance practices. His leadership has encountered challenges, while the company's executive team has gone through major changes.
A year of drama began with Altman being fired by OpenAI's board. Over the following week of controversy and confusion, Altman appeared ready to head up Microsoft's AI team, but he was ultimately reinstated as CEO of OpenAI, while major investor Microsoft took a non-voting seat on the board — a seat it has since given up.
In 2024, many key executives and employees have left OpenAI. Last month, I reported that OpenAI co-founder John Schulman is moving to Anthropic, the rival AI company behind the Claude models. He followed fellow co-founder Greg Brockman and product lead Peter Deng out the door.
The corporate shuffle followed the disbanding of OpenAI's superalignment team, previously led by Jan Leike and Ilya Sutskever, a few months earlier. That research team aimed to solve the technical challenges of controlling superintelligent AI within a four-year timeframe.
The team comprised scientists and engineers from OpenAI's alignment division, as well as researchers from external organizations. Sutskever has since departed the company and founded his own research lab to pursue safe superintelligence.