
OpenAI’s Superalignment Team Dissolves Amid Internal Conflicts

The Superalignment team was publicly positioned as the main team working on the problem of controlling potentially superintelligent AI.


OpenAI is grappling with internal turmoil as its Superalignment team, tasked with developing ways to steer and control “superintelligent” AI systems, faces severe resource limitations. The team, initially promised 20% of the company's compute resources, has frequently had its requests for even a fraction of that compute denied, according to an insider.

Key Resignations and Public Statements

The resource allocation issues have led to the resignation of several team members, including co-lead Jan Leike, a former DeepMind researcher. Leike, who played a significant role in the development of ChatGPT, GPT-4, and InstructGPT, publicly cited disagreements with OpenAI leadership over the company's core priorities. In a series of posts on X, Leike argued that more of the company's effort should go into preparing for future AI models, with an emphasis on security, monitoring, safety, and societal impact.

Formation and Goals of the Superalignment Team

Formed in July 2023, the Superalignment team was led by Leike and OpenAI co-founder Ilya Sutskever. The team aimed to solve the core technical challenges of controlling superintelligent AI within four years. It included scientists and engineers from OpenAI's alignment division as well as researchers from other organizations. The team's mission was to contribute to the safety of AI models through its own research and through a grant program supporting external researchers.

Despite publishing safety research and distributing millions in grants, the Superalignment team struggled as OpenAI's leadership focused increasingly on product launches. The internal conflict was exacerbated by Sutskever's departure, following a failed attempt by OpenAI's former board to oust CEO Sam Altman. Sutskever's role was essential in bridging the Superalignment team with other divisions and advocating for its importance to key decision-makers.

Dismissals and Additional Departures

The turmoil within the Superalignment team also saw the dismissal of two researchers, Leopold Aschenbrenner and Pavel Izmailov, for leaking company secrets. Another team member, William Saunders, left OpenAI in February. Additionally, two OpenAI researchers working on AI policy and governance, Cullen O'Keefe and Daniel Kokotajlo, appear to have left the company recently. OpenAI declined to comment on the departures of Sutskever or other members of the Superalignment team, or the future of its work on long-term AI risks.

Following the resignations, John Schulman, another OpenAI co-founder, has assumed responsibility for the work previously handled by the Superalignment team. However, the team will no longer exist as a dedicated entity; instead, its functions will be integrated into various divisions across the company. An OpenAI spokesperson has described this change as a move to “integrate [the team] more deeply,” though there are concerns that this integration may dilute the focus on AI safety.

OpenAI's Broader AI Safety Efforts

The dissolution of the Superalignment team raises questions about OpenAI's commitment to ensuring the safety and alignment of its AI developments. OpenAI maintains another research group, the Preparedness team, which focuses on issues such as privacy, emotional manipulation, and related risks. The company has also continued to develop and publicly release experimental AI projects, including an updated version of ChatGPT built on the new multimodal model GPT-4o, which lets the chatbot see the world and converse in a more natural, humanlike way. During a livestreamed demonstration, the new ChatGPT mimicked human emotions and even attempted to flirt with users.

OpenAI's charter commits it to safely developing artificial general intelligence for the benefit of humanity. The Superalignment team was publicly positioned as the main team working on the problem of controlling potentially superintelligent AI. The recent internal conflicts and resource allocation issues cast a shadow over the company's mission to develop AI responsibly.
