Helen Toner, a former member of OpenAI's board, disclosed that the board was unaware of the ChatGPT launch until it was publicly announced on Twitter. Toner revealed this during an interview with Bilawal Sidhu on The TED AI Show.
During the podcast, Toner gave her most detailed account yet of the circumstances leading to Altman's removal in November 2023. She explained that the decision was influenced by a series of previously undisclosed events that had raised concerns among board members.
❗EXCLUSIVE: “We learned about ChatGPT on Twitter.”
What REALLY happened at OpenAI? Former board member Helen Toner breaks her silence with shocking new details about Sam Altman's firing. Hear the exclusive, untold story on The TED AI Show.
Here's just a sneak peek: pic.twitter.com/7hXHcZTP9e
— Bilawal Sidhu (@bilawalsidhu) May 28, 2024
Toner gave a TED talk a few weeks ago in which she shared her vision of how AI should be governed, without revealing any details about the turmoil at OpenAI or her role in it. The interview changes that: Toner criticizes Sam Altman directly, accusing him of lying to the OpenAI board and withholding important information from it.
OpenAI Leadership Turmoil and Employee Backlash
In the aftermath of the ChatGPT launch, OpenAI experienced considerable internal upheaval. A year later, the board unexpectedly removed CEO Sam Altman and named CTO Mira Murati interim CEO.
Employees reacted strongly to the abrupt dismissal, with many threatening to resign. This internal pressure, combined with strong support for Altman from Microsoft CEO Satya Nadella, led to his reinstatement within days and to the subsequent departure of Toner and other board members, highlighting the deep divisions within the organization.
OpenAI's board, which sits atop the company's non-profit parent, was designed to prioritize public welfare over profit. Despite this mission, the board was not informed of significant developments, such as the release of ChatGPT in November 2022.
Internal Criticism and Departures
Other key figures have also criticized the company's direction. Co-founder Ilya Sutskever and safety researcher Jan Leike left OpenAI, expressing concerns over the company's decision to halt superalignment work, research that aims to ensure AI systems act in accordance with human values and intentions.
As announced yesterday, Leike will spearhead a new “superalignment” team at Anthropic. In a post on X, he said the team will concentrate on several facets of AI safety and security, with primary focus areas including scalable oversight, weak-to-strong generalization, and automated alignment research.
In response to internal and external criticism, OpenAI has established a Safety and Security Committee. The new body will scrutinize the company's safety procedures for product development, with the aim of keeping operations aligned with its foundational mission of serving the public good. The committee includes CEO Sam Altman; board members Bret Taylor, Adam D'Angelo, and Nicole Seligman; chief scientist Jakub Pachocki; and head of security Matt Knight.