OpenAI co-founder Ilya Sutskever has launched Safe Superintelligence Inc. (SSI), a venture focused on advancing safe and powerful AI systems. This move comes in the wake of Sutskever leaving OpenAI, a departure notable for its connection to the controversial removal and reinstatement of CEO Sam Altman.
The creation of SSI coincides with a recent commitment by leading AI firms to adopt a “kill switch” policy, under which development of advanced AI models could be paused if specified risks are identified.
Strong Emphasis on AI Safety Research
Unlike AI companies such as Google and Anthropic, SSI will prioritize research over commercial pursuits. Sutskever has stated that the organization’s sole aim is the development of a safe superintelligence, steering clear of product development and market competition.
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We’ve started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…
— SSI Inc. (@ssi) June 19, 2024
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever told Bloomberg. “It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race,” he added.
SSI will establish offices in Tel Aviv, Israel, and Palo Alto, California. Joining Sutskever as co-founders are Daniel Levy, a former OpenAI researcher, and Daniel Gross, who previously led AI projects at Apple, giving the team substantial experience in AI research.
The formation of SSI underscores a growing concern for AI safety and ethics in the development of advanced systems. By separating research from commercial incentives, Sutskever says he aims to uphold the highest standards of safety and responsibility. This could influence other AI research organizations to similarly prioritize ethical considerations over market competition.
Background and Context
Sutskever left OpenAI in May, around the same time Jan Leike resigned from the company’s superalignment team, citing a shift in focus away from safety. Despite the circumstances of his exit, Sutskever has remained supportive of the company’s leadership.
Sutskever’s departure from OpenAI followed the unveiling of the company’s latest generative AI model, GPT-4o, and major upgrades to the AI-powered chatbot, ChatGPT.
A week before Thanksgiving 2023, Ilya Sutskever and OpenAI CTO Mira Murati brought concerns about Sam Altman’s behavior to the company’s board of directors, reportedly amid disputes over OpenAI’s direction. The board, which at the time included Sutskever, abruptly fired Altman without informing most of OpenAI’s employees, stating that he had not been consistently candid in his communications with it.
The decision incensed Microsoft and other investors, imperiled a planned sale of employee shares, and prompted the majority of OpenAI employees, including Sutskever, to threaten to resign unless Altman was restored to his position. Altman was eventually reinstated, and most of the old board stepped down. Sutskever never returned to work after that, and Jakub Pachocki, who had effectively filled the role since November 2023, has since served as OpenAI’s chief scientist.
OpenAI Hires Former NSA Chief as Board Member
While Sutskever is doubling down on AI safety, OpenAI is still in the process of restructuring. The company has announced a Safety and Security Committee within its board of directors to oversee the safety of its generative AI systems.
CEO Sam Altman’s inclusion on the new committee has raised concerns about its independence and effectiveness. Another controversial move since then was the appointment of former National Security Agency (NSA) director Paul M. Nakasone to the board.