OpenAI has announced the creation of a Safety and Security Committee within its Board of Directors to oversee the safety and security of its generative AI systems. The announcement coincides with the company's confirmation that it has begun training its next AI model, widely expected to be GPT-5.
Committee Members and Objectives
The committee includes OpenAI's CEO Sam Altman, board members Bret Taylor, Adam D'Angelo, and Nicole Seligman, along with chief scientist Jakub Pachocki and head of security Matt Knight. The committee will also consult with external safety and security experts. According to a blog post by OpenAI, the committee's initial task is to review and improve the company's safety protocols over the next 90 days. At the end of this period, the committee will present its recommendations to the full Board, which will then review and publicly share an update on the adopted measures.
Background on the Committee Formation
The establishment of the committee follows reports that OpenAI disbanded its superalignment team earlier in May. The superalignment team was initially created to explore methods for maintaining human control over generative AI systems, especially those that might surpass human intelligence in the future.
In the same blog post, OpenAI confirmed that it has begun training its next major AI model, expected to be named GPT-5. The company stated that this model aims to advance capabilities on the path to achieving Artificial General Intelligence (AGI). OpenAI emphasized the importance of a “robust debate” regarding the safety of these systems at this critical juncture.
Earlier this month, OpenAI introduced GPT-4o, an updated version of its GPT-4 model that features more realistic voice interactions. The model is available to both free and paid ChatGPT users.