OpenAI Got Hacked in 2023 and Kept Silent About Security Risks

There are fears that the stolen data could be used by foreign actors, potentially threatening U.S. national security.

A hacker gained unauthorized access to OpenAI's internal messaging systems, stealing sensitive information about the company's AI technologies, reports The New York Times.

The breach, which occurred early in 2023, was revealed to OpenAI employees but not disclosed to the public or law enforcement agencies. The hacker infiltrated an online forum used by OpenAI employees to discuss the latest AI technologies. Fortunately, the hacker did not access systems where GPT models such as the latest GPT-4o are housed and trained.

Internal Reactions and Concerns

The incident caused substantial worry among OpenAI staff, who fear the stolen data could be used by foreign actors, potentially threatening U.S. national security. Employees have also voiced doubts about OpenAI's security measures and anxiety over the potential misuse of AI.

Leopold Aschenbrenner, a former technical program manager, drew attention to these security issues in a memo to the board. Aschenbrenner, who recently discussed OpenAI's practices publicly and detailed his apprehensions about the company's security protocols, claims his subsequent dismissal was politically motivated.

In addition, a group of former OpenAI employees and current Google DeepMind employees wrote an open letter raising concerns over safety measures at AI companies. The group expressed concern that OpenAI's focus on growth and profitability may be neglecting critical issues of safety and transparency, especially in the context of developing artificial general intelligence (AGI).

Internal Tensions and Leadership Disputes

The cyber attack also exacerbated existing tensions within OpenAI’s leadership. CEO Sam Altman has had prior conflicts with the board, and several AI safety researchers have left the company due to disagreements regarding superalignment—a strategy designed to maintain human oversight over advanced AI. These disputes add to the difficulties OpenAI faces in ensuring a unified approach to AI security and development.

Jan Leike, a prominent AI researcher who recently criticized OpenAI's approach to AI safety and resigned as head of the superalignment team, has transitioned to Anthropic. Leike, who played a significant role in the development of ChatGPT, GPT-4, and InstructGPT, publicly cited disagreements with OpenAI leadership regarding the company's core priorities.

Meanwhile, OpenAI has announced the creation of a new Safety and Security Committee within its Board of Directors to oversee the safety of its AI systems.

Anthropic co-founder Daniela Amodei commented on the breach to The New York Times. Amodei believes that while the immediate impact of the stolen generative AI designs might be limited, advancing AI technology could heighten future risks, emphasizing the need for stronger security measures.

The hack is only one of several challenges currently confronting OpenAI. The security breach underscores the need for the company to tackle internal and external risks alike.


Source: The New York Times
Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
