
OpenAI Whistleblower Claims He Was Dismissed For Raising Security Concerns

Leopold Aschenbrenner says a document he created raising concerns over AI safety led to OpenAI firing him when the memo reached the board.


Leopold Aschenbrenner, an ex-safety researcher at OpenAI, has publicly claimed his termination was driven by his efforts to spotlight AI security flaws within the organization.

In a discussion on the Dwarkesh Podcast, Aschenbrenner detailed his apprehensions about OpenAI's security protocols. He authored an internal memo criticizing the company's capabilities in protecting critical algorithmic information and model weights. This document, initially distributed amongst his peers and some leaders, was later escalated to the board following a notable security incident at OpenAI.

HR Warnings and Continued Advocacy

Aschenbrenner states that HR issued a formal warning citing his memo as a significant concern. Despite this, he persisted in pushing for better security measures. According to Aschenbrenner, the leadership was particularly irked by his decision to inform the board, which subsequently pressured the leadership on the security front.

The immediate cause for Aschenbrenner's departure involved sharing a document with external researchers. OpenAI argued that the document contained sensitive details; however, Aschenbrenner maintains that it lacked confidential information and that sharing for feedback was a routine practice at the firm. The disputed document included a timeline for AGI (Artificial General Intelligence) development by 2027-2028, which Aschenbrenner contends was already public knowledge.

Company Culture and Internal Dynamics

Aschenbrenner's account suggests deeper issues at OpenAI regarding the management of internal critiques and security concerns. He recalled being questioned by a company lawyer about his beliefs on AI progress and the extent of security needed, reflecting a possible attempt to assess his loyalty. Additionally, his stance against calling for Sam Altman's reinstatement after his brief ousting by the board may have further distanced him from other employees.

OpenAI has yet to comment on Aschenbrenner's assertions. The unfolding situation underscores the tensions within the company as it grapples with the complexities of AI development and security management. That tension came into the spotlight last year when CEO Sam Altman was fired from his position. Following rumors of boardroom backstabbing, Altman was reinstated, but only after he had threatened to join Microsoft's AI team, with Microsoft, OpenAI's major investor, taking a non-voting observer seat on the board.

Whistleblowers Seek Better Safety Measures

This week, a group of former OpenAI employees and current Google DeepMind employees wrote an open letter expressing concern over safety measures at AI companies. The group worries that OpenAI's focus on growth and profitability may be neglecting critical issues of safety and transparency, especially in the context of developing artificial general intelligence (AGI).

The group has publicly appealed for AI firms, including OpenAI, to enhance transparency and strengthen whistleblower protections. This open letter urges these companies to ensure their AI advancements are both ethical and responsible. The employees point out that the current atmosphere at OpenAI could foster the development of hazardous AI systems if left unchecked.

Former OpenAI Safety Lead Jan Leike Now Working for Anthropic

Jan Leike, a prominent AI researcher who recently criticized OpenAI's approach to AI safety and resigned as head of the superalignment team, has transitioned to Anthropic. Leike, who played a significant role in the development of ChatGPT, GPT-4, and InstructGPT, publicly cited disagreements with OpenAI leadership regarding the company's core priorities.

Meanwhile, OpenAI has announced the creation of a new Safety and Security Committee within its Board of Directors to oversee the safety of its AI systems.

Last month, OpenAI introduced GPT-4o, an enhanced version of its GPT-4 model, which features more realistic voice interactions. The model is available to both free and paid ChatGPT users.

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
