OpenAI CEO Sam Altman has addressed concerns over a clause in the company's exit documents that could potentially revoke vested equity. Altman clarified that OpenAI has never enforced this clause and will not do so if employees refuse to sign separation or non-disparagement agreements.
As recently revealed, OpenAI required departing employees to sign highly restrictive nondisclosure agreements (NDAs) that bar them from criticizing the company indefinitely. These agreements also stipulate that any breach could result in the loss of vested equity, which for many employees could amount to millions of dollars.
Employee Equity Concerns
Altman, in a statement on social media, emphasized that vested equity remains secure regardless of whether employees sign separation agreements. He acknowledged the presence of a controversial clause in exit documents but assured that it has never been acted upon. "This is on me and one of the few times i've been genuinely embarrassed running openai," Altman wrote.
in regards to recent stuff about how openai handles equity:
we have never clawed back anyone's vested equity, nor will we do that if people do not sign a separation agreement (or don't agree to a non-disparagement agreement). vested equity is vested equity, full stop.
there was…
— Sam Altman (@sama) May 18, 2024
Altman's comments come in response to growing concerns about OpenAI's handling of employee equity, particularly the possibility that equity could be cancelled under the newly revealed non-disparagement agreements.
Public and Media Reactions
The issue has sparked significant discussion on various social media platforms and forums. On X (formerly Twitter), users and journalists have questioned the ethics and transparency of such clauses. Some have suggested that the clause would still be in place if not for media scrutiny. Others have called for OpenAI to release former employees from these restrictive agreements and allow them to sell their equity.
I am glad that OpenAI acknowledges this as an embarrassment, a mistake, and not okay. It's in their power to set right by releasing everyone who was threatened with loss of equity from the NDAs they signed under this threat.
— Kelsey Piper (@KelseyTuoc) May 18, 2024
Safety Culture and Leadership Responses
In addition to the equity concerns, OpenAI's safety culture has recently come under scrutiny. Jan Leike resigned as head of alignment at OpenAI, saying the company had prioritized product development over safety. His departure prompted responses from Altman and Greg Brockman.
In those responses, the two expressed gratitude for Leike's contributions and reaffirmed their commitment to AI safety. They acknowledged the challenges of navigating the path to Artificial General Intelligence (AGI) and emphasized the importance of ongoing safety research.