Lilian Weng, OpenAI’s vice president of research and safety, is set to leave the company on November 15, capping nearly seven years of contributions that shaped OpenAI’s approach to AI safety and applied AI research.
Her departure is part of a broader pattern that has seen several high-level executives exit the company this year, raising questions about OpenAI’s future direction and its balance between safety and innovation.
Lilian Weng’s Impact and Departure Context
Weng joined OpenAI in 2018, initially working on robotics, where she contributed to teaching a robotic hand to solve a Rubik’s Cube using reinforcement learning—a technique where models learn by receiving feedback through rewards and penalties. As OpenAI pivoted to generative models, Weng transitioned to leading applied research and later took on the role of building and managing the Safety Systems team.
This team became central to OpenAI’s efforts to ensure that releases, including GPT-4 and its iterations, met safety standards. The o1-preview model, which Weng described as the company’s safest to date, showed enhanced resistance to jailbreak attacks, in which users manipulate an AI system into producing harmful or otherwise prohibited outputs.
Addressing AI Safety on a Global Platform
Just days before announcing her departure, Weng appeared at an event hosted by Bilibili, speaking in Chinese about AI’s dual nature. “It brings convenience and challenges, and our involvement is critical,” she noted, emphasizing the importance of comprehensive training to mitigate potential risks. OpenAI, best known for ChatGPT, does not operate in China due to regulatory constraints but maintains a global reputation for its AI advancements.
A Year Marked by Executive Departures
Weng’s exit follows a series of significant departures within OpenAI. CTO Mira Murati, who led the development of ChatGPT, stepped down in September to pursue a new venture. That same month, Chief Research Officer Bob McGrew and Vice President of Research Barret Zoph also departed. Earlier in the year, co-founder Ilya Sutskever left to start Safe Superintelligence, an AI safety-focused initiative, while fellow co-founder John Schulman joined Anthropic, a competing firm. Policy researcher Miles Brundage, who advised OpenAI’s AGI readiness team before it was dissolved, departed in October, citing concerns over how the company prioritizes safety.
The reorganization followed the dissolution of OpenAI’s superalignment team, co-led by Jan Leike and Ilya Sutskever, which had been tasked with solving the core technical challenges of controlling superintelligent AI systems within four years. Leopold Aschenbrenner, a former safety researcher at OpenAI, later publicly claimed he was fired for trying to draw attention to AI security weaknesses within the organization.
Strategic Hires and Shifts in Focus
Despite these challenges, OpenAI has sought to bolster its team with strategic appointments. Caitlin Kalinowski, who previously led AR hardware projects at Meta, joined OpenAI to lead its robotics and consumer hardware division. Her expertise may signal a push to integrate AI capabilities into physical devices, expanding beyond software-based products. The move aligns with CEO Sam Altman’s ongoing project with former Apple design chief Jony Ive, aimed at creating a new AI-driven device intended to reshape how users interact with technology.
Legal Battle and Industry Implications
Recently, OpenAI secured a legal victory against Raw Story Media and Alternet Media, which alleged that the company violated the Digital Millennium Copyright Act (DMCA) by stripping copyright management information from their articles before using them for training. U.S. District Judge Colleen McMahon dismissed the case, finding that the plaintiffs had failed to demonstrate concrete harm.
She pointed out that generative AI models, like those developed by OpenAI, synthesize data rather than replicate it verbatim. The ruling reflects similar conclusions reached in cases involving other AI firms, such as Microsoft’s GitHub Copilot.
Navigating Safety and Growth Amid Shifting Leadership
Weng’s farewell post highlighted her pride in working on the Safety Systems team and expressed confidence in its continued success, despite her departure. “I’m so proud of everything we’ve achieved,” she wrote, noting that OpenAI’s safety measures included training models to handle sensitive requests and developing robust defenses against misuse:
“While OpenAI got into the GPT paradigm and we started exploring ways to deploy best AI models to the real world, I built the first Applied Research team, that launched initial versions of the fine-tuning API, embedding API and moderation endpoint, built the foundation for applied safety work, as well as novel solutions for many early API customers.
After the GPT-4 launch, I was asked to take on a new challenge, reconsidering the vision for OpenAI’s safety systems and centralizing the work under one team that would own the full safety stack. It has been one of the most difficult, stressful and exciting things I have done. Today the Safety Systems team has more than 80 brilliant scientists, engineers, PMs, policy experts and I am extremely proud of everything we’ve achieved as a team. Together, we’ve been the cornerstone of every launch—from GPT-4 and its visual and turbo variants, to the GPT Store, voice capabilities and o1.
Our work in training these models to be both powerful and responsible has set new industry standards. I’m particularly proud of our latest achievement with the o1-preview model, which stands as our safest model yet, showing exceptional resistance to jailbreak attacks while maintaining its helpfulness.
Our collective achievements have been remarkable:
- We trained the models on how to handle sensitive or unsafe requests including when to refuse or not, striking a good balance between safety and utility by following a set of well-defined model safety behavior policies.
- We improved adversarial robustness in each model launch, including defense for jailbreak, instruction hierarchy and largely improved robustness via reasoning.
- We designed rigorous and creative evaluation methods aligned with the Preparedness Framework and ran comprehensive safety testing and red teaming for each frontier model. Our commitment to transparency shows in our detailed model system cards.
- We developed the industry’s leading moderation model with multimodal capabilities, freely shared with the public. Our current work on a more generic monitoring framework and enhanced safety reasoning capabilities will empower even more safety workstreams.
- We set up the engineering foundation for safety data logging, metrics, dashboarding, active learning pipeline, classifier deployment, inference time filtering and a novel rapid response system.
Looking at what we have achieved, I’m so proud of everyone on the Safety Systems team and I have extremely high confidence that the team will continue thriving. I love you 💜.
Now after 7 years at OpenAI, I feel ready to reset and explore something new. OpenAI is on a rocket growth trajectory and I wish nothing but the best for everyone here.”
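Weng’s post mentions the moderation endpoint her early Applied Research team launched and the moderation model later shared publicly. As a rough illustration of what that tooling looks like in practice, below is a minimal sketch of a call to OpenAI’s publicly documented moderation API using the official Python client; the model name, method, and response fields come from OpenAI’s public documentation rather than from Weng’s post, and the example input is hypothetical.

```python
# Minimal sketch: querying OpenAI's public moderation endpoint,
# the API referenced in Weng's post. Assumes the official `openai`
# Python package and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.moderations.create(
    model="omni-moderation-latest",      # multimodal moderation model
    input="Example text to screen for policy violations.",
)

result = response.results[0]
print("Flagged:", result.flagged)        # overall yes/no decision
# Print any category (violence, harassment, self-harm, ...) that was triggered
for category, hit in result.categories.model_dump().items():
    if hit:
        print("Triggered category:", category)
```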
OpenAI and AI Safety
The exits have fueled debate over OpenAI’s commitment to safety, especially as it pursues ambitious growth. Observers have pointed out that maintaining this focus without key leaders may be challenging. The company’s ability to balance safety with product innovation will be closely watched, especially as new hires like Kalinowski bring fresh perspectives but face high expectations.
With Kalinowski’s entry into OpenAI’s leadership, the company appears set on exploring hardware opportunities. Her background in augmented reality and consumer tech could pave the way for OpenAI’s move into AI-integrated physical products. This direction could also tie into broader industry trends, with competitors such as Apple and Tesla making strides in robotics and AI-enhanced consumer electronics.
The Competitive Landscape
The pressure on OpenAI isn’t just internal. Rivals like Google DeepMind and Amazon Robotics continue to advance their AI capabilities, setting high standards for the industry. OpenAI’s latest shifts, including the development of an AI device with Jony Ive’s design firm LoveFrom, signal that it aims to stay competitive, though these efforts are still in early stages.
The company’s recent legal win and new leadership could help stabilize its trajectory, but questions remain about whether it can uphold its safety commitments amid rapid growth. As Weng and other key figures exit, how OpenAI manages these transitions will define its ability to maintain its reputation and meet industry expectations.