Former OpenAI Safety Lead Jan Leike Joins Anthropic in Similar Role

Leike's new role at Anthropic bears resemblance to his previous position at OpenAI, where he co-led the Superalignment team.

Jan Leike, a prominent AI researcher, has transitioned from OpenAI to Anthropic. Leike, who recently criticized OpenAI’s approach to AI safety and resigned as head of the superalignment team, will spearhead a new “superalignment” team at Anthropic.

Leike announced via a post on X that his new team at Anthropic will concentrate on various facets of AI safety and security. The primary areas of focus will include scalable oversight, weak-to-strong generalization, and automated alignment research.

A source familiar with the matter told TechCrunch that Leike will report directly to Jared Kaplan, Anthropic’s chief science officer. Researchers at Anthropic currently engaged in scalable oversight will now report to Leike as his team begins its work.

Leike’s new role at Anthropic bears resemblance to his previous position at OpenAI, where he co-led the Superalignment team. This team aimed to address the technical challenges of controlling superintelligent AI within four years but faced obstacles due to OpenAI’s leadership decisions.

Anthropic’s Safety-First Approach

Anthropic has consistently positioned itself as prioritizing safety more than OpenAI. This stance is reflected in its leadership, with CEO Dario Amodei, a former VP of research at OpenAI, having left the company due to disagreements over its commercial direction. Amodei, along with several ex-OpenAI employees, including former policy lead Jack Clark, founded Anthropic with a focus on AI safety.

That split over the company’s increasingly commercial direction has shaped Anthropic’s identity ever since, and the company has attracted a steady stream of former OpenAI employees who share its stated commitment to AI safety and ethical considerations.

Leike’s move to Anthropic underscores a growing emphasis on AI safety within the industry. By leading the new superalignment team, Leike aims to advance research in scalable oversight and alignment, ensuring that AI systems behave in predictable and desirable ways.

Resignations at OpenAI and New Safety Committee

At OpenAI, disputes over resource allocation led to the resignation of several Superalignment team members, including co-lead Jan Leike, a former DeepMind researcher.

Leike, who played a significant role in the development of ChatGPT, GPT-4, and InstructGPT, publicly cited disagreements with OpenAI leadership over the company’s core priorities. In a series of posts on X, he argued that more effort should be dedicated to preparing for future AI models, emphasizing security, monitoring, safety, and societal impact.

Meanwhile, OpenAI has announced the creation of a new Safety and Security Committee within its Board of Directors to oversee the safety of its systems.

The committee includes OpenAI’s CEO Sam Altman, board members Bret Taylor, Adam D’Angelo, and Nicole Seligman, along with chief scientist Jakub Pachocki and head of security Matt Knight. The committee will also consult with external safety and security experts.

Earlier this month, OpenAI introduced GPT-4o, an enhanced version of its GPT-4 model that features more realistic voice interactions. GPT-4o is available to both free and paid users.

Last Updated on November 7, 2024 8:03 pm CET

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master's degree in International Economics and is the founder and managing editor of Winbuzzer.com.
