OpenAI has reassigned its AI safety lead, Aleksander Mądry, to a role focused on fundamental AI reasoning research, according to a report from CNBC. In addition to his position at OpenAI, Mądry is a professor at MIT.
Oversight of AI Models
Previously, Mądry headed the Preparedness team, which evaluated the risks posed by upcoming AI models and worked to guard against potential large-scale threats from AI advances. The shift in his role suggests a change in how OpenAI integrates safety considerations into its research efforts.
According to an internal announcement, Mądry will now take on a more expansive role within OpenAI's research organization. The change is part of the company's broader effort to push the boundaries of AI development while maintaining a focus on ethics and safety, a balance that grows more complex as AI technologies advance.
Industry Trends and Impact
The move comes amid growing industry efforts to manage the risks of advanced AI. Competitors such as Anthropic are likewise working to mitigate AI's potential for catastrophic harm. The reassignment may be a strategy for OpenAI to remain competitive while ensuring the safety of its models.
Mądry's reassignment was announced just before Democratic senators requested detailed information from OpenAI CEO Sam Altman about the company's safety measures and financial strategies. The request underscores the heightened scrutiny AI companies now face.
Challenges from Within and Outside
Microsoft recently relinquished its observer seat on OpenAI's board, signaling approval of the company's new board structure. At the same time, current and former OpenAI employees raised concerns in an open letter about the rapid pace of AI development and the lack of oversight and whistleblower protections. The FTC and DOJ are also set to investigate OpenAI, Microsoft, and Nvidia over potential antitrust issues.