OpenAI CEO Altman Changes Stance on AI Safety, Urges ‘Light-Touch’ AI Rules

OpenAI CEO Sam Altman has signaled a major shift in his approach to AI regulation, now advocating for industry-led standards and warning against stringent government rules he believes could hinder U.S. innovation, a stark contrast to his previous calls for more robust federal oversight.

Altman laid out this recalibration of his views on AI regulation during a Senate Commerce Committee hearing on May 8, 2025.

Altman cautioned that imposing stringent government pre-approval for the release of powerful AI models could prove “disastrous” for America’s leadership in the rapidly advancing field. This position notably contrasts with his May 2023 testimony, where he had advocated for the creation of a new federal agency to license and test advanced AI, calling it his ‘number one’ recommendation for ensuring AI safety.

His current emphasis is on industry-led standards and a more restrained approach to government intervention. That stance aligns with a broader shift in the tech sector and with the prevailing sentiment within the Trump administration, which advocates a ‘light-touch’ regulatory framework to foster innovation and maintain a competitive edge, especially against China.

The implications of this pivot are substantial for the future of AI governance. As a leading voice in the field, Altman’s revised perspective could influence policymakers to adopt a more hands-off approach, potentially speeding up the deployment of advanced AI technologies. This comes at a time when the capabilities of AI are rapidly expanding, alongside growing concerns from critics and some lawmakers about existing and potential societal harms, such as bias in AI systems, the generation of nonconsensual imagery, and the potential for AI to be used in disinformation campaigns.

While Altman now argues, “I think standards can help increase the rate of innovation, but it’s important that the industry figure out what they should be first,” the debate continues over whether self-regulation will be sufficient to address the complex challenges posed by increasingly powerful AI.

At the hearing, Altman further clarified his position on the role of bodies like the National Institute of Standards and Technology (NIST) in setting AI model development standards, stating, “I don’t think we need it. It can be helpful,” a divergence from other industry leaders present.

The broader political and industry context is increasingly framing AI development through the prism of international competition and national security. Senator Ted Cruz (R-Texas), chairing the hearing, underscored this by stating the U.S. “cannot allow regulation, even the supposedly benign kind, to choke innovation and adoption,” and announced plans to introduce a bill creating a ‘regulatory sandbox for AI.’

This sentiment was echoed by Vice President JD Vance at a Paris summit, who declared that “The AI future is not going to be won by hand-wringing about safety.” This approach marks a departure from the Biden administration’s strategy, which included an executive order mandating AI safety tests, later rescinded by President Trump. The Trump administration has since issued new guidance aiming to remove bureaucratic restrictions and promote American AI dominance, as detailed by WilmerHale.

The AI Industry Navigates a Shifting Regulatory Landscape

Sam Altman’s call for industry to take the lead in defining AI standards was a central theme of his recent testimony, even as he conceded, “I think some policy is good, but I think it is easy for it to go too far.” He specifically described the European Union’s comprehensive EU AI Act as “disastrous.”

This view resonates with other tech executives, including Microsoft President Brad Smith, who, despite earlier calls for a dedicated federal AI agency, now supports a ‘light touch’ framework. The evolving stance is also reflected in policy changes at major AI labs; Google’s DeepMind, for instance, scrapped a long-held pledge in February 2025 against developing AI for weaponry or surveillance, and OpenAI, Meta, and Anthropic have similarly updated their policies regarding military projects.

However, this push for diminished government oversight faces criticism. Rumman Chowdhury, a former State Department science envoy for AI, suggested to The Washington Post that the tech industry’s earlier focus on existential AI risks functioned as a “bait and switch,” diverting focus from immediate harms and leveraging national security concerns to sidestep robust regulation.

Internal pressures at OpenAI regarding its safety commitments have also surfaced. Jan Leike, former co-lead of OpenAI’s Superalignment team, resigned in May 2024, publicly stating that “safety culture and processes have taken a backseat to shiny products,” as Winbuzzer reported. His departure coincided with the dissolution of the Superalignment team, which had been established in July 2023 to focus on long-term AI risks. In response to these internal dynamics, Sam Altman acknowledged in May 2024 that OpenAI had “a lot more to do” concerning safety.

More recently, in April 2025, OpenAI updated its internal safety guidelines, introducing a provision that could allow the company to adjust its safety requirements if a competitor releases a high-risk system without comparable safeguards. This move came shortly after reports emerged that OpenAI had significantly reduced safety testing times for its new models, like o3, from months to sometimes under a week, prompting concerns from testers, one of whom described the approach as “reckless.”

Balancing Innovation, Safety, and Public Trust

The delicate balance between fostering rapid AI innovation and implementing effective safety measures remains a central challenge. OpenAI has detailed technical approaches like ‘deliberative alignment’ to embed safety reasoning into its models, and has stated that its research into AI persuasion aims to ensure AI does not become too effective, as covered in OpenAI’s AI Persuasion Studies. Yet the company’s actions, such as Altman’s September 2024 resignation from OpenAI’s safety and security board, have fueled ongoing debate. Former OpenAI staff have even accused Altman of resisting effective AI regulation in favor of policies that serve the company’s business interests.

Other AI labs are also navigating this complex terrain. Anthropic, for instance, unveiled an interpretability framework in March 2025 to make its Claude AI’s reasoning more transparent, and also submitted recommendations to the White House urging national security testing of AI systems.

However, the company also reportedly removed some of its voluntary safety pledges made under a previous White House initiative. This highlights the intricate dance AI companies perform between advocating for certain types of governance and maintaining operational flexibility.

The broader context includes the very real risks demonstrated by AI, such as a recent study showing AI outperforming human experts in virology lab troubleshooting, raising dual-use concerns about bioweapon risks. Furthermore, new security vulnerabilities in widely used AI tools, like Microsoft’s Copilot for SharePoint, continue to emerge, highlighting immediate risks even as calls for lighter regulation intensify.

Defining the Future of AI Governance

Sam Altman’s 2023 warning that AI might be “capable of superhuman persuasion well before it is superhuman at general intelligence” seems particularly salient as he now advocates for more industry freedom. His previous call for a licensing agency and pre-release testing stands in stark contrast to his current emphasis on avoiding regulations that could ‘choke innovation.’

This evolution in his public stance occurs as the U.S. government itself, under a new administration, signals a preference for accelerating AI development to compete globally, particularly with China. That concern has also been voiced by other tech leaders, such as Scale AI’s Alexandr Wang, who has described military AI support as a “moral imperative.”

While Altman acknowledges that complex issues like privacy remain “a gray area… for you [lawmakers] to think about and take quite seriously,” his overarching message now leans towards industry self-determination in setting safety standards.

Max Tegmark of the Future of Life Institute remains a critical voice, highlighting the current regulatory imbalance by noting to The Washington Post that “If there’s a sandwich shop across the street from OpenAI or Anthropic or one of the other companies, before they can sell even one sandwich they have to meet the safety standards for their kitchen. If [the AI companies] want to release super intelligence tomorrow, they’re free to do so.”

As the ‘Intelligence Age’ unfolds, the critical task of balancing innovation with public safety, and defining the governance frameworks for increasingly powerful AI, will undoubtedly remain at the forefront of global discourse.

Markus Kasanmascheff
Markus has been covering the tech industry for more than 15 years. He holds a Master’s degree in International Economics and is the founder and managing editor of Winbuzzer.com.