Anthropic has submitted a detailed set of AI policy recommendations to the White House’s Office of Science and Technology Policy (OSTP), calling for stronger government oversight of artificial intelligence development.
The company warns that advanced AI systems could surpass human expertise in key fields as early as 2026, and argues that without immediate intervention, the U.S. may not be prepared for the economic and security challenges such technology brings.
However, as it urges federal regulators to take a more active role in AI governance, Anthropic has quietly removed multiple AI policy commitments it made under the Biden administration, raising questions about its evolving stance on self-regulation.
Anthropic’s AI Regulation Proposal and National Security Focus
The new policy recommendations submitted to OSTP focus on six key areas, including national security testing of AI models, tightening semiconductor export restrictions, and ensuring that AI research labs have secure communication channels with intelligence agencies.
Anthropic also proposes a large-scale expansion of the U.S. power grid, estimating that AI systems will require an additional 50 gigawatts of power capacity, roughly the output of 50 nuclear reactors, by 2027.
Additionally, the company calls for accelerating AI adoption across the federal government and for agencies to modernize how they collect economic data so they can better measure AI-driven disruption in the job market.
Anthropic has framed its proposal as a response to the rapid advancements in AI capabilities. The company warns that AI systems are progressing at an unprecedented rate, with models capable of outperforming human experts in areas such as software development, decision-making, and research.
This argument is reinforced by recent AI performance benchmarks, which show that Anthropic’s Claude models improved their success rate on the SWE-bench software engineering benchmark from 1.96% to 49% in roughly a year, a twenty-five-fold jump.
These concerns align with broader cybersecurity threats posed by AI misuse. A recent Microsoft report found that its customers face more than 600 million cyberattacks per day, with state-backed hacking groups from Russia, China, and Iran leveraging AI for increasingly sophisticated attacks.
This data underscores Anthropic’s recommendation for mandatory security testing of AI models before they are widely deployed.
Anthropic Quietly Removes Biden-Era AI Commitments
While advocating for tighter regulations, Anthropic has simultaneously erased several AI policy commitments it previously made under the Biden administration.
These commitments were part of a voluntary safety framework agreed upon by major AI firms, including OpenAI and Google DeepMind, as part of a broader White House initiative.
The commitments, which centered on responsible AI scaling and risk mitigation, disappeared from Anthropic’s website without public explanation.
The timing of this reversal is notable: it comes just as the company pushes for stronger government intervention in AI governance, and it has sparked speculation about whether Anthropic is repositioning itself in response to regulatory changes or shifting industry dynamics.
While the company has not publicly commented on the removals, the change raises concerns about the level of transparency AI firms maintain while advocating for stricter external regulation.
Warnings of AI Risks and the 18-Month Window for Action
Anthropic’s latest policy push follows its stark warning in November 2024, when it urged global regulators to act within 18 months or risk losing control over AI development.
At the time, the company highlighted several emerging risks, including AI’s ability to autonomously carry out cyberattacks, aid the misuse of biological research, and execute complex tasks with minimal human oversight.
The urgency of this call for action was amplified by concerns about AI-enabled cyber threats, which have been escalating in parallel with advancements in AI performance.
Security researchers have warned that AI-driven phishing attacks and automated malware creation are outpacing traditional security defenses. As AI models grow more capable, their ability to generate realistic deepfakes and manipulate digital content poses additional risks to information integrity.
Corporate Influence and the Role of Big Tech in AI Regulation
Anthropic’s evolving policy stance comes at a time when corporate influence over AI regulation is under increased scrutiny. The UK’s Competition and Markets Authority has launched an investigation into Google’s $2 billion investment in Anthropic, assessing whether the partnership could reduce competition in the AI sector.
As the White House reviews Anthropic’s recommendations, the broader conversation around AI regulation continues to evolve. Companies like Anthropic, OpenAI, and Google are actively shaping the narrative, but government intervention will determine whether voluntary commitments remain a viable regulatory strategy or whether stricter rules will be enforced.
Given the growing influence of these tech giants, the outcome will significantly impact how the AI industry is governed and whether national security and ethical concerns are adequately addressed.