California Governor Gavin Newsom has vetoed the high-profile AI safety bill SB 1047, a decision that has ignited fresh debate over how to manage the risks posed by artificial intelligence. The rejected legislation would have imposed safety requirements on the development of large AI models, with an emphasis on protecting the public from potential harm.
Tech Industry Divided Over AI Regulation
The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, as SB 1047 is officially known, would have required developers of “frontier” AI models, those costing at least $100 million to train, to implement safety measures and testing frameworks. Companies would have had to undergo audits and provide “reasonable assurance” that their models would not cause catastrophic events.
Top AI organizations and tech firms, including OpenAI, Anthropic, Cohere, and Meta, argued that these rigorous standards might hinder innovation and drive talent away from California. Y Combinator, the startup accelerator, joined the debate with a letter signed by 140 startups criticizing the bill's demands, which the signatories feared could hamper the growth of new companies.
Anthropic also raised concerns about SB 1047. In a letter reported by Axios, the company argued that, while ensuring safe AI development is a worthy objective, the bill had significant shortcomings. OpenAI's Chief Strategy Officer, Jason Kwon, cautioned in a letter to Senator Wiener that the bill could hamper growth, stall innovation, and drive engineers and entrepreneurs out of the state.
Some of that feedback was taken on board in subsequent amendments, which changed the bill significantly. The attorney general's powers were narrowed, the proposed Frontier Model Division was scrapped, and AI developers' responsibilities were lightened. The amended bill focused on preventing catastrophic AI incidents and regulating frontier models through a new Board of Frontier Models.
Despite the changes, AI companies remained concerned. State legislators passed the bill in late August, sending it to Governor Newsom, who had until today, September 30, to sign or veto it.
Newsom's Reasoning for the Veto
In his veto message, Newsom explained that while the bill was well-meaning, it didn't adequately consider whether AI systems were being deployed in high-risk scenarios or dealing with sensitive information.
He highlighted the fact that smaller, exempt models could handle critical tasks such as managing medical records or power grids, while the larger models targeted by the legislation often performed relatively low-risk work, such as powering chatbots or automating customer service.
Newsom argued that by applying the same safety requirements to all large models, the bill lacked precision and risked over-regulating AI systems that posed no clear threat. Its focus on model size and training cost, he said, failed to account for the different ways AI is actually being used.
California's AI Oversight in Flux
The legislation would have made California the first state to implement such extensive safety standards for AI development. Its demise leaves a void in the regulation of artificial intelligence, particularly as congressional discussions about AI safety have stalled.
Despite this, Newsom has indicated that he remains committed to regulating AI and is working with leading researchers and industry figures to develop more tailored approaches. Among those collaborating with the governor is Stanford professor Fei-Fei Li, a leading figure in AI research. The new legislative effort is expected to focus on the specific risks of advanced AI models, especially those used in critical applications.
Tech Industry Reactions to the Veto
The tech community reacted swiftly to the governor's decision. In a statement, Google expressed appreciation for Newsom's veto, highlighting its interest in collaborating with his office on building responsible AI frameworks. Similarly, OpenAI welcomed the opportunity to work with state lawmakers on more narrowly defined regulations addressing key public concerns, such as AI-driven deepfakes and the ethical use of digital tools.
Yet, not all responses were positive. Scott Wiener, the state senator who authored SB 1047, voiced his disappointment, suggesting that the veto represented a missed chance for California to lead the way in tech regulation, much as it did with data privacy and net neutrality in the past.