California's proposed AI safety bill, SB 1047, introduced by State Senator Scott Wiener, is drawing heavy criticism from major AI firms. The bill would require an extensive safety protocol, including a deactivation mechanism for hazardous AI models. It has passed the state Senate and awaits a vote in the State Assembly.
Leading AI organizations and tech firms, including OpenAI, Anthropic, Cohere, and Meta, are pushing back. They contend the bill's rigorous standards could drive innovation out of California and expose companies to excessive liability. Renowned computer scientist Andrew Ng, who serves on Amazon's board, dismissed the need for the bill, calling it an overreaction to what he describes as “science-fiction risks” that could hinder technological development.
The effort to protect innovation and open source continues. I believe we're all better off if anyone can carry out basic AI research and share their innovations. Right now, I'm deeply concerned about California's proposed law SB-1047. It's a long, complex bill with many parts…
— Andrew Ng (@AndrewYNg) June 6, 2024
SlashNext Field CTO Stephen Kowski told PYMNTS that companies would need to evaluate risks, put safeguards in place, and report to a new state body or face fines. The compliance requirements introduce uncertainty, and Zendata CEO Narayana Pappu notes that, much like California's CCPA, the bill could spur class action lawsuits even if formal enforcement actions remain limited.
Safety and Oversight Initiatives
The proposed legislation aims to manage the rapid evolution of AI and its attendant risks, a concern echoed by Elon Musk, who has long warned about potential AI threats. Backed by the Center for AI Safety (CAIS), the bill mandates basic safety assessments and a deactivation switch for large AI models. Critics, however, argue these measures would unduly burden smaller AI firms and open-source developers.
The bill's reach extends beyond tech, touching industries such as retail, healthcare, finance, and transportation, all of which increasingly rely on AI for operational improvements. Critics worry it could slow AI adoption and put California companies at a disadvantage relative to those in less regulated jurisdictions.
Senator Wiener has proposed amendments to the bill in response to the opposition. These clarify that open-source developers will not be liable for third-party modifications and that the deactivation requirement will not apply to open-source models. The bill targets large models that cost more than $100 million to develop, sparing most smaller start-ups.
Regulatory Background
AI regulation efforts extend beyond California. Last year, President Joe Biden signed an executive order establishing new AI safety and national security standards, and the UK is drafting its own AI legislation. Some critics attribute the bill's rapid progress through the Senate to the backing of CAIS, which is funded by Open Philanthropy, a research and grantmaking foundation guided by the doctrine of effective altruism.
AI pioneers Geoffrey Hinton and Yoshua Bengio support the bill, citing AI's existential risks, with Hinton calling for a balanced approach. Conversely, industry voices such as Sunil Rao, CEO of Tribble, stress the need for precisely targeted regulation that sustains innovation. Bob Rogers, CEO of Oii.ai, warns that compliance could raise costs that are ultimately passed on to customers.
Rao suggests California's approach could shape other states' policies, highlighting the need for coordination to avoid a patchwork of conflicting regulations. He underscores the importance of tech-industry involvement in the legislative process to support innovation. Tarun Thummala, CEO of AI automation startup PressW, fears the bill could stifle innovation and produce less secure systems, arguing that regulation should target specific AI uses rather than the development process itself.