AI Industry Feedback Leads to Amendments in Controversial California Bill

California's AI safety bill, SB 1047, now requires less stringent safety measures from AI developers, dropping the creation of a new government agency and reducing reporting requirements.

In response to feedback from technology companies like Anthropic and other industry figures, California legislators have altered the AI safety bill SB 1047. The bill, aimed at addressing AI-associated risks, faced significant pushback from tech firms in the region.

The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, as SB 1047 is officially known, sets forth that developers of “frontier” AI models—those costing at least $100 million to train—must implement safety measures and testing frameworks. Companies must undergo audits and provide “reasonable assurance” that their models will not cause catastrophic events. Developers are also required to report their safety activities to state agencies.

Key Changes in SB 1047

The bill, which recently passed through the Appropriations Committee, has undergone several crucial revisions. Senator Scott Wiener indicated to TechCrunch that these changes were intended to address the concerns expressed by industry stakeholders.

Initially, a clause in the bill allowed the attorney general to sue AI firms for negligent safety practices before a major incident occurred. That provision has been removed. Now, the attorney general can seek injunctive relief to halt dangerous AI activities and can still take legal action if an AI model causes a catastrophe.

The bill also scrapped the creation of a new government agency called the Frontier Model Division (FMD). Instead, the responsibilities will be transferred to the Board of Frontier Models within the existing Government Operations Agency. The board will expand from five to nine members and will be responsible for setting compute thresholds, issuing safety guidelines, and regulating auditors.

Reduced Responsibilities for AI Developers

Further amendments lessen the obligations of AI developers. Labs are no longer required to submit safety test certifications under penalty of perjury. They must now provide public statements about their safety practices instead. The revised bill also demands AI developers exercise “reasonable care” rather than “reasonable assurance” to ensure their models do not pose significant risks.

Additionally, there are new protections for developers of open-source fine-tuned models. Those spending under $10 million on fine-tuning are not considered developers within the context of SB 1047; responsibility instead remains with the model's original creators.

Reactions from Industry and Legislators

These amendments come after considerable opposition from a range of stakeholders, including U.S. congressmen, tech researchers, Big Tech, and venture capitalists. Senator Wiener contends that the changes meet core concerns while still promoting AI safety. Nonetheless, some critics, like Andreessen Horowitz general partner Martin Casado, argue that the amendments fail to resolve the bill’s fundamental issues.

In a letter to Governor Gavin Newsom, eight members of the U.S. Congress from California have urged him to veto the bill, arguing it would negatively impact the state’s startup ecosystem, scientific progress, and AI safety efforts. The bill now progresses to the California Assembly for a final vote. If it passes, it will return to the Senate for another vote on the latest amendments before reaching Governor Newsom’s desk for a final decision.

Last Updated on November 7, 2024 3:16 pm CET

Luke Jones
Luke has been writing about Microsoft and the wider tech industry for over 10 years. With a degree in creative and professional writing, Luke looks for the interesting spin when covering AI, Windows, Xbox, and more.
