Newsom Veto Kills California AI Safety Bill

California legislators crafted a bill that would have been the most comprehensive AI regulation in the US. But detractors said it would hinder innovation and discourage business.

Shane Snider, Senior Writer, InformationWeek

September 30, 2024


California Governor Gavin Newsom on Sunday vetoed an artificial intelligence safety bill that would have put tighter controls on AI firms to guard against serious risks. But Newsom says the state will continue to work with AI safety experts and key stakeholders to develop guardrails.

The bill, SB 1047 or the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, focused on large AI models that cost $100 million or more to develop. Anthropic, OpenAI, Google, Meta, and Microsoft would have been major targets of the proposed law. The tech sector is a major contributor to California (the world’s fifth-largest economy), adding $623.4 billion to the state’s economy in 2022 alone. And since ChatGPT’s successful launch in 2022, the race to adopt AI across industries has been a cash cow for the tech industry.

“California is home to 32 of the world’s 50 leading AI companies, pioneers in one of the most significant technological advances in modern history,” Newsom said in a letter to state senators after the veto. “… By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous models …”


State Senator Scott Wiener (D-San Francisco), author of the bill, expressed his disappointment.

“This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and future of the planet,” Wiener said in a statement. “The companies developing advanced AI systems acknowledge that the risks these models present to the public are real and rapidly increasing … This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers …”

Powerful Opponents Encourage Veto

The law would have impacted the AI industry beyond California, just as the state’s Consumer Privacy Rights Act (CPRA) has become a benchmark for data privacy as federal regulations have stalled in Congress. SB 1047 would have tasked the state’s Government Operations Agency with overseeing the law. Violations could have cost companies up to 10% of the cost of the computing power used to train a model, and up to 30% for each additional violation.

The bill would have also required AI developers to publicly disclose how they would test whether their models could cause critical harm, and under what conditions a model would be fully shut down.


The bill faced opposition from most major AI companies (Elon Musk and his company xAI were a notable exception), as well as some prominent members of Congress, including Democratic Speaker Emerita Nancy Pelosi, who in a statement called the bill “well-intentioned but ill informed.”

A letter from Rep. Zoe Lofgren (D-CA), ranking member of the House Committee on Science, Space, and Technology, said the bill was too focused on large-scale risks such as mass casualties or harmful weapon creation. “By focusing on hypothetical risks rather than demonstrable risks, the efficacy of this legislation in addressing real societal harms -- including those faced by Californians today -- is called into question,” she wrote.

AI Safety Proponents Left Seething

Daniel Colson, executive director of the Artificial Intelligence Policy Institute, said the veto was a mistake and that popular support from Californians should have guided Newsom, saying in an email that his decision “to veto SB 1047 is misguided, reckless, and out of step with the people he’s tasked with governing. Time after time Californians have made it abundantly clear that they support legislation to rein in AI, are deeply concerned about the ramifications of unfettered AI, and will point the finger at him and make him pay a political price in the event that AI causes a dangerous event like a cyberattack.”


Colson cited a poll showing that 70% of respondents supported the bill.

For his part, Newsom pointed to eight of 38 AI bills already signed into law as evidence that the state was taking AI seriously. “Let me be clear … we cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom wrote. “California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented and severe consequences for bad actors must be clear and enforceable.”

While California failed to enact SB 1047, AI companies will still need to contend with the recently enacted EU AI Act, which aims to set a global standard for AI safety regulations.


About the Author

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
