Could California's AI Bill Be a Blueprint for Future AI Regulation?

California’s SB 1047 has the potential to shape future regulation around the raging question of AI safety.

Carrie Pallardy, Contributing Reporter

September 5, 2024


The question of AI safety has been heating up as the technology continues to proliferate and dollars pour into a multitude of models and use cases. Senate Bill (SB) 1047, making its way to California Governor Gavin Newsom’s (D) desk, could play a significant part in shaping regulators’ answer to that question.

If signed into law, the bill would enact the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act in California. Given California’s clout, the legislation could serve as the foundation for AI regulation in other states.

SB 1047 has been met with applause, cautious optimism, and outright opposition. If it does become law, what could this legislation mean for the AI industry and future regulation?

The Bill

The proposed legislation focuses on safety and security protocols for large AI models, those that cost $100 million or more to develop. It aims to hold the developers of these models liable for harmful outcomes. The bill offers examples of harm, such as the creation of weapons resulting in mass casualties and cyberattacks on critical infrastructure resulting in $500 million or more in damages.

The legislation would empower California’s attorney general to take civil action against AI developers whose models are linked to a catastrophic event.


Opposition and Support

The need for AI regulation is widely agreed upon as the use of the technology grows. “AI is becoming more and more part of our lives, like a coffee machine, like a car. You may not directly be using AI, but you probably are using some application that may be [in] the background,” Melissa Ruzzi, director of artificial intelligence at SaaS security company AppOmni, tells InformationWeek.

The debate around how to regulate this quickly moving technology is contentious. SB 1047 has garnered support from AI researchers Geoffrey Hinton and Yoshua Bengio, as well as billionaire Elon Musk. Hinton pointed to the bill's “sensible approach,” according to a press release from Senator Scott Wiener (D-San Francisco), who introduced the bill. Musk took to X to voice his support for regulating “…any product/technology that is a potential risk to the public.”

Anthropic, an AI startup with a $4 billion investment from Amazon, penned a letter to Gov. Newsom outlining cautious support, contingent on amendments made to the bill. The company’s proposed changes focus on narrowing the scope of pre-harm enforcement, reducing requirements in the absence of harm, and cutting down on “extraneous aspects.”

OpenAI, on the other hand, has been vocal in its opposition to the bill. The company stated support for the intent behind parts of the bill, but argued that regulation should occur at the federal level.


“However, the broad and significant implications of AI for U.S. competitiveness and national security require that regulation of frontier models be shaped and implemented at the federal level,” according to the company’s opposition letter.

The AI Alliance, a group whose members include Meta and IBM, has also spoken out against the bill, arguing that it penalizes open-source development, among other criticisms.

The focus on large models has also been cause for debate. “A single model or a broad model is only one part of [an] application that could cause all kinds of…damage,” says Geoffrey Mattson, CEO of cybersecurity company Xage Security. “You could have…several smaller models [linked] together in a network, for instance. It doesn't address that. You could have models that are smaller but that are trained in a certain way with [a] certain type of information that make them dangerous.”

Other industry stakeholders have questioned the focus on model builders as the gatekeepers of AI safety. “As the model builder it's impossible to know how the model will be used, and right now, the bill is really putting the onus on the model builders,” says Manasi Vartak, chief AI architect at Cloudera, a data lake software company.


AI Regulation to Come

SB 1047 is not yet law, nor is it guaranteed to become so. But if it does, other states could look to California’s approach to AI regulation.

“If approved, legislation in an influential state like California could help to establish industry best practices and norms for the safe and responsible use of AI,” Ashley Casovan, managing director of the AI Governance Center at the nonprofit International Association of Privacy Professionals (IAPP), says in an email interview.

California is hardly the only place with AI regulation on its radar. The EU AI Act passed earlier this year. The US federal government released a Blueprint for an AI Bill of Rights, though it serves as guidance rather than regulation. Colorado and Utah have enacted laws governing the use of AI systems.

“I expect that there will be more domain-specific or technology-specific legislation for AI emerging from all of the states in the coming year,” says Casovan.

As quickly as new AI legislation, and the accompanying debate, seems to pop up, the technology moves faster. “The biggest challenge here…is that the law has to be broad enough because if it's too specific maybe by the time it passes, it is already not relevant,” says Ruzzi.

Another big part of the AI regulation challenge is agreeing on what safety in AI even means. “What safety means is…very multifaceted and ill-defined right now,” says Vartak.

Defining safety may require homing in on specific AI applications across different industries.

Vartak also argues that government agencies should be involved in testing AI models so they better understand the technology they are attempting to regulate. She points to the agreements Anthropic and OpenAI signed with the US Artificial Intelligence Safety Institute at the National Institute of Standards and Technology (NIST) as an example of this kind of collaboration.

Given how nascent the AI industry is, answering fundamental questions about safety will likely take time and money. Funding academic institutions, research labs, and think tanks could help regulators answer those questions and shape their approach. “I would personally love to see more funding there so we can understand what we're up against and then we can think about how [to] best regulate it,” says Vartak.

Regardless of how the regulatory landscape forms, AI developers and users have to think about safety now. “It's important to start to develop a strong AI governance literacy within your organization,” Casovan urges. “Managing these systems independent of legislation will be important to reduce business risk.”


About the Author

Carrie Pallardy

Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
