
California's Controversial AI Safety Bill One Step Away from Becoming Law



California's controversial AI safety bill, SB 1047, is now one step away from becoming law. But as it inches closer to reality, the debate around its potential impact on innovation and safety is heating up.

The Bill at a Glance

SB 1047 has cleared both the California State Assembly and Senate, leaving just one more process vote before it lands on Governor Gavin Newsom's desk. If signed, it would require AI companies operating in California to implement several safety measures before training advanced foundation models, including:

  • The ability to quickly shut down a model in case of a safety breach
  • Protections against unsafe post-training modifications
  • Testing procedures to evaluate potential critical harm risks

Sound reasonable? Not everyone thinks so.

A House Divided

The AI industry is split on SB 1047. OpenAI is largely against the bill. Anthropic initially pushed back but now appears supportive after proposing amendments. AI experts are divided, too.

Some, like Andrew Ng and Fei-Fei Li, argue that the bill focuses too heavily on catastrophic harm and could stifle innovation, particularly in open-source development. Others, like Geoffrey Hinton, believe it’s a sensible and necessary approach to AI regulation.

Why This Matters Beyond California

Here's the kicker: This isn't just about companies based in California, says Marketing AI Institute founder and CEO Paul Roetzer on Episode 113 of The Artificial Intelligence Show.

"It's not just companies in California, it's companies that do business in California,” he says.

Given that fact, and California's massive economy, SB 1047 could have an impact on AI companies—and firms that rely on their products—far beyond California’s borders. 

Corporate America Is Watching

The uncertainty around AI regulation is already sending ripples through the business world:

  • 27% of Fortune 500 companies cited AI regulation as a risk in recent SEC filings
  • Concerns range from higher compliance costs to potential revenue drags
  • Some corporations are proactively setting their own AI guidelines

“This uncertainty matters to businesses,” says Roetzer.

If the law is signed, many companies will need to comply, and compliance will affect teams across the business. “The CMO [for instance] is all of a sudden going to have to care about this law,” says Roetzer.

The Unintended Consequences

If SB 1047 becomes law, we might also see some significant shifts in the AI landscape, says Roetzer.

The extra layers of safety checks and potential government interventions could extend the development cycle of new AI models from 8-12 months to 18-24 months.

Instead of big "model drops," we might see more frequent, smaller capability updates to navigate regulatory hurdles.

And major AI companies might line up to voluntarily participate in federal initiatives, using them as cover to continue development.

The Regulation Dilemma

The core challenge lawmakers face is striking a balance between safety and innovation. 

The bill attempts to set thresholds based on model size and training methods. But in a field advancing as rapidly as AI, today's "unsafe" model could be tomorrow's obsolete technology.
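For a rough sense of how a threshold like this works in practice, here is a minimal Python sketch. It assumes the widely reported criteria for a covered model of more than 10^26 training FLOPs and over $100 million in training costs; the `is_covered_model` function and both threshold constants are illustrative assumptions, not the bill's legal language.

```python
# Minimal sketch (not the bill's actual legal test): flags a model against
# the widely reported SB 1047 thresholds. Consult the bill text itself for
# the real statutory definition of a "covered model".

FLOP_THRESHOLD = 1e26          # training compute, in floating-point operations
COST_THRESHOLD = 100_000_000   # training cost, in US dollars

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """Return True if a model crosses both reported thresholds."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD

print(is_covered_model(3e26, 300_000_000))  # True: a frontier-scale training run
print(is_covered_model(1e24, 5_000_000))    # False: a smaller model or fine-tune
```

Note how a fixed numeric cutoff illustrates the dilemma: as training efficiency improves, models below the line can match the capabilities of yesterday's models above it.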

An emerging school of thought suggests regulating at the application level rather than the model level. 

In a TIME editorial, AI expert Andrew Ng argued this point by comparing AI to other general purpose technologies, writing:

“Consider the electric motor. It can be used to build a blender, electric vehicle, dialysis machine, or guided bomb. It makes more sense to regulate a blender, rather than its motor. Further, there is no way for an electric motor maker to guarantee no one will ever use that motor to design a bomb. If we make that motor manufacturer liable for nefarious downstream use cases, it puts them in an impossible situation. A computer manufacturer likewise cannot guarantee no cybercriminal will use its wares to hack into a bank, and a pencil manufacturer cannot guarantee it won’t ever be used to write illegal speech. In other words, whether a general purpose technology is safe depends much more on its downstream application than on the technology itself.”
