An AI safety bill, expected to be signed into law in California soon, could set a regulatory template for how the rest of the nation polices the technology, as well as create nebulous guardrails for companies and developers.

Democratic California Gov. Gavin Newsom, who has not publicly stated his position on the bill, has until Sept. 30 to sign it into law.

For weeks, Senate Bill 1047 has polarized Silicon Valley and state lawmakers. Supporters of the legislation include Elon Musk and two of the so-called “Godfathers of AI,” Yoshua Bengio and Geoffrey Hinton, who formerly led AI research at Google.

Opponents of the bill include Alphabet Inc.’s Google, Meta Platforms Inc., OpenAI, former House Speaker Nancy Pelosi, and Democratic California Congress members Anna Eshoo, Ro Khanna and Zoe Lofgren, all of whom represent Silicon Valley.

The legislation requires AI companies training “frontier models” that cost more than $100 million to establish safety testing that safeguards against cyberattacks and prevents the technology from being used for harmful purposes, anything from the creation of biological weapons to automated cybercrime.

As with previous attempts to regulate tech, SB 1047 has deeply divided Silicon Valley, renewing and inflaming a long-running debate over whether legislation and innovation can coexist, and whether larger companies have a distinct advantage over smaller competitors in their ability to comply with new laws.

Most big companies developing AI, led by Anthropic (Claude models) and Alphabet Inc.’s Google (Gemini), are focusing on ‘do no harm’ safety measures.

The bill is “a good move in the right direction based on positive intentions,” Melissa Ruzzi, AI director at AppOmni, said in an email message. “It’s necessary so more people can start trusting AI and more competition in the field can appear. We all know AI is very new and there are challenges in writing laws around it. We cannot expect the first laws to be flawless and perfect – this will most likely be an iterative process, but we have to start somewhere.”

“California’s SB1386 [a 2003 state law that strengthened the privacy of personal information] revolutionized data breach notifications globally, proving that state-level regulation can drive widespread change even without federal action,” said Davi Ottenheimer, vice president of trust and digital ethics at Inrupt, a data startup founded by World Wide Web inventor Sir Tim Berners-Lee.

“I was on the front-lines in Silicon Valley detecting and preventing breaches in 2003 and remember a positive sea-change after California passed a law, like it was yesterday,” Ottenheimer said. “In this light, OpenAI’s opposition to California’s proposed AI regulation appears out of touch with very important recent historical precedent.”

Alon Yamin, co-founder and CEO of Copyleaks, an AI-based text analysis platform, sees the bill as “an important first step in the right direction” toward establishing a distinction between AI-generated and human-created content.

Chief among the worries surrounding SB 1047 is AI becoming increasingly intertwined with critical infrastructure such as power grids, water systems and transportation networks. A security breach in any of those systems could conceivably lead to widespread power outages and disruptions in essential services.

“I’m hopeful the legislation prompts the ‘California Effect,’ in which similar action is taken on the state and/or national level,” Geoff Mattson, chief executive of Xage Security, said in an interview. “Flawed as the bill is, and it is backward looking and too broad in scope, this is a discussion that needs to be addressed.”

“Many AI regulation bills focus on the computational power, model size or monetary cost required to train a model, but these values are the wrong metrics to use,” David Brauchler, principal security consultant at cybersecurity consulting firm NCC Group, said in an email message. “There is not a direct correlation between model size and risk, and these bills mistakenly address the computational power required to train large language models (LLMs) and overlook that small, specialized and powerful models may be far more equipped to do harm than large natural language processing (NLP) models.”

Meanwhile, the state-specific nature of the bill could “result in a fragmented regulatory landscape across the U.S., complicating compliance for businesses operating nationally,” Sam Peters, chief product officer at ISMS.online, warned in an email. “The focus on developers rather than users who might misuse AI has sparked criticism, potentially driving innovation away from California to more lenient jurisdictions.”

Peters believes compliance standards like ISO 42001 could solve many issues for U.S. businesses caught between state and federal legislation and trying to meet many diverging requirements. He said ISO 42001 could offer a globally recognized standard that “could help streamline compliance efforts and reduce the complexity of adhering to different jurisdictions’ rules. It could also make it easier for businesses to expand and operate across state lines without getting entangled in conflicting regulations.”

The legislation is about to land as the influence of AI courses through the economy and reaches into nearly every aspect of Americans’ lives. Witness the growth and prosperity of Silicon Valley staples Nvidia Corp. and OpenAI.

Nvidia racked up another monster quarter of revenue and profit last month, driven in large part by a quartet of unnamed partners that each purchased more than $3 billion in products.

Meanwhile, OpenAI is headed toward a market valuation of $100 billion with fresh funding from Apple Inc., Nvidia Corp. and others, according to published reports.

Then there are those who believe AI applications, not the underlying technology, should be regulated.

“AI (of all types including generative AI) is a tool. We have quite a few issues with the current AI tech (such as accuracy) that we will just end up hindering improvements in these much-needed areas through these regulations,” Egnyte Chief Technology Officer Amrit Jassal said in an email message.

“Additionally, California (or the U.S. or the EU or …) is not the sole innovator here and there is no conceivable way to enforce any regulations across all such countries and entities,” Jassal wrote. “Instead, we should focus on regulating specific applications such as deepfakes for election interference, ill-advised use of this tech in places where accuracy is required, etc.”

Paul Lekas, who leads public policy and government affairs at the Software and Information Industry Association, opposes the bill, but believes there is room to oversee and regulate AI.

“It is the right idea, but premising this bill on liability is problematic,” Lekas said in an interview. “A bill should not be done at state level. We do need guardrails on a national level.”
