
What is SB 1047, and why is Silicon Valley so scared of California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act?

The bill, crafted to prevent large AI models from being used to cause “critical harms” against humanity, applies to AI models that cost at least $100 million to train and use 10^26 floating-point operations of computing power during training. But a broad swath of the valley, in its own words, considers it “draconian,” “flawed” and “a mess.”
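For a concrete sense of how those two thresholds combine, consider the minimal sketch below. The threshold figures come from the article’s description of the bill; the function and variable names are hypothetical simplifications for illustration, not anything defined in the bill’s text.

    # Minimal sketch of SB 1047's coverage test as described above.
    # The two thresholds are from the bill; names and structure here
    # are assumptions for illustration only.

    COST_THRESHOLD_USD = 100_000_000   # at least $100 million in training cost
    COMPUTE_THRESHOLD_FLOP = 1e26      # at least 10^26 floating-point operations

    def is_covered_model(training_cost_usd: float, training_flop: float) -> bool:
        """Return True if a model would meet both thresholds."""
        return (training_cost_usd >= COST_THRESHOLD_USD
                and training_flop >= COMPUTE_THRESHOLD_FLOP)

    # A hypothetical frontier-scale run crosses both thresholds...
    print(is_covered_model(training_cost_usd=1.5e8, training_flop=3e26))  # True
    # ...while a typical startup-scale run does not.
    print(is_covered_model(training_cost_usd=2e7, training_flop=5e24))    # False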

In a rare display of unanimity, executives from the largest companies, such as Meta Platforms Inc., to the smallest startups, such as Cranium, contend it is nebulous and stifles innovation.

“It is a draconian legislation that won’t see the light of day, and rightfully so,” Kevin Cochrane, chief marketing officer at Vultr, said in an email message, summarizing the valley’s view. “This act could use a few chapters from the EU AI Act — namely talking to industry experts to reflect the actual reality of building and governing AI models. We need sensible regulations that don’t penalize developers to push the boundaries of what is possible. Model sizes don’t matter. What matters is the impact of any model.”

The legislation, which is wending its way through the state legislature with amendments, mandates a safety protocol to prevent AI misuse, including an “emergency stop” button to shut down an AI model. Developers are also responsible for testing procedures that address risks and for undergoing annual third-party audits. A new state agency, the Frontier Model Division, would oversee the rules.

Those who fail to comply face civil actions brought by California Attorney General Rob Bonta, with penalties of up to $10 million for a first violation and $30 million for each subsequent violation.

“We have a history with technology of waiting for harms to happen, and then wringing our hands,” state Sen. Scott Wiener, the bill’s author, said in a recent interview with TechCrunch. “Let’s not wait for something bad to happen. Let’s just get out ahead of it.”

Meta, OpenAI, Alphabet Inc.’s Google, and Anthropic face the most scrutiny under the proposed law, which could set a precedent for other states and national governments. But SB 1047 arrives at a precarious time for the San Francisco Bay Area, home to much of the AI startup community as well as many of the industry’s biggest companies. The bill could push AI development into other states just as the region is rebounding from a pandemic-induced slump, warns Ritu Jyoti, group vice president of IDC’s worldwide artificial intelligence, automation, data and analytics research practice.

The bill also puts an undue burden on model developers, including startups, by tasking them with addressing all possible harmful uses of their models, choking innovation in the process, claimed Daniel Christman, director of AI programs at Cranium.

“This bill would do tremendous harm to the ecosystem,” said Christopher Nguyen, co-founder and CEO of Aitomatic, who is also a member of the steering committee of the AI Alliance, whose members include Meta and IBM Corp.

Others point out that AI development is simply moving too fast, and that legislation must keep pace with AI to accurately address the latest twists and turns of the technology.

“In the time it took for SB 1047 to get written and reach the Senate Assembly, generative AI and several other waves of AI innovation have come and gone, none of which are even contained in the bill. More innovations are on the way,” cautioned Cliff Jurkiewicz, vice president of global strategy at Phenom, a global HR tech company.

“Though SB 1047’s intention — to prevent ‘critical harms’ — is admirable, instituting this guidance too quickly or without thorough research and testing could lead to unintended consequences. And what those are might not yet be apparent,” Ellie Fields, chief product and engineering officer at Salesloft, said in an email message.

In a June 25 letter to Wiener, a Meta executive said the company remains concerned the bill “fundamentally misunderstands how advanced AI systems are built and therefore would deter AI innovation in California at a time when we should be promoting it.”

“The bill focuses solely on large foundation model developers, rather than addressing actions of the deployers who translate those models into the systems with which people actually interact (and therefore have the most control over how AI systems can be used), and the users themselves,” Rob Sherman, deputy chief privacy officer, policy, at Meta, wrote.

In another letter to Wiener, sent in early August by venture capital firm a16z, the firm’s chief legal officer, Jaikumar Ramaswamy, said the bill “will burden startups because of its arbitrary and shifting thresholds.”

The pushback goes on, as Silicon Valley attempts to develop and adopt AI as quickly as possible while appeasing regulators. Indeed, the Department of Justice and the Federal Trade Commission are looking into the business practices and influence of major AI players OpenAI, Microsoft and Nvidia Corp.

The bill “is fundamentally flawed as it targets AI technologies rather than their applications, posing a significant threat to the competitiveness of U.S. AI companies, particularly smaller ones and open-source projects,” Together AI CEO Vipul Ved Prakash said in an email message. “We believe this bill will stifle innovation and unfairly burden startups. Open-source AI, crucial for responsible, sustainable and safe AI advancements, would suffer greatly.”

Adds Todd Mostak, co-founder and chief technology officer of Heavy.ai: “The industry would benefit from targeted regulation of concrete potential harm caused by AI models. However, bureaucratic certification processes for open models and open-ended liability provisions will not prevent potential harm, instead driving innovation out of state and likely offshore.”
