
Democratic California Gov. Gavin Newsom vetoed a controversial AI safety bill on Sunday that would have required developers to submit their large models to the state attorney general, who would then decide if they caused harm or posed an imminent threat to public safety.

“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data,” Newsom said in a statement explaining his decision. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

Senate Bill 1047 sparked fierce debate in Silicon Valley. OpenAI, Facebook parent Meta Platforms Inc. and California’s Democratic congressional delegation, led by former Speaker Nancy Pelosi, headed the opposition on the grounds that the bill would stifle innovation. The Chamber of Progress voiced its displeasure with an AI-generated rock song featuring the lyric, “veto the chains that hold us down.”

Supporters, conversely, insisted the bill was essential to establish guardrails to prevent the fast-evolving technology from inflicting serious damage on the economy, infrastructure and national defense. Proponents included Tesla Inc. Chief Executive Elon Musk, the Center for AI Safety, and prominent AI researchers such as Geoffrey Hinton.

“This veto is a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and welfare of the public and the future of the planet,” the bill’s author, Sen. Scott Wiener, D-San Francisco, said in a statement posted on social media.


The bill, which would have applied to models that cost at least $100 million to train and use more than 10^26 floating-point operations (FLOPS) of computing power, was amended based on suggestions from Anthropic and others.
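
For a sense of what those thresholds mean in practice, here is a minimal Python sketch. The 10^26-operation and $100 million figures come from the bill; the 6 × parameters × training-tokens compute estimate is a common rule of thumb rather than anything in the bill’s text, and the example model figures are purely hypothetical.

# Minimal sketch: check a hypothetical model against SB 1047's coverage thresholds.
# The 1e26-operation and $100 million figures come from the bill; the
# 6 * parameters * tokens compute estimate is a common rule of thumb, not bill text.

FLOP_THRESHOLD = 1e26          # training-compute threshold cited in the bill
COST_THRESHOLD_USD = 100e6     # $100 million training-cost threshold

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer."""
    return 6 * parameters * training_tokens

def would_have_been_covered(parameters: float, training_tokens: float, cost_usd: float) -> bool:
    """True only if both the compute and cost thresholds are met."""
    return (
        estimated_training_flops(parameters, training_tokens) >= FLOP_THRESHOLD
        and cost_usd >= COST_THRESHOLD_USD
    )

# Hypothetical examples (all figures are illustrative, not real models):
print(would_have_been_covered(1e12, 15e12, 150e6))  # 9e25 FLOPs -> False
print(would_have_been_covered(2e12, 20e12, 300e6))  # 2.4e26 FLOPs -> True

Because the two conditions are conjunctive, only the largest and costliest frontier training runs would have been covered, a point Guagenti echoes below.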

Despite the veto, Newsom’s office said he signed more than a dozen AI regulatory bills in the past month and consulted experts such as Fei-Fei Li and Jennifer Tour Chayes to “help California develop workable guardrails for deploying GenAI.”

Proponents of SB 1047 had hoped California would set a gold standard for AI regulation that other states would adopt in the absence of federal law. President Joe Biden is expected to receive an executive memo any day outlining what the federal government can do to mitigate the national security risks posed by AI. The memo stems from Biden’s AI executive order a year ago.

“Most products created by an industry (food, cars, child safety seats) are held to some level of standard or safety, either through regulations or certifications,” Cliff Jurkiewicz, vice president of global strategy at Phenom, said in an email. “They exist to provide the public with some measure of confidence and trust. AI has none. With no legislation in the United States that ensures a standard of safety that developers and frontier AI models are following, the risks of AI grow.”

Added Peter Guagenti, president of Tabnine: “The outcry surrounding SB 1047 and its alleged stifling of innovation was vastly overblown. In reality, only the largest tech producers feared being exposed and would have had to adjust their practices to meet ‘basic safety protections’ — a positive and necessary step. Common sense regulations can build trust in AI, ultimately accelerating adoption and growth of AI businesses.”
