
Last year, Democratic California Gov. Gavin Newsom vetoed a landmark artificial intelligence (AI) safety bill. This year, it’s déjà vu all over again: Newsom faces a similar bill that could test his alliance with the state’s influential tech industry as he mulls a run for the White House.
The latest stab at AI regulation in California, SB 53, which passed the state Legislature on Saturday, requires AI companies to share their safety testing and certify their compliance, potentially leading other states to adopt similar bills and, possibly, prompting calls for a national standard.
Stuck in the middle of it all is Newsom, a progressive whose political ambitions are tied to deep-pocketed Silicon Valley, where he grew up, went to college (Santa Clara University), and was mayor of San Francisco. When faced last year with a similar, more expansive measure from State Sen. Scott Wiener, D-San Francisco, Newsom said no.
“We dominate this space, and I don’t want to lose that competitiveness,” Newsom said then. “The impact of signing the wrong bills over the course of a few years could have a profound impact on our competitive strength.”
On the other side is Wiener, who has political aspirations of his own: he plans to run for former House Speaker Nancy Pelosi’s congressional seat in San Francisco. Along the way, he has clashed with Meta Platforms Inc., venture-capital firms such as Andreessen Horowitz (a16z), and Pelosi herself, who resisted last year’s bill. OpenAI, for its part, has suggested compliance with voluntary federal testing or the EU’s AI Code of Practice as a path to safety certification in California.
Wiener has called his latest pared-down version of the bill the “strongest AI safety regulation in the country.”
Amid those dueling interests, Wiener rewrote his bill based on a report Newsom commissioned from AI safety experts following his veto a year ago. The new version “requires large AI labs to be transparent about their safety protocols, creates whistleblower protections for (employees) at AI labs & creates a public cloud to expand compute access (CalCompute).”
The amended SB 53 allows companies with less than $500 million in annual revenue that develop “frontier” AI models to disclose only high-level safety details; higher-revenue companies are required to provide more.
Anthropic supports the bill. “It’s saying you have to tell us what your plans are. You have to tell us the tests you run. You have to tell the truth about them,” Anthropic CEO Dario Amodei told POLITICO.
Co-founders of a16z have pointed to AI regulation in California and elsewhere as a key reason for their support of President Donald Trump’s second term. The Trump administration, in turn, has called for a 10-year ban on state AI rules and regulations.