Nine months after Democratic California Gov. Gavin Newsom vetoed aggressive legislation that would have policed Big Tech’s use of artificial intelligence (AI), a task force he assembled to look further into the issue has come back with a troubling assessment: “Without proper safeguards… powerful AI could induce severe and, in some cases, potentially irreversible harms,” it concluded in a report issued Tuesday.

Authors of the 52-page “California Report on Frontier AI Policy” said the ability of AI models to reason has “rapidly improved” since Newsom’s decision in September to torpedo SB 1047, which would have forced makers of large models, particularly those that cost more than $100 million to train, to test them for specific dangers. Newsom dismissed the bill last year as too stringent and restrictive.


But members of the task force, led by AI researchers from Stanford University, the University of California at Berkeley, and the Carnegie Endowment for International Peace, warn that continued AI breakthroughs in the Golden State without proper safeguards could adversely affect agriculture, biotechnology, clean tech, education, finance, medicine, and transportation.

The group, which Newsom assigned to come up with a plan to support the development and governance of generative AI in California with proper guardrails, proposed a new framework that would require more transparency and independent scrutiny of AI models. Its recommendations were based on case studies, empirical research, modeling, and simulations.

“The report does not argue for or against any particular piece of legislation or regulation,” the authors wrote. “Instead, it examines the best available research on foundation models and outlines policy principles grounded in this research that state officials could consider in crafting new laws and regulations that govern the development and deployment of frontier AI in California.”

Since the group circulated a draft version of the report for public comment in March, the authors said, evidence has mounted that reasoning models can contribute to “chemical, biological, radiological, and nuclear (CBRN) weapons risks.” At the same time, leading AI firms have self-reported spikes in their models’ capabilities in those areas.

As home to the corporate headquarters of OpenAI, Alphabet Inc.’s Google, Meta Platforms Inc., Anthropic, and others, California is attempting to lay the groundwork to hold Big Tech accountable in the absence of federal legislation, as it has done before with progressive laws on digital privacy and personal information.

Indeed, California is moving in the opposite direction of the Trump Administration, which recently proposed a 10-year moratorium on state regulation of AI as part of the One Big Beautiful Bill. OpenAI openly supports the ban.

“No State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act,” says a telling passage in the 1,116-page bill.
