
With the veto of a first-of-its-kind AI safety bill in California that could have set a template for U.S. policy, those worried about the fast-evolving technology’s risks must await an executive memo headed soon to President Joe Biden’s desk. Or they can self-govern.

Neither is enough, as generative AI grows by leaps and bounds, posing myriad risks. Recently, OpenAI released an advanced new model that experts cautioned could help in the construction of a bioweapon more effectively than internet research could. That, among other developments, led a group of leading AI experts to warn that as the technology rapidly progresses, “we face growing risks that A.I. could be misused to attack critical infrastructure, develop dangerous weapons or cause other forms of catastrophic harm.”

Such was the reasoning behind Senate Bill 1047 in California: to keep large language models from running amok and inflicting damage. But Democratic California Gov. Gavin Newsom, facing intense pressure from Silicon Valley and the party’s strong congressional delegation, vetoed the bill on the grounds it would stymie innovation.

“This veto is big, and Newsom did what (Democratic Colorado Gov. Jared) Polis apparently couldn’t — send a problematic bill back to the legislature so that they can try again with science and data instead of fear and anxiety,” Mark Brennan, who leads the global technology and telecommunications industry sector group at law firm Hogan Lovells, said in an email. “I expect AI legislative efforts to continue in light of Newsom’s veto — both in California and in other states. It remains to be seen whether we’ll see any harmonization from state to state.”

Proponents of the bill consider it a missed opportunity to rein in a technology with far-reaching implications, with no federal law or reputable national safety board in sight. For now, it is up to the Federal Trade Commission and the Justice Department to hold AI in check, they claim.

“Just as oil and gas giants, along with big banks, have historically pushed back against regulations, Big Tech is following the same playbook. You have to take their concerns with a grain of salt,” Tabnine President Peter Guagenti said in an email. “The outcry surrounding SB 1047 and its alleged stifling of innovation was vastly overblown. In reality, only the largest tech producers feared being exposed and would have had to adjust their practices to meet ‘basic safety protections’ — a positive and necessary step. Common sense regulations can build trust in AI, ultimately accelerating adoption and growth of AI businesses.”

“Regulators need to hold Big Tech accountable, demand genuine transparency about data usage, and refuse to accept opaque systems and rigid, unchangeable solutions,” Guagenti said. “This may affect their cost of doing business, but will build trust in AI more broadly and ultimately help us build a more vibrant, more profitable ecosystem.”

The barren AI regulation landscape leaves enterprises with little choice but to “seize this opportunity to proactively address AI risks and protect their AI initiatives now,” Kjell Carlsson, head of AI strategy at Domino Data Lab, said in an email.


“Rather than wait for regulation to dictate safety measures, organizations should enact robust AI governance practices across the entire AI lifecycle: Establishing controls over access to data, infrastructure and models; rigorous model testing and validation, and ensuring output auditability and reproducibility,” Carlsson said. “By embedding these governance practices into AI workflows from the start, companies can protect against misuse, mitigate risks, and show accountability, putting them ahead of potential future regulations (and competitors).”
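The output auditability and reproducibility Carlsson describes can be as simple as recording, for every generation, which model version, input and seed produced which output. The sketch below is a minimal, vendor-neutral illustration of that idea; the `call_model` callable and the audit-record fields are hypothetical placeholders, not any particular product’s API.

```python
# Minimal sketch of output auditability/reproducibility: every model call is
# appended to a tamper-evident log so any output can be traced back to the
# exact model version, prompt and seed that produced it.
import hashlib
import json
import time


def audit_record(model_id: str, model_version: str, prompt: str,
                 output: str, seed: int) -> dict:
    """Build one audit entry with hashes of the prompt and output."""
    return {
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,
        "seed": seed,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
    }


def logged_generate(call_model, model_id: str, model_version: str,
                    prompt: str, seed: int, log_path: str = "audit.jsonl") -> str:
    """Wrap any model call (passed in as call_model) with audit logging."""
    output = call_model(prompt=prompt, seed=seed)  # hypothetical model call
    with open(log_path, "a") as f:
        f.write(json.dumps(audit_record(model_id, model_version,
                                        prompt, output, seed)) + "\n")
    return output
```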

Added Rick Caccia, chief executive of WitnessAI: “Implementing safe AI enablement policy across models, across countries, etc., can be done at the level of user activity, not only at the level of model development. Think of it like driving safety: We teach people how to make safe turns, stops, etc., regardless of the model car they will eventually use. We can do the same for AI usage.”
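Caccia’s point is that usage policy can sit in front of whichever model an employee happens to use, rather than being baked into model development. The toy example below illustrates that model-agnostic gate; the blocked-topic list and `enforce` helper are hypothetical illustrations, not WitnessAI’s product.

```python
# Toy model-agnostic usage-policy gate: the same check runs on user activity
# before a request is forwarded to any model, regardless of which one.
BLOCKED_TOPICS = ["bioweapon synthesis", "critical infrastructure attack"]


def enforce(prompt: str) -> str:
    """Apply one usage policy no matter which model handles the request."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            raise PermissionError(f"Request blocked by usage policy: {topic}")
    return prompt  # safe to forward to the user's model of choice
```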

Private business owners remain split on AI regulation. While 46% polled by financial services firm Brown Brothers Harriman think the U.S. should enact more regulations, 51% believe regulation should remain as is. Their top three AI uses in the next 12 months are data and predictive analytics (49%), marketing and content creation (47%), and automating processes (46%).

Lawmakers’ and regulators’ approach of managing AI risks through voluntary commitments places enormous trust in the companies developing potentially dangerous technology. But can the U.S. truly trust the industry, and companies like OpenAI specifically?

Sam Altman’s company has plunged headlong into developing LLMs as quickly as possible, leading to a steady conga line of executive departures over safety concerns in the past several months. The latest, and most high-profile, defection was Chief Technology Officer Mira Murati’s exit in September. [Last year, Altman was briefly fired after the nonprofit board lost trust in him, before being reinstated.]

Todor Markov, an AI safety researcher who recently left OpenAI after nearly six years with the firm, recently offered a chilling scenario to the New York Times: “An A.I. company promises to test its models for dangerous capabilities, then cherry-picks results to make the model look safe. A concerned employee wants to notify someone, but doesn’t know who — and can’t point to a specific law being broken. The new model is released, and a terrorist uses it to construct a novel bioweapon.”

Michael Graf, a partner at Brown Rudnick’s Intellectual Property group, had the final word: “Legislation takes time, and whether the veto is negative or positive, it is by no means the end of the road for updated laws, whether created by states, the federal government or judges. While California and other states have legislated on more minor AI risks such as deepfakes and impersonating celebrities, it remains to be seen whether, when and how — like in the E.U. — legislators will seek to hold accountable parties responsible for the potentially greater harms of massive AI systems.”
