
Tech firms like OpenAI and Google that create and train the large language models (LLMs) underpinning such generative AI tools as ChatGPT and Bard will soon have to notify the federal government about their work and related safety test results.

The requirement is the first significant deadline coming out of President Biden’s broad Executive Order for AI, which the White House unveiled in late October 2023. Administration officials over the past several days have been giving AI companies like OpenAI, Google, Microsoft and Amazon a heads-up that the reporting regulation is on its way.

Biden invoked the Defense Production Act when requiring such companies to report vital information about their AI work, saying that the rapid adoption and use of generative AI – which kicked off in November 2022 when OpenAI released its ChatGPT chatbot – poses a national security threat.

The Executive Order represents the federal government’s most expansive initiative to establish guardrails for AI, touching on not only national security but also privacy, civil rights, consumer and worker protections, innovation and competition, and the United States’ leadership in the rapidly emerging market.

Safety First

Ben Buchanan, special adviser on AI for the White House, told the Associated Press that government agencies want “to know AI systems are safe before they’re released to the public — the president has been very clear that companies need to meet that bar.”


Meanwhile, Commerce Secretary Gina Raimondo discussed the upcoming transparency requirement late last week at an event hosted by Stanford University’s Hoover Institution, according to Wired.

“We’re using the Defense Production Act, which is authority that we have because of the president, to do a survey requiring companies to share with us every time they train a new large language model, and share with us the results – the safety data – so we can review it,” Raimondo told attendees.

She didn’t say when the new rules would take effect, though the White House AI Council is expected to meet today to review what has been done so far to implement the Executive Order. The order gave the Commerce Department until January 28 to develop a process for AI companies to report on their work developing new models.

A clearer start date for the new transparency rule could be set during the meeting. The rules focus on such LLMs as OpenAI’s GPT-4 and Google’s Gemini, though only those that exceed a computing-power threshold set by the EO. GPT-4 reportedly falls below that point.

NIST Working on a Framework

According to the AP, a lineup of categories for the safety tests has been developed, but the AI companies don’t yet have to comply with common standards, which is something the National Institute of Standards and Technology (NIST) is working on.

At the same time that the government is preparing for the new transparency requirements, OpenAI bolstered the capabilities of GPT-4 while Google boasted of Gemini’s improved position in the generative AI horse race.

According to Wired, Raimondo also said that the Commerce Department will soon require cloud services providers like Amazon Web Services, Microsoft Azure and Google Cloud to tell the government when a foreign company uses their resources to train an LLM that needs more than 100 septillion flops of compute power, the same threshold that triggers reporting requirements for U.S. companies.
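
For a sense of scale, here is a minimal Python sketch of how a training run might be checked against that 10^26-operation (100 septillion flops) reporting trigger. It relies on the common 6 × parameters × tokens approximation for transformer training compute; the model sizes and token counts are purely illustrative assumptions, not the EO’s official measurement method or any disclosed figures.

```python
# Rough sketch: check whether a hypothetical training run crosses the
# Executive Order's reporting trigger of 10^26 operations (100 septillion flops).
# Compute is estimated with the common 6 * parameters * tokens heuristic for
# transformer training; all model sizes and token counts below are assumptions.

EO_THRESHOLD_FLOPS = 1e26  # threshold cited in the article


def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute via the 6 * N * D rule of thumb."""
    return 6.0 * parameters * tokens


# Hypothetical runs (parameter and token counts are illustrative, not real figures).
runs = {
    "mid-size model": (7e10, 2e12),            # 70B parameters, 2T tokens
    "frontier-scale model": (1.5e12, 1.5e13),  # 1.5T parameters, 15T tokens
}

for name, (params, tokens) in runs.items():
    flops = training_flops(params, tokens)
    status = "must report" if flops >= EO_THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: ~{flops:.2e} flops -> {status}")
```

Under these assumed figures, only the frontier-scale run clears the threshold, which is consistent with the article’s note that GPT-4 reportedly falls below the reporting line.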

There has been a whole-of-government approach to addressing the evolving AI market, with such federal departments as Transportation, Health and Human Services, and Defense developing risk assessments of the technology. The Cybersecurity and Infrastructure Security Agency (CISA) also has been vocal about such issues as ensuring developers are using security-by-design principles when creating AI applications.

The government also has been working with the private sector, illustrated by an agreement reached with top AI companies – including Microsoft, Google, Amazon, OpenAI, and Anthropic – industry experts, and researchers last month to address security, safety and risk concerns that arise with the technology.

There’s Opposition

Unsurprisingly, Biden’s Executive Order and its use of the Defense Production Act – which gives the U.S. government greater powers over private companies – have led to pushback from tech industry lobbyists and Republican opponents. According to Politico, they are focusing on the President’s use of emergency powers, with some arguing that there is no national emergency and that the Defense Production Act – which dates to the Korean War – was not intended for such uses.

“The Defense Production Act is about production — it has it in the title — and not restriction,” Adam Thierer, a senior fellow at the free-market R Street Institute, told Politico. “I’m not sympathetic to utilizing that sort of language, to basically start regulating artificial intelligence systems and computation in a pretty expansive way.”

Senate Republicans also are working to rein in AI regulations, the news site reported.

However, Buchanan – the White House special AI adviser – told the AP, “We know that AI has transformative effects and potential. We’re not trying to upend the apple cart there, but we are trying to make sure the regulators are prepared to manage this technology.”

Gal Ringel, co-founder and CEO at data privacy management firm Mine, said Biden’s Executive Order was important given that Congressional action on AI legislation is not on the horizon. He added that it is also critical for the government to develop a working relationship with the tech industry.

“The government and Big Tech never coaligned on data privacy issues until it was too late and the government’s hand was forced by broad public support, so there cannot be a repeat of that failure or the consequences could be immeasurably more damaging when it comes to AI,” Ringel said.

That said, Omri Weinberg, co-founder and chief revenue officer at SaaS security company DoControl, questioned how effective requirements for companies to self-report their work will be. Mandatory reporting from cloud providers could be more reliable, but that assumes the hyperscalers can detect the training of powerful or potentially dangerous AI models, which may be difficult at a time of confidential computing and federated learning.

“If the goal is to ‘do something’ to address the perceived threat of AI, then this action is doing something,” Weinberg said. “However, it is far from clear if this regulation will have any positive impact on mitigating risk or deter future hostile cyberattacks.”
