
A classified memo outlining AI’s risks to national security, along with measures the federal government must adopt to “preserve and expand U.S. advantages” as the technology reshapes science, business and warfare, is headed to President Joe Biden.

The memorandum’s contours include prohibitions on certain government uses of AI, such as operating nuclear weapons and tracking free speech. The memo, first reported by Nextgov/FCW, is designed to help “develop a coordinated executive-branch approach to managing AI’s security risks” and is a byproduct of the AI executive order Biden signed nearly a year ago.

The plan’s strategy centers on defense and intelligence agencies, led by the recently created AI Safety Institute within the Commerce Department’s National Institute of Standards and Technology, according to a report in the Washington Post. Government agencies, in turn, would strike public-private partnerships with AI leaders OpenAI, Anthropic, Google DeepMind, Elon Musk’s xAI and Meta AI, the Post said. Ultimately, the White House’s AI policy would align with international frameworks such as the European Union’s AI Act.

Additionally, the memo is expected to touch on the energy demands of AI computing amid a government push for clean energy.

A key motivating force behind the memo, which is expected to land on Biden’s desk by early October, is the looming threat of “artificial general intelligence” (AGI), a hypothetical future form of AI that would achieve superhuman levels of intelligence.


Security experts, while lauding the tone of the memo in addressing the risks and rewards of AI, also consider it a soft framework that would greatly benefit from strong national legislation.

Cliff Jurkiewicz, vice president of global strategy at HR tech company Phenom, argues the memo reads more like guidance than governance, leaving the biggest players in the private tech sector to drive both innovation and oversight while the government stands idly by.

“The Biden administration’s decision to issue a memorandum on AI, rather than enacting formal legislation, signals a significant retreat from meaningful regulation,” Jurkiewicz said in an interview. “Acting in advance of an election is a good thing, but having only the big players [in tech] having a say in policy is a real challenge. There is a deficiency of knowledge among lawmakers, and they are being influenced in part by [Big Tech] lobbyists.”

Little AI legislation exists to protect individuals or entities harassed or harmed by AI-generated output, and only a smattering of state laws address the use of AI in elections, according to Michael Leach, global compliance director at Forcepoint.

Absent federal AI law, the onus rests on states.

To that end, California lawmakers have crafted and passed a safety bill that mandates that AI companies building large language models show “reasonable care to avoid unreasonable risk” of catastrophes. The bill has faced staunch opposition from Silicon Valley and awaits a decision this month from Democratic Gov. Gavin Newsom, who hasn’t said whether he will sign it into law.

But a patchwork of state laws, lacking a unifying national approach, hinders the ability of smaller companies to compete with the rich resources of Big Tech, Jurkiewicz warns.
