
The Biden administration on Thursday offered its long-awaited plan to adopt artificial intelligence (AI) throughout the federal government for national security purposes while preserving privacy and civil rights.
The directive is part of the first-ever national security memorandum (NSM) on AI, which seeks to take advantage of the most advanced AI systems to maintain an edge over foreign adversaries such as China and Russia. A senior administration official warned that a “failure” by the U.S. to fully tap into AI could put the nation at risk of a “strategic surprise.”
“There are very clear national security applications of artificial intelligence, including in areas like cybersecurity and counterintelligence, not to mention the broad array of logistics and other activities that support military operations,” a senior administration official said on a call previewing the memo.
In the memo, the White House called on U.S. agencies “to improve the security and diversity of chip supply chains … with AI in mind,” and it prioritized the collection of information on other countries’ operations against the U.S. AI sector.
But it was quick to caution that any such efforts must also protect human rights and democratic values.
Thursday’s memo comes a year after President Joe Biden issued a sweeping executive order on AI aimed at limiting the risks the emerging technology poses to consumers, workers, minority groups and national security. It is the latest AI policy articulated by the administration as Congress and regulatory agencies grapple with how to regulate the technology.
The directive also calls for a framework for Washington to work with allies to ensure AI “is developed and used in ways that adhere to international law while protecting human rights and fundamental freedoms.”
An NSM arrangement is a prime vehicle for procuring AI technology, said Joel Meyer, president of public sector at Domino Data Lab. He pointed to the U.S. Navy’s Project AMMO, which supports underwater drone operations.
At the same time, the federal government’s broad initiative is sure to draw a strong reaction from Silicon Valley in particular. Officials at leading AI companies such as OpenAI, Anthropic and Alphabet Inc.’s Google are struggling to balance the scale of their ambitions with security and privacy concerns. As they navigate that tricky path, more states, led by California, are rolling out laws designed to guard against AI’s wide-ranging impacts.
The Securities and Exchange Commission, meanwhile, is set to increase oversight of AI use by financial firms to ensure compliance with regulations.
More than a dozen civil society groups in July sent an open letter to the Biden administration calling for robust safeguards to be built into the NSM.
“Despite pledges of transparency, little is known about the AI being deployed by the country’s largest intelligence, homeland security, and law enforcement entities like the Department of Homeland Security, Federal Bureau of Investigation, National Security Agency, and Central Intelligence Agency,” the letter said.
“Its deployment in national security contexts also risks perpetuating racial, ethnic or religious prejudice, and entrenching violations of privacy, civil rights and civil liberties,” the letter warned.
Added Dean Ball, research fellow with the Mercatus Center: “Having the world’s best AI models is of little use if we do not also have the world’s best AI applications and use cases. It would be all too easy for the administration’s envisioned policy ‘guardrails’ to get in the way of these innovative use cases — something policymakers everywhere should keep in mind.”
But Melissa Ruzzi, AI director at security company AppOmni, called the actions listed in the memo “great starting points to get a good picture of the status quo and obtain enough information to make decisions based on data, instead of jumping to conclusions to make decisions based on vague assumptions.”