The Biden Administration is rolling out several new policies governing federal agencies' use of AI technology, touching on a range of issues from managing risk to ensuring strong governance through the hiring of chief AI officers.

Vice President Kamala Harris unveiled the guidelines, the latest the White House has put forth since President Biden in October 2023 signed his executive order for the safe and secure use of the emerging technology, which called for a whole-of-government approach to the issue.

Speaking with reporters, Harris said the new policies are aimed at ensuring the technology is used to advance the public interest. The guidelines will be managed by the White House’s Office of Management and Budget (OMB).

“All leaders from government, civil society, and the private sector have a moral, ethical, and societal duty to make sure that artificial intelligence is adopted and advanced in a way that protects the public from potential harm while ensuring everyone is able to enjoy its full benefit,” she said.

Safeguards, Transparency a Focus

The guidelines are meant to ensure that federal agencies identify and manage the risks that come with AI so that citizens’ rights, safety, and privacy are protected. Federal agencies have until December 1 to implement “concrete” safeguards, including mandatory actions to assess, test, and monitor the technology’s impact on the public, mitigate the risks of discrimination and bias that can come from AI algorithms, and ensure transparency into how the government is using AI.

The vice president cited examples such as letting airport travelers opt out of TSA facial recognition screening without being penalized by delays or losing their place in line, having humans verify health care test results that come from AI systems, and ensuring human oversight so that people aren’t harmed when AI is used to detect fraud in government services.

Agencies that can’t implement the safeguards must stop using the affected AI systems, unless doing so would increase risks to safety or rights or impede critical operations.

Regarding transparency, agencies must release inventories of their AI use cases every year, report metrics about use cases too sensitive to be released to the public, and publish government-owned AI code, models, and data. OMB issued guidance about this public reporting.

The White House also wants to remove barriers to AI innovation in such critical areas as climate change, public health, and public safety, and to grow the federal AI workforce through training and incentives. Harris noted that the administration is adding $5 million to expand the government’s AI training program.

Chief AI Officers are Required

In addition, agencies will need to appoint chief AI officers, who will coordinate their use of AI, and create AI governance boards. So far, the departments of Defense, Veterans Affairs, Housing and Urban Development, and State have created governance boards. Other agencies have until May 27 to do so.

“This is to make sure that AI is used responsibly, understanding that we must have senior leaders across our government who are specifically tasked with overseeing AI adoption and use,” Harris said.

The latest policies come on the heels of European lawmakers earlier this year passing the sweeping EU AI Act after almost three years of work, putting the political and economic bloc at the forefront of the global push to regulate the use of the technology.

Good Steps, but More Needed

The Biden Administration’s new policies received applause from the industry, though the praise was somewhat muted by concerns about the effort’s limited scope.

Gal Ringel, co-founder and CEO of data privacy management firm Mine, noted that the rules, which he expects to be “somewhat successful” in safeguarding the use of AI, apply only to the public sector.

“The American private sector, from where much of the technological innovation of the past few decades has come, is still operating with mostly free rein when it comes to AI,” Ringel said. “Regarding the rules for government itself, internal assessments and oversight could provide a loophole for lax AI governance.”

It would be better to have independent third parties run the assessments, he said.

The White House has stressed the need to work with private AI companies to secure the development of the technology. In December, for example, companies including Microsoft, Google, Amazon, OpenAI, and Anthropic announced a new public-private consortium to address the risks, security, and privacy concerns that come with AI. The Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) are also part of the group.

Some stressed the need for testing throughout the AI development process. Katie Bowen, vice president of public sector at security testing company Synack, said it’s good to see the White House moving quickly on the issue, adding that the “stakes are too high to move slowly.” She supported the emphasis on testing AI and on having agencies review their cybersecurity practices.

“Any serious conversation around ‘responsible’ AI innovation should include security test planning from the beginning,” Bowen said. “There is enormous benefit to the deployment of AI in government, but also enormous risk in everything from bias to services architecture. This makes it essential to continuously test these systems and assess where there may be opportunities for bad actors to exploit them.”

Testing is Important

Her colleague Brandon Torio, an AI specialist at Synack, said more details about the policies are needed, including a list of vulnerabilities to test for (he pointed to OWASP’s top 10 security issues for AI and LLMs), timelines for penetration testing LLMs, and AI vulnerability disclosure programs. Torio said he expects more clarity later this year from the National Institute of Standards and Technology (NIST).

“Unfortunately, I think it’s a matter of when, not if, a breach occurs through an AI attack vector,” he said. “The good news is that we can minimize those incidents with commonsense policies like this one.”

Narayana Pappu, CEO of Zendata, a data security and privacy compliance vendor, noted that the White House’s new rules are similar to privacy regulations like the EU’s GDPR.

The “AI bias and transparency problem is a data governance problem,” Pappu said. “If [you] feed AI biased data you have biased results and if you don’t have governance in place for mission-critical systems – things like shadow IT – again you have biased results and lack of transparency. Two main things the laws are trying to address.”

He stressed the need to back-test historical data, have experts run manual evaluations of the results, and adjust or list exclusions for cases where AI shouldn’t be used.
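Pappu’s point about back-testing lends itself to a simple illustration. The sketch below is a hypothetical example rather than anything prescribed by the OMB guidance: it scores a system’s past decisions recorded in historical data, compares favorable-outcome rates across groups, and flags the system for manual expert review when the gap is large. The column names, threshold, and pandas-based approach are illustrative assumptions.

```python
# Minimal, hypothetical back-test of historical decisions for disparate outcomes.
# Column names ("group", "approved") and the max_ratio threshold are assumptions
# made for illustration only; they are not drawn from the OMB policy.
import pandas as pd

def disparity_report(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Return the favorable-outcome rate per group in historical decisions."""
    return df.groupby(group_col)[outcome_col].mean()

def needs_manual_review(rates: pd.Series, max_ratio: float = 1.25) -> bool:
    """Flag the system for expert review if the best-served group's rate
    exceeds the worst-served group's rate by more than max_ratio."""
    return (rates.max() / rates.min()) > max_ratio

# Example usage with made-up historical records:
history = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   1,   0,   1],
})
rates = disparity_report(history, "group", "approved")
print(rates)
print("Needs manual review:", needs_manual_review(rates))
```

In practice, the review step would be the manual expert evaluation Pappu describes, and systems that repeatedly fail it would be candidates for the exclusion lists he mentions.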
