The White House is poised to reveal a comprehensive executive order (EO) on artificial intelligence (AI), marking a pivotal moment in the U.S. government’s efforts to regulate this rapidly advancing technology.
First reported by the Washington Post, the EO is likely to introduce several significant measures, among them mandates that advanced AI models undergo assessments before they are employed by federal workers. This step is designed to enhance oversight and ensure the responsible use of AI technology.
Federal government agencies, including the Defense Department, Energy Department and intelligence agencies, will be required to conduct assessments determining how AI can be integrated into their respective operations.
Particular emphasis will be placed on strengthening national cybersecurity.
“First and foremost, an AI executive order should focus on fostering the ethical and responsible development of AI for socially constructive, equitable advancement,” says Eyal Benishti, CEO of Ironscales.
He notes it is equally important to ensure human oversight throughout the development lifecycle, from ideation and model training to post-production updates and retraining, so that the technology remains well regulated and controlled.
“I believe it is critical that any executive order surrounding AI should mandate a formal commitment to the continual monitoring, adaptation and improvement of AI systems in production,” Benishti says.
This includes human-led training approaches such as Reinforcement Learning from Human Feedback (RLHF), in which human reviewers rank model outputs to steer further training.
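For readers unfamiliar with the technique, the following is a minimal, illustrative sketch of the idea behind RLHF’s reward-modeling step: a model is fit so that outputs humans preferred score higher than outputs they rejected. Everything here (the toy feature vectors, the linear reward model) is a hypothetical simplification; production systems use large neural reward models and then optimize the language model against the learned reward signal.

```python
# Hypothetical sketch of RLHF's reward-modeling step, not a production recipe.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model outputs" represented as feature vectors, paired with human
# labels saying which output of each pair the reviewer preferred.
preferred = rng.normal(1.0, 1.0, size=(100, 4))  # human-preferred responses
rejected = rng.normal(0.0, 1.0, size=(100, 4))   # human-rejected responses

w = np.zeros(4)  # weights of a simple linear reward model

# Fit the reward model so preferred outputs score higher than rejected ones
# (a Bradley-Terry-style pairwise objective, trained by gradient ascent).
for _ in range(500):
    margin = preferred @ w - rejected @ w
    grad = ((1 - 1 / (1 + np.exp(-margin)))[:, None]
            * (preferred - rejected)).mean(axis=0)
    w += 0.1 * grad

# The learned reward model can then score new candidate outputs; in full
# RLHF this score drives policy optimization of the model (e.g., via PPO).
candidate = rng.normal(0.5, 1.0, size=4)
print("reward score:", candidate @ w)
```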
The order will also seek to streamline immigration processes for highly skilled workers. By doing so, it aims to bolster the United States’ technological capabilities in an increasingly competitive global landscape.
Kjell Carlsson, head of data science strategy and evangelism at Domino Data Lab, points out that global workers with the most AI experience and expertise face difficulty contributing to U.S. organizations because of challenges getting work visas and permanent residency.
“Ideally, the order will serve to improve the ability of these leading AI innovators to incorporate the talent they need to lead in their markets by making it easier for workers with AI experience and expertise to be a part of their team,” he says.
The forthcoming executive order, first announced in July and reaffirmed by President Biden in September, coincides with international efforts to regulate AI, such as the European Union’s AI Act.
That piece of legislation aims to protect consumers from potentially hazardous AI applications. In contrast, U.S. lawmakers are in the early stages of developing bipartisan legislation to address AI’s multifaceted challenges.
Carlsson notes the Biden administration can learn from the EU, which has dedicated significant time and resources to formulating its proposed AI regulation.
“An effective order will leverage the successful parts of the proposed EU legislation, such as the application of different levels of regulation depending on the riskiness of the AI use case, while avoiding its pitfalls,” he says.
From his perspective, these pitfalls include the ineffective requirements for generative AI and the contradictions surrounding the use of AI to prevent criminal activity, such as anti-money laundering, which appear to conflict with other EU and national-level legislation.
Carlsson explains the most effective part of the proposed EU AI Act is its tiering of AI use cases, with regulatory requirements that scale with each use case’s level of risk.
“Ideally, the U.S. executive order will do something similar, providing clarity on what uses of AI will not be allowed, which use cases will not be subject to regulatory scrutiny and the testing, reporting, and other requirements that will be necessary for those in between depending on their different levels of risk,” he says.
However, he sees little to suggest the U.S. executive order will take this approach, given the voluntary commitments the administration has pursued with tech giants so far, which point to a one-size-fits-all mindset.
“Instead, it is more likely that the executive order will make vague, universal testing and disclosure requirements that will do little to ensure safety and fairness, but will increase cost, time to value and potentially reduce innovation,” Carlsson says.
Recent advancements in AI, including the launch of ChatGPT and similar generative AI tools, have heightened the urgency for regulation.
Policymakers globally are increasingly concerned about AI’s potential impact on employment, surveillance and democratic processes, particularly as significant elections loom.
Kevin Bocek, vice president of ecosystem and community at Venafi, says the executive order should not race to create new regulation that slows innovation and requires official certification, treating today’s AI systems like weapons or pharmaceuticals.
“We need to promote research and innovation to achieve outcomes of standards, security, and safety instead of racing to apply rules and regulations from the last century,” he notes. “As the Federal government is the single largest customer for Silicon Valley, the executive order will have a huge impact on future developments on use of AI in the US, and around the world.”