President Biden this morning issued an executive order on AI, a move designed to start establishing guidelines that will promote innovation and the United States’ continued leadership in the fast-moving industry while protecting consumers and workers as well as security and privacy.

The executive order follows weeks of meetings between Biden and various advisors and agencies and concerns by the president about the numerous perils of AI, including disinformation like deepfakes, according to reports.

It also comes a few months after more than a dozen major players in the burgeoning AI space – including Microsoft, Google, Amazon and Meta – and high-profile startups like OpenAI (creator of ChatGPT) and Anthropic voluntarily agreed with the White House to ensure safe and secure development of AI technology and as Congress begins to figure out its role in legislating AI.

In addition, the Cybersecurity and Infrastructure Security Agency (CISA) in August reminded organizations that security needs to be built into AI software just as it is in other software.

The executive order encompasses a wide range of areas, with the first several aimed at safeguarding society against the dangers of AI, from ensuring security and privacy to protecting civil rights, consumers, and workers.

Security and Safety are Key

“Safety and privacy must continue to be a top concern for any tech company, regardless of whether it is AI focused or not,” Bugcrowd CEO Dave Gerry said in an email. “When it comes to AI, ensuring that the model has the necessary safeguards, feedback loop and most importantly, mechanism for highlighting safety concerns is critical.”

As organizations adopt AI for such benefits as efficiency, productivity, and the democratization of data, “it’s important to ensure that as concerns are identified, there is a reporting mechanism to surface those in the same way a security vulnerability would be identified and reported,” Gerry said.

Marcus Fowler, CEO of Darktrace Federal, said he was encouraged that the Biden Administration is taking action to address the safety and other challenges presented by AI.

“These models can be used for a wide variety of purposes – both beneficial and harmful,” Fowler said. “In our decade of experience applying AI to the challenge of cybersecurity, we’ve seen first-hand the significant benefits that AI offers. It can uplift people and make their work and lives faster, easier, more secure and more efficient. As we increasingly rely on these tools, it is even more vital that they are secured properly.”

Government-Wide Participation

The executive order involves a number of federal agencies, including the Commerce and State departments, the National Institute of Standards and Technology (NIST), the Federal Trade Commission (FTC) and the National Science Foundation.

In the area of security and safety, the White House is requiring organizations that develop powerful AI systems to share such information as safety test results with the government; create standards, tests and tools to ensure AI systems are secure; protect against organizations using AI to create dangerous biological materials; and protect U.S. citizens from AI-based fraud and deception campaigns.

Among the tools for the last item are content authentication and watermarking “to clearly label AI-generated content,” according to the White House. “Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.”

There also are steps for protecting users’ privacy through technology innovation and guidelines, and for investigating how agencies collect and use publicly available information, with an eye on data brokers. Other issues raised include bias and discrimination in algorithms, guidelines to protect consumers, patients, and students, and addressing jobs lost to AI.

The White House also wants the government to promote innovation and competition in the AI market, including through the National AI Research Resource, a program being piloted that will give AI researchers and students access to AI resources and data, and more grants for AI research in areas like health care and climate change. Biden also wants to help small businesses and entrepreneurs establish footholds in an AI industry full of huge companies and well-funded startups.

In addition, Biden wants the United States to lead the global AI industry, including by helping to create international frameworks for promoting the benefits of AI while protecting against its dangers, and by developing global standards around security and interoperability.

A Good First Step, But More Needs Doing

Hodan Omaar, a senior policy analyst with the Information Technology and Innovation Foundation (ITIF), applauded Biden’s executive order in a statement, saying that “amid a sea of chaotic chatter about how to implement guardrails for AI, today’s executive order … sets a clear course for the United States.”

However, Omaar cautioned that while the order’s general direction is clear, specifics around implementation remain unclear, which will force companies and regulators to establish more concrete guidelines. She pointed to such areas in the executive order as red teaming, biological synthesis screening, and detecting AI-generated content.

“These are all active areas of research where there are not simple solutions,” Omaar said. “Policymakers often forget that the reason [the AI] industry hasn’t already adopted certain solutions is because these solutions don’t yet exist. This is one reason why it will be essential for the United States to continue to fund critical AI research in these areas.”

Rob Enderle, principal analyst with The Enderle Group, said the executive order does a good job of addressing sometimes competing interests.

“Government has to balance safety with innovation, assuring they don’t go too far in either direction in order to both protect the people and to assure the U.S. isn’t left in the dust by China,” Enderle said. “At first blush, what the administration has done appears balanced, but it will likely need to undergo changes as the technology advances.”

What the order does is provide the structure on which oversight can be built.

“This shows interest, provides initial structure, and enables the government to begin to better understand the technology before messing with it too much and perhaps doing unnecessary damage,” Enderle said.

One area that needs improvement involves red teaming, which should be done by a third party rather than the companies themselves, who could prioritize revenue over safety, Enderle said. If the red teaming isn’t properly resourced or adequately independent, it likely will fail.