Many of the giant IT vendors driving innovation around AI – including Microsoft, Google, Meta, OpenAI and Apple – are joining a new consortium announced Thursday by the Biden Administration to guide the safe development of the emerging technology.
The tech giants are among more than 200 organizations and government agencies in the U.S. AI Safety Institute Consortium (AISIC), a group that spans AI creators, startups and systems makers as well as industry researchers, AI users and civil society organizations.
The U.S. government has a role to play in creating standards and tools to mitigate the risks that come with AI and machine learning technologies, Commerce Secretary Gina Raimondo said in a statement, particularly given the rapid innovation and adoption of generative AI in the 14-plus months since OpenAI rolled out its ChatGPT chatbot.
The consortium falls in line with the directives of President Biden’s executive order to establish guidelines for the development of AI that will allow the United States to stay on the cutting edge of the technology while protecting workers and consumers and ensuring security and privacy.
“President Biden directed us to pull every lever to accomplish two key goals: Set safety standards and protect our innovation ecosystem,” Raimondo said. “That’s precisely what the U.S. AI Safety Institute Consortium is set up to help us do. … By working with this group of leaders from industry, civil society, and academia, together we can confront these challenges to develop the measurements and standards we need to maintain America’s competitive edge and develop AI responsibly.”
Rob Enderle, principal analyst with The Enderle Group, told Techstrong.ai that while the consortium is a good start, it’s only a start.
“You need a focused group that isn’t part of the industry for enforcement, but this will help set the guardrails and inform government as it moves to the next steps,” Enderle said. “This certainly won’t be enough but you have to start someplace.”
NIST Will Lead the Way
The consortium will be housed within the National Institute of Standards and Technology’s (NIST) U.S. AI Safety Institute (USAISI) and will work on developing protocols and guidelines for red teaming, capability evaluations, risk management, safety and security, and watermarking synthetic content so that people know whether they’re looking at or listening to something created by AI.
“AI is moving the world into very new territory,” Laurie Locascio, under secretary of commerce for standards and technology and NIST director, said during a press briefing announcing the consortium. “Like every new technology, or every new application of technology, we need to know how to measure its capabilities, its limitations, its impacts.”
The development of generative AI, machine learning and large language models (LLMs) has accelerated rapidly over the past year, bringing with it the promise of huge benefits in every sector of society, from science and health care to business and education. At the same time, there are ongoing concerns about the use of the technology by global adversaries, malicious cybercriminals and groups sowing disinformation, particularly during a presidential election year.
Enderle said the pattern of security trailing innovation in the IT industry isn’t new, but this time there’s a hitch.
“[Security] is substantially behind, which isn’t unusual when it comes to a new technology,” he said. “The issue is that AI is moving so quickly there is some risk this will never be able to catch up.”
The White House is pushing a whole-of-government approach to addressing the safety and security of AI, enlisting agencies ranging from the Cybersecurity and Infrastructure Security Agency (CISA) and NIST to Commerce, Defense, Justice, Homeland Security and other departments.
For example, under Biden’s executive order, companies such as OpenAI and Google that create and train LLMs must notify the federal government about their work and share related safety test results. Meanwhile, Jessica Rosenworcel, chair of the Federal Communications Commission, last week discussed a plan to make illegal unsolicited robocalls aimed at consumers that use generative AI-based voice cloning technology.
While the government is putting pieces in place where it can, it is also pulling in entities from outside government to participate.
“To keep pace with AI, we have to move fast and make sure everyone – from the government to the private sector to academia – is rowing in the same direction,” White House Deputy Chief of Staff Bruce Reed said in a statement.
The Private Sector Steps Up
That work has been underway since last year, when Amazon, Google, Meta, Microsoft and startups OpenAI, Anthropic and Inflection agreed to ensure that their products are safe before releasing them. More recently, Meta this week said it will begin identifying and labeling AI-generated content on Facebook, Instagram and Threads.
“As the difference between human and synthetic content gets blurred, people want to know where the boundary lies,” Nick Clegg, president of global affairs at Meta, wrote in a blog post. “People are often coming across AI-generated content for the first time and our users have told us they appreciate transparency around this new technology.”
Meanwhile, OpenAI said it is updating its text-to-image generator, DALL-E 3, to include specifications from the Coalition for Content Provenance and Authenticity (C2PA) so that all images generated by DALL-E 3 or through ChatGPT will include a watermark identifying them as AI-generated.
Among the other established IT companies in the consortium are IBM, Cisco, Intel, Adobe, Amazon, AMD, Hewlett Packard Enterprise and Salesforce. There also are AI companies like Credo AI, SaferAI, Hypergame AI and EqualAI, and cybersecurity vendors and industry groups including CrowdStrike, the Center for AI Safety and the Cyber Risk Institute.