It’s more than a handshake agreement but not quite a binding contract: eight tech firms recently committed to the Biden administration’s voluntary rules aimed at managing the risks of AI. The eight joined seven other leading AI companies that made the same commitment in July.
The companies that joined on Sept. 12 are Adobe, Cohere, IBM, Nvidia, Palantir, Salesforce, Scale AI, and Stability. They join
Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI, companies that committed to the rules on July 21.
“These commitments, which the companies have chosen to undertake immediately, underscore three principles that must be fundamental to the future of AI — safety, security, and trust — and mark a critical step toward developing responsible AI,” read a statement released by the White House press office on Sept. 12.
There are three voluntary commitments, each coming with a set of bullet points. The 15 companies have committed to ensuring products are safe before introducing them to the public, building systems that put security first, and earning the public’s trust. Of particular note, the companies have agreed to allow independent experts to test their AI systems before their release, to invest in cybersecurity and insider threat safeguards to protect proprietary and unreleased model weights, and to develop robust technical mechanisms, such as watermarking, to ensure that users know when content is AI-generated.
“We are pleased to join these commitments, and to see the White House take seriously the unique challenges, risks and opportunities that face the enterprise AI sector,” stated Cohere President Martin Kon, after attending the Sept. 12 agreement event at the White House. “As the discussion around responsible adoption and regulation moves from the consumer to enterprise and public-sector domains, we look forward to working more closely with the White House and Congress to safeguard the interests of society, including companies looking to ensure the protection of sensitive data.”
Leaders of top technology firms, along with AI experts, have shuttled into Washington, D.C., in recent months to meet with U.S. lawmakers, educate them on AI, and point out where regulation may be necessary. Just a day after the latest commitments, almost two dozen tech heavyweights, including Elon Musk, Mark Zuckerberg, and Bill Gates, met in a closed session with members of the U.S. Senate to further discuss the risks posed by AI.
As tech companies release new generative AI products and competition ramps up, cybersecurity experts believe the commitments are a step in the right direction to keep that rapid growth in check.
“From my perspective, there is a value in voluntary commitments, given they help hold AI companies responsible, while preserving the pace of innovation,” said Dan Schiappa, chief product officer at Arctic Wolf, a leading cybersecurity company based in Eden Prairie, Minn., in an interview with Techstrong.
“New voluntary commitment agreements, such as the one announced by the Biden administration this month, will help align the private and public sectors. That said, from a cybersecurity perspective, just as the US is moving quickly on AI, so are our adversaries in China, Russia and Iran. Maintaining our pace of innovation while instituting regulation is crucial to safeguarding our global cybersecurity posture. At the end of the day, we do not want to be in a case where we over-regulate and off-throw the balance between innovation and regulation,” Mr. Schiappa said.
He added that the biggest threat that AI poses, outside of human error, is in granting bad actors the ability to exploit the technology to launch an attack, including one that would disrupt elections.