Seven leading AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI – recently met at the White House to discuss how to effectively manage the risks of AI without compromising its upside. The Biden-Harris administration secured voluntary commitments from these companies to continue AI development in compliance with three key principles: safety, security and trust.
AI’s transformative nature has generated an abundance of concern. How will companies mitigate security risks and reduce the threat of fraud? Can AI systems avoid bias and discrimination? Is the privacy of consumers protected? While leading AI companies are beginning to address these concerns, the consensus in the tech community is that more needs to be done.
One expert, Nitzan Shaer, CEO and co-founder of WEVO, promotes safeguards and testing in response to these concerns.
“While these developments recognize the potential of AI, the commitments also highlight the importance of reliable and responsible user testing, in order to address bias and discrimination,” Shaer said. “While AI’s transformative impact on our daily lives is all but inevitable, we must embrace responsible use of the technology by prioritizing safeguards and guardrails to maximize its potential while mitigating risks.”
In light of the rapid expansion of AI, ensuring the validity of AI-generated information is vital. Advanced AI systems have enormous potential, but it's important that these systems remain tailored to the end user.
“Given this explosion, it is crucial we remain vigilant about vetting information produced by AI for accuracy and relevancy and always put the end user first, prioritizing their experience while minimizing bias,” Shaer said.
Eyal Benishti, CEO of IRONSCALES, is another AI expert who’s concerned that voluntary commitments won’t be enough to keep the threats of AI at bay.
“While this agreement is a good start, it still leaves a lot to be desired. If we truly hope to minimize artificial intelligence’s potential to cause harm, we must prioritize two vital principles — continuous improvement and human oversight,” Benishti said.
Through strategies like reinforcement learning and ongoing retraining, developers can practice continuous improvement, ensuring their AI tools stay up to date.
“Secondly, both developers and end users must keep in mind that artificial intelligence should never be treated as a replacement for human intelligence, but instead as a tool for augmenting human capabilities,” Benishti said. “In this way, we ensure that the most important, consequential decisions are still ultimately being made by human beings. Those decisions can certainly be informed by artificial intelligence, but there is no AI that surpasses (or matches) human levels of reasoning, inference and wisdom, yet.”
While continuous improvement and human oversight are integral to the safe development of AI, watermarks are another tool companies are using to earn the public’s trust. According to Rusty Cumpston, CEO of RKVST, watermarking systems are crucial to the safe development of AI, but they’re an incomplete solution.
“Watermarking is a key tool for people to use in determining whether or not digital content was created by AI. There are lots of ways for bad actors to modify and replace watermarks, which limits its impact on AI safety. But when combined with tools that can verify the provenance and authenticity of digital content, data consumers will have the information they need to decide if content is trustworthy or not,” Cumpston said.
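Cumpston's point, that a watermark is most useful when paired with verifiable provenance, can be sketched in a few lines of Python. The example below is purely illustrative and reflects no vendor's actual API: the key, function names and metadata fields are assumptions. The idea is that a keyed signature over content plus its source makes tampering detectable even if a visible watermark is stripped.

```python
import hashlib
import hmac

# Assumption for illustration: a signing key held by the content producer.
SECRET_KEY = b"publisher-signing-key"

def sign_provenance(content: str, source: str) -> dict:
    """Attach provenance metadata: who produced the content, plus a tamper-evident tag."""
    tag = hmac.new(SECRET_KEY, f"{source}:{content}".encode(), hashlib.sha256).hexdigest()
    return {"content": content, "source": source, "tag": tag}

def verify_provenance(record: dict) -> bool:
    """Recompute the tag; any edit to the content or the claimed source invalidates it."""
    expected = hmac.new(
        SECRET_KEY,
        f"{record['source']}:{record['content']}".encode(),
        hashlib.sha256,
    ).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

record = sign_provenance("A generated paragraph.", source="ai-model-v1")
assert verify_provenance(record)       # untouched content verifies
record["content"] = "An altered paragraph."
assert not verify_provenance(record)   # tampering is detected
```

Real-world provenance schemes (such as cryptographically signed content credentials) are far more elaborate, but the principle is the same: the verification travels with the content, rather than depending on a watermark surviving intact.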
The implementation of watermarking systems was one of the solutions discussed at the White House. For AI to be transparent and safely integrated into society, the dangers of fraud and deception must be addressed.
“It’s important for people to be able to understand where data comes from and when content is AI-generated,” Cumpston said. “Right now people are unsure, and with the phenomenal rise in AI-generated content, that makes them wary and distrustful. Trust levels will improve once mechanisms are deployed that enable users to easily identify AI-generated content using watermarks and are instantly verifiable, adding provenance metadata.”
The meeting at the White House reflects the impact AI is having on consumers, as well as some of the glaring issues that have emerged. Safety, security and transparency are imperative to the success of AI, and with these pillars at the forefront of AI development, it's evident that leading AI companies are striving for responsible innovation.
“We’re already seeing consumer safety-conscious companies looking at ways to embrace and extend transparency and trust into the data they use to innovate and build new services and products,” Cumpston said.
As part of this effort, the Biden-Harris Administration has taken several other steps, including the publication of a blueprint for an AI Bill of Rights and an executive order aimed at bias prevention in AI systems. These efforts, coupled with hundreds of millions of dollars in funding and a national AI R&D strategic plan, will shape our country’s approach to AI development for years to come.
By calling on leading AI companies to pave the way, the White House is facilitating growth while upholding ethical expectations.