As more companies integrate AI into their workflows (adoption has doubled since 2017), concern over potential harms and bias has created an imperative to temper such rapid growth with responsible use guidelines.
The latest example is the white paper released on October 5 by EqualAI, a non-profit organization created “to reduce unconscious bias in artificial intelligence and promote responsible AI governance.” EqualAI touts the paper as the first of its kind to incorporate the views of multiple companies into a best practices primer.
The 32-page report lays out a framework for responsible AI governance, drawn from a series of meetings among leaders from industry, government and civil society. Seven meetings in all were held as part of EqualAI’s Badge Program, beginning last May, in which senior executives convened to discuss best practices. Participants included representatives from Amazon, Verizon, Microsoft, PepsiCo, Salesforce and the SAS Institute, among others.
“Currently there is a lack of national, let alone global, consensus on standards for responsible AI governance,” the report’s executive summary stated. “While there is some indication of progress to come on this front, it is not imminent, and organizations cannot wait for the regulatory and litigation landscape to settle before adopting best practices for AI governance. The potential harm and liability associated with the complex AI systems currently being built, acquired, and integrated is too significant to delay the adoption of safety standards.”
The report is organized around six “pillars” of responsible use: responsible AI values and principles; accountability and clear lines of responsibility; documentation; defined processes; multistakeholder reviews; and metrics, monitoring and reevaluation.
The report stresses that AI should be integrated in a way that encourages employees to offer input, guiding implementation toward uses that reflect the company’s core values. “A common theme discussed and promoted in the Badge Program is that every employee should feel as though they are on the front lines of detecting potential harms, and encouraged and rewarded for promoting AI safety and, thus, building trust,” it states.
Mike Tang, a computer scientist with Verizon who attended the Badge Program meetings, said on a recent episode of EqualAI’s podcast “In AI We Trust?” that collaboration is key to developing guidelines for responsible AI use.
“Personally, I believe in proactive risk management, sharing best practices among each other,” Mr. Tang said. “The summit really provided an opportunity for companies that share similar views to get together, to discuss, and to learn from each other. I think that is the only way to move this field forward, to come up with a much better approach than individually trying to figure out what is the best thing to do.”
Another attendee, Catherine Goetz, the Global Head of Inclusion Strategy at LivePerson, said during the same podcast, “The reality is that nearly every company is a technology company today, and so that means that all people are going to be influenced in one way or another by the innovation of AI. That makes it a really important conversation to hold as mainstream as we do things like security. So in terms of who should be reading this paper, I think every organization would find a lot of value in the contents of the white paper.”
She added that the white paper is practical, in that it breaks down a lot of the technical jargon associated with AI. “The white paper does such a good job, I think, of just mapping out a step-by-step of how an organization can get started, and it makes it really easy for organizations to see themselves in that work, regardless of how mature their processes may or may not be.”
Here are a few of the points highlighted in the report:
- Each organization should have a designated authority, preferably a senior executive or someone with significant power within the organization, who is ultimately responsible for AI governance. “Importantly, the tier 1 authority figure must embody the leadership skills needed to make difficult decisions that may be unpopular at the time, but necessary to achieve the organization’s responsible AI mandate.”
- Humans should maintain oversight of AI and have the power to intervene. “Holding humans accountable for AI decision-making will establish certainty with internal and external stakeholders that humans bear ultimate responsibility for an AI system’s output, and in turn, will build trust.”
- Consumer privacy must be protected. “If not already in place, organizations should establish generally applicable policies that assure the privacy and protection of data and its owners and assure they address data issues relating to AI.”