The rapid adoption of generative AI has significantly reshaped the AI landscape, challenging organizations to ensure responsible use of the technology through Responsible AI (RAI) programs.
A study by MIT Sloan Management Review and Boston Consulting Group (BCG) reveals that while 53% of organizations rely solely on third-party AI tools, 55% of AI failures are attributed to these tools.
The report is based on a global survey of 1,240 organizations with at least $100 million in annual revenues, spanning 59 industries and 87 countries.
The study found that although RAI leaders have increased to 29% from 16% year over year, most organizations surveyed (71%) are still considered non-leaders, and 20% of organizations using third-party AI tools fail to evaluate the associated risks at all.
Heavy reliance on third-party AI tools (78% of surveyed organizations use them) exposes companies to risks such as reputational damage, loss of customer trust, financial loss, regulatory penalties, compliance challenges, and litigation.
Steven Mills, chief AI ethics officer at BCG and co-author of the report, cautions that AI system failures open up an array of risks for organizations, from reputational damage and loss of customer trust to heightened regulatory compliance demands, litigation, and financial impact.
“However, these risks can be mitigated and managed by having proper AI governance in place,” he says.
Without governance, organizations lack a clear understanding of the AI being developed and thus cannot ensure that risks are proactively identified and mitigated to prevent harm to individuals and society.
“If an organization were to deploy AI systems that are inconsistent with its corporate values, they would risk undermining ESG commitments, alienating employees and customers, and long-term erosion of trust,” he notes.
Caroline Carruthers, CEO of data consultancy Carruthers and Jackson, says she finds it concerning that more than half of companies rely solely on third-party AI tools.
“Without proper governance policies in place, they are opening themselves up to large data privacy and security issues as employees share potentially confidential information with GenAI platforms,” she explains.
Without proper governance, data can easily be put at risk as employees aren’t sure what they should or should not be sharing with the various tools they are encouraged to use.
“From a regulatory standpoint, a lack of internal governance can also put compliance at risk as organizations become used to an AI free-for-all without acknowledging the potential data risks of AI use,” Carruthers says.
This makes it harder to conform to new regulations as they are rolled out; from her perspective, internal governance should always be a step ahead of regulatory frameworks.
She adds that the sharing of confidential data is a very real risk without a proper framework in place.
“Employees should also be aware of the limitations of AI, and recognize, through RAI principles if needed, that they cannot overly rely on answers generated from AI platforms to make critical business decisions,” she says.
Without a proper understanding of AI’s limitations, businesses run the risk of employees uncritically following AI-generated business advice without acknowledging the potential for hallucinations.
Mills says the C-suite and executive leadership must define the organization’s commitment to RAI by taking the proper steps and investing the resources needed to develop an RAI program.
“We’ve seen various types of executives lead an RAI program – general counsel, head of risk and compliance, chief AI ethics officer,” he says.
However, more important than which executive is selected is that a leader is identified and given the authority and resources to drive the initiative.
“This individual must be held accountable for delivering on objectives,” Mills says.
Carruthers agrees that the behavior of senior executives is a crucial factor in whether proper AI governance will be effective within an organization.
“Employees copy what they see at the top, so execs must be seen to be leading from the front when it comes to responsible AI use,” she says.
As for who should lead the program, she explains it often comes down to whoever creates most other regulatory policies within a business.
“No two organizations are the same, but this will typically be either the data department or IT department,” she notes.
Carruthers says it’s clear the development of AI technology isn’t slowing down, and AI is giving businesses that embrace it a big competitive advantage over slow adopters.
“However, it’s getting to a stage now where the key differentiator will be which organizations are using it most effectively,” she says. “To ensure your business is staying ahead of competitors, as well as ensuring readiness for any coming AI regulations, you need to start putting in place proper governance frameworks without delay.”