Accenture and Amazon Web Services (AWS) recently launched the Accenture Responsible AI platform powered by AWS, aimed at helping organizations automate and operationalize responsible AI practices.
The platform helps organizations assess AI risks and develop mitigation strategies to comply with regulations, integrating with AWS services like Guardrails for Amazon Bedrock and SageMaker Clarify.
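The announcement does not describe how the platform wires these services together, but as a rough sketch of what the Bedrock side of such an integration can look like, the example below attaches a pre-configured guardrail to a model call through the Bedrock Converse API. The guardrail ID, version and model ID are placeholders, not values from the platform.

```python
# Minimal sketch (not the Accenture platform's code): calling a Bedrock model
# with a pre-configured guardrail attached. All IDs below are placeholders.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    messages=[{"role": "user", "content": [{"text": "Summarize our refund policy."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-example123",  # hypothetical guardrail ID
        "guardrailVersion": "1",
        "trace": "enabled",  # return details on why content was blocked or masked
    },
)

# If the guardrail intervened, Bedrock reports it in stopReason.
if response.get("stopReason") == "guardrail_intervened":
    print("Guardrail blocked or modified the response.")
else:
    print(response["output"]["message"]["content"][0]["text"])
```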
The partnership combines Accenture’s industry experience and insights from more than 1,000 generative AI projects with AWS’s AI service capabilities. It includes five offerings that help clients establish and implement policies, principles and standards.
These include AI governance and principles, AI risk assessment, systematic responsible AI testing and risk mitigation, monitoring and compliance support, and enterprise impact spanning workforce, sustainability, and privacy and security.
Diya Wynn, responsible AI lead at AWS, said one of the most significant challenges organizations face when scaling AI technologies is ensuring that these systems are genuinely inclusive and beneficial to all consumers and stakeholders.
“It’s crucial to emphasize the importance of including diverse voices and perspectives throughout the process,” she said. “This inclusive approach to AI development and deployment is not just about fairness; it can lead to more innovative and robust solutions.”
By taking that approach, Wynn said, organizations can create AI systems that are more adaptable, resilient and truly beneficial to all members of society.
“A key challenge is mitigating AI hallucinations and ensuring high-quality, coherent outputs that align with business needs,” she added.
Part of overcoming this challenge is looking beyond just the output and keeping humans in the loop throughout the AI lifecycle.
This is critical for quality assurance of the output produced by foundation models, particularly those used in consumer-facing applications like chatbots.
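Neither company describes a specific mechanism for this, but one common human-in-the-loop pattern is a simple review gate: responses that trip an automated check are held for a human reviewer instead of going straight to the user. The sketch below is purely illustrative; the flag_response check and ReviewQueue are assumed names, not part of AWS or Accenture tooling.

```python
# Illustrative human-in-the-loop gate (not from the article): hold flagged
# model outputs for human review before they reach the end user.
from dataclasses import dataclass, field


@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)

    def submit(self, draft: str, reason: str) -> None:
        # Park the draft for a human reviewer along with the reason it was flagged.
        self.pending.append({"draft": draft, "reason": reason})


def flag_response(text: str) -> str | None:
    """Toy check standing in for real policy, grounding or bias checks."""
    if "guaranteed" in text.lower():
        return "possible overclaim"
    return None


def deliver_or_escalate(model_output: str, queue: ReviewQueue) -> str | None:
    reason = flag_response(model_output)
    if reason:
        queue.submit(model_output, reason)  # a human reviews before release
        return None
    return model_output  # safe to send to the user
```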
“It also ensures AI is used responsibly and that an organization’s AI adoption goals continue to align with their values,” Wynn said.
Accenture’s chief responsible AI officer, Arnab Chakraborty, noted there is no “easy button” for scaling AI.
“It is a multi-year commitment that takes consistent focus, investment, data readiness and time,” he said. “It requires a relentless focus on people, process and technology.”
He explained that a solid data foundation is essential yet remains one of the most significant challenges, since new data is created daily, making that foundation hard to achieve and maintain.
Another hurdle is maintaining a focus on data governance, data privacy and security, and on cultivating the right talent and culture.
“Organizations need to have peripheral vision on where their data is coming from, who owns it, and what checks and balances it goes through,” Chakraborty said.
Wynn said organizations should adopt a holistic approach that prioritizes people and fosters a culture of responsibility, which involves considering the human impact of AI, ensuring human oversight and building trust.
“Human oversight is essential to guarantee accuracy, fairness, and safety in AI outputs,” she said.
While foundation models are continually improving, human involvement remains crucial to maintaining trust and addressing potential biases or unintended consequences.
“A people-centric approach will continue to be integral to responsible AI, even as technology advances,” she said.
From Chakraborty’s perspective, organizations can strike the right balance between innovation and risk management by staying focused on the desired business outcomes, and they can foster a culture of innovation by staying methodical.
“Have a plan by establishing a clear AI governance framework,” he said. “Be ready to adopt a risk-based approach with contingency planning. Prioritize ethical AI. Invest in people to develop talent, and continuously monitor and evaluate your progress and risks.”
He added that a dedicated AI governance framework is not optional for businesses; it’s table stakes.
“Organizations must have guidelines, controls, and oversight to ensure responsible and ethical AI adoption,” Chakraborty said.