The latest breakthroughs in artificial intelligence have captured the world’s attention, but they have also spurred discussions about the ethical questions such smart technology raises. These issues include bias, privacy, misinformation, transparency, security and human job displacement. According to a recent survey by Deloitte, the top two concerns leaders have about ethical missteps are reputational damage and human damage. Nevertheless, MIT Technology Review reports that 81% of executives believe AI will be a major competitive advantage for their business, and surveys show that roughly half of companies worldwide already use some form of AI.

Some guardrails are already being constructed to steer AI in an ethical direction through legislation like Europe’s AI Act, due in early 2024. Other jurisdictions are putting similar structures in place with the goal of protecting citizens while still allowing companies to realize the technology’s many benefits. But instilling moral and ethical values in AI systems, especially in decision-making contexts with significant consequences, presents a challenge. Corporate codes of ethics and regulatory frameworks are the two main avenues through which AI ethics can be put into practice.

Building an Ethics Blueprint

The most important way to create meaningful AI governance is at the grassroots level: through company-driven efforts. Companies that have already been addressing ethics and responsibility in their AI software development have an advantage, but few companies today can afford not to align their products with the risk-management and best-practice frameworks being built into legislation like the EU AI Act. After all, AI ethics is fundamentally about identifying and mitigating risk to humans. When technology comes in contact with people, what unintended consequences could arise? Could these risks affect health and safety, or fundamental rights?

An ethical AI process should still keep humans in the driver’s seat. “The purpose of AI is to augment human decision-making, not replace it,” explains Clare Hickie, CTO, EMEA at Workday, a leading provider of enterprise cloud applications for finance and human resources. Hickie is based in Workday’s Ireland office, the company’s EMEA headquarters, which has a team of over 2,000 people. The goal should be to use AI to provide workers “with those insights, predictions and recommendations to help them to be better versions of themselves, making them more productive and enabling them to really focus on more strategic-orientated work,” she states.

To that end, the best internal process should consider a few important steps:

  1. First, consider the specific technologies of focus.

Companies that internally deploy or sell B2B software solutions employing advanced AI should take a product-specific approach, because the risks associated with data and technologies vary. As Hickie describes it, “How is the data integrated? What’s the ecosystem? How is the data used? How is it extracted? How is it purged?” She reports that Workday has embedded AI and machine learning into the core architecture of specific platforms, which aids product evolution.
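
Hickie’s data-lifecycle questions translate naturally into a reviewable artifact. As a purely illustrative sketch in Python (the record type and its field names are our own assumptions, not a Workday schema), a product team might capture the answers for each AI feature in a structured form:

```python
from dataclasses import dataclass

@dataclass
class DataLifecycleRecord:
    """Answers to the data-lifecycle questions for one AI feature.

    The type and field names are illustrative assumptions, not a Workday schema.
    """
    feature: str                    # the AI/ML feature under review
    integration_sources: list[str]  # how and from where the data is integrated
    ecosystem: str                  # the surrounding ecosystem the data flows through
    usage_purpose: str              # how the data is used by the system
    extraction_method: str          # how the data is extracted
    purge_policy: str               # when and how the data is purged

# Example: one record a reviewer could sign off on (hypothetical values).
record = DataLifecycleRecord(
    feature="skills-recommendation",
    integration_sources=["HR core profile", "learning platform"],
    ecosystem="finance-and-HR cloud tenant",
    usage_purpose="rank suggested skills for an employee",
    extraction_method="nightly batch export, pseudonymized",
    purge_policy="delete raw exports after 90 days",
)
print(record)
```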

Her colleague Dr. Kelly Trindel, Workday’s chief responsible AI officer, lays out some suggestions for companies wanting to increase trustworthiness in their AI software. “Start with a risk evaluation. We use a scalable tool for developers to determine whether they’re working on a sensitive, or a high-risk, use case. The tool drives developers to consider the context and characteristics for how the system will operate,” she explains. “Along with the outcome of the risk evaluation, we deliver a set of responsible AI guidelines to developers outlining steps for risk mitigation to avoid unintended consequences. These guidelines cover areas including bias and fairness, for example, and explainability and interpretability, as well as validity and reliability.”
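
Workday has not published the internals of this tool, but the shape Dr. Trindel describes (score a use case on its context and characteristics, then route developers to the matching mitigation guidelines) can be sketched. In the minimal Python sketch below, the flags, weights, tiers and thresholds are all assumptions for illustration, not the actual tool; only the three guideline areas come from her description:

```python
# Illustrative sketch of a use-case risk triage, loosely modeled on the
# process Dr. Trindel describes. The flags, weights and thresholds are
# assumptions for illustration, not Workday's actual tool.

# Context and characteristic flags a developer answers about the use case.
RISK_FLAGS = {
    "affects_hiring_or_pay": 3,      # consequential decisions about people
    "uses_sensitive_attributes": 3,  # e.g., health or demographic data
    "fully_automated_decision": 2,   # no human review of outputs
    "outputs_shown_to_end_users": 1,
}

# Guideline areas named in the article, mapped to assumed risk tiers.
GUIDELINES = {
    "high": ["bias and fairness", "explainability and interpretability",
             "validity and reliability"],
    "medium": ["explainability and interpretability",
               "validity and reliability"],
    "low": ["validity and reliability"],
}

def triage(answers: dict[str, bool]) -> tuple[str, list[str]]:
    """Score the use case and return (risk tier, applicable guideline areas)."""
    score = sum(w for flag, w in RISK_FLAGS.items() if answers.get(flag))
    tier = "high" if score >= 4 else "medium" if score >= 2 else "low"
    return tier, GUIDELINES[tier]

# Example: a recommendation feature that influences pay decisions.
tier, areas = triage({"affects_hiring_or_pay": True,
                      "outputs_shown_to_end_users": True})
print(tier, areas)  # -> high ['bias and fairness', ...]
```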

  2. Benefit from diverse perspectives.

Dr. Trindel believes more oversight functions like hers will appear across companies, but she hastens to add that one leader can’t do everything “because of the way the regulatory field is developing and the newness of it all. So, you absolutely need input, buy-in and participation from across seniority levels and cross-disciplinary teams in order to do it right.” Workday involves senior leadership, product and engineering developers and managers, legal and public policy experts, and others, who meet regularly and provide input on her AI governance framework.

A key need, she says, is having a set of “responsible AI champions,” as she calls them, who are deeply knowledgeable and involved directly in technology development within the company. This is imperative because product and technology developers and managers “should have lines of reporting that are separate from the people who are in charge of the technology governance,” according to Dr. Trindel. “But at the same time, AI governance cannot be developed in a silo. The Responsible AI team must have input and review from other experts across the company.” In fact, diversity among the various stakeholders is crucial. “They should have different views, different backgrounds, different approaches, and different areas of focus,” she notes. “That’s the strength of a good program: bringing that kind of diversity together and making sure that you’re addressing different people’s thoughts, concerns, and perspectives.”

  3. Realize the importance of collaboration.

Companies will benefit greatly from collaborating with peers, researchers and government agencies as they build ethical AI frameworks. As Clare Hickie explains, “There are many ways we can learn. We can learn from a university that’s also right at the heart of innovation. We can learn from new talent that we may not have in our own organization.” For example, Workday’s Ireland offices recently co-developed a new machine learning program with Technological University Dublin to upskill its workers in core disciplines like data mining, deep learning and probability. The company has also been working with the Science Foundation Ireland ADAPT Centre, which has been pioneering AI personalization and human-centric interaction.

  4. Understand that trust is the ultimate goal.

It’s no secret that some workers are frightened by the AI revolution in terms of their own careers. This is why “trust must be designed into the very foundation” of any AI ethics program, according to Hickie. Workday is providing a variety of resources and opportunities for employees to become familiar with the technology, regardless of their specific job. Working on employee growth plans that can include identifying new roles, upskilling and retraining is another way to ease possible fears, she says. “Instead of artificial intelligence, we talk about the technology as augmenting (human) intelligence. It’s interesting because the way we talk about it can put people at ease.”

But trust is paramount. As Dr. Trindel notes, “We sell our AI technology to customers who then use those technologies. And we know that customers won’t adopt, or use, the technology unless they trust that it was built responsibly. So it’s important for both ethical and moral reasons, but also very practical and concrete reasons as well, to focus on building trust into our AI products and technologies.”

The bottom line is that companies need to start creating ethical and responsible frameworks for their AI technologies. Although the EU’s AI Act is not yet finalized, companies should review the current guidelines so their internal standards can be aligned. Dr. Trindel cautions: “If creating responsible and trustworthy AI risk management frameworks isn’t done right, over time organizations who lag in this area could die out. In the future, there’ll be more and more demand for responsible and trustworthy AI technology.”
