The Social and Human Sciences Sector of the United Nations Educational, Scientific and Cultural Organization (UNESCO) recently announced the launch of the Women4Ethical AI Platform.
The platform aims to encourage global leaders in gender equality and emerging technologies to collaborate on implementing UNESCO’s Recommendation on the Ethics of Artificial Intelligence with a gender lens.
The Recommendation, the first global normative standard in the field of AI, was adopted by all 193 UNESCO Member States at the Organization’s 41st General Conference in 2021. It includes policy recommendations to address gender-related challenges in AI, such as dedicating funds from public budgets to gender-related schemes and ensuring national digital policies include a gender action plan.
The Recommendation also encourages female entrepreneurship, participation and leadership at all stages of AI development; investment in targeted programs to increase girls’ and women’s participation in STEM and ICT disciplines; and the eradication of gender stereotyping and discriminatory biases in AI systems.
“UNESCO will engage women in AI to advance the ethical development and deployment of AI, for fair and inclusive outcomes, with a special focus on gender diversity and empowerment,” the organization said in a press statement.
Lydia Gregory, CEO and co-founder of Figaro, says it is also important to ensure that women and minorities are equally represented in decisions about AI funding.
“Ultimately, all development costs money and people make decisions about which projects get funding, and which don’t,” she says. “Greater diversity of backgrounds, experiences, and interests at an investor level would impact the types of projects that get funded.”
She notes that, as humans, we tend to design tools for problems we understand, based on our own experiences, interests and frustrations.
“Technology is a tool and it is up to us what we use it for,” she notes. “We therefore need representation in AI, not just of women, but of people from as many backgrounds and disciplines as possible to ensure that we create tools with outcomes that suit all of our needs.”
From her perspective, the ways learned bias can produce real-life consequences are well documented, with more examples coming to light all the time, from misidentified images to medical datasets drawn primarily from European patients.
“I want to present an alternative view: designed and built in the right way, machine learning systems have the possibility to positively impact the experience of women and minorities,” Gregory says.
Zoe Thompson, social strategy leader for KPMG, explains that fairness should be examined along multiple dimensions, including whether AI development ensures equitable access to opportunities, provides a standard level of value to all, or purposefully serves historically underrepresented populations.
“Equity is one goal but not the only goal in how an organization can think about achieving fairness in the development of AI solutions,” she says.
Key considerations for ensuring equity in AI development include diverse and inclusive development teams, which widen perspectives; ethical frameworks and governance to identify risks and establish policies and controls; and training data sets whose identification, collection and processing take intersectional identities into account.
“Just as important will be developing a comprehensive AI fairness methodology, process and tools, and regularly assessing their effectiveness,” she says. “And then there is also how AI applications will be used, which, although outside of the AI itself, introduces the bias implicit in human decision making and application.”
Thompson agrees with Gregory that bias can lead to unreliable AI that can’t be trusted or fails to deliver on its objective.
“Transparency, inclusive and comprehensive datasets, and diligent testing, evaluating, validating and verifying are critical to establish trust and remove bias,” she says. “Bias in AI also can lead to a reduction or loss of opportunities, or unsafe conclusions, particularly when AI is used in health and safety decisions.”
From her perspective, a diverse set of opinions, viewpoints and perspectives has been shown to lead to better outcomes.
“Gender is one of the more than 20 categories of intersectional identities that need to be considered as it relates to ethics and bias,” she adds.
Military service, apparent and non-apparent disabilities, parent, guardian or caregiver status, and faith-based practices are just a few examples of how biases can occur beyond gender.
“Equitable representation of women in AI can work to reduce the perpetuation of biases in AI, which the National Institute of Standards and Technology identifies as systemic, statistical and human biases,” she says. “Overcoming those biases may result in inclusive design that creates equity in the data and overall fair systems.”