Fears that generative artificial intelligence may damage personal reputations while also putting consumer data at risk intensified with reports that the U.S. Federal Trade Commission has opened an investigation into OpenAI, the maker of ChatGPT. The move, first reported by The Washington Post, would be the strongest regulatory threat yet to the Microsoft-backed start-up, which has taken the internet by storm since its debut in November 2022. The agency is investigating whether OpenAI used unfair or deceptive practices that caused “reputational harm” to consumers.
Those reputational harm concerns were amplified after an AI experiment by the EPIC Irish Emigration Museum in Dublin, Ireland, generated a large portfolio of stereotype-laden images and nary a positive one.
“While being Irish can’t possibly be summed up in a single image, we were surprised to find that AI generated images of the Irish are full of outdated stereotypes,” said Aileesh Carew, CEO of the museum.
The experiment was devastating in its simplicity. When images of an “Irish Man” were requested of ChatGPT, the results were, without exception, full of outdated and derogatory stereotypes. Every generated image showed a man who was ugly, aggressive, drunk, leprechaun-like, or some combination of the above.
“One of the downsides of AI is the potential to reinforce bias that exists in information already published,” commented Alex Gibson, head of the digital marketing discipline at Technological University (TU) Dublin. “We all need to understand that AI processes essentially trawl data that already exists and is therefore susceptible to significant misrepresentation.” The EPIC team’s findings are a very useful illustration of the challenges AI poses, Gibson added, since AI is able to draw on negative depictions of the Irish on the internet dating back to the 1800s.
“This is not us,” said the museum in a statement. “The sad reality is that even though we have come a long way in the past couple of decades, popular culture still perpetuates negative stereotypes of the Irish.”
The EPIC museum hopes this experiment “will inspire people to look beyond the stereotypes whenever they encounter them and create meaningful conversations about the pitfalls and potentials of AI,” explained Carew. A new ad campaign sponsored by EPIC aims to raise the profile of the issue. While every generated image was bad, the “Paddy A.I-rishman” campaign features the most hideous of them.
The EPIC experiment would seem to reinforce the findings of a recent study by Cornell University that examined AI toxicity and found negative bias baked into ChatGPT, especially if assigned a persona. “Depending on the persona attached to ChatGPT, its toxicity can increase up to 6X, with outputs engaging in incorrect stereotypes, harmful dialogue, and hurtful opinions,” concluded the “Toxicity in ChatGPT” study.
Meanwhile, ChatGPT stands accused of gender stereotyping by the Alliance for Universal Digital Rights after it generated a story line in which a female student decided against an engineering major because of its perceived complexity, opting for a fine arts degree instead. Likewise, a male student rejected majoring in the fine arts in favor of engineering, saying he was uncomfortable with creativity.
The authors of the Cornell study hope that awareness of the issue will lead to better guardrails for developing AI systems, guardrails that appear to be sorely needed. The FTC investigation into OpenAI indicates those concerns are widespread.