A bipartisan coalition of 54 attorneys general from across the country sent a letter to Congress calling for a study of how artificial intelligence (AI) can be and is being used to exploit and endanger children through the generation of child sexual abuse material (CSAM).
The coalition is urging Congress to form a commission to specifically study how AI can be used to exploit children and to act to deter and address child exploitation, including the use of deepfakes to create CSAM.
“CSAM on the internet lasts forever, and if a child’s face is being used to create CSAM material, the consequences can be very deadly,” says Alexander Delgado, director of public policy for ECPAT International, a global network of civil society organizations working to end the sexual exploitation of children.
He explains that many survivors say they are scared to go outside because they don’t know who may have watched one of the most horrific moments of their lives.
“It impacts their lives every day,” he says. “If deepfakes are created, a child’s face would be connected to the CSAM, which might result in a similar level of trauma and anxiety regardless of whether the child was abused in real life or not.”
Delgado points out that AI can also be used to create “fake” CSAM by generating an image of a child who doesn’t exist.
“This would lead to an increase in the distribution of CSAM online,” he says. “Even though no child was directly harmed by the creation of the CSAM, it could possibly lead to an increase in child sexual abuse.”
In one study, individuals who viewed CSAM reported feeling they were more likely to contact a child online or to have sexual contact with a child.
The letter recommends expanding existing restrictions on child abuse material to explicitly cover AI-generated child sexual abuse material.
Congress is also strongly encouraged to propose legislation that would protect children from those abuses.
Among the ways AI may be used to exploit and endanger children online are “deepfakes,” which enable bad actors to easily create material depicting child sexual abuse using photos of both abused and non-abused children. As stated in the letter, the coalition believes this threat needs to be studied because it normalizes child abuse, feeds the market for child sexual abuse material and, most importantly, endangers children and their families.
A statement sent to Techstrong.ai from the Massachusetts AG’s Office noted that protecting the wellbeing of youth is a priority.
“The MA AGO will continue to push for litigation and policy work in these areas to advance justice and equity for not only Massachusetts’ young people, but the country’s young people,” the statement read.
As noted in the letter, the coalition is urging Congress to form a commission that would study this threat while maintaining an up-to-date understanding of the issue.
“This technology is constantly evolving, and to stay ahead of the potential harm it could cause, researchers must stay current on its abilities,” the MA AGO statement explained. “We hope that this letter sends the message that the bipartisan coalition is making the safety and wellbeing of children a national priority, and we believe Congress should do the same by forming a commission to specifically study AI’s harmful effects on young people.”
Delgado says that because there isn’t much research on the topic, it’s hard to say how AI is currently being used to endanger children online.
“At this time, we can only point to anecdotal evidence and conjectures, but there is definitely a looming threat that exists,” he says. “This threat needs to be studied so we can create policies informed by data. We need to understand the connection between those who watch CSAM and the increased risk of child sexual abuse.”
From Delgado’s perspective, AI developers and tech industry leaders must begin partnering with NGOs, survivor leaders and academics to understand the severity of the problem and find methods of prevention.
“It seems to me that tech industry leaders bring on NGOs as advisors or consultants so they can simply check it off their list,” he says. “It’s true that NGOs and the tech industry speak very different languages. Most of the time we are the child welfare experts and they are the tech experts and we are each on opposite sides of the room in our own bubbles.”
He says everyone, NGOs included, needs to learn how to work better with each other if the online sexual abuse and exploitation of children is to be stopped. He explains that technology is constantly changing and evolving, so governments need to learn how to be proactive rather than reactive.
“We shouldn’t wait until it becomes a big problem to finally pass legislation to stop AI-aided online exploitation,” Delgado says.