
Alarm bells are ringing loudly over the discovery of thousands of suspected images of child sexual abuse material (CSAM) in LAION, the giant open-source AI training dataset. The discovery was made by the Stanford Internet Observatory. In turn, the non-profit LAION quickly moved to temporarily take down its dataset of billions of images to conduct a safety check.

Concern over CSAM has been growing as more appalling incidents come to light. Recent examples include reports from the UK Safer Internet Centre that schoolchildren were using AI to create CSAM of their classmates. Another example is "declothing" apps: one used in Spain, with reportedly 50,000 subscribers, created fake nude images of young girls.

While the number of images found by the Stanford University-based organization, working with the Canadian Centre for Child Protection, amounted to roughly 3,200, a tiny fraction of the roughly 5.8 billion images LAION scraped from the internet, the concern goes beyond the raw count: those CSAM images have likely tainted, perhaps irrevocably, AI models that used LAION data for training. Stability AI's Stable Diffusion is one model trained on a subset of the LAION-5B data in question. Stability AI says it has CSAM safeguards in place, but such safeguards may not be universal across the AI landscape. The Stanford report notes that Stable Diffusion 1.5 remains the most popular model for generating explicit material because it lacks the safeguards of the 2.0 release, and it suggests those safeguards can be watered down in response to market demand: Stable Diffusion 2.1 was trained on both "safe" and "moderately unsafe" material.

A key aspect of the Stanford report highlights a misperception of how AIs generate CSAM. Anti-abuse researchers had assumed such images were produced when an AI combined what it learned from two separate pools of training data: adult pornography and benign photos of children. It now appears CSAM was in the training datasets all along, a consequence of unfiltered internet vacuuming. That reversal underscores an ongoing concern about whether people truly understand how AI works.

AI creators are rushing to clarify their use of LAION images for AI training, but it’s unclear whether LAION is only the tip of the iceberg, given that there are other sources of imagery available to AI creators. That worry is on the mind of AI executives like Dr. Yair Adato, CEO and co-founder of BRIA, pioneers in the use of visual generative AI.

“The news of illegal photos of children found in AI training sets is truly disheartening; it exposes the darker side of AI that has long been a concern,” says Adato, noting that AI is only as good or bad as the data that trains it. “In most cases, scraped data cannot be cleaned retroactively, therefore organizations are advised to use clean, licensed training data from the outset. Embracing responsible AI practices is critical for the industry; the alternative is not an option.”

The Stanford report doesn’t sugarcoat the difficulty of removing CSAM from the AI models themselves: “The images and text embeddings could be removed from the model but it is unknown whether this would meaningfully affect the ability of the AI to produce CSAM or to replicate the appearance of specific victims.”

Complicating CSAM research and remediation is the fact that it is illegal for most institutions to view CSAM for verification. Stanford employed a number of workarounds, described as “perceptual hash-based detection, cryptographic hash-based detection, and k-nearest neighbor analysis leveraging the image embeddings in the dataset itself.”

Hash sets provided by the National Center for Missing and Exploited Children were also used. Much of Stanford’s work was confirmed with third-party tools such as PhotoDNA and tested against a CSAM classifier from Thorn.
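Of those methods, cryptographic hash-based detection is the simplest to picture: compute a fixed digest of each file and check it against a vetted blocklist, so no human ever has to view the material. The Python sketch below is a rough illustration only, not Stanford’s actual pipeline; the folder path, the known_hashes.txt blocklist file and the choice of SHA-256 are assumptions for the example, and real deployments rely on access-controlled hash lists and perceptual-hashing systems such as PhotoDNA.

```python
import hashlib
from pathlib import Path


def sha256_of_file(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def flag_known_hashes(image_dir: Path, known_hashes: set[str]) -> list[Path]:
    """Return files whose digest appears in a vetted blocklist of known hashes."""
    return [
        path
        for path in image_dir.rglob("*")
        if path.is_file() and sha256_of_file(path) in known_hashes
    ]


if __name__ == "__main__":
    # Hypothetical inputs: a local image folder and a blocklist file holding one
    # lowercase hex digest per line, as distributed by a clearinghouse.
    with open("known_hashes.txt") as fh:
        blocklist = {line.strip().lower() for line in fh if line.strip()}
    matches = flag_known_hashes(Path("./dataset_images"), blocklist)
    print(f"{len(matches)} file(s) matched the blocklist; remove and report them.")
```

Exact cryptographic matching only catches byte-for-byte copies of already-known material, which is why the Stanford team also leaned on perceptual hashing and k-nearest neighbor analysis of image embeddings, techniques that can surface altered or related images.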
