Artificial intelligence (AI) use cases now reach far back into the past: machine learning is being used to detect the presence of ink on ancient papyrus scrolls, helping researchers read and translate antique texts.

The devastating eruption of Mount Vesuvius on August 24, 79 A.D., completely destroyed not only Pompeii but also the smaller neighboring city of Herculaneum. The city was buried under a 65-foot-thick volcanic layer, which solidified into a dense mass of tuff as it cooled.

Beneath this layer, Herculaneum lay practically sealed off from the air, including the Villa dei Papiri, home to an extensive library of about 1,800 papyrus scrolls that were carbonized during the eruption.

The texts on the charred scrolls seemed lost forever, until scientists at the University of Kentucky managed to inspect some of the scrolls more closely using a particle accelerator, X-ray tomography, and AI.

The result was high-resolution CT scans on which the writing still remained indecipherable, so scientists launched “The Vesuvius Challenge,” with two companies from the tech industry offering a total of $1 million in prize money for new findings in deciphering the papyri.

About 100 teams are currently dedicated to the task of making the ancient scrolls legible again.

A partial success has already been achieved thanks to AI: Luke Farritor, a 21-year-old computer science student at the University of Nebraska, has been the most successful so far.

Building on results from other tinkerers, he managed to refine an algorithm to the point where previously invisible ancient Greek letters, and even an entire word, became legible on the surface of a papyrus.

The word “Πορφυρας” (porphyras), which translates as “purple” or “purple clothing”, is the first one that can now be read on one of the scrolls.

While there is still a long way to go before the entire collection of papyri can be deciphered, the use of AI has helped researchers reach a milestone.

“It’s a little bit of a misnomer for people to read headlines about how AI was reading anything,” says Dr. Brent Seales, Director of Graduate Studies, Data Science at the Stanley and Karen Pigman College of Engineering, University of Kentucky.

He explains that the AI was used to amplify evidence of the ink on the scrolls through the process of tomography, or imaging by sections.

“The AI is not using a large language model—it’s amplifying evidence of ink, not of letters,” Seales says. “It’s actually just learning what the ink looks like in the tomography by learning labels of evidence where you can faintly see with the naked eye or that is even invisible to the naked eye.”

With enough training, in which labels are matched with evidence of ink, the AI starts to learn the subtle gap between what humans can see and where there is actually evidence.
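The supervised setup Seales describes can be sketched in miniature: a classifier is trained on labeled patches of scan data, where the "ink" class carries a signal too faint for the eye to separate from noise, and the model learns to amplify it anyway. This is a hypothetical illustration with synthetic data standing in for real CT patches; none of the names or numbers come from the project's actual code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for 7x7 tomography patches: ink adds only a faint,
# uniform intensity shift, far below the per-voxel noise level.
def make_patches(n, ink):
    noise = rng.normal(0.0, 1.0, size=(n, 7 * 7))
    return noise + (0.25 if ink else 0.0)

# Training set: patches paired with human-supplied "ink"/"no ink" labels.
X = np.vstack([make_patches(500, ink=False), make_patches(500, ink=True)])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression(max_iter=1000).fit(X, y)

# On fresh patches, pooling the weak evidence across all 49 voxels
# recovers a signal no single voxel reveals on its own.
X_test = np.vstack([make_patches(200, ink=False), make_patches(200, ink=True)])
y_test = np.array([0] * 200 + [1] * 200)
acc = clf.score(X_test, y_test)
```

The design point is the same one Seales makes: the model is not reading letters, only learning what ink-coated material looks like, voxel by voxel, and aggregating evidence that is individually invisible.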

“Most people were so focused on large language models that they think what we’re doing is learning ancient Greek—that’s not happening,” Seales says. “The AI that’s being applied is really just looking for a change in what the papyrus looks like because it’s coated with something.”

What the AI found were traces of ink, which students and researchers then analyzed to work out what had been written, in this case in ancient Greek, opening the way to read the scroll and then translate it.

“AI makes invisible ink visible, is the way I would put it,” he says.

From Seales’ perspective, the broader approach to imaging is wrong, because most imaging pipelines, including tomography, are geared toward what people can see.

“We tune all of the parameters to try to optimize what’s visible to the naked eye,” he says. “We need to be optimizing what’s visible for the machine learning to be able to amplify the signal. Once the AI learns what evidence it needs to be able to amplify – that’s really where you want to be.”

In this way, the algorithm is applied much as it is in health care. Seales explains that AI can also be used to spot abnormalities in radiology scans that the radiologist might not detect.

“The AI has learned to look for these kinds of subtle pieces of information and can learn to identify regions where there actually is subtle evidence of the ink,” he says. “That is the gap we’re exploiting with the AI capabilities.”