Two computer science students, working as part of an international effort, have used artificial intelligence to discern words hidden in the historic Herculaneum scrolls. These papyri were carbonized by the volcanic eruption of 79 AD, buried under layers of dense volcanic mud, and have remained unreadable for centuries. A competition launched by former GitHub CEO Nat Friedman motivated AI developers to create software that could help read the delicate documents without unrolling them.
The researchers, including students Luke Farritor and Youssef Nader, worked with 3D CT scans of the scroll fragments produced by a team at the University of Kentucky. Using these scans, which achieved an impressive resolution of four micrometers, they trained cutting-edge AI models to decipher text in the charred ancient scrolls, leading to the eventual discovery of new words.
The Result of a Worldwide Competition
Within the competition, one contestant noticed crack-like patterns in the scans, which led to the identification of the first Greek letters. This in turn spurred others to hunt for more patterns in an attempt to decode entire words. Farritor and Nader were subsequently awarded prizes for successfully training AI models on the scans. The models deciphered not only the ancient Greek word meaning "purple" but also other words that translate to "achieving" and "similar".
In his most recent attempts, Youssef Nader has uncovered partial text from four and a half columns of the document. Although the text is not yet entirely legible, the effort is ongoing, and the grand prize of $700,000 remains within reach.
Using AI for a Greater Purpose
While the world marvels at the recovery of ancient texts preserved beneath the layers of time, another significant development is unfolding in the United States. The U.S. Space Force recently suspended internal use of generative AI tools over data privacy concerns. An internal memo sent to military personnel, referred to as "Guardians", ordered a temporary halt to the use of applications like ChatGPT until they receive approval from the Space Force's Chief Technology and Innovation Office. The decision reflects concern about data aggregation risks and the need to prevent leaks of sensitive information.
To harness the power of AI tools responsibly while safeguarding national interests, the U.S. Space Force has joined an AI task force alongside other U.S. Department of Defense agencies to examine potential security risks and develop industry-standard practices.