The lecture “Hermeneutics versus Artificial Intelligence,” organized by Cristina Vertan, Alicia Gonzalez Martinez, and Walther v. Hahn, brings together scholars working in digital cultural heritage to present current approaches to applying computational methods to old scripts, highlight their benefits and limitations, and discuss the paradigm shift underway in humanistic interpretation. The scholars outline how digitization campaigns over the past few years have made a large number of manuscripts available, and how the computational methods used to do so have cast these manuscripts in a new light by making them accessible for viewing. They go on to examine such methods, including artificial intelligence, showing how their use often requires rethinking and adapting established paradigms in computer science and embedding principles grounded in hermeneutics. The talk addresses a broad spectrum of scripts and languages, among them Arabic, classical Ethiopic, Coptic, Latin, classical Greek, and old German.
In the blog post “Preserving Pre-Modern Terminologies” (Kitab Project, 2020), Lorenz Niast discusses some of the obstacles that arise in categorization projects, particularly in the Kitab Project’s OpenITI-Kitab corpus. He suggests that these obstacles emerge when researchers attempt to classify pre-modern Islamicate texts by fitting information about them into existing metadata categories. Niast goes on to note that many of these texts were produced within intellectual fields that had their own distinctive terminologies and practices, and that often used terms and referred to activities with no equivalent in the contemporary world. After offering several examples drawn from the project, he concludes that categories are, on occasion, unable to capture differences they were not designed to accommodate.