
Images of Image Machines: Theory and Practice of Interpretable Machine Learning for the Digital Humanities

While text remains the most important research object in the digital humanities, over the past ten years images have gradually appeared on the radar of computational humanists. Recent developments in digital art history in particular have shown that the importance of images for DH research goes beyond ensuring their accessibility through databases and interfaces. In fact, images are where the digital humanities and artificial intelligence meet. Most importantly, the automated classification of images on the one hand, and the automated production of images on the other, raise a fundamental question at the interface of computer science and the humanities: how is reality represented in machine learning systems? The field of interpretable machine learning is concerned with opening this black box and answering this question.

This two-week workshop will serve as an introduction to the theory and practice of interpretable machine learning. The first week will introduce participants to the field by means of reading, discussing, and replicating foundational results in interpretable machine learning, with a particular focus on the fairness, accountability, and transparency (FAT) of machine learning systems. The second week will be dedicated to hands-on experimentation with image datasets in PyTorch, a popular machine learning framework. While the first week has no prerequisites, the second week requires basic programming skills, preferably in Python.
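To give a taste of the week-two material, the following is a minimal, self-contained sketch of one common interpretability technique, vanilla gradient saliency, in PyTorch. The toy model and the random tensor standing in for an image are illustrative placeholders, not the models or datasets used in the workshop.

```python
import torch
import torch.nn as nn

# A tiny stand-in convolutional classifier (hypothetical; the workshop
# sessions would work with real models and participant-built datasets).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# A random "image" batch standing in for a real dataset sample.
image = torch.rand(1, 3, 64, 64, requires_grad=True)

# Vanilla gradient saliency: backpropagate the score of the predicted
# class to the input pixels.
logits = model(image)
logits[0, logits.argmax()].backward()

# Collapse the channel dimension to get one importance value per pixel.
saliency = image.grad.abs().max(dim=1).values

print(saliency.shape)  # one 64x64 importance map per image in the batch
```

The resulting map highlights which input pixels most influence the model's top prediction; more robust variants of this idea (e.g. SmoothGrad or integrated gradients) build directly on the same forward/backward pattern.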


Readings/materials that are not directly linked here are provided via a shared Google Drive folder distributed to the participants via the workshop Moodle.

First Week

  • TUE 1a: Participant introductions, collection of interests/projects, introduction to the topic: the workshop in 60 minutes
  • TUE 2a: Introduction to tools, frameworks, notebooks, datasets / code
  • WED 3a: Readings and discussion: artificial intelligence and machine learning
  • Agre, Philip E. "The Soul Gained and Lost. Artificial Intelligence as a Philosophical Project." SEHR 4, no. 2, 1995.
    • Babbage, Charles. "On the Economy of Machinery and Manufactures". In: Babbage's Calculating Engines. Being a Collection of Papers Relating to Them; Their History, and Construction. Cambridge: Cambridge University Press, 2010.
    • Descartes, René. Discourse on Method, Part V. In: Philosophical Essays and Correspondence. Indianapolis, IN: Hackett Publishing, 2000.
  • Turing, Alan M. "Computing Machinery and Intelligence". Mind 59, no. 236, 1950.
  • WED 4a: Python for machine learning I / code
  • THU 5a: Readings and discussion: interpretable machine learning I
  • Jonas, Eric, and Kording, Konrad Paul. "Could a Neuroscientist Understand a Microprocessor?" PLoS Comput Biol 13, no. 1, 2017.
    • Kittler, Friedrich. "Protected Mode". In: Literature, Media, Information Systems. London: Routledge, 2012.
    • Burrell, Jenna. "How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms". Big Data & Society 3, no. 1, 2016.
    • "But What Is a Neural Network"
    • Selbst, Andrew D. and Barocas, Solon. "The Intuitive Appeal of Explainable Machines". Fordham Law Review 87, 2018.
  • Mittelstadt, Brent et al. "Explaining Explanations in AI". 2019 ACM Conference on Fairness, Accountability, and Transparency (FAT*).
  • THU Hands-on: Python for machine learning II / code
  • FRI 6a: Introduction to image datasets (scraping/cleaning) / code
  • FRI 7a: Building of participant datasets / code
  • SAT 8a: Exploring and understanding image datasets with machine learning / code
  • SAT 9a: Exploring and understanding image datasets with machine learning / code

Second Week

Tutorials/Further Resources

Please see this page.
