Images of Image Machines. Theory and Practice of Interpretable Machine Learning for the Digital Humanities
While text remains the most important research object in the digital humanities, over the past ten years images have gradually appeared on the radar of computational humanists as well. Recent developments in digital art history in particular have shown that the importance of images for DH research goes beyond ensuring their accessibility through databases and interfaces. In fact, images are where the digital humanities and artificial intelligence meet. Most importantly, the automated classification of images on the one hand, and the automated production of images on the other, raise a fundamental question at the interface of computer science and the humanities: how is reality represented in machine learning systems? The field of interpretable machine learning is concerned with opening this black box and answering exactly this question.
This two-week workshop will serve as an introduction to the theory and practice of interpretable machine learning. The first week will introduce participants to the field by means of reading, discussing, and replicating foundational results in interpretable machine learning, with a particular focus on the fairness, accountability, and transparency (FAT) of machine learning systems. The second week will be dedicated to hands-on experimentation with image datasets in PyTorch, a popular machine learning framework. While the first week has no prerequisites, the second week requires basic programming skills, preferably in Python.
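To give a first taste of the second week's hands-on work, a single PyTorch training step might look as follows. This is only an illustrative sketch: the tiny model, the random images standing in for a real dataset, and the class count are placeholders, not the actual workshop materials.

```python
import torch
from torch import nn

# A deliberately tiny convolutional classifier, just to preview the
# kind of code written in week two. All shapes are illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),  # RGB in, 8 feature maps out
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                    # global average pooling
    nn.Flatten(),
    nn.Linear(8, 10),                           # 10 hypothetical image classes
)

# One training step on a random batch standing in for real images.
images = torch.randn(4, 3, 64, 64)   # batch of 4 RGB images, 64x64 pixels
labels = torch.randint(0, 10, (4,))  # random class labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

logits = model(images)                              # forward pass
loss = nn.functional.cross_entropy(logits, labels)  # classification loss
loss.backward()                                     # backward pass
optimizer.step()                                    # parameter update
print(loss.item())
```

In the workshop itself, the random tensors would of course be replaced by the participant-built image datasets from the first week.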
Readings and materials that are not directly linked here are provided in a shared Google Drive folder, announced to participants on the workshop Moodle.
- TUE 1a: Participant introductions, collection of interests/projects, introduction to the topic: the workshop in 60 minutes
- TUE 2a: Introduction to tools, frameworks, notebooks, datasets / code
- WED 3a: Readings and discussion: artificial intelligence and machine learning
- Agre, Philip E. "The Soul Gained and Lost: Artificial Intelligence as a Philosophical Project." SEHR 4, no. 2, 1995.
- Babbage, Charles. "On the Economy of Machinery and Manufactures". In: Babbage's Calculating Engines. Being a Collection of Papers Relating to Them; Their History, and Construction. Cambridge: Cambridge University Press, 2010.
- Descartes, René. Discourse on Method, Part V. In: Philosophical Essays and Correspondence. Indianapolis, IN: Hackett Publishing, 2000.
- Turing, Alan M. "Computing Machinery and Intelligence." Mind 59, no. 236, 1950.
- WED 4a: Python for machine learning I / code
- THU 5a: Readings and discussion: interpretable machine learning I
- Jonas, Eric, and Kording, Konrad Paul. "Could a Neuroscientist Understand a Microprocessor?" PLoS Computational Biology 13, no. 1, 2017.
- Kittler, Friedrich. "Protected Mode". In: Literature, Media, Information Systems. London: Routledge, 2012.
- Burrell, Jenna. "How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms". Big Data & Society 3, no. 1, 2016.
- 3Blue1Brown. "But What Is a Neural Network?" Video, 2017.
- Selbst, Andrew D. and Barocas, Solon. "The Intuitive Appeal of Explainable Machines". Fordham Law Review 87, 2018.
- Mittelstadt, Brent, et al. "Explaining Explanations in AI." In: Proceedings of the 2019 ACM Conference on Fairness, Accountability, and Transparency (FAT*), 2019.
- THU hands-on: Python for machine learning II / code
- FRI 6a: Introduction to image datasets (scraping/cleaning) / code
- FRI 7a: Building of participant datasets / code
- SAT 8a: Exploring and understanding image datasets with machine learning / code
- SAT 9a: Exploring and understanding image datasets with machine learning / code
- MON 1b: Recap first week, scraping images with metadata
- MON 2b: Building Blocks of a Machine Learning System / code
- Kurenkov, Andrey. A 'Brief' History of Neural Nets and Deep Learning, 2015.
- TUE 3b: Readings and discussion: fairness, accountability, and transparency
- Crawford, Kate, et al. Anatomy of an AI System, 2018.
- Crawford, Kate. "The Trouble with Bias." Keynote at NIPS 2017, Long Beach, California, USA, 2017.
- Speer, Robyn. How to Make a Racist AI Without Really Trying, 2017.
- Murphy, Heather. "Why Stanford Researchers Tried to Create a 'Gaydar' Machine." New York Times, 2017.
- Mattson, Greggor. Artificial Intelligence Discovers Gayface. Sigh, 2017.
- TUE 4b: Applied face detection / code
- WED 5b: Feature visualization / code
- WED hands-on: Project work
- THU 6b: Applied face recognition / code
- THU 7b: Project work
- FRI 8b: Generative adversarial networks
- FRI 9b: Conclusion
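As a preview of the feature visualization session (WED 5b), the core idea can be sketched in a few lines of PyTorch: starting from noise, the input image is optimized by gradient ascent so that one chosen channel of a convolutional layer responds strongly. The network below is randomly initialized purely to keep the sketch self-contained; in the session itself a pretrained model would be visualized instead, and the channel index here is an arbitrary choice.

```python
import torch
from torch import nn

torch.manual_seed(0)  # fixed seed for reproducibility of the sketch

# A small random convolutional feature extractor, standing in for a
# pretrained network such as one from torchvision.models.
features = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)

image = torch.randn(1, 3, 64, 64, requires_grad=True)  # start from noise
optimizer = torch.optim.Adam([image], lr=0.05)
channel = 7  # arbitrary channel to visualize

history = []
for step in range(50):
    optimizer.zero_grad()
    # Mean activation of the chosen channel over all spatial positions.
    activation = features(image)[0, channel].mean()
    history.append(activation.item())
    (-activation).backward()  # minimize the negative = gradient ascent
    optimizer.step()

print(history[0], history[-1])  # the activation typically rises over the run
```

After optimization, `image` is the pattern this channel "looks for"; with a trained network, such images reveal the edge, texture, and object detectors that the readings on opacity and explanation discuss in the abstract.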