Human-centered Machine Learning (Spring 2020)
Instructor: Chenhao Tan (contact)
- Office hours: 2:15-3:00pm on Monday, 12:00-1:00pm on Wednesday, or by appointment (ECES 118A)
- Location and time: ECES 112, 1:00-2:15pm on Mondays and Wednesdays
- Syllabus (Must READ if you are taking the course)
Week 1: Introduction
- Jan 13, Introduction
- Jan 15, Ask not what AI can do, but what AI should do: Towards a Framework of Task Delegability. Brian Lubars and Chenhao Tan. NeurIPS 2019.
Week 2: You are not so Smart
- Jan 20, no class (Martin Luther King Jr. Day).
- Jan 22, Human Decisions and Machine Predictions. Jon Kleinberg, Himabindu Lakkaraju, Jure Leskovec, Jens Ludwig, Sendhil Mullainathan. Quarterly Journal of Economics, 2018.
- Judgment under Uncertainty: Heuristics and Biases. Amos Tversky and Daniel Kahneman, Science, 1974.
- Assessing Human Error Against a Benchmark of Perfection. Ashton Anderson, Jon Kleinberg, Sendhil Mullainathan. KDD 2016.
- Algorithm Aversion: People Erroneously Avoid Algorithms After Seeing Them Err. Berkeley J. Dietvorst, Joseph P. Simmons, and Cade Massey, Journal of Experimental Psychology: General, 2014.
- Podcast: You are not so smart
- The effect of wording on message propagation: Topic- and author-controlled natural experiments on Twitter. Chenhao Tan, Lillian Lee, Bo Pang. ACL 2014.
- Thinking, Fast and Slow. Daniel Kahneman. 2011.
Week 3: Machine-in-the-loop Interactions
- Jan 27, Principles of Mixed-Initiative User Interfaces. Eric Horvitz. In Proceedings of CHI, 1999.
- Jan 29, Towards A Rigorous Science of Interpretable Machine Learning. Finale Doshi-Velez and Been Kim.
- Guidelines for Human-AI Interaction. Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh, Shamsi Iqbal, Paul N. Bennett, Kori Inkpen, Jaime Teevan, Ruth Kikin-Gil, and Eric Horvitz. CHI 2019.
- Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda. Ashraf Abdul, Jo Vermeulen, Danding Wang, Brian Y. Lim, Mohan Kankanhalli. CHI 2018.
- Interpretable machine learning: definitions, methods, and applications. W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, Bin Yu. PNAS 2019.
- Natural Language Translation at the Intersection of AI and HCI. Spence Green, Jeffrey Heer, Christopher D. Manning. Queue, 2015.
- A Review of User Interface Design for Interactive Machine Learning. John J. Dudley and Per Ola Kristensson. ACM Transactions on Interactive Intelligent Systems. 2018.
- Beyond binary choices: Integrating individual and social creativity. Gerhard Fischer, Elisa Giaccardi, Hal Eden, Masanori Sugimoto, Yunwen Ye. International Journal of Human-Computer Studies, 2005.
- The Mythos of Model Interpretability, Zachary C. Lipton.
- A Human-Centered Agenda for Intelligible Machine Learning. Jennifer Wortman Vaughan and Hanna Wallach.
- Who is the “Human” in Human-Centered Machine Learning: The Case of Predicting Mental Health from Social Media. Stevie Chancellor, Eric P. S. Baumer, and Munmun De Choudhury. CSCW 2019.
- Human Centered Systems in the Perspective of Organizational and Social Informatics. Rob Kling and Leigh Star. 1997.
Replication playground/paper critique due on Jan 31; late submissions accepted up to one week after the deadline.
Week 4: Feature-based Explanations
- Feb 3, Why should I trust you?: Explaining the Predictions of Any Classifier. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. KDD 2016.
- Feb 5, Attention is not not Explanation. Sarah Wiegreffe, Yuval Pinter. EMNLP 2019.
- A Unified Approach to Interpreting Model Predictions. Scott Lundberg, Su-In Lee. NeurIPS 2017.
- Attention is not Explanation. Sarthak Jain, Byron C. Wallace. NAACL 2019.
- Is Attention Interpretable? Sofia Serrano, Noah A. Smith. ACL 2019.
- Many Faces of Feature Importance: Comparing Built-in and Post-hoc Feature Importance in Text Classification. Vivian Lai, Jon Z. Cai, Chenhao Tan. EMNLP 2019.
- DeepXplore: Automated Whitebox Testing of Deep Learning Systems. Kexin Pei, Yinzhi Cao, Junfeng Yang, Suman Jana. In Proceedings of SOSP, 2017.
- Rationalizing Neural Predictions, Tao Lei, Regina Barzilay and Tommi Jaakkola. In Proceedings of EMNLP, 2016.
- Learning Explanatory Rules from Noisy Data. Richard Evans, Edward Grefenstette. Journal of Artificial Intelligence Research, 2018.
- Network Dissection: Quantifying Interpretability of Deep Visual Representations. David Bau, Bolei Zhou, Aditya Khosla, Aude Oliva, Antonio Torralba. In Proceedings of CVPR 2017.
First proposal due on Feb 7
Week 5: First Proposal
- Feb 10, presentation & discussion
- Feb 12, presentation & discussion
First proposal peer feedback due on Feb 14.
Week 6: Example-based explanations
- Feb 17, Examples are not Enough, Learn to Criticize! Criticism for Interpretability. Been Kim, Rajiv Khanna, Oluwasanmi Koyejo. NeurIPS 2016.
- Feb 19, Deep Weighted Averaging Classifiers. Dallas Card, Michael Zhang, Noah A. Smith.
- Case-based explanation of non-case-based learning methods. Rich Caruana, Hooshang Kangarloo, John David N. Dionisio, Usha Sinha, David Johnson. AMIA 1999.
- How Case Based Reasoning Explained Neural Networks: An XAI Survey of Post-Hoc Explanation-by-Example in ANN-CBR Twins. Mark T Keane, Eoin M Kenny. ICCBR 2019.
- Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning. Nicolas Papernot, Patrick McDaniel.
- Interactive and Interpretable Machine Learning Models for Human Machine Collaboration, Been Kim, PhD thesis.
Second proposal due on Feb 21.
Week 7: Second proposal
- Feb 24, presentation & discussion
- Feb 26, presentation & discussion
Second proposal peer feedback due on Feb 28.
Week 8: Counterfactual explanations
- Mar 2, Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Sandra Wachter, Brent Mittelstadt, Chris Russell.
- Mar 4, Explaining Machine Learning Classifiers through Diverse Counterfactual Explanations. Ramaravind Kommiya Mothilal, Amit Sharma, Chenhao Tan. FAT* 2020.
- The Hidden Assumptions Behind Counterfactual Explanations and Principal Reasons. Solon Barocas, Andrew D. Selbst, Manish Raghavan. FAT* 2020.
- Efficient Search for Diverse Coherent Explanations. Chris Russell. FAT* 2019.
Team formation due on Mar 7.
Week 9: Adversarial attacks
- Mar 9, Universal Adversarial Triggers for Attacking and Analyzing NLP. Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, Sameer Singh. EMNLP 2019.
- Mar 11, Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods. Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, Himabindu Lakkaraju. AIES 2020.
- Misleading Failures of Partial-input Baselines. Eric Wallace, Shi Feng, and Jordan Boyd-Graber. ACL 2019.
Week 10: Human-AI interaction --- Decision Making
- Mar 16, On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection. Vivian Lai, Chenhao Tan. FAT* 2019.
- Mar 18, The Principles and Limits of Algorithm-in-the-Loop Decision Making. Ben Green, Yiling Chen. CSCW 2019.
- Prediction Policy Problems. Jon Kleinberg, Jens Ludwig, Sendhil Mullainathan, Ziad Obermeyer. American Economic Review, 2015.
- The Tyranny of Data? The Bright and Dark Sides of Data-Driven Decision-Making for Social Good. Bruno Lepri, Jacopo Staiano, David Sangokoya, Emmanuel Letouzé, and Nuria Oliver. Transparent Data Mining for Big and Small Data, 2017.
- Predicting the knowledge–recklessness distinction in the human brain. Iris Vilares, Michael J. Wesley, Woo-Young Ahn, Richard J. Bonnie, Morris Hoffman, Owen D. Jones, Stephen J. Morse, Gideon Yaffe, Terry Lohrenz, and P. Read Montague. PNAS, 2016.
Week 11: Spring break
Week 12: Human-AI interaction --- Creative writing
- Mar 30, Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories. In Proceedings of IUI, 2017.
- Apr 1, Creative Help: A Story Writing Assistant. Melissa Roemmele, Andrew S. Gordon. In Proceedings of ICIDS, 2015.
- Inside Jokes: Identifying Humorous Cartoon Captions. Dafna Shahaf, Eric Horvitz, Robert Mankoff. In Proceedings of KDD, 2015.
- Hafez: an Interactive Poetry Generation System. Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. In Proceedings of ACL, 2017 (Demo Track).
Week 13: Project midpoint presentation
- Apr 6, free time to work on projects
- Apr 8, presentation
Project peer feedback due on Mar 23.
Week 14: Human-AI interaction --- Trust
- Apr 13, Trust in automation: Designing for appropriate reliance. John Lee and Katrina See. Human factors, 2004.
- Apr 15, Understanding the Effect of Accuracy on Trust in Machine Learning Models. Ming Yin, Jennifer Wortman Vaughan, and Hanna Wallach. CHI 2019.
- Let Me Explain: Impact of Personal and Impersonal Explanations on Trust in Recommender Systems. Johannes Kunkel, Tim Donkers, Lisa Michael, Catalin-Mihai Barbu, and Jürgen Ziegler. CHI 2019.
Week 15: Fairness, Accountability, and Transparency
- Apr 20, European Union regulations on algorithmic decision-making and a "right to explanation". Bryce Goodman and Seth Flaxman.
- Apr 22, Roles for Computing in Social Change. Rediet Abebe, Solon Barocas, Jon Kleinberg, Karen Levy, Manish Raghavan, David G. Robinson. FAT* 2020.
- Equality of opportunity in supervised learning. Moritz Hardt, Eric Price, Nathan Srebro. NeurIPS 2016.
- Inherent Trade-Offs in the Fair Determination of Risk Scores. Jon Kleinberg, Sendhil Mullainathan, Manish Raghavan. ITCS 2017.
- Algorithmic decision making and the cost of fairness. Sam Corbett-Davies, Emma Pierson, Avi Feller, Sharad Goel, Aziz Huq. KDD 2017.
- Fairness and Abstraction in Sociotechnical Systems. Andrew D. Selbst, danah boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, Janet Vertesi. FAT* 2019.
Week 16: Final Project Presentation
- TBD, presentations (could be different from normal class time).
- TBD, presentations (could be different from normal class time).
Final project report due on May 1.