Journalistic coverage, process and technological approaches to inclusive analytical practices, sociotechnical theory, and recommended readings.
Please submit pull requests! Human forces keep this up-to-date.
<iframe src="https://docs.google.com/presentation/d/e/2PACX-1vQqfx17JaE4TvpgfCqeSen456NBMu8LIvSOJeXNoc-3DrMqq4EDI_h5p3mBPn7J9ECqT5QCZxfhNenN/embed?start=false&loop=false&delayms=3000" frameborder="0" width="480" height="299" allowfullscreen="true" mozallowfullscreen="true" webkitallowfullscreen="true"></iframe>

_Clare Corthell, July 17, 2018_

- COMPAS and Machine Bias ProPublica 2016
- “Gaydar” algorithm and junk science Blaise Agüera y Arcas 2017
- Discrimination in Online Ad Delivery Latanya Sweeney 2013
- Reinforcement of Racism in Dating Apps NPR 2018
- Amazon Delivery Redlining Bloomberg 2016
- Bias detectives: the researchers striving to make algorithms fair Nature 2018
- Amazon Employees demand cancellation of facial recognition tool sales 2018
- The Coded Gaze Joy Buolamwini, MIT Media Lab
- Trump's catch-and-detain policy snares many who call the U.S. home -- "ICE modified a tool officers have been using since 2013 when deciding whether an immigrant should be detained or released on bond. The computer-based Risk Classification Assessment uses statistics to determine an immigrant’s flight risk and danger to society. Previously, the tool automatically recommended either “detain” or “release.” Last year, ICE spokesman Bourke said, the agency removed the “release” recommendation, but he noted that ICE personnel can override it.”
- AI Now Institute
- Data & Society
- Data Justice Lab
- FAT* Conference (formerly FATML)
- ICML workshops (historically): #Data4Good, Fairness in Machine Learning
- See also the comprehensive FAT* Index
- CS 294: Fairness in Machine Learning UC Berkeley, Instructor Moritz Hardt
- The Trouble with Bias / video Kate Crawford 2018
- TED: Machine Intelligence Makes Human Morals More Important Zeynep Tufekci
- Social and Political Questions Kate Crawford
- Artificial Intelligence’s White Guy Problem Kate Crawford, NYTimes 2016
- Ethics Will Shape the Customer Experience of AI Susan Etlinger
- What is the Problem to Which Fair Machine Learning is the Solution? video Solon Barocas 2017
- NYAI #20: Ethical Algorithms - Bias and Explainability in Machine Learning Systems Kathryn Hume
- Unmasking A.I.'s Bias Problem Fortune 2018
- Big Data's Disparate Impact Barocas & Selbst
- Weapons of Math Destruction Cathy O’Neil
- Automating Inequality Virginia Eubanks
- Frankenstein: Annotated for Scientists, Engineers, and Creators of All Kinds Mary Shelley
- Wind, Sand, and Stars Antoine de Saint-Exupéry
- Cathy O'Neil: Do Algorithms Perpetuate Human Bias? NPR January 26, 2018
- Episode 74: How to Avoid Bias in Your Machine Learning Models with Clare Corthell The Impact Podcast, Georgian Partners 2018
- Ep 43: Is there Bias in Machine Learning Algorithms? – with guest Dr. Joshua Kroll SparkDialog with Elizabeth Fernandez, May 1 2018
- Quantifying and Reducing Stereotypes in Word Embeddings Bolukbasi et al 2016
- Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings Bolukbasi et al 2016
- Fairness Through Awareness Dwork et al 2011
- Fairness in Machine Learning / NIPS 2017 Tutorial, Solon Barocas & Moritz Hardt 2017
- Equality of Opportunity in Supervised Learning Hardt et al 2016
- Learning Fair Representations Zemel et al 2013
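The criteria these papers formalize can be made concrete in a few lines. As one example, the equality-of-opportunity criterion of Hardt et al. asks that the true positive rate be equal across groups; a minimal sketch of measuring that gap is below (all names and the toy data are illustrative, not from any paper's code):

```python
# Sketch: measure the equality-of-opportunity gap, i.e. the spread in
# true positive rates across groups (0.0 means equal TPRs, per Hardt et al. 2016).
# Variable names and data are illustrative assumptions, not from any paper.

def true_positive_rate(y_true, y_pred, group, g):
    """TPR among positive-labeled examples belonging to group g."""
    hits = total = 0
    for yt, yp, gr in zip(y_true, y_pred, group):
        if gr == g and yt == 1:
            total += 1
            hits += (yp == 1)
    return hits / total if total else 0.0

def opportunity_gap(y_true, y_pred, group):
    """Max difference in TPR between any two groups."""
    tprs = [true_positive_rate(y_true, y_pred, group, g) for g in set(group)]
    return max(tprs) - min(tprs)

# Toy data: group "a" positives are all caught (TPR 1.0),
# group "b" positives are caught half the time (TPR 0.5).
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
group  = ["a", "a", "b", "b", "a", "b"]
print(opportunity_gap(y_true, y_pred, group))  # 0.5
```

The same scaffolding works for other criteria in the list: swap the per-group statistic (e.g. positive prediction rate for demographic parity) while keeping the gap computation.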
- Kate Crawford, Co-founder of the AI Now Institute
- Cathy O'Neil, Author of Weapons of Math Destruction
- Moritz Hardt, Co-founder and co-organizer of FATML
- Cynthia Dwork, Gordon McKay Professor of Computer Science at Harvard, cryptography expert, co-inventor of differential privacy, and researcher on fairness in classification
- Kathryn Hume, Integrate AI Product Strategy and Speaker on history and philosophy of AI
- Virginia Eubanks, Author of Automating Inequality
- Joy Buolamwini of MIT Media Lab & Founder of the Algorithmic Justice League
- Susan Etlinger, Industry Expert at Altimeter Group
- Solon Barocas, Assistant Professor in the Department of Information Science at Cornell University & Co-founder of FATML (I suggest following his website's speaking calendar!)
- Richard Zemel, Co-Founder and Director of Research, Vector Institute for Artificial Intelligence
- Timnit Gebru, Fairness, Accountability, Transparency, and Ethics (FATE) group at Microsoft Research, Co-founder of Black in AI
- Clare Corthell, Industry Speaker on building more inclusive digital products
Submit a Pull Request! I'm reachable at clare at luminantdata.com