Ethical Issues Surrounding Artificial Intelligence Systems and Big Data

Course description

This course will consider some large questions surrounding ethics in artificial intelligence systems. These questions include:

  • Do computers make decisions in a way that is more fair and less biased than people do?

  • What are the political, legal, social, economic and technological forces that govern the digital world? Which forces are in opposition? Which forces are aligned?

  • What role does the government currently play in directing AI technology in the United States? What role could the government play? What are the advantages and disadvantages of different approaches?

Readings and videos listed under each week are to be done before that week's class.

Reading response guidelines and a sample response can be found here.

Additional resources for selected weeks can be found in the worksheets and slides folders.

Week 1: Inspiration

First and last day reflection

Week 2: Machine Learning Foundations

Machine Learning and Human Bias. Google (video).

AI can be sexist and racist — it’s time to make it fair. Zou and Schiebinger. Nature.

Introduction, Weapons of Math Destruction. Cathy O'Neil. Check Moodle for the week's reading.

Week 3: Ethical Intuitions

Advances in AI are used to spot signs of sexuality. The Economist.

Why Stanford Researchers Tried to Create a ‘Gaydar’ Machine. New York Times.

Challenge: Authors' Note, Kosinski and Wang

Week 4: Ethical Foundations

Social Philosophy: Marx, Rawls and Nozick. Donald Palmer. Does the Center Hold?

Challenge: Stanford Encyclopedia of Philosophy: Philosophy of Technology

Week 5: Fairness

Machine Bias. Angwin, Larson, Mattu and Kirchner. ProPublica.

A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear. Corbett-Davies, Pierson, Feller, and Goel. The Washington Post.

Challenge: Even Imperfect Algorithms Can Improve the Criminal Justice System. Corbett-Davies, Goel, and Gonzalez-Bailon. The New York Times.

Week 6: Fairness

Semantics derived automatically from language corpora contain human-like biases. Caliskan, Bryson, and Narayanan. Science.

How Vector Space Mathematics Reveals the Hidden Sexism in Language. MIT Technology Review.

Challenge: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Bolukbasi, Chang, Zou, Saligrama, and Kalai. NeurIPS 2016.

Week 7: Fairness

'Three black teenagers' Google search sparks outrage. Guynn. USA Today.

Racism is Poisoning Online Ad Delivery, Says Harvard Professor. MIT Technology Review.

Facebook Lets Advertisers Exclude Users by Race. Angwin and Parris. ProPublica.

Week 8: Privacy

Amazon’s facial recognition matched 28 members of Congress to criminal mugshots. Levin. The Guardian.

Want to Predict the Future of Surveillance? Ask Poor Communities. Eubanks. The American Prospect.

Challenge: Face Off: Law Enforcement Use Of Face Recognition Technology. Lynch. Electronic Frontier Foundation.

Week 9: Privacy

Amazon Echo and the Hot Tub Murder. Dotan and Albergotti. The Information.

How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did. Hill. Forbes.

Congressional Republicans just voted to let ISPs sell your browsing history to advertisers. Lee. Vox.

Challenge: Simple Demographics Often Identify People Uniquely. Sweeney. Carnegie Mellon University Data Privacy Working Paper.

Preview of reading (in class): TapPrints: Your Finger Taps Have Fingerprints. Miluzzo, Varshavsky, Balakrishnan, Choudhury. MobiSys.

Week 10: Automated Decision-Making and Interpretability

The Dark Secret at the Heart of AI. Knight. MIT Technology Review.

When Is It Important for an Algorithm to Explain Itself? Hume. Harvard Business Review.

Week 11: Accountability and Regulation

The next big battle over internet freedom is here. Stewart. Vox.

What the government could actually do about Facebook. Stewart. Vox.

Week 12: Diversity

The Tech Industry’s Gender-Discrimination Problem. Kolhatkar. The New Yorker.

The Real Reason Women Quit Tech (and How to Address It). Thomas. Medium.

We tested bots like Siri and Alexa to see who would stand up to sexual harassment. Fessler. Quartz.

Additional readings that touch on issues other than gender:

Why Tech Leadership Has a Bigger Race than Gender Problem. Tiku. Wired.

Why Is Science So Straight? Suri. The New York Times.

STEM is losing male LGBQ undergrads. Langin. Science.

When disability tech is just a marketing exercise. Eveleth. The Outline.

Why the tech industry needs people with disabilities — and vice versa. Bedford. The Ground Truth Project.

Week 13: AI for Social Good and Reflection

Final presentation instructions

Grading

Your grade will be based equally on the following:

  • 50% Online reading responses. Each week you will write a response to the readings and post it on Moodle. You must submit your response before class to receive credit for that week.
  • 50% Participation during class discussions. This class will have weekly in-class discussions. Different students may participate in different ways: for instance, by speaking in large groups, speaking in small groups, or listening carefully to others.

The following guidelines will help create a comfortable and productive learning environment throughout the semester.

You can expect us:

  • To start and end class on time
  • To reply to emails within 24 hours on weekdays and 48 hours on weekends
  • To assign readings and class activities that will foster engaging discussion

We can expect you:

  • To come to class on time
  • To come with an open mind, since we will be discussing complex issues
  • To assume the best of others' intentions in discussion
  • To be respectful of others' viewpoints
  • As much as possible, to back up comments with evidence, argumentation, or recognition of inherent tradeoffs