

# Spring 2019 - EE 381V - Special Topics in Machine Learning (16725)


## Course Outline


### Prerequisites

At least one graduate course completed in Data Mining/Machine Learning. Online courses do not count.


### Scope

This is an advanced, seminar-oriented course. We will study recently published papers on the development of responsible and trustworthy data-driven automated decision systems. A solid background in pattern recognition/machine learning is assumed. Key topics include building explainable ML models, black-box explainability, algorithmic fairness, adversarial ML, robust statistical modeling, and privacy-aware data mining. Coursework consists mainly of paper presentations, critiques, and discussion; a mini coding-based project; and a major term project on developing some aspect of a responsible ML system.


### Instructors

- Instructor: Dr. Joydeep Ghosh
- TA: Diego Garcia-Olano

### Schedule and Format

Tuesday/Thursday, 12:30-2:00 PM, ECJ 1.312

- Overview: 2 classes
- Explainability (P): 7 classes
- Fairness (P): 6 classes
- Assurance (P): 5 classes
- Guest Speaker: 4 classes
- Minor project: 1 class (mid March)
- Major project: 3 classes (late April)

Topics marked (P) are student-led presentations. Each such class covers 2 papers, with 35 minutes per paper: 20 minutes for the lead group, 5 minutes for the critiquing group, and 10 minutes of discussion.

Every alternate class marked (P) will begin with a 5-minute quiz.


## Explainability

| Presentation Order | Type | Reading | Year | Venue |
|---|---|---|---|---|
| 1a | interpretable models | Distill-and-Compare: Auditing Black-Box Models Using Transparent Model Distillation | 2018 | AAAI |
| 1b | interpretable models | Learning qualitatively diverse and interpretable rules for classification | 2018 | arxiv |
| 2a | black-box explainability | Understanding Black-Box Predictions via Influence Functions | 2017 | ICML |
| 2b | black-box explainability | Anchors: High-Precision Model Agnostic Explanations | 2018 | AAAI |
| 3a | black-box explainability | Explanations based on the Missing: Towards Contrastive Explanations with Pertinent Negatives | 2018 | NIPS |
| 3b | black-box explainability | A Unified Approach to Interpreting Model Predictions | 2017 | NIPS |
| 4a | black-box explainability | Local Rule-Based Explanations of Black Box Decision Systems | 2018 | arxiv |
| 4b | neural network oriented | Deep Learning for Case-Based Reasoning through Prototypes: A Neural Network that Explains Its Predictions | 2018 | AAAI |
| 5a | neural network oriented | Layer-wise relevance propagation for neural networks with local renormalization layers | 2016 | ICANN |
| 5b | neural network oriented | Streaming Weak Submodularity: Interpreting Neural Networks on the Fly | 2017 | NIPS |
| 6a | neural network oriented | Rationalizing Neural Predictions | 2016 | EMNLP |
| 6b | recourse/causal | Actionable Recourse in Linear Classification | 2018 | arxiv |
| 7a | philosophy | The mythos of model interpretability | 2016 | ICML |
| 7b | philosophy | Towards a rigorous science of interpretable machine learning | 2017 | arxiv |

## Fairness

| Presentation Order | Type | Reading | Year | Venue |
|---|---|---|---|---|
| 8a | toolkit | AI Fairness 360 | 2018 | arxiv |
| 8b | bias detection | Fast Threshold Tests for Detecting Discrimination | 2018 | AISTATS |
| 9a | formalism | On formalizing fairness in prediction with machine learning | 2018 | ICML |
| 9b | pre-processing | Optimized Pre-Processing for Discrimination Prevention | 2017 | NIPS |
| 10a | in-processing | Mitigating Unwanted Biases with Adversarial Learning | 2018 | AAAI |
| 10b | in-processing | Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees | 2019 | ACM |
| 10b | history | 50 Years of Test (Un)fairness: Lessons for Machine Learning | 2019 | FATML |
| 11a | post-processing | Equality of Opportunity in Supervised Learning | 2016 | NIPS |
| 11b | post-processing | On Fairness and Calibration | 2017 | NIPS |
| 12a | causal | Causal Reasoning for Algorithmic Fairness | 2018 | arxiv |
| 12b | temporal | Delayed Impact of Fair Machine Learning | 2018 | ICML |
| 13a | ranking | Fairness of Exposure in Rankings | 2018 | KDD |
| 13b | ranking | Towards a Fair Marketplace: Counterfactual Evaluation of the trade-off between Relevance, Fairness & Satisfaction in Recommendation Systems | 2018 | ACM |
| 14a | | Controlling Polarization in Personalization: An Algorithmic Framework | 2019 | FATML |

## Assurance

| Presentation Order | Type | Reading | Year | Venue |
|---|---|---|---|---|
| 15a | data integrity | Datasheets for Datasets | 2018 | PMLR |
| 15b | data integrity | Mitigating poisoning attacks on machine learning models: A data provenance based approach | 2017 | ACM |
| 16a | overview | Making Machine Learning Robust Against Adversarial Inputs | 2018 | ACM |
| 16b | causality | Reliable Decision Support Using Counterfactual Models | 2017 | NIPS |
| 17a | tool | ART sec. 1-5 (Nicolae, Maria-Irina, et al., "Adversarial Robustness Toolbox v0.3.0") | 2018 | |
| 17b | tool | ART sec. 6-10 | 2018 | |
| 18a | DL specific | Towards Deep Learning Models Resistant to Adversarial Attacks | 2018 | ICLR |
| 18b | DL specific | Decomposition of uncertainty in Bayesian deep learning for efficient and risk-sensitive learning | 2018 | ICML |
| 19a | outliers | Generative Adversarial Active Learning for Unsupervised Outlier Detection | 2018 | arxiv |
| 19b | outliers | Theoretical Foundations and Algorithms for Outlier Ensembles | 2015 | SIGKDD |
| 20a | bias mitigation | Uncovering and Mitigating Algorithmic Bias through Learned Latent Structure | 2019 | AAAI |
| 20b | bias mitigation | Why is my classifier Discriminatory? | 2018 | NIPS |
| 21a | security/privacy | Model Reconstruction from Model Explanations | 2019 | FATML |
| 21b | software process/evaluation | Model Cards for Model Reporting | 2019 | FATML |