Despite their remarkable performance on a wide range of visual tasks, machine learning technologies often succumb to data distribution shifts. Consequently, a range of recent work explores techniques for detecting these shifts. Unfortunately, current techniques offer no explanations about what triggers the detection of shifts, thus limiting their utility to provide actionable insights. In this work, we present Concept Bottleneck Shift Detection (CBSD): a novel explainable shift detection method. CBSD provides explanations by identifying and ranking the degree to which high-level human-understandable concepts are affected by shifts. Using two case studies (dSprites and 3dshapes), we demonstrate how CBSD can accurately detect underlying concepts that are affected by shifts and achieve higher detection accuracy compared to state-of-the-art shift detection methods.
This repository contains the source code of the system and the experimentation results.
Note:
- Core related work includes Lipton, Wang, and Smola (2020); Koh et al. (2020); Rabanser, Günnemann, and Lipton (2019); and Kazhdan et al. (2020).
Our shift detection pipeline comprises four steps:
- The source and target data are fed to a dimensionality reduction process.
- The reduced representations are analysed using two-sample hypothesis testing, producing p-values and test statistics.
- The resulting p-values and test statistics are used to determine whether a shift exists. We conclude that a shift exists when there is a statistically significant difference in distribution between the source and target data.
- CBSD provides explanations, identifying and ranking the degree to which each human-understandable concept is affected by the shift.
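The steps above can be sketched as follows. This is an illustrative minimal sketch, not the repository's implementation: the function name, the choice of a per-dimension Kolmogorov–Smirnov test, and the Bonferroni correction are assumptions for demonstration purposes.

```python
import numpy as np
from scipy import stats

def detect_shift(source_repr, target_repr, alpha=0.05):
    """Two-sample shift detection over reduced representations.

    source_repr, target_repr: arrays of shape (n_samples, n_dims),
    e.g. concept predictions from a concept bottleneck model.
    Runs a Kolmogorov-Smirnov test per dimension, flags a shift via a
    Bonferroni-corrected threshold, and ranks dimensions (concepts)
    by how strongly they are affected.
    """
    n_dims = source_repr.shape[1]
    results = []
    for d in range(n_dims):
        stat, p = stats.ks_2samp(source_repr[:, d], target_repr[:, d])
        results.append((d, stat, p))
    # A shift is flagged if any dimension's p-value falls below the
    # Bonferroni-corrected significance level.
    shift_detected = any(p < alpha / n_dims for _, _, p in results)
    # Rank dimensions by test statistic: the explanation step.
    ranking = sorted(results, key=lambda r: r[1], reverse=True)
    return shift_detected, ranking

# Toy usage: the second dimension of the target data is shifted.
rng = np.random.default_rng(0)
source = rng.normal(size=(500, 3))
target = rng.normal(size=(500, 3))
target[:, 1] += 2.0  # simulate a shift in one concept dimension
detected, ranking = detect_shift(source, target)
```

Here `detected` is `True` and the shifted dimension ranks first, mirroring how CBSD surfaces the concepts most affected by a shift.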
- Python 3.6+
- numpy
- pandas
- matplotlib
- jupyter
- scipy
- scikit-learn
- TensorFlow 2
- torch
- h5py
- plotly
- `data`: contains datasets used for experimentation.
- `experiments`: contains notebooks that we used to run experiments and visualise results.
  - `dSprites`: experimentation code to generate data and visualise results of dSprites.
  - `smallnorb`: experimentation code to generate data and visualise results of smallnorb.
  - `3dshapes`: experimentation code to generate data and visualise results of 3dshapes.
- `results`: contains pickled experimentation results.
- `models`: contains pretrained neural network models.
- `scripts`: the source code and utility functions, written fully in Python.
git clone https://github.com/maleakhiw/explaining-dataset-shifts.git
cd explaining-dataset-shifts
pip install -r requirements.txt
The source code to apply shifts, build dimensionality reductors, conduct statistical tests, and run experimentation utilities is located inside `scripts`. Using this source code, we ran our experiments with the notebooks (data-collection) located inside `experiments`. To replicate our experiments, please install the requirements and run the notebooks. Alternatively, you can create a new script yourself and import the source code.
All experimentation data have been pickled and stored inside `results`. All pretrained models, including the concept bottleneck models, end-to-end neural networks, and trained and untrained autoencoders, are stored inside `models`. If you wish to visualise the experimentation results, the easiest way is to run the notebooks (results), which load the pickled results and display various tables and plots.
- Maleakhi A. Wijaya (maw219@cam.ac.uk)
- Dmitry Kazhdan (dk525@cam.ac.uk)
- Botty Dimanov (btd26@cam.ac.uk)
- Mateja Jamnik (mateja.jamnik@cl.cam.ac.uk)
@article{DBLP:journals/corr/abs-2104-08952,
author = {Maleakhi A. Wijaya and
Dmitry Kazhdan and
Botty Dimanov and
Mateja Jamnik},
title = {Failing Conceptually: Concept-Based Explanations of Dataset Shift},
journal = {CoRR},
volume = {abs/2104.08952},
year = {2021},
url = {https://arxiv.org/abs/2104.08952},
archivePrefix = {arXiv},
eprint = {2104.08952},
timestamp = {Mon, 26 Apr 2021 17:25:10 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2104-08952.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}