
MOC - Multi-Objective Counterfactuals

This repository provides code and examples for generating multi-objective counterfactuals, accompanying the following paper:
Dandl, S., Molnar, C., Binder, M., Bischl, B. (2020): Multi-Objective Counterfactual Explanations.

For all computations, we used either the statistical software R (version ≥ 3.4.4) or Python (version 3.6.9).

Overview

  • Code to reproduce the analyses in the paper:
    • credit_example: Example R code that generates counterfactuals for the German Credit dataset, as used in the paper.
    • appendix_irace: Code used to run iterated F-racing to tune the hyperparameters of MOC. Includes a Makefile.
    • benchmark: Code used to generate the benchmark data. Includes a Makefile.
    • benchmark_analysis: R code for the analysis of the benchmark results.
    • helpers: Helper functions.
    • saved_objects: Saved benchmark and irace results, so that the results can be reproduced without rerunning the experiments.
  • Package Code:

Manual

Download the GitHub repository:

git clone https://github.com/susanne-207/moc.git
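
All paths in the remainder of this manual are assumed to be relative to the repository root, so change into the cloned directory first (git names it after the repository by default):

cd moc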

Statistical Analysis

For the German Credit dataset example shown in the paper, step through this file: german_credit_application.R
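
If you prefer a non-interactive run, the script can also be executed from the repository root; the credit_example/ location is an assumption based on the folder overview above, so adjust the path if your checkout differs:

Rscript credit_example/german_credit_application.R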

For the results of the benchmark study, step through the following file: evaluate_cfexps.R
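
Likewise, a non-interactive run would look roughly like this (the benchmark_analysis/ location is again an assumption based on the overview, and the saved benchmark results must be available):

Rscript benchmark_analysis/evaluate_cfexps.R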

irace run

Have a look at the Makefile.

make train-models will train the classification models for iterated racing on the tasks derived from OpenML.

make get-evals will return the number of generations needed to ensure convergence of the hypervolume in most cases when running MOC within iterated F-racing.

make run-irace will start iterated F-racing using the maximum number of generations and the trained models from the previous steps.

make get-generations will return the number of generations necessary to ensure convergence after the other parameters have been tuned.
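
Taken together, a full rerun of the tuning pipeline looks like this; the order matters, because later targets consume the models and evaluation results produced by earlier ones:

make train-models
make get-evals
make run-irace
make get-generations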

All results are saved in a new folder called saved_objects_rerun.

Rerun Benchmark

Have a look at the Makefile.

make train-models will train the classification models for the benchmark on the tasks derived from OpenML. The IDs of the tasks are saved in benchmark_task_ids.rds. The models are saved in saved_objects_rerun.

make run-moc will run the benchmark for MOC.

make run-pair will run the benchmark for Pair.

make run-tweaking will run the benchmark for Tweaking.
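
A full rerun of the R-based part of the benchmark therefore amounts to:

make train-models
make run-moc
make run-pair
make run-tweaking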

Recourse and DiCE have a separate Makefile since they are Python-based rather than R-based. First, create the virtual environments using make venv-dice and make venv-recourse. To run the experiments, use make all.
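
A typical sequence for the Python-based methods would be the following, assuming the commands are run from the directory containing their Makefile:

make venv-dice
make venv-recourse
make all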
