This is the code repository for the paper *RUNNER: Responsible UNfair NEuron Repair for Enhancing Deep Neural Network Fairness*.
This repository implements RUNNER and a variety of baselines on the Adult dataset. Code for the other datasets will be released after acceptance.
For example, to run Vanilla training:

```shell
python main.py --method van --mode dp --lam 1
```

To run RUNNER:

```shell
python main.py --method NeuronImportance_GapReg --mode eo --lam 5 --neuron_ratio 0.05
```
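The `--mode` flag selects the fairness criterion (`dp` for demographic parity, `eo` for equalized odds). As a minimal sketch (not the repository's actual metric code), the two gaps for binary predictions and a binary sensitive attribute can be computed as:

```python
import numpy as np

def dp_gap(y_pred, group):
    """Demographic parity gap: |P(yhat=1 | a=0) - P(yhat=1 | a=1)|."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def eo_gap(y_pred, y_true, group):
    """Equalized odds gap: max over y in {0,1} of the across-group
    difference in P(yhat=1 | Y=y, a)."""
    gaps = []
    for y in (0, 1):
        m = y_true == y
        gaps.append(abs(y_pred[m & (group == 0)].mean()
                        - y_pred[m & (group == 1)].mean()))
    return max(gaps)

# toy example: predictions, labels, and a sensitive attribute
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_true = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(dp_gap(y_pred, group))           # 0.0 (equal positive rates)
print(eo_gap(y_pred, y_true, group))   # 0.5
```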
The hidden size of the MLP is 200. We use Adam as the optimizer, and the batch size is set to 1000 for the
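The full network definition is in the repository; as a rough sketch, a one-hidden-layer MLP with hidden size 200 processing one batch of 1000 samples might look like the following (the feature dimension and weight initialization are placeholders, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, hidden, batch = 32, 200, 1000  # hidden size 200; n_features is a placeholder

# one-hidden-layer MLP: Linear -> ReLU -> Linear -> sigmoid
W1 = rng.normal(0.0, 0.1, (n_features, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (hidden, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU activations (the "neurons")
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output probability

x = rng.normal(size=(batch, n_features))          # one batch of 1000 samples
p = forward(x)
print(p.shape)  # (1000, 1)
```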
For easier comparison, we select hyper-parameters for each method so that the trained models have relatively close AP values. For example, to achieve this purpose, the
| | Adult | COMPAS | Credit | LSAC | CelebA (wavy) | CelebA (attractive) |
|---|---|---|---|---|---|---|
| DP | 50% | 5% | 5% | 5% | 20% | 5% |
| EO | 5% | 5% | 5% | 5% | 50% | 20% |
Unlike Vanilla, Oversample, Reweighing, and FairSmote, the remaining methods rely on hyper-parameter settings, which we introduce as follows:
The learning rate for the adversary is 1e-4. The training loss is L =
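The loss expression above is cut off. A common adversarial-debiasing formulation, assumed here purely for illustration (the exact form used in the paper is not shown), combines the task loss and the adversary's loss as L = L_task − λ·L_adv, with the adversary trained at learning rate 1e-4:

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy, averaged over the batch."""
    p = np.clip(p, 1e-7, 1 - 1e-7)
    return float(-(y * np.log(p) + (1 - y) * np.log(1 - p)).mean())

def combined_loss(p_task, y, p_adv, a, lam=1.0):
    # ASSUMPTION: L = L_task - lam * L_adv (the exact form is truncated in the text).
    # The adversary predicts the sensitive attribute a; the predictor is
    # rewarded when the adversary fails.
    return bce(p_task, y) - lam * bce(p_adv, a)

adv_lr = 1e-4  # adversary learning rate, as stated above

y = np.array([1.0, 0.0, 1.0, 0.0])        # task labels
a = np.array([0.0, 0.0, 1.0, 1.0])        # sensitive attribute
p_task = np.array([0.9, 0.2, 0.8, 0.1])   # predictor outputs
p_adv  = np.array([0.5, 0.5, 0.5, 0.5])   # adversary at chance level
print(combined_loss(p_task, y, p_adv, a))  # negative: adversary is failing
```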
We follow FairNeuron to conduct a comparison experiment on these hyper-parameters: 𝜃 varies over the interval [1e-4, 1] and 𝛾 varies over the interval [0.5, 1]. Note that we use logarithmic coordinates for 𝜃 since its values are sampled proportionally.
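A sketch of this sweep (the grid size of 5 points per axis is a placeholder, not from the paper): 𝜃 is sampled logarithmically over [1e-4, 1], and 𝛾 linearly over [0.5, 1].

```python
import numpy as np

# theta: logarithmic sampling over [1e-4, 1]; gamma: linear over [0.5, 1]
thetas = np.logspace(-4, 0, num=5)     # 1e-4, 1e-3, 1e-2, 1e-1, 1
gammas = np.linspace(0.5, 1.0, num=5)  # 0.5, 0.625, 0.75, 0.875, 1.0

for theta in thetas:
    for gamma in gammas:
        pass  # run the FairNeuron comparison experiment at (theta, gamma)

print(thetas[0], thetas[-1], gammas[0], gammas[-1])
```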
The threshold