A PyTorch implementation of InfoGAN: "Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets".
```shell
python infoGAN.py --help  # for help
python infoGAN.py
# check bin/train.sh as an example
```
Training results are saved under the folder specified by the `--save-path` argument.
For other datasets:
- put the dataset under `data/`
- write a `get_data` function in `data.py`
- register it in `opt.py` and `data.py`
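The registration step above might follow a pattern like the sketch below. The registry dict, the `register` decorator, and the `get_data` signature are assumptions for illustration; the actual `data.py` / `opt.py` in this repo may be organized differently.

```python
# Hypothetical sketch of the dataset-registration pattern described above.
# In the real repo, get_data would build and return a PyTorch DataLoader;
# here it returns plain configuration so the sketch stays self-contained.

DATASETS = {}  # name -> loader function (the "register it" step)

def register(name):
    """Decorator that registers a get_data-style loader under `name`."""
    def wrap(fn):
        DATASETS[name] = fn
        return fn
    return wrap

@register("mnist")
def get_data(root="data/mnist"):
    # Stand-in for the real loader: describe the dataset instead of loading it.
    return {"root": root, "image_size": 28, "channels": 1}

def load(name, **kwargs):
    """Look up a registered dataset by name, as opt.py would at startup."""
    if name not in DATASETS:
        raise KeyError(f"unknown dataset {name!r}; register it first")
    return DATASETS[name](**kwargs)
```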
```shell
python info_util.py --help  # for help
python info_util.py --model-path </path/to/netG.pt> --cidx <targeted-continuous-idx> --didx <targeted-discrete-idx>
# check bin/traverse.sh as an example
```
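Conceptually, a traversal over continuous code `--cidx` fixes the incompressible noise and all other latent codes, then sweeps only the targeted code over a range. A minimal sketch (the sweep range `[-2, 2]`, the dimensions, and the `[z | c]` latent layout are assumptions, not the exact layout used by `info_util.py`):

```python
import random

def traverse(z_dim=62, n_cont=3, cidx=0, steps=5, lo=-2.0, hi=2.0, seed=0):
    """Build a batch of latent vectors that differ only in continuous code `cidx`."""
    rng = random.Random(seed)
    z = [rng.gauss(0.0, 1.0) for _ in range(z_dim)]      # fixed incompressible noise
    c = [rng.uniform(-1.0, 1.0) for _ in range(n_cont)]  # fixed continuous codes
    batch = []
    for i in range(steps):
        ci = list(c)
        ci[cidx] = lo + (hi - lo) * i / (steps - 1)      # sweep only the targeted code
        batch.append(z + ci)                             # assumed generator input: [z | c]
    return batch
```

Each row of the batch would then be fed to `netG`; since only one latent dimension varies, any change across the generated images can be attributed to that code.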
```shell
python -m utils.eval --help  # for help
# check bin/eval.sh as an example
```
Traversal results on three continuous variables (with the other variables fixed) and one discrete variable (fixing the discrete variable and choosing the other variables at random).
- Evaluation with FID; refactored some functions. (Sep 18, 2021)
- More training results on other datasets.
- Adaptive module to apply InfoGAN to other GAN models, e.g. Info-ProGAN. Check `info_util.py` for the loss and the noise generator used in InfoGAN.
- The results shown are hand-picked, so the complete results may not be as good as desired; still, InfoGAN does demonstrate the ability to control the semantics of the output.
- InfoGAN cannot achieve perfectly disentangled control over the output; it was not designed to. In the MNIST results above, the third continuous factor appears to be entangled with the slant of the digits.
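The InfoGAN loss kept in `info_util.py` is a variational lower bound on the mutual information between the latent codes and the output: in practice a categorical cross-entropy for the discrete code and a Gaussian negative log-likelihood for the continuous codes. A per-sample sketch in plain Python (the real implementation operates on batched PyTorch tensors; function names here are illustrative):

```python
import math

def categorical_mi_loss(logits, target_idx):
    """-log q(c_disc | x): cross-entropy between Q's categorical
    posterior (given as raw logits) and the sampled discrete code."""
    m = max(logits)  # subtract max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[target_idx]

def gaussian_mi_loss(mu, log_var, c):
    """-log q(c_cont | x): negative log-likelihood of the sampled
    continuous code under Q's predicted Gaussian N(mu, exp(log_var))."""
    return 0.5 * (log_var + math.log(2 * math.pi)
                  + (c - mu) ** 2 / math.exp(log_var))
```

During training, these terms are weighted and added to the generator loss, so G is rewarded when the auxiliary head Q can recover the codes from the generated images.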
[1] Chen, Xi, et al. "InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets." Advances in Neural Information Processing Systems. 2016.
[2] https://github.com/Natsu6767/InfoGAN-PyTorch