This project explores the impact of different f-divergences and sampling strategies on the quality and diversity of samples generated by Generative Adversarial Networks (GANs).
In particular, we study f-GAN variants (Jensen–Shannon, Kullback–Leibler, Pearson χ²) and evaluate the effect of post-hoc sample refinement with DGflow.
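The three divergences differ only in the critic's output activation g_f and the Fenchel conjugate f* plugged into the variational f-GAN objective (Nowozin et al., 2016). The sketch below is illustrative only; the names and the actual interface of `fgan_utils.py` may differ.

```python
import math

# Output activation g_f and Fenchel conjugate f* for three f-divergences,
# following Nowozin et al. (2016). Hypothetical names, not the repo's API.
DIVERGENCES = {
    # name: (activation g_f, conjugate f*)
    "kl": (lambda v: v,
           lambda t: math.exp(t - 1.0)),
    "js": (lambda v: math.log(2.0) - math.log1p(math.exp(-v)),
           lambda t: -math.log(2.0 - math.exp(t))),
    "pearson": (lambda v: v,
                lambda t: 0.25 * t * t + t),
}

def fgan_losses(name, v_real, v_fake):
    """Per-sample f-GAN losses for raw critic outputs v_real, v_fake.

    The discriminator maximizes g_f(v_real) - f*(g_f(v_fake));
    the generator minimizes -f*(g_f(v_fake)).
    """
    g, f_star = DIVERGENCES[name]
    d_loss = -(g(v_real) - f_star(g(v_fake)))  # negate: minimize in practice
    g_loss = -f_star(g(v_fake))
    return d_loss, g_loss
```

Swapping the divergence then amounts to selecting a different (g_f, f*) pair while the training loop stays unchanged.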
- Use of a baseline GAN for comparison
- Implementation of f-GAN with different divergences
- Study of sampling strategies:
  - Normal sampling
  - Soft truncation
  - Hard truncation
  - DGflow refinement
- Evaluation using:
  - FID
  - Precision / Recall
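One common reading of the two truncation strategies is sketched below: "hard" truncation resamples latent coordinates until they fall inside [-τ, τ], while "soft" truncation simply clips them. The actual implementations live in `sampling_utils.py` and may differ; function names here are hypothetical.

```python
import numpy as np

def sample_hard_truncated(n, dim, tau=1.0, rng=None):
    """Draw n latent vectors, resampling coordinates with |z_i| > tau."""
    rng = rng or np.random.default_rng()
    z = rng.standard_normal((n, dim))
    mask = np.abs(z) > tau
    while mask.any():                      # redraw out-of-range coordinates
        z[mask] = rng.standard_normal(mask.sum())
        mask = np.abs(z) > tau
    return z

def sample_soft_truncated(n, dim, tau=1.0, rng=None):
    """Draw n latent vectors and clip each coordinate to [-tau, tau]."""
    rng = rng or np.random.default_rng()
    return np.clip(rng.standard_normal((n, dim)), -tau, tau)
```

Both shrink the latent distribution toward high-density regions, which typically trades sample diversity for fidelity.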
```
.
├── checkpoints/                # Saved models (keep minimal)
├── data/                       # Dataset
├── samples/                    # Generated samples
├── model.py                    # Generator & Discriminator architectures
├── train.py                    # Baseline GAN training
├── train_fgan.py               # f-GAN training
├── generate.py                 # Sample generation
├── sampling_utils.py           # Sampling methods (normal, truncation, DGflow)
├── fgan_utils.py               # f-divergence functions
├── metrics.py                  # Evaluation metrics (FID, Precision, Recall)
├── evaluate_all.py             # Evaluation pipeline
├── utils.py                    # Utility functions
├── select_10img.py             # Sample selection utility
├── train_feature_extractor.py  # Feature extractor for metrics
├── requirements.txt            # Dependencies
├── report.pdf                  # Project report
├── slides.pdf                  # Presentation slides
└── README.md
```
```
git clone <your-repo-url>
cd GAN
```

On Juliet (MesoNet's cluster), you need to:

- Create a virtual environment for Python: `python -m venv venv`
- Activate the environment: `source venv/bin/activate`
- Install the required dependencies: `pip install -r requirements.txt`
Following good data-science practice, we encourage you to use conda or virtualenv to create a dedicated Python environment.
To test your code on our platform, you must keep the requirements.txt file updated with all the libraries you use.
When your code is evaluated, the following command will be executed:
```
pip install -r requirements.txt
```

To reproduce the full pipeline, run:

```
python train.py
python train_fgan.py
python generate.py
python evaluate_all.py
```

DGflow improves generated samples by refining latent vectors using discriminator gradients.
Implemented in `sampling_utils.py`.
Key features:
- Sample-specific refinement
- No retraining of the generator
- Step size adaptation depending on divergence
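A minimal sketch of the refinement loop described above: a few small gradient steps following the discriminator's density-ratio gradient, with an optional noise (diffusion) term. This is a simplified illustration, not the DGflow algorithm as implemented in `sampling_utils.py`; `grad_d`, the step size, and the noise scale are assumptions.

```python
import numpy as np

def refine(z, grad_d, step=0.1, n_steps=25, noise=0.0, rng=None):
    """Refine samples z by gradient ascent on a discriminator-derived
    log-density-ratio. grad_d(z) returns d(log-ratio)/dz for each sample."""
    rng = rng or np.random.default_rng(0)
    z = z.copy()
    for _ in range(n_steps):
        z = z + step * grad_d(z)                        # drift term
        if noise > 0:                                   # optional diffusion
            z = z + np.sqrt(2 * step) * noise * rng.standard_normal(z.shape)
    return z
```

With a toy log-ratio gradient `lambda z: -z` (pulling samples toward the origin), the loop contracts each sample geometrically, which mirrors how refinement nudges samples toward regions the discriminator scores as more realistic.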
- DGflow improves FID across JS, KL, and Pearson divergences
- JS and KL provide more stable gradients and better performance
- Pearson divergence is more sensitive and requires careful tuning
- Truncation methods reduce diversity compared to DGflow
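For reference, the FID scores above compare feature statistics (mean μ and covariance Σ) of real and generated samples. The sketch below shows the standard Fréchet distance computation; `metrics.py` presumably does something similar on feature-extractor activations, but the exact interface here is an assumption.

```python
import numpy as np
from scipy import linalg

def fid(mu1, sigma1, mu2, sigma2):
    """Frechet distance between two Gaussians:
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
    if np.iscomplexobj(covmean):            # drop tiny imaginary residue
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

Identical statistics give FID 0; shifting one mean by a unit vector under identity covariances gives FID 1, which is a convenient sanity check.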
See `report.pdf` and `slides.pdf`.
Push only a minimal number of model checkpoints to the `checkpoints/` folder.
- Large batch sizes are recommended for reliable evaluation metrics
- Results may vary depending on hyperparameters and dataset
- DGflow requires careful tuning of the step size
- Goodfellow et al., "Generative Adversarial Networks", NeurIPS 2014
- Nowozin et al., "f-GAN: Training Generative Neural Samplers using Variational Divergence Minimization", NeurIPS 2016
- Ansari et al., "Refining Deep Generative Models via Discriminator Gradient Flow", ICLR 2021
This project is released under the MIT License.