Assessing Generative Models via Precision and Recall

Official code for Assessing Generative Models via Precision and Recall by Mehdi S. M. Sajjadi, Olivier Bachem, Mario Lucic, Olivier Bousquet, and Sylvain Gelly, presented at NIPS 2018. The poster can be downloaded here.

Usage

Requirements

A list of required packages is provided in requirements.txt and may be installed by running:

```
pip install -r requirements.txt
```

If the embedding is computed manually (see [Manual: Compute PRD from any embedding](#manual-compute-prd-from-any-embedding) below), a minimal set of required packages suffices; see requirements_minimal.txt.

Automatic: Compute PRD for folders of images on disk

Note that a GPU will significantly speed up the computation of the Inception embeddings; consider installing the GPU version of TensorFlow via pip install tensorflow-gpu.

Example: you have a folder of images from your true distribution (e.g., ~/real_images/) and any number of folders of generated images (e.g., ~/generated_images_1/ and ~/generated_images_2/). Note that each folder must contain the same number of images (a quick way to check this is sketched below).

  1. Download the pre-trained Inception network from here and place it somewhere, e.g., /tmp/prd_cache/inception.pb (an alternate link is available here; note that this file needs to be unpacked).
  2. In a shell, cd to the repository directory and run:

     ```
     python prd_from_image_folders.py --inception_path /tmp/prd_cache/inception.pb --reference_dir ~/real_images/ --eval_dirs ~/generated_images_1/ ~/generated_images_2/ --eval_labels model_1 model_2
     ```

For further customization, run ./prd_from_image_folders.py -h to see the list of available options.
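
Since the script requires all folders to contain the same number of images, a quick check such as the following can catch mismatches before the (comparatively slow) Inception embedding computation starts. This is only an illustrative sketch: the folder paths are placeholders and the set of accepted file extensions is an assumption.

```python
import os

# Placeholder paths -- substitute your own reference and evaluation folders.
folders = ['~/real_images', '~/generated_images_1', '~/generated_images_2']
extensions = ('.jpg', '.jpeg', '.png')  # assumed image formats

for folder in folders:
    path = os.path.expanduser(folder)
    num_images = sum(1 for name in os.listdir(path) if name.lower().endswith(extensions))
    print('%s: %d images' % (folder, num_images))
```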

Manual: Compute PRD from any embedding

Example: you want to compare the precision and recall of a pair of generative models in a feature embedding of your choice (e.g., Inception activations).

  1. Take your test dataset and generate the same number of data points from each of your generative models to be evaluated.
  2. Compute feature embeddings of both the real and the generated datasets, e.g., feats_real, feats_gen_1 and feats_gen_2, as numpy arrays of shape [number_of_data_points, feature_dimensions] each.
  3. In Python, run the following code:

     ```python
     import prd_score as prd

     # compute one PRD curve per generative model w.r.t. the real data
     prd_data_1 = prd.compute_prd_from_embedding(feats_real, feats_gen_1)
     prd_data_2 = prd.compute_prd_from_embedding(feats_real, feats_gen_2)

     # plot both precision-recall curves in a single figure
     prd.plot([prd_data_1, prd_data_2], ['model_1', 'model_2'])
     ```
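
For a self-contained illustration of the same workflow, the toy sketch below replaces real feature embeddings with random Gaussian vectors so it can be run without any model or dataset. The sample count, feature dimension and the mean shift of the second "model" are arbitrary choices made up for this example; only compute_prd_from_embedding and plot from prd_score.py are used, with the same argument order as above.

```python
import numpy as np
import prd_score as prd

np.random.seed(0)
num_samples, feature_dim = 1000, 64  # arbitrary toy sizes

# Stand-ins for real embeddings, shape [number_of_data_points, feature_dimensions].
feats_real = np.random.normal(0.0, 1.0, size=(num_samples, feature_dim))
feats_gen_1 = np.random.normal(0.0, 1.0, size=(num_samples, feature_dim))  # matches the reference distribution
feats_gen_2 = np.random.normal(0.5, 1.0, size=(num_samples, feature_dim))  # shifted, should typically score worse

prd_data_1 = prd.compute_prd_from_embedding(feats_real, feats_gen_1)
prd_data_2 = prd.compute_prd_from_embedding(feats_real, feats_gen_2)
prd.plot([prd_data_1, prd_data_2], ['model_1', 'model_2'])
```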

BibTeX citation

```
@inproceedings{precision_recall_distributions,
  title     = {{Assessing Generative Models via Precision and Recall}},
  author    = {Sajjadi, Mehdi~S.~M. and Bachem, Olivier and Lu{\v c}i{\'c}, Mario and Bousquet, Olivier and Gelly, Sylvain},
  booktitle = {{Advances in Neural Information Processing Systems (NIPS)}},
  year      = {2018}
}
```

Further information

External copyright for: prd_score.py, prd_score_test.py, inception_network.py
Copyright for remaining files: Mehdi S. M. Sajjadi

License for all files: Apache License 2.0

For any questions, comments, or help getting the code to run, please don't hesitate to email us: msajjadi@tue.mpg.de