Pluralistic Image Completion

ArXiv | Project Page | Online Demo | Video (demo)

This repository implements the training, testing and editing tools for "Pluralistic Image Completion" by Chuanxia Zheng, Tat-Jen Cham and Jianfei Cai at NTU. Given one masked image, the proposed Pluralistic model is able to generate multiple diverse and plausible results with varied structure, color and texture.

Editing example

Example results

Example completion results of our method on images of faces (CelebA), buildings (Paris), and natural scenes (Places2) with center masks (masks shown in gray). For each group, the masked input image is shown on the left, followed by sampled results from our model without any post-processing. The results are diverse and plausible.

More results on project page

Getting started


This code was tested with PyTorch 0.4.0, CUDA 9.0, Python 3.6 and Ubuntu 16.04.

The GUI program was further verified to be compatible with PyTorch 1.7.0, CUDA 11.2, Python 3.6 and Ubuntu 20.04.

  • Clone this repo:
git clone
cd Pluralistic
  • Install the python libraries visdom and dominate:
pip install visdom dominate
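To confirm that the environment matches one of the tested configurations above, a quick version check (standard PyTorch API, nothing specific to this repo) is:

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"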


Datasets

  • face dataset: 24183 training images and 2824 test images from CelebA; the Growing GANs algorithm is used to obtain the high-resolution CelebA-HQ dataset
  • building dataset: 14900 training images and 100 test images from Paris
  • natural scenery: original training and validation images from Places2
  • object dataset: original training images from ImageNet. A helper for turning such image folders into file lists is sketched below.
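The training and testing commands below take an --img_file path. Whether --img_file accepts an image folder directly or a text file listing images depends on the data loader, so treat this helper as a sketch; the directory layout and output filename here are assumptions:

import glob

# Hypothetical layout: adjust to wherever your CelebA/Paris/Places2 images live.
image_dir = './datasets/celeba/train'
out_list = './datasets/celeba/train_list.txt'

# Collect common image extensions and write one path per line.
paths = sorted(glob.glob(image_dir + '/*.jpg') + glob.glob(image_dir + '/*.png'))
with open(out_list, 'w') as f:
    f.write('\n'.join(paths))
print('wrote %d image paths to %s' % (len(paths), out_list))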


Training

  • Train a model (default: random regular and irregular holes):
python train.py --name celeba_random --img_file your_image_path
  • Set --mask_type in options/ for different training masks. A --mask_file path is needed for mask type 3 (external irregular masks), such as the irregular mask dataset provided by Liu et al. and Karim Iskakov.
  • To view training results and loss plots, run python -m visdom.server and open the URL http://localhost:8097.
  • Trained models will be saved under the checkpoints folder.
  • More training options can be found in the options folder; a fuller example command is sketched below.
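For example, a training run with an external irregular mask set might look like the following; the file-list and mask paths are placeholders, and mask type 3 denoting external masks follows the note above:

python train.py --name celeba_random --img_file ./datasets/celeba/train_list.txt --mask_type 3 --mask_file ./datasets/irregular_mask/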


Testing

  • Test the model:
python test.py --name celeba_random --img_file your_image_path
  • Set --mask_type in options/ to test various masks. A --mask_file path is needed for mask type 3 (external irregular masks).
  • By default, results are saved under the results folder. Set --results_dir to save them to a different path; an example is sketched below.
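A test run that writes to a custom folder might look like the following (the paths are placeholders):

python test.py --name celeba_random --img_file ./datasets/celeba/test_list.txt --mask_type 3 --mask_file ./datasets/irregular_mask/ --results_dir ./results_celeba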

Pretrained Models

Download the pre-trained models using the following links and put them under the checkpoints/ directory.

The main novelty of this project is producing multiple diverse plausible results for a single masked image. The center_mask models are trained on 256x256 images with 128x128 center holes, and show large diversity because a large amount of information is missing. The random_mask models are trained with random regular and irregular holes; their diversity varies with mask size and image background.
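Since --name selects a subfolder of checkpoints/, a downloaded model should sit in a folder matching the name used at training time. The layout below is an illustrative assumption, not verified against the released archives:

checkpoints/
  celeba_random/    # used via --name celeba_random
  celeba_center/    # hypothetical name for a center_mask model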


GUI

  • Download the pre-trained models from Google drive and put them under the checkpoints/ directory.

  • Install PyQt5 for the GUI operation:

pip install PyQt5
  • Install face_alignment for extracting human faces from in-the-wild images (a minimal usage sketch follows the install command):
pip install face_alignment==1.3.5  
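The 'crop face' button builds on this library's landmark detector. As a standalone sanity check of the dependency (not the GUI's own crop/align code), a minimal sketch against the face_alignment 1.3.5 API, with a hypothetical input file, is:

import face_alignment
import skimage.io

# 2D landmark detector; device='cpu' avoids needing a GPU for a quick try.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cpu')

img = skimage.io.imread('wild_face.jpg')   # hypothetical in-the-wild photo
landmarks = fa.get_landmarks(img)          # list of (68, 2) arrays, one per detected face, or None
print('detected faces:', 0 if landmarks is None else len(landmarks))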

Basic usage is to start the visdom server and then launch the GUI program:

python -m visdom.server
python ui_main.py

The buttons in GUI:

  • Options: Select the model and corresponding dataset for editing.
  • Brush Width: Modify the width of the brush for the free_form mask.
  • draw/clear: Draw a free_form or rectangle mask for the random model. Clear all masked regions for a new input.
  • random: Randomly load an image to edit from the datasets.
  • load: Choose the image from the directory.
  • capture: Capture an image using the camera.
  • crop face: Detect, crop and align a human face from the given in-the-wild image.
  • Fill-->: Fill the hole regions and show the result on the right.
  • save: Save the inputs and outputs to the directory.
  • Original/Output: Switch to show the original or output image.
  • Manuals: Display some basic instructions for the demo program.

The steps are as follows:

1. Select a model from 'options'.
2. Click the 'random', 'load' or 'capture' button to get an input image.
3. [Optional] If the image was loaded via 'load' or 'capture' and the program is running a human face model, click the 'crop face' button to automatically detect and align the face. Do NOT apply 'crop face' if the image was loaded via 'random'.
4. If you chose a random model, click the 'draw/clear' button to draw a free_form mask.
5. If you chose a center model, the center mask is given automatically.
6. Click the 'fill' button to get multiple results.
7. Click the 'save' button to save the results.

Editing Example Results

  • Results (original, input, output) for object removal
  • Results (input, output) for face editing. When the left or right half of the face is masked, the diversity is small, because the short+long term attention layer copies information from the other side. When the top or bottom of the face is masked, the diversity is large.


  • Free_form masks for various datasets
  • Higher resolution image completion


This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

This software is for educational and academic research purposes only. If you wish to obtain a commercial royalty-bearing license to this software, please contact us at


If you use this code for your research, please cite our papers:

@inproceedings{zheng2019pluralistic,
  title={Pluralistic Image Completion},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2019}
}

@article{zheng2021pluralistic,
  title={Pluralistic Free-Form Image Completion},
  author={Zheng, Chuanxia and Cham, Tat-Jen and Cai, Jianfei},
  journal={International Journal of Computer Vision},
  year={2021}
}