
init:

youyuge34 committed Jan 7, 2019
0 parents commit 57671fa54b0e1690d064613b656b8ee6f7834033
Showing with 2,826 additions and 0 deletions.
  1. +113 −0 .gitignore
  2. +161 −0 LICENSE.md
  3. +172 −0 README.md
  4. +46 −0 config.yml.example
  5. BIN examples/celeba/images/celeba_01.png
  6. BIN examples/celeba/images/celeba_02.png
  7. BIN examples/celeba/images/celeba_03.png
  8. BIN examples/celeba/images/celeba_04.png
  9. BIN examples/celeba/images/celeba_05.png
  10. BIN examples/celeba/masks/celeba_01.png
  11. BIN examples/celeba/masks/celeba_02.png
  12. BIN examples/celeba/masks/celeba_03.png
  13. BIN examples/celeba/masks/celeba_04.png
  14. BIN examples/celeba/masks/celeba_05.png
  15. BIN examples/getchu1/images/s_c832648.jpg
  16. BIN examples/getchu1/images/s_c936243.jpg
  17. BIN examples/getchu1/masks/s_c832648.jpg
  18. BIN examples/getchu1/masks/s_c936243.jpg
  19. BIN examples/places2/images/places2_01.png
  20. BIN examples/places2/images/places2_02.png
  21. BIN examples/places2/images/places2_03.png
  22. BIN examples/places2/images/places2_04.png
  23. BIN examples/places2/images/places2_05.png
  24. BIN examples/places2/images/test2.jpg
  25. BIN examples/places2/masks/places2_01.png
  26. BIN examples/places2/masks/places2_02.png
  27. BIN examples/places2/masks/places2_03.png
  28. BIN examples/places2/masks/places2_04.png
  29. BIN examples/places2/masks/places2_05.png
  30. BIN examples/places2/masks/test1.png
  31. BIN examples/psv/images/psv_01.png
  32. BIN examples/psv/images/psv_02.png
  33. BIN examples/psv/images/psv_03.png
  34. BIN examples/psv/images/psv_04.png
  35. BIN examples/psv/images/psv_05.png
  36. BIN examples/psv/masks/psv_01.png
  37. BIN examples/psv/masks/psv_02.png
  38. BIN examples/psv/masks/psv_03.png
  39. BIN examples/psv/masks/psv_04.png
  40. BIN examples/psv/masks/psv_05.png
  41. +128 −0 main.py
  42. +8 −0 requirements.txt
  43. +14 −0 scripts/download_model.sh
  44. +239 −0 scripts/fid_score.py
  45. +20 −0 scripts/flist.py
  46. +45 −0 scripts/flist_train_split.py
  47. +21 −0 scripts/getchu_crawler.py
  48. +138 −0 scripts/inception.py
  49. +82 −0 scripts/metrics.py
  50. +3 −0 setup.cfg
  51. +1 −0 src/__init__.py
  52. +64 −0 src/config.py
  53. +196 −0 src/dataset.py
  54. +408 −0 src/edge_connect.py
  55. +231 −0 src/loss.py
  56. +46 −0 src/metrics.py
  57. +250 −0 src/models.py
  58. +212 −0 src/networks.py
  59. +217 −0 src/utils.py
  60. +2 −0 test.py
  61. +4 −0 train.py
  62. +5 −0 venv.bat
.gitignore
@@ -0,0 +1,113 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# dotenv
.env

# virtualenv
.venv
venv/
ENV/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/
.vscode

# custom
.todo
results/
checkpoints/
datasets/

model/
mask/

*.psd

LICENSE.md: large diff not rendered.
README.md
@@ -0,0 +1,172 @@
## EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning
[ArXiv](https://arxiv.org/abs/1901.00212) | [BibTex](#citation)
### Introduction:
We develop a new approach for image inpainting that better reproduces fine details in filled regions, inspired by our understanding of how artists work: *lines first, color next*. We propose a two-stage adversarial model, EdgeConnect, that comprises an edge generator followed by an image completion network. The edge generator hallucinates edges of the missing region (both regular and irregular) of the image, and the image completion network fills in the missing regions using the hallucinated edges as a prior. A detailed description of the system can be found in our [paper](https://arxiv.org/abs/1901.00212).
<p align='center'>
<img src='https://user-images.githubusercontent.com/1743048/50673917-aac15080-0faf-11e9-9100-ef10864087c8.png' width='870'/>
</p>
(a) Input images with missing regions; the missing regions are depicted in white. (b) Computed edge masks. Edges drawn in black are computed (for the available regions) using the Canny edge detector, whereas edges shown in blue are hallucinated by the edge generator network. (c) Image inpainting results of the proposed approach.
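
The flow can be summarized in a few lines of PyTorch. The following is a minimal, runnable sketch of the two-stage pipeline, where `G1` and `G2` are hypothetical single-layer stand-ins rather than the actual generators from `src/networks.py`:

```python
# A minimal sketch of the two-stage flow described above. G1/G2 are
# hypothetical stand-ins for the edge generator and the image
# completion network, not the repo's actual models.
import torch
import torch.nn as nn

G1 = nn.Conv2d(3, 1, 3, padding=1)  # stand-in edge generator
G2 = nn.Conv2d(5, 3, 3, padding=1)  # stand-in image completion network

img = torch.rand(1, 3, 256, 256)    # RGB image in [0, 1]
mask = torch.zeros(1, 1, 256, 256)  # 1 marks the missing region
mask[..., 96:160, 96:160] = 1
gray = img.mean(dim=1, keepdim=True)
edges = torch.zeros_like(gray)      # Canny edges of the known region go here

# Stage 1: hallucinate edges inside the hole from the known context.
edge_in = torch.cat([gray * (1 - mask), edges * (1 - mask), mask], dim=1)
pred_edges = torch.sigmoid(G1(edge_in))
edges_comp = edges * (1 - mask) + pred_edges * mask  # keep known edges, fill the hole

# Stage 2: fill in color, conditioned on the composited edge map.
inpaint_in = torch.cat([img * (1 - mask), edges_comp, mask], dim=1)
pred_img = torch.sigmoid(G2(inpaint_in))
result = img * (1 - mask) + pred_img * mask          # paste known pixels back
print(result.shape)                                  # torch.Size([1, 3, 256, 256])
```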

## Prerequisites
- Python 3
- PyTorch 1.0
- NVIDIA GPU + CUDA cuDNN

## Installation
- Clone this repo:
```bash
git clone https://github.com/knazeri/edge-connect.git
cd edge-connect
```
- Install PyTorch and dependencies from http://pytorch.org
- Install python requirements:
```bash
pip install -r requirements.txt
```

## Datasets
We use the [Places2](http://places2.csail.mit.edu), [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) and [Paris Street-View](https://github.com/pathak22/context-encoder) datasets. To train a model on the full dataset, download the datasets from the official websites. Our model is trained on the irregular mask dataset provided by [Liu et al.](https://arxiv.org/abs/1804.07723). You can download the publicly available train/test mask dataset from [their website](http://masc.cs.gmu.edu/wiki/partialconv).

After downloading, run [`scripts/flist.py`](scripts/flist.py) to generate train, test and validation set file lists. For example, to generate the training set file list for the Places2 dataset, run:
```bash
mkdir datasets
python ./scripts/flist.py --path path_to_places2_train_set --output ./datasets/places_train.flist
```
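
An `.flist` file is simply a text file with one image path per line. The following is a minimal, self-contained sketch of such a generator; the actual [`scripts/flist.py`](scripts/flist.py) may differ in details such as supported extensions or sorting:

```python
# Sketch: write one image path per line into an .flist file.
import argparse
import os

EXTS = {'.jpg', '.jpeg', '.png'}

parser = argparse.ArgumentParser()
parser.add_argument('--path', required=True, help='root directory of images')
parser.add_argument('--output', required=True, help='output .flist path')
args = parser.parse_args()

paths = []
for root, _, files in os.walk(args.path):
    for name in files:
        if os.path.splitext(name)[1].lower() in EXTS:
            paths.append(os.path.join(root, name))

with open(args.output, 'w') as f:
    f.write('\n'.join(sorted(paths)))
print('wrote %d paths to %s' % (len(paths), args.output))
```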

## Getting Started
Download the pre-trained models using the following links and copy them into the `./checkpoints` directory.

[Places2](https://drive.google.com/drive/folders/1KyXz4W4SAvfsGh3NJ7XgdOv5t46o-8aa) | [CelebA](https://drive.google.com/drive/folders/1nkLOhzWL-w2euo0U6amhz7HVzqNC5rqb) | [Paris-StreetView](https://drive.google.com/drive/folders/1cGwDaZqDcqYU7kDuEbMXa9TP3uDJRBR1)

Alternatively, you can run the following script to automatically download the pre-trained models:
```bash
bash ./scripts/download_model.sh
```

### 1) Training
To train the model, create a `config.yml` file similar to the [example config file](https://github.com/knazeri/edge-connect/blob/master/config.yml.example) and copy it into your checkpoints directory. Read the [configuration](#model-configuration) guide for more information on model configuration.

EdgeConnect is trained in three stages: 1) training the edge model, 2) training the inpaint model and 3) training the joint model. To train the model:
```bash
python train.py --model [stage] --checkpoints [path to checkpoints]
```

For example, to train the edge model on the Places2 dataset with checkpoints under `./checkpoints/places2`:
```bash
python train.py --model 1 --checkpoints ./checkpoints/places2
```

Convergence of the model differs from dataset to dataset. For example, the Places2 dataset converges in one or two epochs, while smaller datasets like CelebA require almost 40 epochs to converge. You can set the number of training iterations by changing the `MAX_ITERS` value in the configuration file.
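
Since `MAX_ITERS` counts iterations rather than epochs, a back-of-the-envelope conversion helps when setting it. The dataset sizes below are approximate public figures, not values taken from this repository:

```python
# Rough conversion between epochs and iterations; dataset sizes are
# approximate public figures, not values from this repository.
def iters_per_epoch(num_images, batch_size=8):
    return num_images // batch_size

print(iters_per_epoch(1_800_000))     # Places2: ~225,000 iterations per epoch
print(40 * iters_per_epoch(200_000))  # CelebA: ~40 epochs is ~1,000,000 iterations
```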


### 2) Testing
To test the model, create a `config.yml` file similar to the [example config file](config.yml.example) and copy it into your checkpoints directory. Read the [configuration](#model-configuration) guide for more information on model configuration.

You can test the model on all three stages: 1) edge model, 2) inpaint model and 3) joint model. In each case, you need to provide an input image (image with a mask) and a grayscale mask file. Please make sure that the mask file covers the entire mask region in the input image. To test the model:
```bash
python test.py \
--model [stage] \
--checkpoints [path to checkpoints] \
--input [path to input directory or file] \
--mask [path to masks directory or mask file] \
--output [path to the output directory]
```

We provide some test examples under the `./examples` directory. Please download the [pre-trained models](#getting-started) and run:
```bash
python test.py \
--checkpoints ./checkpoints/places2 \
--input ./examples/places2/images \
--mask ./examples/places2/masks \
--output ./checkpoints/results
```
This script will inpaint all images in `./examples/places2/images` using their corresponding masks in the `./examples/places2/masks` directory and save the results in the `./checkpoints/results` directory. By default, the `test.py` script runs stage 3 (`--model=3`).
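
Batch testing assumes every input image can be matched to a mask. The sketch below shows one plausible pairing convention, matching file stems across the two directories as in the bundled examples; the repo's own loader may instead pair files by sorted order:

```python
# Hypothetical sketch: pair each image with the mask sharing its file stem.
import os

def pair_images_and_masks(image_dir, mask_dir):
    masks = {os.path.splitext(f)[0]: os.path.join(mask_dir, f)
             for f in os.listdir(mask_dir)}
    return [(os.path.join(image_dir, f), masks[os.path.splitext(f)[0]])
            for f in sorted(os.listdir(image_dir))
            if os.path.splitext(f)[0] in masks]

for img, msk in pair_images_and_masks('./examples/places2/images',
                                      './examples/places2/masks'):
    print(img, '<->', msk)
```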

### 3) Evaluating
To evaluate the model, you need to first run the model in [test mode](#testing) against your validation set and save the results on disk. We provide a utility, [`./scripts/metrics.py`](scripts/metrics.py), to evaluate the model using PSNR, SSIM and mean absolute error:

```bash
python ./scripts/metrics.py --data-path [path to validation set] --output-path [path to model output]
```
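
For reference, PSNR and mean absolute error are straightforward to compute with NumPy. The following is a self-contained sketch of the two measures on 8-bit images ([`scripts/metrics.py`](scripts/metrics.py) additionally reports SSIM):

```python
# Sketch: PSNR and mean absolute error on [0, 255] uint8 images.
import numpy as np

def psnr(gt, pred, max_val=255.0):
    mse = np.mean((gt.astype(np.float64) - pred.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 20 * np.log10(max_val) - 10 * np.log10(mse)

def mae(gt, pred):
    return np.mean(np.abs(gt.astype(np.float64) - pred.astype(np.float64)))

gt = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
pred = np.clip(gt.astype(int) + np.random.randint(-5, 6, gt.shape), 0, 255)
print('PSNR: %.2f dB, MAE: %.2f' % (psnr(gt, pred), mae(gt, pred)))
```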

To measure the Fréchet Inception Distance (FID score), run [`./scripts/fid_score.py`](scripts/fid_score.py). We use the PyTorch implementation of FID [from here](https://github.com/mseitzer/pytorch-fid), which uses the pretrained weights of PyTorch's Inception model.

```bash
python ./scripts/fid_score.py --path [path to validation, path to model output] --gpu [GPU id to use]
```

### Model Configuration

The model configuration is stored in a [`config.yml`](config.yml.example) file under your checkpoints directory. The following tables document all the options available in the configuration file:
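
Since the configuration is plain YAML, it can also be inspected programmatically. Here is a minimal sketch with PyYAML; the repo's `src/config.py` wraps this in its own `Config` class, and the checkpoint path below is just an example:

```python
# Sketch: read a checkpoint directory's config with PyYAML.
import yaml

with open('./checkpoints/places2/config.yml') as f:  # example path
    cfg = yaml.safe_load(f)

# Note: YAML 1.1 parses a bare '2e6' as a string (no decimal point),
# so cast numeric-looking options defensively.
max_iters = float(cfg['MAX_ITERS'])
print(cfg['MODEL'], cfg['LR'], max_iters)
```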

#### General Model Configurations

Option | Description
----------------| -----------
MODE | 1: train, 2: test, 3: eval
MODEL | 1: edge model, 2: inpaint model, 3: edge-inpaint model, 4: joint model
MASK | 1: random block, 2: half, 3: external, 4: external + random block, 5: external + random block + half
EDGE | 1: canny, 2: external
NMS | 0: no non-max-suppression, 1: non-max-suppression on the external edges
SEED | random number generator seed
GPU | list of GPU ids, comma-separated, e.g. [0,1]
DEBUG | 0: no debug, 1: debugging mode
VERBOSE | 0: no verbose, 1: output detailed statistics in the output console
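
As an illustration of the `MASK` options, here is a sketch of mode 1, a random square block; the block size and placement are assumptions for illustration, not the repo's exact parameters:

```python
# Sketch (assumed behavior) of MASK=1: a random square block mask,
# where 1 marks missing pixels.
import numpy as np

def random_block_mask(h, w, rng=np.random):
    mask = np.zeros((h, w), dtype=np.uint8)
    bh, bw = h // 2, w // 2                # assume block is half the image size
    top = rng.randint(0, h - bh + 1)
    left = rng.randint(0, w - bw + 1)
    mask[top:top + bh, left:left + bw] = 1
    return mask

print(random_block_mask(256, 256).mean())  # fraction of pixels masked (0.25 here)
```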

#### Loading Train, Test and Validation Sets Configurations

Option | Description
----------------| -----------
TRAIN_FLIST | text file containing training set files list
VAL_FLIST | text file containing validation set files list
TEST_FLIST | text file containing test set files list
TRAIN_EDGE_FLIST| text file containing training set external edges files list (only with EDGE=2)
VAL_EDGE_FLIST | text file containing validation set external edges files list (only with EDGE=2)
TEST_EDGE_FLIST | text file containing test set external edges files list (only with EDGE=2)
TRAIN_MASK_FLIST| text file containing training set masks files list (only with MASK=3, 4, 5)
VAL_MASK_FLIST | text file containing validation set masks files list (only with MASK=3, 4, 5)
TEST_MASK_FLIST | text file containing test set masks files list (only with MASK=3, 4, 5)

#### Training Mode Configurations

Option |Default| Description
-----------------------|-------|------------
LR | 0.0001| learning rate
D2G_LR | 0.1 | discriminator/generator learning rate ratio
BETA1 | 0.0 | adam optimizer beta1
BETA2 | 0.9 | adam optimizer beta2
BATCH_SIZE | 8 | input batch size
INPUT_SIZE | 256 | input image size for training. (0 for original size)
SIGMA | 2 | standard deviation of the Gaussian filter used in Canny edge detector <br/>(0: random, -1: no edge)
MAX_ITERS | 2e6 | maximum number of iterations to train the model
EDGE_THRESHOLD | 0.5 | edge detection threshold (0-1)
L1_LOSS_WEIGHT | 1 | l1 loss weight
FM_LOSS_WEIGHT | 10 | feature-matching loss weight
STYLE_LOSS_WEIGHT | 1 | style loss weight
CONTENT_LOSS_WEIGHT | 1 | perceptual loss weight
INPAINT_ADV_LOSS_WEIGHT| 0.01 | adversarial loss weight
GAN_LOSS | nsgan | **nsgan**: non-saturating gan, **lsgan**: least squares GAN, **hinge**: hinge loss GAN
GAN_POOL_SIZE | 0 | fake images pool size
SAVE_INTERVAL | 1000 | how many iterations to wait before saving model (0: never)
EVAL_INTERVAL | 0 | how many iterations to wait before evaluating the model (0: never)
LOG_INTERVAL | 10 | how many iterations to wait before logging training loss (0: never)
SAMPLE_INTERVAL | 1000 | how many iterations to wait before saving sample (0: never)
SAMPLE_SIZE | 12 | number of images to sample on each sampling interval
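
The three `GAN_LOSS` choices correspond to standard adversarial objectives. The sketch below shows the discriminator-side criteria on raw logits, using the textbook formulations rather than the exact code in `src/loss.py`:

```python
# Sketch of the three GAN_LOSS variants on discriminator logits d.
# These are the standard formulations from the literature.
import torch
import torch.nn.functional as F

def gan_loss(d, is_real, kind='nsgan'):
    target = torch.ones_like(d) if is_real else torch.zeros_like(d)
    if kind == 'nsgan':   # non-saturating GAN: BCE on logits
        return F.binary_cross_entropy_with_logits(d, target)
    if kind == 'lsgan':   # least squares GAN: MSE against the label
        return F.mse_loss(d, target)
    if kind == 'hinge':   # hinge loss (discriminator side)
        return torch.relu(1 - d).mean() if is_real else torch.relu(1 + d).mean()
    raise ValueError(kind)

d = torch.randn(8, 1)
print(gan_loss(d, True), gan_loss(d, False, 'lsgan'), gan_loss(d, True, 'hinge'))
```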

## License
Licensed under a [Creative Commons Attribution-NonCommercial 4.0 International](https://creativecommons.org/licenses/by-nc/4.0/).

Except where otherwise noted, this content is published under a [CC BY-NC](https://creativecommons.org/licenses/by-nc/4.0/) license, which means that you can copy, remix, transform and build upon the content as long as you do not use the material for commercial purposes and give appropriate credit and provide a link to the license.


## Citation
If you use this code for your research, please cite our paper <a href="https://arxiv.org/abs/1901.00212">EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning</a>:

```
@article{nazeri2019edgeconnect,
  title={EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning},
  author={Nazeri, Kamyar and Ng, Eric and Joseph, Tony and Qureshi, Faisal and Ebrahimi, Mehran},
  journal={arXiv preprint arXiv:1901.00212},
  year={2019}
}
```
config.yml.example
@@ -0,0 +1,46 @@
MODE: 1 # 1: train, 2: test, 3: eval
MODEL: 1 # 1: edge model, 2: inpaint model, 3: edge-inpaint model, 4: joint model
MASK: 3 # 1: random block, 2: half, 3: external, 4: (external, random block), 5: (external, random block, half)
EDGE: 1 # 1: canny, 2: external
NMS: 1 # 0: no non-max-suppression, 1: applies non-max-suppression on the external edges by multiplying by Canny
SEED: 10 # random seed
GPU: [0] # list of gpu ids
DEBUG: 0 # turns on debugging mode
VERBOSE: 0 # turns on verbose mode in the output console

TRAIN_FLIST: ./datasets/places2_train.flist
VAL_FLIST: ./datasets/places2_val.flist
TEST_FLIST: ./datasets/places2_test.flist

TRAIN_EDGE_FLIST: ./datasets/places2_edges_train.flist
VAL_EDGE_FLIST: ./datasets/places2_edges_val.flist
TEST_EDGE_FLIST: ./datasets/places2_edges_test.flist

TRAIN_MASK_FLIST: ./datasets/masks_train.flist
VAL_MASK_FLIST: ./datasets/masks_val.flist
TEST_MASK_FLIST: ./datasets/masks_test.flist

LR: 0.0001 # learning rate
D2G_LR: 0.1 # discriminator/generator learning rate ratio
BETA1: 0.0 # adam optimizer beta1
BETA2: 0.9 # adam optimizer beta2
BATCH_SIZE: 8 # input batch size for training
INPUT_SIZE: 256 # input image size for training (0 for original size)
SIGMA: 2 # standard deviation of the Gaussian filter used in Canny edge detector (0: random, -1: no edge)
MAX_ITERS: 2e6 # maximum number of iterations to train the model

EDGE_THRESHOLD: 0.5 # edge detection threshold
L1_LOSS_WEIGHT: 1 # l1 loss weight
FM_LOSS_WEIGHT: 10 # feature-matching loss weight
STYLE_LOSS_WEIGHT: 1 # style loss weight
CONTENT_LOSS_WEIGHT: 1 # perceptual loss weight
INPAINT_ADV_LOSS_WEIGHT: 0.01 # adversarial loss weight

GAN_LOSS: nsgan # nsgan | lsgan | hinge
GAN_POOL_SIZE: 0 # fake images pool size

SAVE_INTERVAL: 1000 # how many iterations to wait before saving model (0: never)
SAMPLE_INTERVAL: 1000 # how many iterations to wait before sampling (0: never)
SAMPLE_SIZE: 12 # number of images to sample
EVAL_INTERVAL: 0 # how many iterations to wait before model evaluation (0: never)
LOG_INTERVAL: 10 # how many iterations to wait before logging training status (0: never)