Anime Sketch Coloring with Swish-Gated Residual U-Net

Paper authors: Gang Liu, Xin Chen, Yanzhong Hu

Implementation authors: Alexander Koumis, Amlesh Sivanantham, Pradeep Lam, Georgio Pizzorni

This is an implementation of the paper Anime Sketch Coloring with Swish-Gated Residual U-Net, which uses deep learning to colorize anime line art (sketches).

BIG shout out to the paper authors Xin Chen and Gang Liu for helping us with implementation details that we initially got wrong.

Setup

Use the requirements.txt file to install the necessary dependencies for this project.

$ pip install -r requirements.txt

Training the SGRU Model

Before proceeding, make sure the data folder has the following structure.

data/
├── images/
│   ├── images_bw/
│   └── images_rgb/
└── vgg_19.ckpt
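
A minimal sketch of creating this layout from the repository root (directory names taken from the tree above; the vgg_19.ckpt checkpoint still has to be downloaded separately):

```shell
# Create the expected data directory layout for training
mkdir -p data/images/images_bw data/images/images_rgb
```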

You can populate the image directories using your own dataset (see the utility script scripts/process_dir.py for creating sketch/RGB pairs). You can find the pretrained vgg_19.ckpt checkpoint file here. Once you have placed the dataset and VGG checkpoint in your data directory, start the training procedure with the src/train.py script:

$ ./train.py ${DATA_DIR} ${OUTPUT_DIR}

Run ./train.py --help to see the options available for changing hyperparameters from the command line.
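The repository's own pairing logic lives in scripts/process_dir.py; purely as an illustration, one common way to derive a line-art-style sketch from an RGB image is the invert-blur-dodge technique, sketched here in plain NumPy (the function names are hypothetical and not taken from this repo):

```python
import numpy as np

def box_blur(gray, radius=1):
    """Naive mean filter: average over a (2*radius+1)^2 window, edge-padded."""
    k = 2 * radius + 1
    p = np.pad(gray, radius, mode="edge")
    h, w = gray.shape
    out = np.zeros_like(gray)
    for i in range(k):
        for j in range(k):
            out += p[i:i + h, j:j + w]
    return out / (k * k)

def rgb_to_sketch(rgb):
    """Turn an RGB float array in [0, 1] into a pencil-sketch-style image."""
    gray = rgb.mean(axis=2)            # crude luminance approximation
    blurred = box_blur(1.0 - gray)     # blur the inverted grayscale image
    # Color-dodge blend: near-white in smooth regions, darker at edges
    return np.clip(gray / (1.0 - blurred + 1e-6), 0.0, 1.0)
```

This produces only a rough approximation of line art; for real datasets you would likely use the repo's script or a dedicated sketch-extraction model.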

The output directory will have the following structure:

${OUTPUT_DIR}/
└── ${EXP_NAME:-TIME_STAMP}/
    ├── images/
    ├── events.out.tfevents (tensorboard)
    └── model.ckpt
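Since each run lands in its own ${EXP_NAME:-TIME_STAMP} subdirectory, a small helper (hypothetical, not part of the repo) can locate the most recent experiment when you come back to evaluate:

```python
import os

def latest_experiment(output_dir):
    """Return the most recently modified experiment subdirectory, or None."""
    subdirs = [
        os.path.join(output_dir, name)
        for name in os.listdir(output_dir)
        if os.path.isdir(os.path.join(output_dir, name))
    ]
    return max(subdirs, key=os.path.getmtime) if subdirs else None
```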

Evaluating a pre-trained model

To evaluate a trained model, either train the weights yourself using the above procedure or use our pretrained weights, available here. Place your self-trained model.ckpt.* files inside a directory $CKPT_DIR, or unzip our provided checkpoint.zip file into a directory $CKPT_DIR.

After you have placed the weights checkpoint files in $CKPT_DIR, you can use the script src/evaluate.py to see the results on your own image (written as sketch_image.jpg below). The best results occur when the input image is 256x256 (similar to the dataset the model was trained on). To display the results, run the script like this:

./evaluate.py sketch_image.jpg $CKPT_DIR/model.ckpt --show

To save the results into a directory, run the script with the --output-dir argument:

./evaluate.py sketch_image.jpg $CKPT_DIR/model.ckpt --output-dir $OUTPUT_DIR

To show and save the results, use both the --show and --output-dir arguments.
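
Because the model performs best on 256x256 inputs (as noted above), it can help to resize your sketch before running evaluate.py. A dependency-free nearest-neighbor resize is sketched below for illustration; in practice the repo's own preprocessing or a proper image library would do this:

```python
import numpy as np

def resize_nearest(img, size=(256, 256)):
    """Nearest-neighbor resize of an (H, W) or (H, W, C) array."""
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]  # source row for each output row
    cols = np.arange(size[1]) * w // size[1]  # source col for each output col
    return img[rows][:, cols]
```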

Results

If your model trains correctly, it should generate results like this:

[Results image]

In the image above, the first column is the input sketch; the remaining columns are colorizations generated by the network from that sketch.
