mphirke/Google-Colab-waifu2x-chainer

Google Colab implementation of waifu2x-chainer for people who do not have access to NVIDIA GPUs (CUDA).

This is a Google Colab implementation of tsurumeso's Chainer implementation of waifu2x, for people like me who do not have access to NVIDIA GPUs. Google Colab provides CUDA 10.0, so the installed CuPy build must match that CUDA version.
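Before installing anything on Colab, it is worth confirming the CUDA toolkit version of the current runtime; newer runtimes may report a version other than 10.0, in which case the CuPy build has to change accordingly. In a notebook cell:

!nvcc --version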

Work In Progress

The following is taken from tsurumeso's documentation:

waifu2x-chainer

This is a Chainer implementation of waifu2x [1]. Note that the training procedure of waifu2x-chainer may be slightly different from the original waifu2x.

Summary

  • 2D character picture (Kagamine Rin) is licensed under CC BY-NC by piapro [2].

Requirements

  • Chainer
  • CuPy (for GPU support)
  • Matplotlib (for benchmark)
  • ONNX-Chainer (for ONNX model export)
  • Pillow
  • Wand (for training)

Installation

Install Python packages

pip install chainer
pip install pillow

Enable GPU support

Install the CuPy precompiled binary package, which includes the latest version of the cuDNN library.
See: CuPy Installation Guide
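On a CUDA 10.0 runtime such as the Colab environment described above, the matching precompiled wheel is installed as shown below; this is a sketch, and the cupy-cudaXXX wheel name must match your actual CUDA version (see the CuPy Installation Guide).

pip install cupy-cuda100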

Getting waifu2x-chainer

git clone https://github.com/tsurumeso/waifu2x-chainer.git
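In a Colab notebook, the clone and the change of working directory can be done from a cell with a shell command and a line magic, for example:

!git clone https://github.com/tsurumeso/waifu2x-chainer.git
%cd waifu2x-chainer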

Testing

cd waifu2x-chainer
python waifu2x.py

Usage

Omitting the --gpu (-g) option runs the process on the CPU.

Noise reduction

python waifu2x.py --method noise --noise_level 1 --input path/to/image/or/directory --arch VGG7 --gpu 0

python waifu2x.py -m noise -n 0 -i path/to/image/or/directory -a 0 -g 0
python waifu2x.py -m noise -n 2 -i path/to/image/or/directory -a 0 -g 0
python waifu2x.py -m noise -n 3 -i path/to/image/or/directory -a 0 -g 0

2x upscaling

python waifu2x.py --method scale --input path/to/image/or/directory --arch VGG7 --gpu 0

python waifu2x.py -m scale -i path/to/image/or/directory -a 0 -g 0

Noise reduction + 2x upscaling

python waifu2x.py --method noise_scale --noise_level 1 --input path/to/image/or/directory --arch VGG7 --gpu 0

python waifu2x.py -m noise_scale -n 0 -i path/to/image/or/directory -a 0 -g 0
python waifu2x.py -m noise_scale -n 2 -i path/to/image/or/directory -a 0 -g 0
python waifu2x.py -m noise_scale -n 3 -i path/to/image/or/directory -a 0 -g 0
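Inside a Colab notebook, the same commands run as shell cells prefixed with !. For example, assuming the repository has already been cloned and the notebook's working directory is waifu2x-chainer (the input path is a placeholder):

!python waifu2x.py -m noise_scale -n 1 -i path/to/image/or/directory -a 0 -g 0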

Train your own model

Install Wand

sudo apt install libmagickwand-dev
pip install wand

For more details, please refer to the template training scripts in appendix/linux or appendix/windows. In my case, 5000 JPEG images were used for pretraining and 1000 noise-free PNG images for finetuning.
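Before starting a long training run, it is worth a quick sanity check that Wand can actually find the ImageMagick library. This is a minimal sketch, not part of the repository's training scripts:

# Verify that Wand is linked against ImageMagick; the import raises ImportError
# if libmagickwand-dev is not installed.
from wand.image import Image  # noqa: F401
from wand.version import MAGICK_VERSION

print(MAGICK_VERSION)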

Convert Chainer models to ONNX and Caffe models

Install ONNX-Chainer

pip install onnx-chainer

Run script

cd appendix
python convert_models.py

Results are saved in the same directory as the original models (e.g. models/vgg7/anime_style_scale_rgb.npz to models/vgg7/anime_style_scale_rgb.caffemodel).
Note: since chainer.CaffeFunction does not currently support the Slice layer, conversion to caffemodel is skipped for some models.
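For reference, the general shape of an onnx-chainer export call is sketched below. This is not the repository's convert_models.py script; the toy network and the input shape are illustrative stand-ins for the actual waifu2x architectures.

import chainer
import chainer.functions as F
import chainer.links as L
import numpy as np
import onnx_chainer

# Toy 2-layer CNN standing in for a waifu2x model (illustrative only)
class TinyCNN(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            self.conv1 = L.Convolution2D(3, 16, ksize=3, pad=1)
            self.conv2 = L.Convolution2D(16, 3, ksize=3, pad=1)

    def __call__(self, x):
        return self.conv2(F.relu(self.conv1(x)))

model = TinyCNN()
x = np.zeros((1, 3, 64, 64), dtype=np.float32)  # dummy input that fixes the graph shape
onnx_chainer.export(model, x, filename='tiny_cnn.onnx')  # traces the model and writes the ONNX graph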

References

[1] nagadomi's waifu2x: https://github.com/nagadomi/waifu2x
[2] piapro: https://piapro.jp/
