This repository provides the official implementation for the following paper:

**Learning Inclusion Matching for Animation Paint Bucket Colorization**

Colorizing line art is a pivotal task in the production of hand-drawn cel animation. In this work, we introduce a new learning-based inclusion matching pipeline, which directs the network to comprehend the inclusion relationships between segments. To facilitate the training of our network, we also propose a unique dataset, PaintBucket-Character. This dataset includes rendered line arts alongside their colorized counterparts, featuring various 3D characters.
- 2024.04.25: Lightweight model released.
- 2024.04.12: Support for multi-ground-truth inference added.
- 2024.04.08: Model inference updated. All resolutions and unclosed line art images are now supported.
- 2024.03.30: Checkpoint and training code of our BasicPBC released.
- 2024.03.29: This repo is created.
- Add a trapped-ball segmentation module for inference on unclosed line art.
- Add a lightweight BasicPBC that can process images in 2K resolution without Out-Of-Memory (OOM) errors on a GPU with 16GB of memory.
- Clone the repo

    ```bash
    git clone https://github.com/ykdai/BasicPBC.git
    ```

- Install dependent packages

    ```bash
    cd BasicPBC
    pip install -r requirements.txt
    ```

- Install BasicPBC

    Please run the following command in the BasicPBC root path to install BasicPBC:

    ```bash
    python setup.py develop
    ```
The details of our dataset can be found at this page. The dataset can be downloaded using the following links.
| | Google Drive | Baidu Netdisk | Number | Description |
| :--- | :---: | :---: | :---: | :--- |
| PaintBucket-Character Train/Test | link | link | 11,345 / 3,000 | 3D rendered frames for training and testing. Our dataset is a mere 2GB in size, so feel free to download it and enjoy exploring. 😆😆 |
| PaintBucket-Real Test | / | / | 200 | Hand-drawn frames for testing. |
Due to copyright issues, we do not provide download links for the real hand-drawn dataset. Please contact us via e-mail if you want to use it. These hand-drawn frames are for evaluation only and not for any commercial activities.
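After downloading and unzipping, you can sanity-check the extraction with a few lines of Python. This is only a convenience sketch: the split paths follow the repository layout shown later in this README, and the raw png counts include every image type, so treat them as a coarse check rather than an exact match to the frame numbers above.

```python
# Coarse sanity check after unzipping PaintBucket-Character: count png files
# per split. Paths follow the repository layout shown later in this README.
from pathlib import Path

for split in ("dataset/train/PaintBucket_Char", "dataset/test/PaintBucket_Char"):
    pngs = list(Path(split).rglob("*.png"))
    print(f"{split}: {len(pngs)} png files")
```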
You can download the pretrained checkpoints from the following links. Please place it under the `ckpt` folder and unzip it, then you can run `basicsr/test.py` for inference.
| | Google Drive | Baidu Netdisk |
| :--- | :---: | :---: |
| BasicPBC | link | link |
| BasicPBC-Light | link | link |
To estimate the colorized frames with our checkpoint trained on PaintBucket-Character, you can run `basicsr/test.py` using:

```bash
python basicsr/test.py -opt options/test/basicpbc_pbch_test_option.yml
```
Or you can test the lightweight model with:

```bash
python basicsr/test.py -opt options/test/basicpbc_light_test_option.yml
```
The colorized results will be saved at `results/`.
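If you want to inspect the outputs programmatically, here is a tiny sketch, assuming Pillow is available; the exact filenames under `results/` depend on your option file, so the glob below is an assumption.

```python
# Quick look at the generated outputs; filenames under results/ depend on
# the option file used, so this glob is an assumption.
from pathlib import Path
from PIL import Image

for path in sorted(Path("results").rglob("*.png"))[:3]:
    image = Image.open(path)
    print(path, image.size, image.mode)
```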
To play with your own data, put your anime clip(s) under `dataset/test/`. The clip folder should contain at least one colorized `gt` frame and the `line` art of all frames. We also provide two simple examples: `laughing_girl` and `smoke_explosion`.
```
├── dataset
    ├── test
        ├── laughing_girl
            ├── gt
                ├── 0000.png
            ├── line
                ├── 0000.png
                ├── 0001.png
                ├── ...
        ├── smoke_explosion
            ├── gt
            ├── line
```
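Before running inference, you can verify that a clip folder matches this layout. The checker below is our own minimal sketch, not part of the repository; it only mirrors the structure described above.

```python
# Minimal layout check for a clip folder: at least one colorized frame in
# gt/ and line art for all frames in line/. Not part of BasicPBC itself.
from pathlib import Path

def check_clip(clip_dir):
    clip = Path(clip_dir)
    gt_frames = sorted((clip / "gt").glob("*.png"))
    line_frames = sorted((clip / "line").glob("*.png"))
    assert gt_frames, f"{clip}: need at least one colorized frame in gt/"
    assert line_frames, f"{clip}: no line art found in line/"
    # every gt frame should have line art with the same filename
    line_names = {p.name for p in line_frames}
    missing = [p.name for p in gt_frames if p.name not in line_names]
    assert not missing, f"{clip}: gt frames without line art: {missing}"
    print(f"{clip.name}: {len(gt_frames)} gt, {len(line_frames)} line frames")

check_clip("dataset/test/laughing_girl")
```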
To run inference on `laughing_girl`, run `inference_line_frames.py` using:

```bash
python inference_line_frames.py --path dataset/test/laughing_girl
```
Or run this to try `smoke_explosion`:

```bash
python inference_line_frames.py --path dataset/test/smoke_explosion/ --mode nearest
```
Find the results under `results/`.
`inference_line_frames.py` provides several arguments for different use cases.

- `--mode` can be either `forward` or `nearest`. By default, `forward` processes your frames sequentially. If set to `nearest`, frames will be predicted from the nearest gt, e.g. given gt frames 0000.png and 0005.png, line 0003.png will be colored according to 0004.png, and 0004.png is colored according to 0005.png (see the sketch after this list).

    ```bash
    python inference_line_frames.py --path dataset/test/smoke_explosion/ --mode nearest
    ```

- `--seg_type` is `default` if not specified. It is fast and simple, but does not work if your line art contains unclosed regions. `trappedball` is robust to this case (acknowledgement to @hepesu/LineFiller). To decide which one to use, you can first set `default` together with `--save_color_seg`, which will produce colorized segmentation results. If you find that some segments are not separated properly, switch to `trappedball`.

    ```bash
    python inference_line_frames.py --path dataset/test/smoke_explosion/ --seg_type trappedball
    ```

- `--use_light_model` will use the lightweight model for inference. Add this if you are working on a low-memory GPU. Notice that this argument may produce poorer results than the base model.

- `--multi_clip` is used if you would like to run inference on many clips at the same time. Put all clips within a single folder under `dataset/test/`, e.g.:

    ```
    ├── dataset
        ├── test
            ├── your_clip_folder
                ├── clip01
                ├── clip02
                ├── ...
    ```

    In this case, run:

    ```bash
    python inference_line_frames.py --path dataset/test/your_clip_folder/ --multi_clip
    ```
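To make the `nearest` scheduling concrete, here is a small self-contained sketch of how reference frames could be assigned. The function name and the processing order are illustrative assumptions, not the repository's actual implementation:

```python
# Illustrative sketch of "nearest" mode: each line frame takes the adjacent
# frame on the side of its nearest gt as reference; frames closest to a gt
# are processed first so their reference is already colorized.
# Assumption-based reconstruction, not BasicPBC's actual code.
def nearest_reference_order(gt_ids, frame_ids):
    refs = {}
    for i in frame_ids:
        if i in gt_ids:
            continue  # gt frames are already colorized
        nearest_gt = min(gt_ids, key=lambda g: abs(g - i))
        refs[i] = i + 1 if nearest_gt > i else i - 1
    # process frames nearest to a gt first
    order = sorted(refs, key=lambda i: min(abs(g - i) for g in gt_ids))
    return [(i, refs[i]) for i in order]

# gt 0000.png and 0005.png: 0003 is colored from 0004, 0004 from 0005
print(nearest_reference_order([0, 5], range(6)))
# -> [(1, 0), (4, 5), (2, 1), (3, 4)]
```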
Training with a single GPU

To train a model with your own data/model, you can edit `options/train/basicpbc_pbch_train_option.yml` and run the following command:
```bash
python basicsr/train.py -opt options/train/basicpbc_pbch_train_option.yml
```
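If you prefer to derive a new option file from the provided one instead of editing it in place, the sketch below shows one way to do it, assuming PyYAML is available. The keys touched here (`name`, `datasets.train.dataroot`) are assumptions based on typical BasicSR-style option files; check the actual yml for the real structure.

```python
# Sketch: clone the provided training option file and point it at your own
# data. Key names are assumptions from BasicSR-style configs; verify them
# against options/train/basicpbc_pbch_train_option.yml.
import yaml

with open("options/train/basicpbc_pbch_train_option.yml") as f:
    opt = yaml.safe_load(f)

opt["name"] = "my_pbc_experiment"  # experiment name (assumed key)
# hypothetical dataset root override; the real key may differ
opt.setdefault("datasets", {}).setdefault("train", {})["dataroot"] = "dataset/train/MyOwnData"

with open("options/train/my_train_option.yml", "w") as f:
    yaml.safe_dump(opt, f, sort_keys=False)
```

Then train with `python basicsr/train.py -opt options/train/my_train_option.yml`.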
Training with multiple GPUs

You can run the following command for multi-GPU training:

```bash
CUDA_VISIBLE_DEVICES=0,1 bash scripts/dist_train.sh 2 options/train/basicpbc_pbch_train_option.yml
```
```
├── BasicPBC
    ├── assets
    ├── basicsr
        ├── archs
        ├── data
        ├── losses
        ├── metrics
        ├── models
        ├── ops
        ├── utils
    ├── dataset
        ├── train
            ├── PaintBucket_Char
        ├── test
            ├── PaintBucket_Char
            ├── PaintBucket_Real
    ├── experiments
    ├── options
        ├── test
        ├── train
    ├── paint
    ├── raft
    ├── results
    ├── scripts
```
This project is licensed under S-Lab License 1.0. Redistribution and use of the dataset and code for non-commercial purposes should follow this license.
If you find this work useful, please cite:
```bibtex
@inproceedings{InclusionMatching2024,
  title     = {Learning Inclusion Matching for Animation Paint Bucket Colorization},
  author    = {Dai, Yuekun and Zhou, Shangchen and Li, Qinyue and Li, Chongyi and Loy, Chen Change},
  booktitle = {CVPR},
  year      = {2024},
}
```
If you have any questions, please feel free to reach out to me at `ydai005@e.ntu.edu.sg`.