
Collaborative Neural Rendering using Anime Character Sheets

Our paper was accepted by the Special Track of IJCAI 2023 (three reviews with "accept" ratings), and the revised paper is available. -> poster

2023/4/18: The dataset is now available in CoNR_Dataset! 🎉


Introduction

This project is the official implementation of Collaborative Neural Rendering using Anime Character Sheets, which aims to generate vivid dancing videos from hand-drawn anime character sheets (ACS).1 For more demos and details, watch our highly recommended video on Bilibili or YouTube. Our FAQ on Zhihu (in Chinese) explains the ideas underpinning CoNR.

Usage

Prerequisites

  • NVIDIA GPU + CUDA + CUDNN
  • Python 3.6

Installation

  • Clone this repository
git clone https://github.com/megvii-research/CoNR
  • Dependencies

To install all the dependencies, please run the following commands.

cd CoNR
pip install -r requirements.txt
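Before moving on to the weights, it can help to confirm the dependencies installed cleanly. A minimal, stdlib-only sketch (the package names to check, e.g. torch and streamlit, are assumptions based on how this README uses them):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of `names` that cannot be imported here."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Packages this README appears to rely on (assumed, not exhaustive);
# an empty list means everything is importable.
print(missing_packages(["torch", "streamlit"]))
```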
  • Download weights from Google Drive. Alternatively, you can download them from Baidu Netdisk (password: RDxc).
mkdir weights && cd weights
gdown https://drive.google.com/uc?id=1M1LEpx70tJ72AIV2TQKr6NE_7mJ7tLYx
gdown https://drive.google.com/uc?id=1YvZy3NHkJ6gC3pq_j8agcbEJymHCwJy0
gdown https://drive.google.com/uc?id=1AOWZxBvTo9nUf2_9Y7Xe27ZFQuPrnx9i
gdown https://drive.google.com/uc?id=19jM1-GcqgGoE1bjmQycQw_vqD9C5e-Jm

Prepare Inputs

We provide two Ultra-Dense Pose (UDP) sequences for two characters. You can generate more UDPs from 3D models and motions as described in our paper, or use MMD2UDP (thanks to @KurisuMakise004). Baidu Netdisk (password: RDxc)

# for short hair girl
gdown https://drive.google.com/uc?id=11HMSaEkN__QiAZSnCuaM6GI143xo62KO
unzip short_hair.zip
mv short_hair/ poses/

# for double ponytail girl
gdown https://drive.google.com/uc?id=1WNnGVuU0ZLyEn04HzRKzITXqib1wwM4Q
unzip double_ponytail.zip
mv double_ponytail/ poses/

We provide sample anime character sheets as inputs. You can also draw more yourself. Character sheets must be cut out from the background and saved in PNG format. Baidu Netdisk (password: RDxc)

# for short hair girl
gdown https://drive.google.com/uc?id=1r-3hUlENSWj81ve2IUPkRKNB81o9WrwT
unzip short_hair_images.zip
mv short_hair_images/ character_sheet/

# for double ponytail girl
gdown https://drive.google.com/uc?id=1XMrJf9Lk_dWgXyTJhbEK2LZIXL9G3MWc
unzip double_ponytail_images.zip
mv double_ponytail_images/ character_sheet/
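Since the character sheets must be background-free PNGs, a quick sanity check that an image actually carries an alpha channel can save a failed run. A stdlib-only sketch that reads the PNG IHDR color type (types 4 and 6 store alpha; a palette image made transparent via a tRNS chunk would be missed by this shortcut):

```python
PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_has_alpha(data):
    """True if the PNG's IHDR color type declares an alpha channel."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    # IHDR is always the first chunk: 4-byte length, b"IHDR",
    # width (4), height (4), bit depth (1), then color type (1),
    # which lands at byte offset 25.
    return data[25] in (4, 6)  # 4 = grayscale+alpha, 6 = RGBA
```

For example, `png_has_alpha(open("character_sheet/0000.png", "rb").read())` should be True for a properly cut-out sheet (the filename is hypothetical).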

RUN!

  • via web UI
streamlit run streamlit.py --server.port=8501

Then open your browser, visit localhost:8501, and follow the instructions to generate the video.

  • via terminal
mkdir {dir_to_save_result}

python -m torch.distributed.launch \
--nproc_per_node=1 train.py --mode=test \
--world_size=1 --dataloaders=2 \
--test_input_poses_images={dir_to_poses} \
--test_input_person_images={dir_to_character_sheet} \
--test_output_dir={dir_to_save_result} \
--test_checkpoint_dir={dir_to_weights}

ffmpeg -y -r 30 -i {dir_to_save_result}/%d.png -c:v libx264 -r 30 output.mp4
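ffmpeg's %d.png input pattern expects the frames to be numbered consecutively with no gaps, so it can be worth verifying the output directory before encoding. A small stdlib sketch (the numbering convention is an assumption based on the command above):

```python
import os
import re

def frames_are_sequential(result_dir):
    """True if result_dir holds frames numbered consecutively
    (e.g. 0.png .. N-1.png), as ffmpeg's %d.png pattern needs."""
    nums = sorted(
        int(name[:-4])                      # strip the ".png" suffix
        for name in os.listdir(result_dir)
        if re.fullmatch(r"\d+\.png", name)  # keep only numeric frames
    )
    # Non-empty and gap-free starting from the lowest index.
    return bool(nums) and nums == list(range(nums[0], nums[0] + len(nums)))
```

If this returns False, a frame was likely dropped during inference and ffmpeg will stop at the first gap.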

Citation

@inproceedings{lin2023conr,
  title={Collaborative Neural Rendering using Anime Character Sheets},
  author={Lin, Zuzeng and Huang, Ailin and Huang, Zhewei},
  booktitle = {Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI)},
  year={2023}
}

Footnotes

  1. Zuzeng Lin was not involved in the submissions to AAAI 23 and IJCAI 23, which were made by the other authors alone after the heartbreaking reviews from CVPR 22 and ECCV 22. He explored the idea of assisting anime creation with AI and proposed CoNR as a baseline for solving consistency and artistic-control issues at the end of 2020. The other authors revised his draft to its present state and made the demo videos after he quit, but they were not involved in the successor versions. He appreciates the discussions with the many people interested in this project, and the Live3D public-beta users in September 2021.