The official code of the paper "LVCD: Reference-based Lineart Video Colorization with Diffusion Models"

LVCD: Reference-based Lineart Video Colorization with Diffusion Models

ACM Transactions on Graphics & SIGGRAPH Asia 2024

Project page | arXiv

Zhitong Huang $^1$, Mohan Zhang $^2$, Jing Liao $^{1*}$

$^1$: City University of Hong Kong, Hong Kong SAR, China    $^2$: WeChat, Tencent Inc., Shenzhen, China
$^*$: Corresponding author

Abstract:

We propose the first video diffusion framework for reference-based lineart video colorization. Unlike previous works that rely solely on image generative models to colorize lineart frame by frame, our approach leverages a large-scale pretrained video diffusion model to generate colorized animation videos. This approach leads to more temporally consistent results and is better equipped to handle large motions. First, we introduce Sketch-guided ControlNet, which provides additional control to finetune an image-to-video diffusion model for controllable video synthesis, enabling the generation of animation videos conditioned on lineart. We then propose Reference Attention to facilitate the transfer of colors from the reference frame to other frames containing fast and expansive motions. Finally, we present a novel scheme for sequential sampling, incorporating the Overlapped Blending Module and Prev-Reference Attention, to extend the video diffusion model beyond its original fixed-length limitation for long video colorization. Both qualitative and quantitative results demonstrate that our method significantly outperforms state-of-the-art techniques in terms of frame and video quality, as well as temporal consistency. Moreover, our method is capable of generating high-quality, temporally consistent long animation videos with large motions, which is not achievable in previous works.

Installation

conda create -n lvcd python=3.10.0
conda activate lvcd
pip3 install -r requirements/pt2.txt
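
A quick sanity check after installation (this assumes requirements/pt2.txt installs a CUDA-enabled PyTorch build):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"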

Download pretrained models

  1. Download the pretrained SVD weights and save them as ./checkpoints/svd.safetensors
  2. Download the finetuned weights for the Sketch-guided ControlNet and save them as ./checkpoints/lvcd.ckpt
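
After both downloads, the repository expects the files under ./checkpoints/. A minimal sketch of the expected layout (the download links are the ones referenced in the steps above and are not repeated here):

mkdir -p checkpoints
# expected contents once both files are in place:
#   checkpoints/svd.safetensors   <- pretrained SVD weights
#   checkpoints/lvcd.ckpt         <- finetuned Sketch-guided ControlNet weights
ls -lh checkpoints/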

Inference

All the inference code is placed under ./inference/, where the Jupyter notebook sample.ipynb demonstrates how to sample videos. Two testing clips are also provided.
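
One possible way to open the demo notebook from the command line (this assumes Jupyter is installed into the lvcd environment; it is not listed explicitly in requirements/pt2.txt):

conda activate lvcd
pip install notebook                       # only if Jupyter is not already available
jupyter notebook inference/sample.ipynb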

Training

Dataset preparation

Download the training set from here, including the .zip, .z01 to .z07, and train_clips_hist.json files.

Unzip the zip files and put the json file under the root directory of the dataset as .../Animation_video/train_clips_hist.json.
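
A minimal sketch for extracting the split archive and placing the clip index. The archive name Animation_video.zip is an assumption for illustration; substitute the actual file name from the download. 7-Zip reads the .z01 to .z07 parts automatically when they sit next to the .zip:

7z x Animation_video.zip                   # hypothetical archive name; adjust as needed
mv train_clips_hist.json Animation_video/train_clips_hist.json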
