# StyleInV: A Temporal Style Modulated Inversion Network for Unconditional Video Generation

This repository contains the official PyTorch implementation of the following paper:

**StyleInV: A Temporal Style Modulated Inversion Network for Unconditional Video Generation**<br>
Yuhan Wang, Liming Jiang, Chen Change Loy<br>
In ICCV 2023.

From MMLab@NTU affiliated with S-Lab, Nanyang Technological University

[Paper] | [Project Page] | [Video]

**Main experiment results (256×256).** From left to right: DeeperForensics, FaceForensics, SkyTimelapse, TaiChi.

**Initial-frame-conditioned animation and style transfer (256×256).** From left to right: in-the-wild image, pSp inversion, raw animation, style transfer.

## Updates

- **[09/11/2023]** Source code is available. A tutorial on the environment, usage, and data/model preparation is on the way.
- **[08/2023]** Accepted by ICCV 2023. The code is coming soon!

## Training Pipeline

### 1. StyleGAN2 pretraining

`train_stylegan2.py`
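The pretraining flags are not spelled out here, so the command below is only a minimal sketch that reuses the StyleGAN2-ADA-style flags appearing in the finetuning command of step 4; the output directory, dataset path, and GPU count are placeholders to adapt:

```bash
# Hedged sketch: flags mirror the finetuning command in step 4;
# adjust --outdir, --data, and --gpus for your setup.
python -u train_stylegan2.py \
    --outdir=experiments/stylegan2/pretrain/my-dataset \
    --gpus=4 \
    --data=[your frame dataset directory] \
    --mirror=1 \
    --cfg=paper256 \
    --aug=ada \
    --snap=20
```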

### 2. pSp pretraining for StyleInV initialization

`train_psp.py`
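A sketch of the call, assuming `train_psp.py` takes the pretrained StyleGAN2 weights as input the way the original pSp trainer does; every flag name below is hypothetical, not the repo's documented interface:

```bash
# Hedged sketch: all flag names here are assumptions. The pSp encoder
# inverts frames into the latent space of the generator from step 1.
python -u train_psp.py \
    --outdir=experiments/psp/my-dataset \
    --data=[your frame dataset directory] \
    --stylegan-weights=experiments/stylegan2/pretrain/my-dataset/network-snapshot.pkl
```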

### 3. StyleInV training

`train_styleinv.py`
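A sketch under the same caveat (hypothetical flag names): per the paper's design, StyleInV learns a temporal style-modulated inversion network on top of the pretrained generator, initialized from the pSp encoder of step 2.

```bash
# Hedged sketch: flag names are assumptions. Trains StyleInV on the
# video dataset, building on the models from steps 1 and 2.
python -u train_styleinv.py \
    --outdir=experiments/styleinv/my-dataset \
    --data=[your video dataset directory] \
    --gpus=4
```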

### 4. Finetuning-based style transfer

1. Download `stylegan2-celebvhq256-fid5.00.pkl` and save it to `pretrained_models/psp/celebv_hq_256/stylegan2-celebvhq256-fid5.00.pkl`.
2. For the auxiliary models, download `model_ir_se50.pth` and save it to `pretrained_models/psp/model_ir_se50.pth`. The weights for the perceptual loss are downloaded automatically.
3. Prepare a fine-tuning dataset, where each image should be cropped according to this or this.
4. Start finetuning:
```bash
python -u train_stylegan2.py \
    --outdir=experiments/stylegan2/transfer/celebvhq-arcane \
    --gpus=4 \
    --data=[your fine-tune dataset directory] \
    --mirror=1 \
    --cfg=paper256 \
    --aug=ada \
    --snap=20 \
    --resume=pretrained_models/psp/celebv_hq_256/stylegan2-celebvhq256-fid5.00.pkl \
    --transfer=True \
    --no-metric=True \
    --finetune-g-res=64 \
    --perceptual-weight=30 \
    --identity-weight=1
```

## Inference

### 1. Generate a video dataset

`generate_styleinv_video.py`
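A hedged sketch of sampling a fake video set from a trained checkpoint (all argument names below are hypothetical placeholders):

```bash
# Hedged sketch: argument names are assumptions. Samples videos from
# a trained StyleInV checkpoint so they can be scored in step 2.
python -u generate_styleinv_video.py \
    --network=experiments/styleinv/my-dataset/network-snapshot.pkl \
    --outdir=experiments/generated-videos/my-dataset
```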

### 2. Compute the quantitative metrics

`scripts/calc_metrics_video.py`
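A sketch of the metric computation, assuming the script compares the generated set against the real dataset; the argument names and the metric identifier are assumptions (FVD is the standard metric for this task):

```bash
# Hedged sketch: argument and metric names are assumptions. Scores
# the videos generated in step 1 against the real dataset (e.g. FVD).
python -u scripts/calc_metrics_video.py \
    --real_data_path=[your real video dataset directory] \
    --fake_data_path=experiments/generated-videos/my-dataset \
    --metrics=fvd2048_16f
```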

### 3. Animation and style transfer

`generate_animation.py`
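A hedged sketch for animating an in-the-wild image, matching the pipeline shown in the teaser (invert the first frame with pSp, animate it with StyleInV, and optionally render through the finetuned generator from training step 4); all argument names are hypothetical:

```bash
# Hedged sketch: argument names are assumptions, not the script's
# documented interface.
python -u generate_animation.py \
    --network=experiments/styleinv/my-dataset/network-snapshot.pkl \
    --transfer-network=experiments/stylegan2/transfer/celebvhq-arcane/network-snapshot.pkl \
    --source=[path to your source image] \
    --outdir=experiments/animation
```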

## Citation

If you find our repo useful for your research, please consider citing our paper:

```bibtex
@InProceedings{wang2023styleinv,
    title = {{StyleInV}: A Temporal Style Modulated Inversion Network for Unconditional Video Generation},
    author = {Wang, Yuhan and Jiang, Liming and Loy, Chen Change},
    booktitle = {ICCV},
    year = {2023}
}
```

## Acknowledgement

This codebase is maintained by Yuhan Wang.

This repo is built on top of the following works: