Try out our Hugging Face Space!
Powered by ZeroGPU - Special thanks to Hugging Face 🙌
- Install the conda environment:

  ```bash
  conda env create -f flipsketch.yml
  ```
- Download the T2V LoRA model from Hugging Face:

  ```bash
  git lfs install
  git clone https://huggingface.co/Hmrishav/t2v_sketch-lora
  ```
- Place the LoRA checkpoint under the root folder:

  ```bash
  mv t2v_sketch-lora/checkpoint-2500 ./checkpoint-2500/
  ```
- Run the app:

  ```bash
  python app.py
  ```
To use the codebase with PyTorch 2.0, modify the import to use `text2vid_torch2.py` instead of `text2vid_modded.py`, as sketched below.
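A minimal sketch of that swap (the module alias and the exact import site are assumptions; adjust it to wherever the repo actually imports the pipeline module):

```python
# Default pipeline module (PyTorch < 2.0):
# import text2vid_modded as text2vid

# PyTorch 2.0+ variant, assumed to expose the same interface:
import text2vid_torch2 as text2vid
```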
We use a text-to-video (T2V) model fine-tuned on sketch animations and condition it to follow an input sketch. We perform attention composition with reference noise derived from the input sketch.
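For intuition, here is a minimal, illustrative sketch of attention composition (the function name, tensor shapes, and the blending weight `alpha` are assumptions for illustration, not the repo's exact implementation):

```python
import torch
import torch.nn.functional as F

def composed_attention(q, k, v, k_ref, v_ref, alpha=0.5):
    """Blend attention over the generated frame's own tokens with attention
    over reference tokens derived from the input sketch's noise.
    `alpha` is a hypothetical blending weight, not a value from the paper."""
    scale = q.shape[-1] ** -0.5
    # Standard attention over the frame's own keys/values.
    attn_self = F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) @ v
    # Attention against reference keys/values from the sketch's reference noise.
    attn_ref = F.softmax(q @ k_ref.transpose(-2, -1) * scale, dim=-1) @ v_ref
    return (1 - alpha) * attn_self + alpha * attn_ref
```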
If you find FlipSketch useful, consider citing our work:
```bibtex
@misc{bandyopadhyay2024flipsketch,
      title={FlipSketch: Flipping Static Drawings to Text-Guided Sketch Animations},
      author={Hmrishav Bandyopadhyay and Yi-Zhe Song},
      year={2024},
      eprint={2411.10818},
      archivePrefix={arXiv},
      primaryClass={cs.GR},
      url={https://arxiv.org/abs/2411.10818},
}
```