
DragNUWA

DragNUWA enables users to manipulate backgrounds or objects within images directly, and the model seamlessly translates these actions into camera movements or object motions, generating the corresponding video.

See our paper: DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory

Open in Spaces | Open in Colab

DragNUWA 1.5 (Updated on Jan 8, 2024)

DragNUWA 1.5 uses Stable Video Diffusion as its backbone to animate an image along a user-specified path.
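Concretely, a drag path can be thought of as a sparse sequence of screen points expanded to one point per generated frame. The following is a minimal NumPy sketch of that idea, not the repository's actual input format; the function name, canvas size, and frame count are illustrative assumptions.

# Hypothetical illustration: resample a user-drawn path (a few (x, y)
# anchors) into one point per output frame via linear interpolation.
# This does NOT mirror DragNUWA's real input pipeline.
import numpy as np

def interpolate_path(anchors, num_frames=14):
    anchors = np.asarray(anchors, dtype=np.float32)      # (k, 2) anchor points
    t_anchor = np.linspace(0.0, 1.0, len(anchors))       # parameter of each anchor
    t_frame = np.linspace(0.0, 1.0, num_frames)          # parameter of each frame
    xs = np.interp(t_frame, t_anchor, anchors[:, 0])
    ys = np.interp(t_frame, t_anchor, anchors[:, 1])
    return np.stack([xs, ys], axis=1)                    # (num_frames, 2)

# Drag from the center of a 576x320 canvas toward the upper right.
path = interpolate_path([(288, 160), (400, 100), (480, 60)])
print(path.shape)  # (14, 2)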

Please refer to assets/DragNUWA1.5/figure_raw for the raw GIFs.

DragNUWA 1.0 (Original Paper)

DragNUWA 1.0 utilizes text, images, and trajectory as three essential control factors to facilitate highly controllable video generation from semantic, spatial, and temporal aspects.
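As a rough mental model, one generation request pairs the three controls as below; the names are purely illustrative and do not match the repository's actual API.

# Hypothetical bundle of DragNUWA 1.0's three control factors.
# Field names are for exposition only.
from dataclasses import dataclass

@dataclass
class Condition:
    text: str            # semantic control: what should happen
    image_path: str      # spatial control: what the scene looks like
    trajectories: list   # temporal control: one (x, y) point list per dragged region

cond = Condition(
    text="a boat drifting across a calm lake",
    image_path="assets/example.png",
    trajectories=[[(288, 160), (400, 100), (480, 60)]],
)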

Getting Started

Setting Up the Environment

git clone https://github.com/ProjectNUWA/DragNUWA.git
cd DragNUWA

conda create -n DragNUWA python=3.8
conda activate DragNUWA
pip install -r environment.txt
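Before moving on, it can help to confirm that PyTorch was installed and sees a GPU, since inference needs CUDA; a quick check (assuming environment.txt pulls in torch):

# Sanity-check the freshly created environment.
import torch

print("torch", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))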

Download Pretrained Weights

Download the pretrained weights to the models/ directory, or simply run bash models/Download.sh.
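As an optional sanity check, you can verify that the downloaded checkpoint deserializes on CPU; the filename below is an assumption, so substitute whatever Download.sh actually places under models/:

# Hypothetical check that a checkpoint file loads; the path is a guess.
import torch

state = torch.load("models/drag_nuwa_svd.pth", map_location="cpu")
print(type(state))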

Drag and Animate!

python DragNUWA_demo.py

This launches a Gradio demo in which you can drag points on an image and animate it!
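If you need to reach the demo from another machine or change the port, Gradio's standard launch options apply. The sketch below uses Gradio's public API with a placeholder app; it is not a quote from DragNUWA_demo.py:

# Sketch: exposing a Gradio demo on the network. `demo` stands in for
# whatever Blocks/Interface object DragNUWA_demo.py builds.
import gradio as gr

with gr.Blocks() as demo:
    gr.Markdown("DragNUWA demo placeholder")

demo.launch(
    server_name="0.0.0.0",  # listen on all interfaces, not just localhost
    server_port=7860,       # Gradio's default port
    share=False,            # set True for a temporary public gradio.live link
)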

Acknowledgement

We appreciate the open-source contributions of the following projects: Stable Video Diffusion, Hugging Face, and UniMatch.

Citation

@article{yin2023dragnuwa,
  title={Dragnuwa: Fine-grained control in video generation by integrating text, image, and trajectory},
  author={Yin, Shengming and Wu, Chenfei and Liang, Jian and Shi, Jie and Li, Houqiang and Ming, Gong and Duan, Nan},
  journal={arXiv preprint arXiv:2308.08089},
  year={2023}
}
