Liao Shen, Tianqi Liu, Huiqiang Sun, Xinyi Ye, Baopu Li, Jianming Zhang, Zhiguo Cao✉
✉Corresponding Author
git clone https://github.com/leoShen917/DreamMover.git
cd DreamMover
conda create -n mover python=3.8.5
conda activate mover
pip install -r requirement.txt
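After installation, you can verify that the environment works by checking that PyTorch sees your GPU (this assumes a CUDA-enabled build of torch is pulled in by requirement.txt):

python -c "import torch; print(torch.cuda.is_available())"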
You can download the pretrained Stable Diffusion v1.5 model from Hugging Face and set model_path to your local directory.
[Optional] You can download the fine-tuned VAE model from Hugging Face for better performance.
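For example, you can fetch the checkpoints ahead of time with git-lfs. The VAE repo below (stabilityai/sd-vae-ft-mse) is a commonly used fine-tuned VAE and only an illustrative choice; the README does not pin a specific checkpoint:

git lfs install
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
git clone https://huggingface.co/stabilityai/sd-vae-ft-mse   # illustrative VAE checkpoint

Then pass the cloned directories as model_path and vae_path.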
To start the Gradio UI of DreamMover, run the following in your environment:
python gradio_ui.py
Then, by default, you can access the UI at http://127.0.0.1:7860.
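Gradio also honors the standard GRADIO_SERVER_NAME and GRADIO_SERVER_PORT environment variables (assuming gradio_ui.py does not hard-code them), so you can expose the UI on another host or port:

GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=8080 python gradio_ui.py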
To start with, run the following command to train a LoRA for the image pair:
python lora/train_dreambooth_lora.py \
--pretrained_model_name_or_path [model_path] \
--instance_data_dir [img_path] \
--output_dir [lora_path] \
--instance_prompt [prompt] \
--lora_rank 16
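For example, with illustrative paths (the names below are placeholders, not files shipped with the repo):

python lora/train_dreambooth_lora.py \
--pretrained_model_name_or_path ./stable-diffusion-v1-5 \
--instance_data_dir ./data/pair01 \
--output_dir ./lora/pair01 \
--instance_prompt "a photo of a dog running" \
--lora_rank 16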
After that, we can run the main code:
python main.py \
--prompt [prompt] --img_path [img_path] --model_path [model_path] --vae_path [vae_path] --lora_path [lora_path] --save_dir [save_dir] --Time 33
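A concrete invocation, continuing the illustrative paths from the LoRA step above:

python main.py \
--prompt "a photo of a dog running" \
--img_path ./data/pair01 \
--model_path ./stable-diffusion-v1-5 \
--vae_path default \
--lora_path ./lora/pair01 \
--save_dir ./results \
--Time 33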
The script supports the following options:
--prompt : Text prompt for the image pair (default: "")
--img_path : Path to the image pair
--model_path : Pretrained model path (default: "runwayml/stable-diffusion-v1-5")
--vae_path : VAE model path (default: "default")
--lora_path : LoRA model path (the output path of the LoRA training step)
--save_dir : Path for the output images (default: "./results")
--Time : Number of frames in the generated video
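The script writes the interpolated frames to save_dir; you can then assemble them into a video with ffmpeg. A minimal sketch, assuming the frames are saved as sequentially numbered PNGs (the exact naming scheme depends on the script's output):

ffmpeg -framerate 10 -i ./results/%d.png -pix_fmt yuv420p output.mp4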
If you find our work useful in your research, please consider citing our paper:
@article{shen2024dreammover,
title={DreamMover: Leveraging the Prior of Diffusion Models for Image Interpolation with Large Motion},
author={Shen, Liao and Liu, Tianqi and Sun, Huiqiang and Ye, Xinyi and Li, Baopu and Zhang, Jianming and Cao, Zhiguo},
journal={arXiv preprint arXiv:2409.09605},
year={2024}
}
This code borrows heavily from DragDiffusion, DiffMorpher, and Diffusers. We thank the respective authors for open-sourcing their methods.