Kaiwen Zhang Yifan Zhou Xudong Xu Xingang Pan✉ Bo Dai
✉Corresponding Author
To install the requirements, run the following in your environment first:

```shell
pip install -r requirements.txt
```

To run the code with CUDA properly, you can comment out `torch` and `torchvision` in `requirements.txt`, and install the appropriate versions of `torch` and `torchvision` for your CUDA setup according to the instructions on the PyTorch website.
You can also download the pretrained model Stable Diffusion v2.1-base from Hugging Face and set `--model_path` to your local directory.
To start the Gradio UI of DiffMorpher, run the following in your environment:

```shell
python app.py
```

Then, by default, you can access the UI at http://127.0.0.1:7860.
You can also run the code with the following command:
```shell
python main.py \
  --image_path_0 [image_path_0] --image_path_1 [image_path_1] \
  --prompt_0 [prompt_0] --prompt_1 [prompt_1] \
  --output_path [output_path] \
  --use_adain --use_reschedule --save_inter
```

The script also supports the following options:
- `--image_path_0`: Path of the first image (default: "")
- `--prompt_0`: Prompt of the first image (default: "")
- `--image_path_1`: Path of the second image (default: "")
- `--prompt_1`: Prompt of the second image (default: "")
- `--model_path`: Pretrained model path (default: "stabilityai/stable-diffusion-2-1-base")
- `--output_path`: Path of the output image (default: "")
- `--save_lora_dir`: Path of the output LoRA directory (default: "./lora")
- `--load_lora_path_0`: Path of the LoRA directory of the first image (default: "")
- `--load_lora_path_1`: Path of the LoRA directory of the second image (default: "")
- `--use_adain`: Use AdaIN (default: False)
- `--use_reschedule`: Use reschedule sampling (default: False)
- `--lamb`: Hyperparameter $\lambda \in [0,1]$ for self-attention replacement, where a larger $\lambda$ indicates more replacements (default: 0.6)
- `--fix_lora_value`: Fix the LoRA value (default: not fixed, i.e. LoRA interpolation)
- `--save_inter`: Save intermediate results (default: False)
- `--num_frames`: Number of frames to generate (default: 50)
- `--duration`: Duration of each frame (default: 50)
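For reference, the option list above can be sketched as an `argparse` parser. This is a simplified reconstruction for illustration only, not the script's actual source; in particular, the argument types are assumptions based on the defaults:

```python
import argparse

# Illustrative sketch of the DiffMorpher CLI options listed above.
parser = argparse.ArgumentParser(description="DiffMorpher option sketch")
parser.add_argument("--image_path_0", type=str, default="")
parser.add_argument("--prompt_0", type=str, default="")
parser.add_argument("--image_path_1", type=str, default="")
parser.add_argument("--prompt_1", type=str, default="")
parser.add_argument("--model_path", type=str,
                    default="stabilityai/stable-diffusion-2-1-base")
parser.add_argument("--output_path", type=str, default="")
parser.add_argument("--save_lora_dir", type=str, default="./lora")
parser.add_argument("--load_lora_path_0", type=str, default="")
parser.add_argument("--load_lora_path_1", type=str, default="")
parser.add_argument("--use_adain", action="store_true")
parser.add_argument("--use_reschedule", action="store_true")
parser.add_argument("--lamb", type=float, default=0.6)
parser.add_argument("--fix_lora_value", type=float, default=None)
parser.add_argument("--save_inter", action="store_true")
parser.add_argument("--num_frames", type=int, default=50)
parser.add_argument("--duration", type=int, default=50)

# Example: parse a hypothetical command line.
args = parser.parse_args(["--use_adain", "--lamb", "0.8"])
```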
Examples:
```shell
python main.py \
  --image_path_0 ./assets/Trump.jpg --image_path_1 ./assets/Biden.jpg \
  --prompt_0 "A photo of an American man" --prompt_1 "A photo of an American man" \
  --output_path "./results/Trump_Biden" \
  --use_adain --use_reschedule --save_inter
```

```shell
python main.py \
  --image_path_0 ./assets/vangogh.jpg --image_path_1 ./assets/pearlgirl.jpg \
  --prompt_0 "An oil painting of a man" --prompt_1 "An oil painting of a woman" \
  --output_path "./results/vangogh_pearlgirl" \
  --use_adain --use_reschedule --save_inter
```

```shell
python main.py \
  --image_path_0 ./assets/lion.png --image_path_1 ./assets/tiger.png \
  --prompt_0 "A photo of a lion" --prompt_1 "A photo of a tiger" \
  --output_path "./results/lion_tiger" \
  --use_adain --use_reschedule --save_inter
```

To evaluate the effectiveness of our method, we present MorphBench, the first benchmark dataset for assessing image morphing of general objects. You can download the dataset from Google Drive or Baidu Netdisk.
The code related to the DiffMorpher algorithm is licensed under LICENSE.

However, this project is mostly built on the open-source library diffusers, which is under separate license terms: the Apache License 2.0. (Cheers to the community as well!)
```bibtex
@article{zhang2023diffmorpher,
  title={DiffMorpher: Unleashing the Capability of Diffusion Models for Image Morphing},
  author={Zhang, Kaiwen and Zhou, Yifan and Xu, Xudong and Pan, Xingang and Dai, Bo},
  journal={arXiv preprint arXiv:2312.07409},
  year={2023}
}
```