MC-E/DragonDiffusion
Chong Mou, Xintao Wang, Jiechong Song, Ying Shan, Jian Zhang


🚩 New Features/Updates

  • [2024/02/26] DiffEditor is accepted by CVPR 2024.
  • [2024/02/05] Released the paper of DiffEditor.
  • [2024/02/04] Released the code of DragonDiffusion and DiffEditor.
  • [2024/01/15] DragonDiffusion is accepted by ICLR 2024 (Spotlight).
  • [2023/07/06] The paper of DragonDiffusion is available here.

Introduction

DragonDiffusion is a training-free method for fine-grained image editing. Its core idea comes from score-based diffusion: editing objectives are expressed as energy functions whose gradients guide the diffusion sampling process. It can perform various editing tasks, including object moving, object resizing, object appearance replacement, content dragging, and object pasting. DiffEditor further improves the editing accuracy and flexibility of DragonDiffusion.
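The score-based guidance idea can be illustrated with a toy sketch (pure NumPy, not the repository's actual implementation): an editing energy defined on the latent is differentiated, and its gradient nudges each denoising step toward the edit target. The quadratic energy and the names `energy_grad` and `guided_step` are illustrative assumptions, not code from this repo.

```python
import numpy as np

def energy(x, target):
    # Toy editing energy: squared distance between the latent and a target.
    # DragonDiffusion-style methods instead define energies on intermediate
    # diffusion features to express moving, dragging, pasting, etc.
    return 0.5 * np.sum((x - target) ** 2)

def energy_grad(x, target):
    # Gradient of the toy energy with respect to the latent.
    return x - target

def guided_step(x_t, denoised, target, eta=0.1):
    # One conceptual guidance step: take the denoiser's prediction,
    # then descend the editing energy's gradient (strength eta).
    return denoised - eta * energy_grad(x_t, target)

x_t = np.ones((4, 4))          # current noisy latent (toy values)
target = np.zeros((4, 4))      # editing target
denoised = 0.9 * x_t           # stand-in for the denoiser's output
x_next = guided_step(x_t, denoised, target, eta=0.1)
```

After the step, the guided latent sits closer to the target than the plain denoised latent, which is the whole point of energy guidance: editing is steered without any model fine-tuning.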

🔥🔥🔥 Main Features

Appearance Modulation

Appearance Modulation can change the appearance of an object in an image. The final appearance can be specified by a reference image.

Object Moving & Resizing

Object Moving can move an object in the image to a specified location.

Face Modulation

Face Modulation can transform the outline of one face into the outline of another reference face.

Content Dragging

Content Dragging can perform image editing through point-to-point dragging.

Object Pasting

Object Pasting can paste a given object onto a background image.

🔧 Dependencies and Installation

pip install -r requirements.txt
pip install dlib==19.14.0

⏬ Download Models

All models will be downloaded automatically. You can also download them manually from this URL.

💻 How to Test

Inference requires at least 16GB of GPU memory to edit a 768x768 image.
We provide a quick start via a Gradio demo:

python app.py

Related Works

[1] Drag Your GAN: Interactive Point-based Manipulation on the Generative Image Manifold

[2] DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing

[3] Emergent Correspondence from Image Diffusion

[4] Diffusion Self-Guidance for Controllable Image Generation

[5] IP-Adapter: Text Compatible Image Prompt Adapter for Text-to-Image Diffusion Models

🤗 Acknowledgements

We appreciate the foundational work done by score-based diffusion and DragGAN.

BibTeX

@article{mou2023dragondiffusion,
  title={DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models},
  author={Mou, Chong and Wang, Xintao and Song, Jiechong and Shan, Ying and Zhang, Jian},
  journal={arXiv preprint arXiv:2307.02421},
  year={2023}
}
@article{mou2024diffeditor,
  title={DiffEditor: Boosting Accuracy and Flexibility on Diffusion-based Image Editing},
  author={Mou, Chong and Wang, Xintao and Song, Jiechong and Shan, Ying and Zhang, Jian},
  journal={arXiv preprint arXiv:2402.02583},
  year={2024}
}

About

ICLR 2024 (Spotlight)
