
Architecture of DragDiffusion

DragDiffusion

Unofficial implementation of DragDiffusion: Harnessing Diffusion Models for Interactive Point-based Image Editing.

# Requirements
conda env create -f environment.yml
conda activate DragDiffusion
pip install -r requirements.txt
# To expose intermediate features from the StableDiffusion UNet (WIP), move the
# patched module into the diffusers install inside your conda env
# (see the hook-based sketch below for an alternative)
mv assets/unet_2d_condition.py YOUR_CONDA_ENV/lib/pythonX.Y/site-packages/diffusers/models/
# Run the demo
python visualizer_drag_gradio.py
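
As an alternative to overwriting diffusers' unet_2d_condition.py, intermediate UNet features can also be captured with standard PyTorch forward hooks. The sketch below is not part of this repo: the checkpoint id, the choice of up-block features, and the dummy inputs are assumptions for illustration only.

# A minimal sketch (not the repo's code): grab intermediate Stable Diffusion UNet
# features with forward hooks instead of replacing diffusers' unet_2d_condition.py.
import torch
from diffusers import UNet2DConditionModel

device = "cuda" if torch.cuda.is_available() else "cpu"
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # assumed checkpoint id
).to(device)
unet.eval()

features = {}

def save_feature(name):
    def hook(module, inputs, output):
        # Up-block forward passes return the hidden-state tensor directly.
        if torch.is_tensor(output):
            features[name] = output.detach()
    return hook

# Decoder (up-block) features are the ones drag-style point tracking typically uses.
handles = [
    block.register_forward_hook(save_feature(f"up_block_{i}"))
    for i, block in enumerate(unet.up_blocks)
]

# One denoising step with dummy inputs, just to populate `features`.
latents = torch.randn(1, 4, 64, 64, device=device)    # latents for a 512x512 image
timestep = torch.tensor([500], device=device)         # arbitrary diffusion timestep
text_emb = torch.randn(1, 77, 768, device=device)     # SD v1.x text-embedding shape
with torch.no_grad():
    unet(latents, timestep, encoder_hidden_states=text_emb)

for h in handles:
    h.remove()
print({name: tuple(f.shape) for name, f in features.items()})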

TODO

  • Drag process
  • Mask
  • Gradio GUI
  • imgui GUI

StableDiffusion Pre-Trained Model

Follow the Diffusers documentation to obtain a pre-trained StableDiffusion model.
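
For example, loading the weights through diffusers might look like the following sketch; the v1-5 checkpoint id and the prompt are assumptions, so substitute whichever Stable Diffusion weights the demo is pointed at.

# A minimal sketch: load a pre-trained Stable Diffusion pipeline via diffusers.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint id
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Quick sanity check that the weights load and the pipeline runs end to end.
image = pipe("a photo of a cat", num_inference_steps=20).images[0]
image.save("sd_sanity_check.png")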

References
