
Add Conditional Diffusion Distillation #8309

Open
2 tasks done
MKFMIKU opened this issue May 29, 2024 · 0 comments

MKFMIKU commented May 29, 2024

Model/Pipeline/Scheduler description

Conditional Diffusion Distillation (CoDi) is a diffusion acceleration method recently proposed by Google Research and Johns Hopkins University, accepted at CVPR 2024. Built on consistency models, CoDi offers a significant advance in accelerating latent diffusion models, enabling generation in just 1-4 steps.

Key Features:

  • Parameter-Efficient Distillation: CoDi is the first method that lets users accelerate any diffusion model simply by loading a pre-trained acceleration ControlNet (a usage sketch follows this list).
  • No Architectural Changes Required: The process requires no modifications to the diffusion scheduler or model architecture, ensuring seamless integration.
  • Enhanced Performance: For example, a model such as stablediffusionapi/juggernaut-reborn can be accelerated to generate results in 4 steps without distilling the model itself.
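
A minimal sketch of how this could look in diffusers, assuming a hypothetical acceleration-ControlNet checkpoint (the repo name `fast-codi/codi-acceleration-controlnet` and the conditioning image are illustrative, not released artifacts):

```python
# Hypothetical usage sketch -- the checkpoint name and conditioning input
# are illustrative; no official CoDi weights are released yet.
import torch
from diffusers import (
    ControlNetModel,
    EulerDiscreteScheduler,
    StableDiffusionControlNetPipeline,
)
from diffusers.utils import load_image

# A pre-trained acceleration ControlNet (hypothetical repo name).
controlnet = ControlNetModel.from_pretrained(
    "fast-codi/codi-acceleration-controlnet", torch_dtype=torch.float16
)

# Attach it to an off-the-shelf SD1.5 variant; the base UNet and the
# scheduler code need no modification.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stablediffusionapi/juggernaut-reborn",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Any scheduler should work; Euler is the one tested in the paper.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# Conditional generation (e.g. super-resolution) in 4 steps.
condition = load_image("low_res_input.png")  # illustrative conditioning image
image = pipe(
    "a detailed photo", image=condition, num_inference_steps=4
).images[0]
```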

The differences between Conditional Diffusion Distillation and the recent LCM-LoRA are summarized below:

| | Conditional Diffusion Distillation (CoDi) | LCM-LoRA |
| --- | --- | --- |
| Scheduler | Anything (Euler is tested) | LCM |
| Adapter | ControlNet | LoRA |
| Full training | Available | None |
| Backbone | SD1.5 (including its variants like juggernaut-reborn) | SD and SDXL |

Open source status

- [x] The model implementation is available.
- [x] The model weights are available (only relevant if addition is not a scheduler).

Provide useful links for the implementation

project page: https://fast-codi.github.io
paper: https://arxiv.org/abs/2310.01407

@MKFMIKU will submit a PR providing the training code in PyTorch and a rough pretrained model.
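
For reference, here is a rough sketch of what one conditional consistency-distillation training step could look like. The loss form follows standard consistency distillation; the exact CoDi objective and all model/helper signatures below are assumptions, not the PR's actual code:

```python
# Sketch of one conditional consistency-distillation step (assumptions:
# epsilon-prediction models, standard consistency-models loss).
import torch
import torch.nn.functional as F

def distill_step(student, target_ema, teacher, scheduler, latents, cond_feats, n):
    """student: base UNet + acceleration ControlNet (trainable);
    target_ema: EMA copy of the student; teacher: frozen base diffusion model;
    scheduler: a diffusers-style scheduler with .timesteps, .add_noise(), .step();
    cond_feats: conditioning features (e.g. the ControlNet input)."""
    # diffusers timesteps run from high noise to low, so timesteps[n] is
    # the noisier t_{n+1} and timesteps[n + 1] is t_n.
    t_next, t_cur = scheduler.timesteps[n], scheduler.timesteps[n + 1]

    # Diffuse the clean latents to the later timestep t_{n+1}.
    noise = torch.randn_like(latents)
    x_next = scheduler.add_noise(latents, noise, t_next)

    # One frozen-teacher solver step from t_{n+1} down to t_n.
    with torch.no_grad():
        eps = teacher(x_next, t_next, cond_feats)
        x_cur = scheduler.step(eps, t_next, x_next).prev_sample

    # Consistency objective: the student's output at t_{n+1} should match
    # the EMA target's output at t_n, since both lie on the same trajectory.
    pred = student(x_next, t_next, cond_feats)
    with torch.no_grad():
        target = target_ema(x_cur, t_cur, cond_feats)
    return F.mse_loss(pred, target)
```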
