Conditional Diffusion Distillation (CoDi) is a new diffusion generation method recently proposed by Google Research and Johns Hopkins University. Accepted at CVPR 2024, CoDi builds on consistency models and offers a significant advance in accelerating latent diffusion models, enabling generation in just 1-4 steps.
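As background, consistency models learn a function that maps any point on a diffusion ODE trajectory directly to the trajectory's endpoint, which is what makes few-step generation possible. The 1-D sketch below illustrates that mapping on a toy probability-flow ODE; it is only an illustration of the consistency idea CoDi builds on, not CoDi's actual training procedure, and all names in it are invented for the sketch.

```python
import math

# Toy 1-D probability-flow ODE: dx/dt = -x, whose trajectories obey x(t) = x(0) * exp(-t).
# A multi-step "teacher" sampler integrates the ODE numerically from t=1 back to t=0,
# while a "consistency" student maps any trajectory point straight to x(0) in one step.

def teacher_sample(x1: float, steps: int = 1000) -> float:
    """Integrate dx/dt = -x backward in time from t=1 to t=0 with Euler steps."""
    h = 1.0 / steps
    x = x1
    for _ in range(steps):
        x = x + h * x  # stepping t -> t - h flips the sign of dx/dt = -x
    return x

def consistency_student(x_t: float, t: float) -> float:
    """Closed-form consistency function for this ODE: jump to x(0) in a single step."""
    return x_t * math.exp(t)

x1 = 0.5                                     # a sample observed at t = 1
multi_step = teacher_sample(x1, steps=1000)  # many small solver steps
one_step = consistency_student(x1, 1.0)      # single-step prediction of x(0)
assert abs(multi_step - one_step) < 1e-2     # both recover x(0) = 0.5 * e
```

In a real diffusion model the consistency function has no closed form; distillation trains the network (here, via an adapter) to approximate it, which is why only 1-4 sampling steps are needed afterward.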
Key Features:
- **Parameter-Efficient Distillation:** CoDi is the first method that lets users accelerate any diffusion model by simply loading a pre-trained acceleration ControlNet.
- **No Architectural Changes Required:** The process requires no modifications to the diffusion scheduler or model architecture, ensuring seamless integration.
- **Enhanced Performance:** For example, a model like stablediffusionapi/juggernaut-reborn can be accelerated to generate results in 4 steps without distilling the juggernaut-reborn model itself.
The differences between Conditional Diffusion Distillation and the recent LCM-LoRA are listed below:

| | Conditional Diffusion Distillation (CoDi) | LCM-LoRA |
|---|---|---|
| Scheduler | Anything (Euler is tested) | LCM |
| Adapter | ControlNet | LoRA |
| Full training | Available | None |
| Backbone | SD1.5 (including variants like juggernaut-reborn) | SD and SDXL |
Open source status
- The model implementation is available.
- The model weights are available (only relevant if the addition is not a scheduler).
Provide useful links for the implementation
- Project page: https://fast-codi.github.io
- Paper: https://arxiv.org/abs/2310.01407
@MKFMIKU will submit a PR providing the training code in PyTorch and a preliminary pretrained model.