
Wrong number of steps for some schedulers with base+refiner ensemble when use_karras_sigmas=True #5546

@TimothyAlexisVass

Describe the bug

DEISMultistepScheduler, UniPCMultistepScheduler, DPMSolverSinglestepScheduler, EulerDiscreteScheduler, DPMSolverMultistepScheduler, LMSDiscreteScheduler: Do 33 steps total instead of 40
Base: 25, Refiner: 8

HeunDiscreteScheduler: Does 33 steps total and reports strange steps
Base: 25/49 (tqdm progress), Refiner: 8

KDPM2DiscreteScheduler: Does 34 steps total and reports strange steps
Base: 26/51, Refiner: 8
also got RuntimeWarning: divide by zero encountered in log (scheduling_k_dpm_2_ancestral_discrete.py:315)

PNDMScheduler: Reports strange steps (this also happens when use_karras_sigmas=False; with any num_inference_steps it's reported as 5/6, 10/11, 15/16, etc.)
Base: 32/33, Refiner: 8

KDPM2AncestralDiscreteScheduler: Does 33 steps total and reports strange steps
Base: 25/50, Refiner: 8
also got RuntimeWarning: divide by zero encountered in log (scheduling_k_dpm_2_ancestral_discrete.py:304)

Of the ones I tested, only DDPMScheduler, EulerAncestralDiscreteScheduler and DDIMScheduler work as expected,
meaning they all did 32 steps with the base model and 8 steps with the refiner.
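For context, the 32/8 expectation comes from the denoising_end=0.8 / denoising_start=0.8 cutoff. A minimal sketch of that arithmetic (an illustration of the expectation, not the pipelines' exact cutoff logic):

# Expected base/refiner split for num_inference_steps=40 and a 0.8 cutoff (illustrative only)
num_inference_steps = 40
cutoff = 0.8  # denoising_end for the base, denoising_start for the refiner
base_steps = round(num_inference_steps * cutoff)   # 32 steps expected on the base
refiner_steps = num_inference_steps - base_steps   # 8 steps expected on the refiner
print(base_steps, refiner_steps)                   # 32 8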

I have gotten the divide-by-zero warning in more schedulers than those reported here, so I opened PR #5543 for that.

Reproduction

from diffusers import AutoencoderKL, StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline
import torch

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
base = StableDiffusionXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
_ = base.to("cuda")

refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0",
    vae=vae,
    text_encoder_2=base.text_encoder_2,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)

_ = refiner.to("cuda")

prompt="LOVE"
num_inference_steps = 40

for scheduler in base.scheduler.compatibles:
    scheduler_name = scheduler.__name__
    if scheduler_name not in ("DPMSolverSDEScheduler",):  # trailing comma: this must be a tuple, not a bare string
        refiner.scheduler = base.scheduler = scheduler.from_config(base.scheduler.config, use_karras_sigmas=True)
        print("Generating with", scheduler_name)
        latents = base(prompt, num_inference_steps=num_inference_steps, denoising_end=0.8, output_type="latent").images
        image = refiner(prompt, num_inference_steps=num_inference_steps, denoising_start=0.8, image=latents).images[0]
        display(image)  # display() assumes a notebook environment
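A minimal diagnostic sketch, assuming the same diffusers API as above (not part of the original reproduction): it prints how many timesteps each compatible scheduler actually produces for 40 steps with Karras sigmas, without running the UNet. It reuses base and num_inference_steps from the script above; schedulers that don't support use_karras_sigmas should simply ignore the extra config key (with a warning).

# Diagnostic only: count the timesteps each compatible scheduler generates
for scheduler_cls in base.scheduler.compatibles:
    s = scheduler_cls.from_config(base.scheduler.config, use_karras_sigmas=True)
    s.set_timesteps(num_inference_steps)
    print(scheduler_cls.__name__, len(s.timesteps))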

Logs

No response

System Info

diffusers 0.22.0.dev0
python 3.10.12

Who can help?

@yiyixuxu @DN6 @patrickvonplaten @sayakpaul
