
Fix bug in panorama pipeline when using dpmsolver scheduler #3499

Merged
merged 1 commit into huggingface:main from Isotr0py:panaroma on May 23, 2023

Conversation

@Isotr0py (Contributor) commented May 21, 2023

Related issue: #3494

  • Fix PanoramaPipeline, which generates corrupted images when using the dpmsolver scheduler.

In the dpmsolver scheduler, self.model_outputs stores the diffusion model outputs from the previous and current timesteps and is used to calculate prev_sample:

model_output = self.convert_model_output(model_output, timestep, sample)
for i in range(self.config.solver_order - 1):
    self.model_outputs[i] = self.model_outputs[i + 1]
self.model_outputs[-1] = model_output

if self.config.algorithm_type in ["sde-dpmsolver", "sde-dpmsolver++"]:
    noise = randn_tensor(
        model_output.shape, generator=generator, device=model_output.device, dtype=model_output.dtype
    )
else:
    noise = None

if self.config.solver_order == 1 or self.lower_order_nums < 1 or lower_order_final:
    prev_sample = self.dpm_solver_first_order_update(
        model_output, timestep, prev_timestep, sample, noise=noise
    )
elif self.config.solver_order == 2 or self.lower_order_nums < 2 or lower_order_second:
    timestep_list = [self.timesteps[step_index - 1], timestep]
    prev_sample = self.multistep_dpm_solver_second_order_update(
        self.model_outputs, timestep_list, prev_timestep, sample, noise=noise
    )
else:
    timestep_list = [self.timesteps[step_index - 2], self.timesteps[step_index - 1], timestep]
    prev_sample = self.multistep_dpm_solver_third_order_update(
        self.model_outputs, timestep_list, prev_timestep, sample
    )

However, since the panorama pipeline crops the image into several blocks and calls scheduler.step in a loop over them, self.model_outputs ends up holding the outputs of the previous and current blocks rather than the previous and current timesteps.
As a result, when using the dpmsolver scheduler in PanoramaPipeline, we have to keep each block's scheduler state separate so that self.model_outputs stays in the right timestep order and a normal image is generated, as sketched below.
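
Conceptually, the fix keeps one copy of the scheduler's internal state per view and restores it before every step, so each view only ever sees its own timestep history. Below is a minimal standalone sketch of that idea (illustrative only, not the exact diff merged in this PR; num_views and the random tensors standing in for latents and UNet predictions are hypothetical):

import copy
import torch
from diffusers import DPMSolverMultistepScheduler

# Each panorama view gets its own snapshot of the scheduler state, so
# self.model_outputs only ever holds that view's previous model outputs.
scheduler = DPMSolverMultistepScheduler()
scheduler.set_timesteps(25)

num_views = 3  # hypothetical number of panorama views
views_scheduler_status = [copy.deepcopy(scheduler.__dict__) for _ in range(num_views)]
view_latents = [torch.randn(1, 4, 64, 64) for _ in range(num_views)]

for t in scheduler.timesteps:
    for j in range(num_views):
        model_output = torch.randn_like(view_latents[j])  # stands in for the UNet prediction

        # Restore this view's multistep history before stepping ...
        scheduler.__dict__.update(views_scheduler_status[j])
        view_latents[j] = scheduler.step(model_output, t, view_latents[j]).prev_sample
        # ... and save the updated state for this view's next timestep.
        views_scheduler_status[j] = copy.deepcopy(scheduler.__dict__)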

test code

import torch
from diffusers import StableDiffusionPanoramaPipeline, DPMSolverMultistepScheduler

seed = 33
model_ckpt = "stabilityai/stable-diffusion-2-base"
prompt = "a photo of the dolomites"

pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")
generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=25, width=1024, height=512).images[0]
display(image)

Before: [DPM++ 2M output, corrupted panorama]

After: [DPM++ 2M output, fixed panorama]

@HuggingFaceDocBuilderDev commented May 21, 2023

The documentation is not available anymore as the PR was closed or merged.

@Isotr0py Isotr0py changed the title Fix panorama pipeline when using dpmsolver scheduler Fix bug in panorama pipeline when using dpmsolver scheduler May 21, 2023
@patrickvonplaten (Contributor) commented

@sayakpaul can you take a look at the Panorama pipeline?

@sayakpaul (Member) left a comment


Great catch! And thanks for fixing!

@sayakpaul sayakpaul merged commit 2f997f3 into huggingface:main May 23, 2023
7 checks passed
@Isotr0py Isotr0py deleted the panaroma branch May 23, 2023 05:23
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024