
RuntimeError with LyCORIS, Batch Inference and skip_guidance_layers #10085

@japppie

Description


Describe the bug

A RuntimeError occurs when using the following combination:

  • SD3
  • Batch inference (num_images_per_prompt > 1)
  • LyCORIS
  • skip_guidance_layers is set

The error message is: "RuntimeError: The size of tensor a (2) must match the size of tensor b (4) at non-singleton dimension 0"

It seems that batch inference (num_images_per_prompt > 1) does not work in conjunction with skip_guidance_layers.
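The shape mismatch is consistent with the classifier-free-guidance batch being 2 * num_images_per_prompt (4 here), while the skip-layer pass appears to run on the un-duplicated latent batch (2). The following standalone sketch uses hypothetical shapes (it is not the actual pipeline code) just to reproduce the same broadcast failure:

import torch

# Hypothetical shape sketch, NOT the diffusers pipeline code: one prompt,
# num_images_per_prompt = 2, classifier-free guidance doubles the text batch.
num_images_per_prompt = 2
latents = torch.randn(num_images_per_prompt, 16, 64, 64)   # latent batch: 2
pooled = torch.randn(num_images_per_prompt, 2048)          # pooled embeds: 2
cfg_pooled = torch.cat([pooled, pooled], dim=0)            # after CFG concat: 4

# A per-sample tensor sized to the latent batch (2) cannot be broadcast
# against the CFG-doubled batch (4):
timestep_emb = torch.randn(latents.shape[0], 2048)
try:
    timestep_emb + cfg_pooled
except RuntimeError as e:
    print(e)
    # -> The size of tensor a (2) must match the size of tensor b (4)
    #    at non-singleton dimension 0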

Reproduction

This code snippet produces the error:

# Imports assumed for this snippet (torch, diffusers, and the lycoris_lora package);
# my_lora (path to the LyCORIS checkpoint) and request come from my application code.
import torch
from diffusers import StableDiffusion3Pipeline, FlowMatchEulerDiscreteScheduler
from lycoris import create_lycoris_from_weights

# Load SD3 medium in bfloat16
self.pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.bfloat16,
)

# Flow-match Euler scheduler with trailing timestep spacing
self.pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(
    self.pipe.scheduler.config,
    timestep_spacing="trailing",
    shift=3.0,
)
self.pipe.to("cuda")

# Merge the LyCORIS weights into the SD3 transformer
lora_scale = 1.0
wrapper, _ = create_lycoris_from_weights(lora_scale, my_lora, self.pipe.transformer)
wrapper.merge_to()

image = self.pipe(
    prompt=request.prompt,
    num_inference_steps=request.num_inference_steps,
    num_images_per_prompt=2,  # Batch inference
    output_type="pil",
    generator=torch.Generator(device="cuda").manual_seed(42),
    guidance_scale=request.guidance_scale,
    width=request.width,
    height=request.height,
    skip_guidance_layers=[7, 8, 9],  # Fails when combined with num_images_per_prompt > 1
).images[0]

Commenting out skip_guidance_layers resolves the error.
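Until this is fixed, a possible workaround (a sketch only, using the same call as in the reproduction) is to keep skip_guidance_layers and build the batch by looping with num_images_per_prompt=1, so the skip-layer pass never sees a duplicated batch:

# Hypothetical workaround sketch: one image per call instead of batched inference.
images = []
for i in range(2):
    images.append(
        self.pipe(
            prompt=request.prompt,
            num_inference_steps=request.num_inference_steps,
            num_images_per_prompt=1,
            output_type="pil",
            generator=torch.Generator(device="cuda").manual_seed(42 + i),
            guidance_scale=request.guidance_scale,
            width=request.width,
            height=request.height,
            skip_guidance_layers=[7, 8, 9],
        ).images[0]
    )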

Expected behavior

Batch inference should work correctly even when skip_guidance_layers is used with LyCORIS.

Logs

No response

System Info

Environment

Who can help?

@sayakpaul

Labels

bug (Something isn't working)