
lpw_stable_diffusion_xl custom pipeline doesn't work with from_single_file #7666

Closed

Description

@nawka12

Describe the bug

I tested the lpw_stable_diffusion_xl custom pipeline: it works with StableDiffusionXLPipeline.from_pretrained but not with StableDiffusionXLPipeline.from_single_file. To check whether the prompt is actually being truncated or the warning is just an ignorable log, I deleted the truncated part of the prompt and compared the outputs. Here are the results:

  • With the full prompt, the open mouth tag and the aesthetic tags are truncated (generated images in the attached screenshots).

  • After removing the truncated part of the prompt (attached screenshots).

Here is the same prompt and the same settings generated with from_pretrained, using the diffusers-format version of the same model:

  • With open mouth and the aesthetic tags included (attached screenshots).

  • Without the previously truncated tags (attached screenshot).

Reproduction

import torch
from diffusers import StableDiffusionXLPipeline

# model_path points to a local SDXL checkpoint in .safetensors format
pipe = StableDiffusionXLPipeline.from_single_file(
    model_path,
    torch_dtype=torch.float16,
    custom_pipeline="lpw_stable_diffusion_xl",
    use_safetensors=True,
)
pipe.to("cuda")
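
For comparison, here is a minimal sketch of the from_pretrained path that the description above reports as working. The repository id is a placeholder for the diffusers-format version of the same model, not a value taken from the report:

import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder repository id (assumption): substitute the diffusers-format
# version of the same model that was loaded with from_single_file above.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "author/same-model-diffusers",
    torch_dtype=torch.float16,
    custom_pipeline="lpw_stable_diffusion_xl",
    use_safetensors=True,
)
pipe.to("cuda")

With this pipeline, the long prompt from the report is reportedly handled in full rather than being truncated at 77 tokens.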

Logs

indices sequence length is longer than the specified maximum sequence length for this model (92 > 77). Running this sequence through the model will result in indexing errors
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: [', open mouth, masterpiece, best quality, very aesthetic, absurdres,']
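
A quick way to confirm that the prompt really exceeds the 77-token CLIP limit is to count its tokens with the pipeline's first tokenizer. A sketch, assuming pipe is the pipeline loaded in the Reproduction section and prompt is the full prompt string from the run above (not reproduced here):

# prompt is assumed to be the full prompt string used in the run above.
token_ids = pipe.tokenizer(prompt).input_ids
print(len(token_ids))  # the log above reports 92 tokens, over the 77-token limit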

System Info

  • diffusers version: 0.28.0.dev0
  • Platform: Windows-10-10.0.22631-SP0
  • Python version: 3.10.11
  • PyTorch version (GPU?): 2.2.2+cu118 (True)
  • Huggingface_hub version: 0.22.2
  • Transformers version: 4.39.3
  • Accelerate version: 0.28.0
  • xFormers version: not installed
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: no

Who can help?

@yiyixuxu @sayakpaul
