
How to use Playground 2.5 to train a LoRA on my own dataset to generate pictures of a specific style? #9731

@hjw-0909

Description


Describe the bug

Hi,

I have been training LoRAs on my dataset with "stabilityai/stable-diffusion-xl-base-1.0" as the base model, using the script examples/text_to_image/train_text_to_image_lora_sdxl.py, and I achieved quite promising results.

Now, I am trying to further improve the results by switching to DreamBooth. I am currently using Playground 2.5 with examples/dreambooth/train_dreambooth_lora_sdxl.py on the same dataset. However, after multiple rounds of hyperparameter tuning, the results are still not as good as with the SDXL base model.

I am unsure what might be causing this.
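For reference, this is roughly how I load each trained LoRA for a side-by-side check at inference time; the LoRA path and the prompt below are only placeholders, and the base model is loaded the same way the Playground 2.5 model card suggests:

```python
import torch
from diffusers import DiffusionPipeline

# Playground 2.5 base model plus the LoRA trained with
# examples/dreambooth/train_dreambooth_lora_sdxl.py
pipe = DiffusionPipeline.from_pretrained(
    "playgroundai/playground-v2.5-1024px-aesthetic",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Placeholder path: directory containing the trained LoRA weights
pipe.load_lora_weights("path/to/playground_dreambooth_lora")

# Placeholder prompt; guidance_scale=3.0 follows the Playground 2.5 model card
image = pipe(
    prompt="a photo in sks style",
    num_inference_steps=50,
    guidance_scale=3.0,
).images[0]
image.save("playground_lora_sample.png")
```

I use the same loading pattern with "stabilityai/stable-diffusion-xl-base-1.0" and the LoRA from train_text_to_image_lora_sdxl.py when comparing the two results.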

Reproduction

(screenshot attached to the original issue)

Logs

No response

System Info

  • 🤗 Diffusers version: 0.31.0.dev0
  • Platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.17
  • Running on Google Colab?: No
  • Python version: 3.8.20
  • PyTorch version (GPU?): 2.2.0 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.25.2
  • Transformers version: 4.45.2
  • Accelerate version: 1.0.1
  • PEFT version: 0.13.2
  • Bitsandbytes version: 0.44.1
  • Safetensors version: 0.4.5
  • xFormers version: not installed
  • Accelerator: NVIDIA H800, 81559 MiB
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

No response
