Text encoder still not working correctly with LoRA Dreambooth training script #31

@JohnnyRacer

Description

Hello, I am getting much better results using the --train_text_encoder flag with the Dreambooth script. However, the LoRA .pt files output from models trained with train_text_encoder give very bad results when I use monkeypatch to generate images. I suspect that the text encoder's weights are still not being saved properly. I also tried saving the pipeline directly after each epoch from within the training script, but loading it with diffusers gives me strange errors about torch not being able to parse the linear layers. Has anyone had similar experiences training the text encoder, or any idea why this is happening?
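For context on where a save/load mismatch would show up: the monkeypatch adds a trained low-rank update on top of each frozen linear weight, so if the text encoder's up/down matrices are saved in the wrong order or not saved at all, the patched weights silently diverge from what was sampled during training. Here is a minimal sketch of that update in plain NumPy (names are illustrative only, not the actual lora_diffusion API):

```python
import numpy as np

def apply_lora(W, down, up, scale=1.0):
    """Effective weight of a LoRA-patched linear layer: W + scale * (up @ down).

    W:    (out_dim, in_dim) frozen base weight
    down: (rank, in_dim)    trained low-rank "down" projection
    up:   (out_dim, rank)   trained low-rank "up" projection
    """
    return W + scale * (up @ down)

rng = np.random.default_rng(0)
out_dim, in_dim, rank = 8, 16, 4
W = rng.standard_normal((out_dim, in_dim))
down = rng.standard_normal((rank, in_dim))
up = np.zeros((out_dim, rank))  # "up" is zero-initialized, so LoRA is a no-op before training

# At initialization the patched layer matches the base layer exactly;
# after training, swapped or stale up/down tensors would corrupt the output.
assert np.allclose(apply_lora(W, down, up), W)
```

If the saved .pt file interleaves unet and text-encoder tensors, applying the wrong pair to a layer still produces shape-compatible weights, which would explain images that are wrong yet not obviously broken.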

Images sampled from within the training loop (train_text_encoder enabled):
[images: 2, 2e, 3]

Images sampled after the model was monkeypatched with the trained LoRA weights (train_text_encoder enabled):
[images: bad1, bad2, bad3]

The images don't seem to correlate with the samples generated during training and have very little cohesion with the training images used.
