Text encoder still not working correctly with LoRA Dreambooth training script #31
Description
Hello, I am getting much better results using the `--train_text_encoder` flag with the Dreambooth script. However, the LoRA `.pt` files actually output by models trained with `--train_text_encoder` give very bad results after using the monkeypatch to generate images. I suspect that the text encoder's weights are still not being saved properly. I also tried saving the pipeline directly after each epoch from within the training script, but loading it with diffusers gives me strange errors about torch not being able to parse the linear layers. Has anyone had similar experiences training the text encoder, or any idea why this is happening?
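For context, the monkeypatch step is essentially about swapping each targeted `nn.Linear` for a wrapper that adds the trained low-rank update on top of the frozen weight, with the up/down matrices loaded from the saved `.pt` file. Below is a minimal sketch of that idea in plain PyTorch; the class and function names here are illustrative, not this repo's actual API:

```python
import torch
import torch.nn as nn


class LoraInjectedLinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a low-rank (LoRA) update on top.
    Illustrative sketch only -- not the repo's actual implementation."""

    def __init__(self, linear: nn.Linear, rank: int = 4, scale: float = 1.0):
        super().__init__()
        self.linear = linear  # pretrained weight, left untouched
        self.lora_down = nn.Linear(linear.in_features, rank, bias=False)
        self.lora_up = nn.Linear(rank, linear.out_features, bias=False)
        self.scale = scale
        nn.init.zeros_(self.lora_up.weight)  # zero init => no-op until loaded

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # base output plus scaled low-rank correction
        return self.linear(x) + self.scale * self.lora_up(self.lora_down(x))


def monkeypatch_linear(parent: nn.Module, name: str,
                       up_w: torch.Tensor, down_w: torch.Tensor,
                       scale: float = 1.0) -> None:
    """Replace parent.<name> (an nn.Linear) with a LoRA-injected version
    whose up/down weights come from a saved checkpoint tensor pair."""
    old = getattr(parent, name)
    patched = LoraInjectedLinear(old, rank=down_w.shape[0], scale=scale)
    patched.lora_up.weight.data.copy_(up_w)
    patched.lora_down.weight.data.copy_(down_w)
    setattr(parent, name, patched)
```

If the text encoder's up/down tensors were saved incorrectly (or in the wrong order), a patch like this would still load without error but produce exactly the kind of incoherent samples described above, since the wrong update gets silently added to every patched layer.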
Images sampled from within the training loop (`--train_text_encoder` enabled):



Images sampled after the model was monkeypatched with the trained LoRA weights (`--train_text_encoder` enabled):



The images don't seem to correlate with the samples generated during training and bear very little resemblance to the training images used.