
Apply to Checkpoint #18

Closed
d8ahazard opened this issue Dec 11, 2022 · 6 comments
Labels
good first issue Good for newcomers

Comments

@d8ahazard

Hi @cloneofsimo. Could you kindly provide an example of applying lora weights to a diffusers model and saving for conversion to a standard ckpt file?

Still having some troubles in this department.

@cloneofsimo (Owner)

Ok, I'll make a detailed blog post on this, but I'm really onto other things right now, so give me a week or so please!

@qunash

qunash commented Dec 12, 2022

Hi @cloneofsimo, could you also share some details of your training process? I'm trying to test this method by training a new style, but so far the results aren't as good as in your examples. I'd be interested in the parameters you used, the training dataset, etc.

@cloneofsimo (Owner)

@d8ahazard ok, so I've just added another mode that converts to the ckpt format and works with the v2.0 model.

Here is a full example with lora_illust.pt, a LoRA that was trained on 10 illustration images for 30,000 steps with a learning rate of 1e-4:

lora_add --path_1 stabilityai/stable-diffusion-2-base --path_2 lora_illust.pt --mode upl-ckpt-v2 --alpha 1.2 --output_path merged_model.ckpt

I'm not sure if this works with v1.x though, because I haven't trained a LoRA on those yet. Put the checkpoint into, for example, A1111's webui, and you get:

[Image: ckpt_illust_demo]

This is exactly the same output and configuration that you can find in the recently updated notebook scripts/run_inference.ipynb:

[Image]

(So this proves that the transformation works correctly, I guess?)
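For anyone wondering what a merge like `--mode upl-ckpt-v2 --alpha 1.2` does conceptually: a LoRA stores two low-rank factors per adapted layer, and merging folds their product (scaled by alpha) into the base weight. The sketch below is a minimal illustration of that arithmetic, not the actual `lora_add` internals — the names `merge_lora`, `lora_up`, and `lora_down` are assumptions for illustration.

```python
import numpy as np

def merge_lora(base_weight, lora_up, lora_down, alpha=1.0):
    """Fold a LoRA update into a base weight matrix.

    base_weight: (out_dim, in_dim)  original layer weight
    lora_up:     (out_dim, rank)    low-rank "up" factor
    lora_down:   (rank, in_dim)     low-rank "down" factor
    alpha:       merge strength (e.g. 1.2 in the command above)
    """
    # The merged weight behaves like the base layer plus the
    # scaled low-rank update: W' = W + alpha * (up @ down)
    return base_weight + alpha * (lora_up @ lora_down)

# Toy dimensions to show the shapes involved.
rng = np.random.default_rng(0)
out_dim, in_dim, rank = 8, 6, 4
W = rng.standard_normal((out_dim, in_dim))
up = rng.standard_normal((out_dim, rank))
down = rng.standard_normal((rank, in_dim))

merged = merge_lora(W, up, down, alpha=1.2)
```

Because the update is merged into the weights themselves, the resulting checkpoint needs no LoRA-aware code at inference time, which is why the merged .ckpt works in A1111's webui as-is.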

@cloneofsimo (Owner)

@qunash Train with a relatively large learning rate! This seems to be the common problem everyone is having.
By the way, I haven't played around with the parameters enough to say anything conclusive about "best training configurations", other than that it works reasonably well.

@cloneofsimo (Owner)

All the LoRAs in the examples were trained with a large learning rate, without prior preservation.

@qunash

qunash commented Dec 12, 2022

Thanks for the info!

@cloneofsimo cloneofsimo added the good first issue Good for newcomers label Dec 12, 2022