Add Option to use Target Model in LCM-LoRA Scripts #6537
dg845 wants to merge 4 commits into huggingface:main
Conversation
# Checks if the accelerator has performed an optimization step behind the scenes
if accelerator.sync_gradients:
    # 12. If using a target model, update its parameters via EMA.
    update_ema(target_unet.parameters(), unet.parameters(), args.ema_decay)
Are you sure this works? I had errors with the LoRA parameters.
both are lora weights, so they should work?
I haven't been able to test this fully yet, it's possible that this runs into the errors mentioned in #6505 (comment)
I believe the current implementation is correctly updating the LoRA parameters.
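For context, here is a minimal sketch of what an EMA update over the two LoRA parameter iterables could look like; the actual update_ema helper used by the script may differ, so treat this as an assumption rather than the implementation:

```python
import torch

def update_ema(target_params, source_params, decay):
    # Exponential moving average: target <- decay * target + (1 - decay) * source.
    # Both iterables are LoRA adapter parameters, so only the adapter weights
    # are averaged; the frozen base UNet weights are untouched.
    with torch.no_grad():
        for target_param, source_param in zip(target_params, source_params):
            target_param.mul_(decay).add_(source_param, alpha=1 - decay)
```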
# ----Latent Consistency Distillation (LCD) Specific Arguments----
parser.add_argument(
    "--use_target_model",
    action="store_true",
Should this default to false so existing users are not surprised?
The target model will be used only if the --use_target_model flag is specified (so existing script calls should work as before).
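As a small standalone illustration (independent of the training script), action="store_true" makes the flag default to False when omitted, which is why existing invocations are unaffected:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--use_target_model", action="store_true")

# Omitting the flag leaves it at its default of False,
# so existing script calls behave exactly as before.
args = parser.parse_args([])
assert args.use_target_model is False

# Passing the flag opts in to the target model / EMA path.
args = parser.parse_args(["--use_target_model"])
assert args.use_target_model is True
```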
patil-suraj left a comment
The code looks good to me. Could you explain the reasoning behind this? Do you have any experiments that demonstrate the use of this?
#6505 (comment) reports a lot of training instability when training with the current LCM-LoRA script. @jon-chuang, would you be willing to share more details about the training instability?
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
I don't have conclusive evidence that this fix will mitigate the issue, but what I observed was some divergent results on LCM-LoRA training runs. Anyway, I think it's a zero-cost opt-in feature that may produce better results for some users. I will definitely try the EMA once it is merged and can report further results.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@patil-suraj a gentle ping.
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the contributing guidelines are likely to be ignored.
What does this PR do?
This PR enables a target model to be optionally used in the LCM-LoRA distillation scripts via the --use_target_model argument. Follow-up to #6505.
Before submitting
Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
@patrickvonplaten
@patil-suraj
@sayakpaul
@jon-chuang
@shuminghu