v 0.0.6 #54 (Merged)
Commits on Dec 13, 2022
- 7e78c8d: Add parameter to control rank of decomposition (#28)
  - ENH: allow controlling rank of approximation
  - Training script accepts lora_rank
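The rank parameter added in 7e78c8d controls the size of the low-rank factors in the LoRA decomposition. A minimal conceptual sketch of what such a parameter governs (illustrative only; the class and argument names here are not the repository's actual implementation):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank residual: y = Wx + scale * up(down(x))."""

    def __init__(self, base: nn.Linear, r: int = 4, scale: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():      # freeze the original weights
            p.requires_grad_(False)
        # Rank r sets the factor sizes: r * (d_in + d_out) trainable
        # parameters instead of a full d_out * d_in update matrix.
        self.lora_down = nn.Linear(base.in_features, r, bias=False)
        self.lora_up = nn.Linear(r, base.out_features, bias=False)
        nn.init.normal_(self.lora_down.weight, std=1.0 / r)
        nn.init.zeros_(self.lora_up.weight)   # zero init: no change at step 0
        self.scale = scale

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_up(self.lora_down(x))
```

With the up factor zero-initialized, the wrapped layer initially reproduces the base layer exactly, and only the two small factors receive gradients.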
Commits on Dec 14, 2022
- 6aee5f3
- 9f31bd0
Commits on Dec 15, 2022
- fececf3: Fix lora inject, added weight self apply lora (#39)
  - Develop (#34)
  - Add parameter to control rank of decomposition (#28)
  - ENH: allow controlling rank of approximation
  - Training script accepts lora_rank
  - feat: statefully monkeypatch different loras + example ipynb + readme
  - release: version 0.0.4, now able to tune rank, now add loras dynamically
  - readme: add brian6091's discussions
  - fix: inject lora in to_out module list
  - feat: added weight self apply lora
  - chore: add import copy
  - fix: re-added r
  - Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>, brian6091 <brian6091@gmail.com>, SimoRyu <cloneofsimo@korea.ac.kr>
- 65438b5
- 4975cfa
- 9ca7bc8
- 6a3ad97
- 40ad282
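The "weight self apply lora" change in fececf3 concerns folding trained low-rank factors back into the base weight, so inference needs no extra patched modules. The idea can be sketched as follows (a NumPy illustration under assumed shapes, not the repository's code):

```python
import numpy as np

def apply_lora_to_weight(W, up, down, scale=1.0):
    """Fold the low-rank update into the base weight: W' = W + scale * (up @ down).

    W: (d_out, d_in); up: (d_out, r); down: (r, d_in).
    """
    return W + scale * (up @ down)

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))
up = np.zeros((8, 4))                  # zero-initialized up factor
down = rng.standard_normal((4, 16))
merged = apply_lora_to_weight(W, up, down)
```

Because the update is a product of an (d_out, r) and an (r, d_in) matrix, its rank never exceeds r regardless of the base weight's size.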
Commits on Dec 16, 2022
- a386525: Fix save_steps, max_train_steps, and logging (#45)
  - v 0.0.5 (#42): rank control (#28), stateful monkeypatching of different loras + example ipynb + readme, fix lora inject / weight self apply (#39), revert of #39 (#40, reverts commit fececf3), fix rank bug in monkeypatch, CLI fix, visualization of the effect of LR
  - Fix save_steps, max_train_steps, and logging: corrected indentation so that checking save_steps and max_train_steps and updating logs are performed every step instead of at the end of an epoch
  - Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>, brian6091 <brian6091@gmail.com>, Davide Paglieri <paglieridavide@gmail.com>
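The indentation fix in a386525 moves the step-count checks from the epoch level into the inner step loop. A minimal sketch of the corrected control flow (the loop structure and names here are illustrative, not the training script's actual code):

```python
def train(num_epochs, steps_per_epoch, save_steps, max_train_steps, save_fn, log_fn):
    """Check save/stop conditions every optimizer step, not once per epoch."""
    global_step = 0
    for epoch in range(num_epochs):
        for step in range(steps_per_epoch):
            global_step += 1
            log_fn(global_step)                  # previously ran only once per epoch
            if global_step % save_steps == 0:    # checkpoint mid-epoch as well
                save_fn(global_step)
            if global_step >= max_train_steps:   # stop exactly at the step budget
                return global_step
    return global_step
```

With the checks inside the inner loop, a max_train_steps that falls mid-epoch stops training on time, and checkpoints land every save_steps steps rather than only at epoch boundaries.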
- 6767142: Enable resume training unet/text encoder (#48)
  - v 0.0.5 (#42): rank control (#28), stateful monkeypatching of different loras, fix/revert of lora inject and weight self apply (#39, reverted in #40), fix rank bug in monkeypatch, CLI fix, visualization of the effect of LR
  - Enable resume training unet/text encoder: new flags --resume_text_encoder and --resume_unet accept paths to .pt files to resume from. Make sure to change the output directory from the previous training session, or the .pt files will be overwritten, since training does not resume from the previous global step.
  - Load weights from .pt with inject_trainable_lora: adds a new loras argument to the inject_trainable_lora function, which accepts a path to a .pt file containing previously trained weights.
  - Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>, brian6091 <brian6091@gmail.com>, Davide Paglieri <paglieridavide@gmail.com>
- 24af4c8
- 046422c
- 0a92e62
- 4abbf90
- d0c4cc5
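The resume commit above describes initializing freshly injected factors from a .pt file via a loras path argument. The underlying load-and-copy idea can be sketched like this (a simplified stand-in; this is not the library's actual inject_trainable_lora, and the helper name is hypothetical):

```python
import torch

def load_pretrained_loras(fresh_factors, loras=None):
    """Optionally initialize injected LoRA factors from a saved .pt file.

    fresh_factors: flat list of newly created factor tensors, in injection order.
    loras: path to a .pt file holding previously trained tensors in the same order.
    """
    if loras is None:
        return fresh_factors                 # no checkpoint: train from scratch
    saved = torch.load(loras)
    for target, source in zip(fresh_factors, saved):
        target.data.copy_(source)            # resume from the previous weights
    return fresh_factors
```

Passing no path leaves the fresh factors untouched, so the same injection code path serves both a cold start and a resumed run.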