
v 0.0.6 #54

Merged · 16 commits · Dec 16, 2022
Commits on Dec 13, 2022

  1. Add parameter to control rank of decomposition (#28)

    * ENH: allow controlling rank of approximation
    
    * Training script accepts lora_rank
    brian6091 committed Dec 13, 2022 · 7e78c8d
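
The lora_rank flag introduced in #28 controls the rank r of the low-rank decomposition: instead of updating a full weight matrix W, training learns two small factors and adds their product to the frozen base. A minimal sketch of such a layer, with illustrative names and init choices (the repo's own class may differ in detail):

```python
# Minimal sketch of a rank-r LoRA linear layer -- what the lora_rank flag
# controls. Names and init choices here are assumptions for illustration.
import torch.nn as nn

class LoraInjectedLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int, r: int = 4):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)       # frozen base W
        self.lora_down = nn.Linear(in_features, r, bias=False)   # A: in -> r
        self.lora_up = nn.Linear(r, out_features, bias=False)    # B: r -> out
        nn.init.normal_(self.lora_down.weight, std=1.0 / r)
        nn.init.zeros_(self.lora_up.weight)   # update starts at zero

    def forward(self, x):
        # W x + B(A x): only the two rank-r factors are trained
        return self.linear(x) + self.lora_up(self.lora_down(x))
```

A higher rank (e.g. --lora_rank=8) buys capacity at the cost of larger saved weights; a small r keeps the update tiny.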

Commits on Dec 14, 2022

  1. 6aee5f3
  2. 9f31bd0

Commits on Dec 15, 2022

  1. Fix lora inject, added weight self apply lora (#39)

    * Develop (#34)
    
    * Add parameter to control rank of decomposition (#28)
    
    * ENH: allow controlling rank of approximation
    
    * Training script accepts lora_rank
    
    * feat : statefully monkeypatch different loras + example ipynb + readme
    
    Co-authored-by: brian6091 <brian6091@gmail.com>
    
    * release : version 0.0.4, now able to tune rank, now add loras dynamically
    
    * readme : add brian6091's discussions

    * fix: inject lora in to_out module list
    
    * feat: added weight self apply lora
    
    * chore: add import copy
    
    * fix: readded r
    
    Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
    Co-authored-by: brian6091 <brian6091@gmail.com>
    Co-authored-by: SimoRyu <cloneofsimo@korea.ac.kr>
    4 people committed Dec 15, 2022 · fececf3
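
The #39 commit above covers two ideas: injection must recurse into nested containers such as the attention block's to_out ModuleList (a Linear followed by a Dropout in diffusers), which a shallow scan of direct children misses, and "weight self apply" folds the trained update back into the base weight. A hypothetical sketch of both, reusing the LoraInjectedLinear sketch above (the repo's own inject_trainable_lora differs in detail):

```python
# Sketch of the two ideas in #39. Hypothetical code, not the repo's exact API.
import torch
import torch.nn as nn

def inject_lora(model: nn.Module, target_blocks=("CrossAttention",), r: int = 4):
    for block in model.modules():
        if block.__class__.__name__ not in target_blocks:
            continue
        # named_modules() walks *all* descendants, so Linears nested inside
        # containers like the to_out ModuleList are found too
        for name, child in list(block.named_modules()):
            if not isinstance(child, nn.Linear):
                continue
            lora = LoraInjectedLinear(child.in_features, child.out_features, r=r)
            lora.linear.weight = child.weight
            lora.linear.bias = child.bias
            # re-attach at the child's dotted path inside the block
            *path, last = name.split(".")
            parent = block
            for p in path:
                parent = getattr(parent, p)
            setattr(parent, last, lora)

@torch.no_grad()
def self_apply_lora(lora: LoraInjectedLinear, scale: float = 1.0):
    # "weight self apply": fold B @ A into the frozen base weight so the
    # layer works at inference without the wrapper
    lora.linear.weight += scale * (lora.lora_up.weight @ lora.lora_down.weight)
```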
  2. 65438b5
  3. 4975cfa
  4. 9ca7bc8
  5. fix cli fix

    cloneofsimo committed Dec 15, 2022 · 6a3ad97
  6. 40ad282

Commits on Dec 16, 2022

  1. Fix save_steps, max_train_steps, and logging (#45)

    * v 0.0.5 (#42)
    
    * Add parameter to control rank of decomposition (#28)
    
    * ENH: allow controlling rank of approximation
    
    * Training script accepts lora_rank
    
    * feat : statefully monkeypatch different loras + example ipynb + readme
    
    * Fix lora inject, added weight self apply lora (#39)
    
    * Develop (#34)
    
    * Add parameter to control rank of decomposition (#28)
    
    * ENH: allow controlling rank of approximation
    
    * Training script accepts lora_rank
    
    * feat : statefully monkeypatch different loras + example ipynb + readme
    
    Co-authored-by: brian6091 <brian6091@gmail.com>
    
    * release : version 0.0.4, now able to tune rank, now add loras dynamically
    
    * readme : add brian6091's discussions

    * fix: inject lora in to_out module list
    
    * feat: added weight self apply lora
    
    * chore: add import copy
    
    * fix: readded r
    
    Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
    Co-authored-by: brian6091 <brian6091@gmail.com>
    Co-authored-by: SimoRyu <cloneofsimo@korea.ac.kr>
    
    * Revert "Fix lora inject, added weight self apply lora (#39)" (#40)
    
    This reverts commit fececf3.
    
    * fix : rank bug in monkeypatch
    
    * fix cli fix
    
    * visualization of the effect of LR
    
    Co-authored-by: brian6091 <brian6091@gmail.com>
    Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>
    
    * Fix save_steps, max_train_steps, and logging
    
    Corrected the indentation so that checking save_steps and max_train_steps, and updating the logs, happen every step instead of at the end of an epoch.
    
    Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
    Co-authored-by: brian6091 <brian6091@gmail.com>
    Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>
    4 people committed Dec 16, 2022 · a386525
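
The fix in #45 is purely about loop nesting: the save/stop/log checks sat at the epoch level and so fired only once per epoch. A runnable toy skeleton of the corrected shape, with stand-in names throughout:

```python
# Sketch of the indentation fix: the step-level checks belong inside the
# per-step loop, not at the epoch level. All names here are illustrative stubs.
num_epochs, save_steps, max_train_steps = 2, 10, 25
dataloader = range(20)          # stand-in for the real DataLoader
global_step = 0

def train_step(batch):          # stand-in for the real optimization step
    return 0.0

for epoch in range(num_epochs):
    for batch in dataloader:
        loss = train_step(batch)
        global_step += 1
        # before the fix these checks were indented one level out, so they
        # only ran at the end of an epoch; now they run every step
        if global_step % save_steps == 0:
            print(f"save checkpoint at step {global_step}")
        if global_step >= max_train_steps:
            break
    if global_step >= max_train_steps:
        break
```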
  2. Enable resuming (#52)

    * v 0.0.5 (#42)
    
    * Enable resume training unet/text encoder (#48)
    
    * Enable resume training unet/text encoder
    
    New flags --resume_text_encoder and --resume_unet accept paths to the .pt files to resume from.
    Make sure to change the output directory from the previous training session, or else the .pt files will be overwritten, since training does not resume from the previous global step.
    
    * Load weights from .pt with inject_trainable_lora
    
    Adds a new loras argument to the inject_trainable_lora function, which accepts a path to a .pt file containing previously trained weights.
    
    Co-authored-by: Simo Ryu <35953539+cloneofsimo@users.noreply.github.com>
    Co-authored-by: brian6091 <brian6091@gmail.com>
    Co-authored-by: Davide Paglieri <paglieridavide@gmail.com>
    4 people committed Dec 16, 2022 · 6767142
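
Per the commit messages above, resuming has two halves: the CLI flags --resume_unet and --resume_text_encoder carry the .pt paths, and the new loras argument of inject_trainable_lora consumes them so the injected layers start from the saved factors instead of fresh init. A self-contained sketch of that save/restore round trip, reusing the LoraInjectedLinear sketch above (the flat tensor-list file format is an assumption, standing in for whatever the repo actually writes):

```python
# Illustrative save/restore of LoRA factors as a flat .pt tensor list.
import torch

def save_loras(loras, path):
    # each l is a LoraInjectedLinear from the sketch above
    tensors = []
    for l in loras:
        tensors.extend([l.lora_up.weight.detach().cpu(),
                        l.lora_down.weight.detach().cpu()])
    torch.save(tensors, path)

@torch.no_grad()
def load_loras(loras, path):
    # restore up/down factors in the same order they were saved
    tensors = torch.load(path)
    for l, up, down in zip(loras, tensors[0::2], tensors[1::2]):
        l.lora_up.weight.copy_(up)
        l.lora_down.weight.copy_(down)
```

The commit's caveat follows from this design: the global step is not stored, so a resumed run restarts its step counter and will overwrite checkpoints if pointed at the same output directory.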
  3. 24af4c8
  4. feat : pivotal tuning

    cloneofsimo committed Dec 16, 2022 · 046422c
  5. 0a92e62
  6. v 0.0.6

    cloneofsimo committed Dec 16, 2022 · 4abbf90
  7. d0c4cc5