Support peft LoRA adapters #335
Conversation
tests/test_peft.py
NOSAFE_PEFT_REPO = "timdettmers/guanaco-7b"
SAFE_PEFT_REPO = "artek0chumak/guanaco-7b"
Can you please replace this with a smaller pair of adapters? The current ones are too large to download in CI, this slows down tests by ~30%.
Create smaller peft weights for bloom-560m
…339) Before this PR, `free_disk_space_for()` could remove **(a)** only entire cached revisions (= git commits/branches) and **(b)** only from the repository we're loading right now. This PR allows this function to remove arbitrary files separately from any repositories. This is useful for the transition to Petals 1.2.0+, since it now uses the original repos instead of the ones with converted models (see #323). In particular, the cache for `bigscience/bloom-petals` is now deprecated and should be removed in favor of `bigscience/bloom`. This is also useful as a way to free space before loading LoRA adapters (#335).
Review / comments:
Note to @borzunov: the tests do indeed take longer now. The reason is that we run 2 more "heavy" tests that verify a full-model exact match with adapters. We can probably skip them if this is a problem.
Thank you for mentioning this! I'll change the LoRA weights in the Hub; currently they have non-zero parameters.
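For context on why non-zero parameters matter: in the standard LoRA initialization, `lora_A` is random but `lora_B` starts at zero, so a fresh adapter leaves the base model's output unchanged. A small numpy sketch (variable names are illustrative) shows the effect:

```python
# Why fresh LoRA weights should start at zero: with B = 0, the adapter
# delta B @ A vanishes and the adapted layer matches the base exactly.
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2
W = rng.standard_normal((d, d))   # frozen base weight
A = rng.standard_normal((r, d))   # lora_A: random init
B = np.zeros((d, r))              # lora_B: zero init

x = rng.standard_normal(d)
base_out = W @ x
adapted_out = (W + B @ A) @ x     # LoRA-adapted forward

assert np.allclose(base_out, adapted_out)  # zero B => exact match
```

If the published test adapters had non-zero `lora_B`, the "exact match with the base model" assumption behind the tests would break, which is what this comment is flagging.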
No description provided.