Support PEFT models in Transformers flavor #11120
Conversation
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
Documentation preview for edaf8ba will be available when this CircleCI job completes successfully.
Thanks Yuki! Looks great overall! Left some comments.
"following the Transformers behavior.",
)
save_peft_adaptor(path, built_pipeline.model)
save_pretrained = False
I think the behavior of save_pretrained() for a PEFT model is only saving the PEFT configs + LoRA weights. Basically, peft_model.save_pretrained() will do the same job: https://huggingface.co/docs/peft/en/package_reference/peft_model#peft.PeftModel.save_pretrained
Yea, indeed save_peft_adaptor does the exact same job (as you pointed out below). The rest of the process with save_pretrained=False is more for metadata logging, such as the repo and commit hash for the base model (while the PEFT config stores the base model repo, we need to log the commit hash in MLmodel as well for a security requirement).
mlflow/transformers/peft.py (Outdated)
# However, when the PEFT config is the one for "prompt learning", there is no
# adaptor class and the PeftModel class directly wraps the base model.
if peft_config := peft_config_map.get(model.active_adapter):
    if not peft_config.is_prompt_learning:
we can combine this if clause with the above line.
Unless it looks ugly: a long condition + assignment expression + autoformatting can lead to unreadable code.
Yeah, it turns out the assignment expression doesn't add the variable to the scope of the next condition, i.e. this doesn't work: `if peft_config := sth and peft_config.xxx`. I can create peft_config outside the condition; then it looks clearer with combined conditions.
@B-Step62 I think `if peft_config := sth and peft_config.xxx` is equivalent to `if peft_config := (sth and peft_config.xxx)`. `if (peft_config := sth) and peft_config.xxx` should work. It looks like we no longer need this, though.
Minor nits :) Looks great!
Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com> Signed-off-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
LGTM!
The disk is almost full: https://github.com/mlflow/mlflow/actions/runs/7910664074/job/21593617127?pr=11120. We still have ~5GB of disk space, but I feel like we'll hit the limit soon. Is it possible to clean up the cache after each test module runs?
def get_peft_base_model(model):
    """Extract the base model from a PEFT model."""
    peft_config = model.peft_config.get(model.active_adapter) if model.peft_config else None
Is model.peft_config a dict? Does it always exist? If so, can we remove `if model.peft_config else None`?
ah I see, it can be None?
It's a bit confusing, but model.peft_config is a dictionary that maps an adapter name like "lora" to a PeftConfig object. As far as I've read the existing adapter classes, the property should not be None, but I just wanted to be safe, as there is no validation for it to be non-null.
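To illustrate the shape described above (FakePeftConfig here is a hypothetical stand-in for the real peft.PeftConfig class, not MLflow's actual code):

```python
from dataclasses import dataclass

@dataclass
class FakePeftConfig:
    # Stand-in for peft.PeftConfig; only the one field used below is modeled.
    base_model_name_or_path: str

# model.peft_config is a dict keyed by adapter name ("lora", "default", ...).
peft_config_map = {"lora": FakePeftConfig(base_model_name_or_path="gpt2")}
active_adapter = "lora"

# Defensive lookup: .get() covers a missing adapter name, and the trailing
# conditional covers the (unvalidated) case where the map itself is None.
peft_config = peft_config_map.get(active_adapter) if peft_config_map else None
base_repo = peft_config.base_model_name_or_path if peft_config else None
print(base_repo)  # gpt2
```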
@harupy I feel removing the HF cache is a bit tedious because we need to manage when to clean and when not to. Previously we had [...]. Instead, I'm wondering if we can [...]. If this sounds good, I can do that as a follow-up :)
@B-Step62 Using [...]
@harupy Sorry, I was wrong. The [...]
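As a rough illustration of the per-module cache cleanup idea discussed above (the helper name and the temp directory are hypothetical; the real Hugging Face cache location is governed by env vars such as HF_HOME):

```python
import os
import shutil
import tempfile

def clear_cache_dir(path: str) -> None:
    """Delete a cache directory if it exists; silently no-op otherwise."""
    shutil.rmtree(path, ignore_errors=True)

# Simulate a cache directory filling up while a test module runs...
cache = tempfile.mkdtemp()
with open(os.path.join(cache, "model.bin"), "wb") as f:
    f.write(b"\0" * 1024)

# ...then reclaim the space once the module's tests finish.
clear_cache_dir(cache)
print(os.path.exists(cache))  # False
```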
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com> Signed-off-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com> Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
Related Issues/PRs
#9256
What changes are proposed in this pull request?
Add support for PEFT models in the Transformers flavor, on top of the recent save_pretrained feature. Basically, when a PEFT model is used for the pipeline, we only save the PEFT adapter weights and ignore the base model weights.

Tracker

This PR is filed for the PEFT feature branch. More changes need to be done before merging the feature branch to master:
- save_pretrained flag and implement saving/loading logic (PR).

How is this PR tested?
A sample PEFT pipeline is added to the test suite. More comprehensive tests will be conducted as part of the final bug bash before merging the feature branch.
Does this PR require a documentation update?
Documentation and tutorials will be updated as a follow-up in the next sprint.
Release Notes
Is this a user-facing change?
Support Parameter-Efficient Fine-Tuning (PEFT) models in Transformers flavor.
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
- rn/feature - A new user-facing feature worth mentioning in the release notes
- rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
- rn/documentation - A user-facing documentation change worth mentioning in the release notes