
Support PEFT models in Transformers flavor #11120

Merged (5 commits into mlflow:peft on Feb 15, 2024)

Conversation

@B-Step62 (Collaborator) commented Feb 14, 2024

🛠 DevTools 🛠

Install mlflow from this PR

pip install git+https://github.com/mlflow/mlflow.git@refs/pull/11120/merge

Checkout with GitHub CLI

gh pr checkout 11120

Related Issues/PRs

#9256

What changes are proposed in this pull request?

Add support for PEFT models in the Transformers flavor, on top of the recent save_pretrained feature. In short, when a PEFT model is used in the pipeline, we save only the PEFT adapter weights and skip the base model weights.
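
As a minimal illustration of why saving only the adapter is cheap (a sketch using the public peft API; the base model name and output directory are placeholders, not values from this PR):

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForSequenceClassification

base = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")
peft_model = get_peft_model(base, LoraConfig(task_type="SEQ_CLS"))

# PeftModel.save_pretrained() writes only adapter_config.json plus the small
# adapter weights; the (much larger) base model weights are not copied.
peft_model.save_pretrained("/tmp/adapter_only")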

Tracker

This PR is filed against the PEFT feature branch. More changes need to be made before merging the feature branch into master:

  • Introduce save_pretrained flag and implement saving/loading logic (PR).
  • Block registering model to MLflow Model Registry (OSS Model Registry, Databricks Workspace Model Registry, UC Model Registry)
  • Implement an API to download the weight files into an existing weight-less model (so it can be registered without re-logging).
  • [This PR] Support PEFT model.
  • Update documentation and examples

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

A sample PEFT pipeline is added to the test suite. More comprehensive tests will be conducted as part of the final bug bash before merging the feature branch.

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Documentation and tutorials will be updated as a follow-up in the next sprint.

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

Support Parameter-Efficient Fine-Tuning (PEFT) models in Transformers flavor.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>

github-actions bot commented Feb 14, 2024

Documentation preview for edaf8ba will be available when this CircleCI job completes successfully.


The github-actions bot added the labels area/models (MLmodel format, model serialization/deserialization, flavors) and rn/feature (Mention under Features in Changelogs) on Feb 14, 2024.
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
@chenmoneygithub (Collaborator) left a comment:

Thanks Yuki! Looks great overall! Left some comments.

"following the Transformers behavior.",
)
save_peft_adaptor(path, built_pipeline.model)
save_pretrained = False
@chenmoneygithub (Collaborator):

I think the behavior of save_pretrained() for a PEFT model is to save only the PEFT configs + LoRA weights. Basically, peft_model.save_pretrained() will do the same job: https://huggingface.co/docs/peft/en/package_reference/peft_model#peft.PeftModel.save_pretrained

@B-Step62 (Collaborator, Author):

Yeah, indeed save_peft_adaptor does the exact same job (as you pointed out below). The rest of the process with save_pretrained=False is more for metadata logging, such as the repo and commit hash of the base model (while the PEFT config stores the base model repo, we need to log the commit hash in MLmodel as well for a security requirement).

Resolved review threads on mlflow/transformers/__init__.py (two) and mlflow/transformers/peft.py.
# However, when the PEFT config is the one for "prompt learning", there is no adapter class
# and the PeftModel class directly wraps the base model.
if peft_config := peft_config_map.get(model.active_adapter):
    if not peft_config.is_prompt_learning:
@chenmoneygithub (Collaborator):

We can combine this if clause with the line above.

@harupy (Member) commented Feb 15, 2024:

Unless it looks ugly. A long condition + assignment expression + autoformatting can lead to unreadable code.

@B-Step62 (Collaborator, Author):

Yeah, it turns out the assignment expression doesn't add the variable to the scope of the next condition, i.e. this doesn't work:

if peft_config := sth and peft_config.xxx

I can create peft_config outside the condition; then it looks clearer with the combined conditions.

@harupy (Member) commented Feb 15, 2024:

@B-Step62 I think if peft_config := sth and peft_config.xxx is equivalent to if peft_config := (sth and peft_config.xxx). if (peft_config := sth) and peft_config.xxx should work. It looks like we no longer need this, though.
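
To illustrate the precedence point with throwaway names (nothing below is code from this PR):

d = {"a": 1}

# `:=` binds more loosely than `and`, so the unparenthesized form parses as
# `cfg := (d.get("a") and cfg > 0)` and raises NameError: `cfg` is evaluated
# on the right-hand side before it is ever bound.
# Parenthesizing the walrus assigns first, then short-circuits the check:
if (cfg := d.get("a")) and cfg > 0:
    print("found", cfg)  # prints: found 1
if (missing := d.get("b")) and missing > 0:
    print("never reached")  # d.get("b") is None, so the check short-circuits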

Resolved review threads (some outdated) on mlflow/transformers/peft.py (two), tests/transformers/test_transformers_peft_model.py, and tests/transformers/helper.py.
Signed-off-by: B-Step62 <yuki.watanabe@databricks.com>
@BenWilson2 (Member) left a comment:

Minor nits :) Looks great!

Co-authored-by: Ben Wilson <39283302+BenWilson2@users.noreply.github.com>
Signed-off-by: Yuki Watanabe <31463517+B-Step62@users.noreply.github.com>
@harupy (Member) left a comment:

LGTM!

@harupy (Member) commented Feb 15, 2024

The disk is almost full:

[screenshot: disk usage on the CI runner]

https://github.com/mlflow/mlflow/actions/runs/7910664074/job/21593617127?pr=11120

We still have ~5GB of disk space, but I feel like we'll hit the limit soon. Is it possible to clean up the cache after each test module runs?


def get_peft_base_model(model):
    """Extract the base model from a PEFT model."""
    peft_config = model.peft_config.get(model.active_adapter) if model.peft_config else None
Member:

Is model.peft_config a dict? Does it always exist? If so, can we remove the if model.peft_config else None part?

Member:

ah I see, it can be None?

@B-Step62 (Collaborator, Author):

It's a bit confusing, but model.peft_config is a dictionary that maps an adapter name like "lora" to a PeftConfig object. As far as I've read the existing adapter classes, the property should not be None, but I just wanted to be safe since there is no validation that it is non-null.
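
Putting the thread together, a hedged completion of the helper above (a sketch following the review comments, not necessarily the exact merged code):

def get_peft_base_model(model):
    """Extract the base model from a PEFT model (sketch, not the merged code)."""
    peft_config = model.peft_config.get(model.active_adapter) if model.peft_config else None
    if peft_config is not None and not peft_config.is_prompt_learning:
        # Normal adapters (e.g. LoRA): an adapter class wraps the real model,
        # so unwrap one extra level.
        return model.base_model.model
    # Prompt learning: PeftModel wraps the base model directly.
    return model.base_model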

@B-Step62 (Collaborator, Author) commented Feb 15, 2024

We still have ~5GB disk space but I feel like we'll hit the limit soon. Is it possible to clean up cache after each test module runs?

@harupy I feel removing the HF cache is a bit tedious because we need to manage when to clean and when not to. Previously we had a @skipcleancache annotation to control whether or not to clean the cache (as some models are used multiple times), but it creates a dependency on the test order, which is not great :(

Instead, I'm wondering if we can:

  1. Use tmp_path as the destination for mlflow.pyfunc.load_model (see the sketch below) -> currently we don't specify it, and the downloaded artifacts remain on disk (presumably eating up ~20GB).
  2. Explore smaller models - some models are big (~GB each), so they could be replaced with smaller ones.

If this sounds good, I can do that as a follow-up:)
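
A sketch of option 1, assuming a pytest test where model_uri is a hypothetical fixture returning a logged model URI (dst_path is an existing parameter of mlflow.pyfunc.load_model):

import mlflow

def test_load_peft_pipeline(tmp_path, model_uri):
    # Download the model artifacts into pytest's managed temp dir so they
    # are pruned by pytest instead of accumulating on the CI runner's disk.
    model = mlflow.pyfunc.load_model(model_uri, dst_path=str(tmp_path))
    assert model is not None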

@harupy (Member) commented Feb 15, 2024

@B-Step62 Using tmp_path as the destination for mlflow.pyfunc.load_model sounds good! Let's do this :)

B-Step62 merged commit 6be68f7 into mlflow:peft on Feb 15, 2024
62 checks passed
B-Step62 deleted the support-peft branch on February 15, 2024 at 06:22
@B-Step62 (Collaborator, Author):

@harupy Sorry, I was wrong. load_model doesn't create a copy of the models, because it uses the local artifact repo, which simply reuses the saved artifacts when loading models. After some research, it seems the actual cause is pytest's behavior of keeping tmp_path for the 3 most recent test sessions rather than cleaning it up immediately. Filed a PR to clean up the tmp dir after each test case.

B-Step62 added a commit to B-Step62/mlflow that referenced this pull request Feb 19, 2024
B-Step62 added a commit that referenced this pull request Feb 23, 2024
B-Step62 added a commit to B-Step62/mlflow that referenced this pull request Feb 26, 2024
B-Step62 added a commit that referenced this pull request Feb 27, 2024
B-Step62 added a commit that referenced this pull request Feb 28, 2024
Labels

area/models MLmodel format, model serialization/deserialization, flavors
rn/feature Mention under Features in Changelogs.