feat: add model#load_model method #3780

Merged
aarnphm merged 6 commits into bentoml:main from parano:add-load-model-method
Apr 21, 2023
Conversation

@parano (Member) commented Apr 20, 2023

What does this PR address?

This PR adds a slightly nicer way to load model instances without having to know which framework module to use, which is common when building custom runners or testing saved models. Here's an example of usage without vs. with this method:

Let's say we saved a CLIP model from Transformers:

from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

bentoml.transformers.save_model('clip:clip-vit-base-patch32', model, custom_objects={'processor': processor})

Without this method, loading the model and processor looks like this:

MODEL_TAG = 'clip:clip-vit-base-patch32'
model = bentoml.transformers.load_model(MODEL_TAG)
model_obj = bentoml.models.get(MODEL_TAG)
processor = model_obj.custom_objects['processor']

After this PR:

model_obj = bentoml.models.get('clip:clip-vit-base-patch32')
model = model_obj.load_model()
processor = model_obj.custom_objects['processor']


@parano parano requested a review from a team as a code owner April 20, 2023 23:15
@parano parano requested review from bojiang and removed request for a team April 20, 2023 23:15
codecov bot commented Apr 20, 2023

Codecov Report

Merging #3780 (efc501e) into main (0107ead) will not change coverage.
The diff coverage is 0.00%.


@@          Coverage Diff          @@
##            main   #3780   +/-   ##
=====================================
  Coverage   0.00%   0.00%           
=====================================
  Files        154     154           
  Lines      12613   12619    +6     
=====================================
- Misses     12613   12619    +6     
Impacted Files Coverage Δ
src/bentoml/_internal/models/model.py 0.00% <0.00%> (ø)

... and 1 file with indirect coverage changes

Co-authored-by: Sauyon Lee <2347889+sauyon@users.noreply.github.com>
aarnphm
aarnphm previously approved these changes Apr 20, 2023
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
@aarnphm (Contributor) commented Apr 21, 2023

@parano I added a quick test for this behaviour in the transformers tests.

You can run it with:

pytest tests/integration/frameworks/test_frameworks.py tests/integration/frameworks/test_transformers_unit.py --framework transformers --capture=tee-sys
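In spirit, such a test checks that model_obj.load_model() returns the same object the framework-specific loader would. A self-contained sketch using an in-memory stub store instead of the real BentoML store and transformers framework (FakeStore, ModelRef, and all names here are illustrative assumptions, not the actual test added in this PR):

```python
class FakeStore:
    """Illustrative in-memory model store standing in for BentoML's."""

    def __init__(self):
        self._models = {}

    def save(self, tag, model, loader, custom_objects=None):
        # Record the model together with the loader that knows how to load it.
        self._models[tag] = (model, loader, custom_objects or {})

    def get(self, tag):
        model, loader, custom = self._models[tag]

        class ModelRef:
            # Mirrors the interface this PR adds: load_model() plus
            # the existing custom_objects attribute.
            custom_objects = custom

            @staticmethod
            def load_model():
                return loader(tag)

        return ModelRef()


def test_load_model_matches_framework_loader():
    store = FakeStore()
    sentinel = object()
    store.save(
        "clip:test",
        sentinel,
        lambda tag: store._models[tag][0],
        custom_objects={"processor": "proc"},
    )
    ref = store.get("clip:test")
    # load_model() must hand back exactly what the loader produces.
    assert ref.load_model() is sentinel
    assert ref.custom_objects["processor"] == "proc"
```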

aarnphm
aarnphm previously approved these changes Apr 21, 2023
Signed-off-by: Aaron <29749331+aarnphm@users.noreply.github.com>
@aarnphm aarnphm changed the title feat: add model#load_model method feat: add model#load_model method Apr 21, 2023
@aarnphm aarnphm merged commit d600fd1 into bentoml:main Apr 21, 2023