
Avoid overriding model_type in TasksManager #1647

Merged: 13 commits into huggingface:main on Feb 6, 2024

Conversation

@fxmarty (Collaborator) commented on Jan 16, 2024:

This is bad practice as PretrainedConfig.model_type is a class attribute.
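For illustration, a minimal sketch of the pitfall, using a simplified stand-in class rather than the real transformers.PretrainedConfig:

```python
# Stand-in for transformers.PretrainedConfig (not the real class), showing
# why overriding a class attribute is risky: the change leaks to every
# instance of the class, not just the one being exported.

class Config:
    model_type = "bert"  # class attribute, shared by every instance

config_a = Config()
config_b = Config()

# Overriding the attribute at class level mutates global state:
Config.model_type = "clip-text-model"

print(config_a.model_type)  # clip-text-model -- config_a was never touched
print(config_b.model_type)  # clip-text-model
```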

We also dissociate _SUPPORTED_MODEL_TYPE by library, as suggested by @JingyaHuang.

Both

optimum-cli export onnx -m sentence-transformers/all-MiniLM-L6-v2 minilm_st_onnx --library-name sentence_transformers

and

optimum-cli export onnx -m sentence-transformers/all-MiniLM-L6-v2 minilm_onnx --library-name transformers

work as expected.

We still need to specify transformers when using ORTModel with export=True (sentence-transformers is not supported there).
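To picture the per-library dissociation of _SUPPORTED_MODEL_TYPE, here is a rough sketch; the structure and helper below are illustrative, not the exact optimum internals:

```python
# Each library gets its own registry (model_type -> exporter -> task -> config),
# and lookups are routed by library_name instead of sharing one dict and
# patching model_type to disambiguate. Names/values are placeholders.
_SUPPORTED_MODEL_TYPE = {  # transformers models
    "bert": {"onnx": {"feature-extraction": "BertOnnxConfig"}},
}
_DIFFUSERS_SUPPORTED_MODEL_TYPE = {
    "clip-text-model": {"onnx": {"feature-extraction": "CLIPTextOnnxConfig"}},
}

_LIBRARY_TO_SUPPORTED_MODEL_TYPES = {
    "transformers": _SUPPORTED_MODEL_TYPE,
    "diffusers": _DIFFUSERS_SUPPORTED_MODEL_TYPE,
}

def get_registry(library_name: str):
    # The caller now names the library explicitly rather than relying on a
    # mutated model_type to find the right entry.
    return _LIBRARY_TO_SUPPORTED_MODEL_TYPES[library_name]
```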

@JingyaHuang (Collaborator) left a comment:

May I start testing with a subpackage like neuron?

(resolved review threads: optimum/exporters/tasks.py, tests/exporters/exporters_utils.py)
@fxmarty (Collaborator, Author) commented on Jan 17, 2024:

@JingyaHuang Sure, feel free to test so that this PR does not break too many things. What I expect to break is the usage of TasksManager.create_register in case you want to register models for export for libraries other than transformers (that is default): https://github.com/huggingface/optimum-neuron/blob/f81c3654c358eb2acfc1d294f4e43725ea822044/optimum/exporters/neuron/model_configs.py#L69

e.g. you would need @register_in_tasks_manager("whatever", *DIFFUSERS_TASKS, library_name="diffusers"). Apart from neuron, is it used elsewhere?
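As a sketch, a registration in a subpackage might now look like this; the model type, task, and config class below are placeholders, and the exact optimum-neuron code may differ:

```python
from optimum.exporters.tasks import TasksManager

# Subpackages create their own register bound to their backend.
register_in_tasks_manager = TasksManager.create_register("neuron")

# Before this PR, the library was implicitly transformers; after it,
# non-transformers libraries must be named explicitly via library_name.
@register_in_tasks_manager("unet", "semantic-segmentation", library_name="diffusers")
class UNetNeuronConfig:  # placeholder export config class for illustration
    ...
```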

@JingyaHuang (Collaborator) commented:

AFAIK, so far the register is only used under neuron. @fxmarty

@JingyaHuang (Collaborator) left a comment:

Thanks for the PR, I left some questions and nits. Testing for neuron as well; I will update you in a bit.

(resolved review thread: optimum/exporters/onnx/__main__.py)
```diff
@@ -398,7 +402,13 @@ def main_export(
             "Could not infer the pad token id, which is needed in this case, please provide it with the --pad_token_id argument"
         )

-    model_type = "stable-diffusion" if "stable-diffusion" in task else model.config.model_type.replace("_", "-")
+    if "stable-diffusion" in task:
```
@JingyaHuang (Collaborator) commented:

What about stable-diffusion-xl? Don't we need an extra case for it?
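One observation on the condition itself, for what it's worth: since Python's in performs a substring test, the branch already matches SDXL task names; whether a separate model_type assignment is needed elsewhere is another matter.

```python
# Quick check: "stable-diffusion" is a prefix of "stable-diffusion-xl",
# so the new branch's detection covers both task families.
for task in ("stable-diffusion", "stable-diffusion-xl"):
    assert "stable-diffusion" in task
```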


(resolved review thread: optimum/exporters/onnx/utils.py)
```diff
@@ -366,6 +370,7 @@ def get_stable_diffusion_models_for_export(
     onnx_config_constructor = TasksManager.get_exporter_config_constructor(
         model=pipeline.text_encoder_2,
         exporter="onnx",
+        library_name="diffusers",
```
@fxmarty (Collaborator, Author) replied:

same
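In context, the updated call might look roughly like this; the checkpoint and task below are assumptions for illustration, and the real code passes additional arguments that are elided here:

```python
from diffusers import StableDiffusionXLPipeline
from optimum.exporters.tasks import TasksManager

# Assumed checkpoint for demonstration; text_encoder_2 only exists on SDXL pipelines.
pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0"
)

onnx_config_constructor = TasksManager.get_exporter_config_constructor(
    exporter="onnx",
    model=pipeline.text_encoder_2,
    task="feature-extraction",   # assumed task for illustration
    library_name="diffusers",    # now passed explicitly, per this PR
)
```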

@JingyaHuang previously approved these changes on Jan 31, 2024:

LGTM, thanks for the PR!

@JingyaHuang self-requested a review on Jan 31, 2024, 11:55
@JingyaHuang dismissed their stale review on Jan 31, 2024, 11:59:

Inference bugged due to the changes in infer_library_from_model.

@JingyaHuang (Collaborator) left a comment:

LGTM, let's get it merged once the CIs are good! Thanks for iterating on the issues that occurred for subpackages, @fxmarty!

@JingyaHuang (Collaborator) commented:

Any update on this? 👀

@fxmarty (Collaborator, Author) commented on Feb 6, 2024:

Merging, as the failing tests are unrelated and fixed in #1683 and microsoft/onnxruntime#19421.

@fxmarty merged commit 3b4f5ac into huggingface:main on Feb 6, 2024. 47 of 62 checks passed.
young-developer pushed a commit to young-developer/optimum that referenced this pull request on May 10, 2024, with the following commit messages:
* avoid modifying model_type

* cleanup

* fix test

* fix test

* fix library detection local model

* fix merge

* make library_name non-optional

* fix warning

* trigger ci

* fix library detection