
How can I export onnx-model for Qwen/Qwen-7B? #1703

Open

smile2game opened this issue Feb 20, 2024 · 1 comment
Labels
onnx (Related to the ONNX export)

Comments

@smile2game

Feature request

I need to export the Qwen model to ONNX in order to accelerate inference:
optimum-cli export onnx --model Qwen/Qwen-7B qwen_optimum_onnx/ --trust-remote-code

Motivation

I want to export the Qwen model so that I can run it with ONNX Runtime.

Your contribution

I can provide the model's inputs and outputs.

fxmarty added the onnx (Related to the ONNX export) label on Feb 26, 2024
@fxmarty (Collaborator) commented Feb 26, 2024

@smile2game Thank you. Qwen is not natively supported in Transformers (though Qwen2 is: huggingface/transformers#28436). I tried running the export for Qwen-7B and got:

Traceback (most recent call last):
  File "/home/felix/miniconda3/envs/fx/bin/optimum-cli", line 8, in <module>
    sys.exit(main())
  File "/home/felix/optimum/optimum/commands/optimum_cli.py", line 163, in main
    service.run()
  File "/home/felix/optimum/optimum/commands/export/onnx.py", line 261, in run
    main_export(
  File "/home/felix/optimum/optimum/exporters/onnx/__main__.py", line 351, in main_export
    onnx_export_from_model(
  File "/home/felix/optimum/optimum/exporters/onnx/convert.py", line 1035, in onnx_export_from_model
    raise ValueError(
ValueError: Trying to export a qwen model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as `custom_onnx_configs`. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type qwen to be supported natively in the ONNX export.

which is expected. Have you checked: https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#customize-the-export-of-transformers-models-with-custom-modeling?
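For reference, here is a minimal, untested sketch of that custom-config route, adapted from the MPT example in the linked guide. The `QwenOnnxConfig` class is hypothetical (not part of Optimum), and the sketch assumes Qwen's config exposes the standard `hidden_size` / `num_attention_heads` / `num_hidden_layers` attribute names that `NormalizedTextConfig` expects:

```python
from transformers import AutoConfig

from optimum.exporters.onnx import main_export
from optimum.exporters.onnx.config import TextDecoderOnnxConfig
from optimum.utils import NormalizedTextConfig


class QwenOnnxConfig(TextDecoderOnnxConfig):
    # Hypothetical ONNX config for the custom `qwen` architecture. Assumes
    # Qwen's transformers config uses the standard attribute names that
    # NormalizedTextConfig maps (hidden_size, num_attention_heads, ...).
    DEFAULT_ONNX_OPSET = 14
    NORMALIZED_CONFIG_CLASS = NormalizedTextConfig


model_id = "Qwen/Qwen-7B"
config = AutoConfig.from_pretrained(model_id, trust_remote_code=True)

# Instantiate the config and pass it through `custom_onnx_configs`,
# which is what the ValueError above asks for.
onnx_config = QwenOnnxConfig(config=config, task="text-generation")

main_export(
    model_id,
    output="qwen_onnx/",
    task="text-generation",
    trust_remote_code=True,
    custom_onnx_configs={"model": onnx_config},
)
```

Exporting with a KV cache (`text-generation-with-past`) would additionally require `use_past=True` and, if Qwen's custom cache layout differs from the default, a custom dummy past-key-values generator, as shown in the MPT example in the guide.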
