
Gemma ONNX support #1728

Closed
2 of 4 tasks
Kaya-P opened this issue Feb 27, 2024 · 5 comments
Labels
bug Something isn't working

Comments


Kaya-P commented Feb 27, 2024

System Info

My system currently is:
python = 3.8
optimum-intel: optimum-1.18.0.dev0

Who can help?

@JingyaHuang @echarlaix

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction (minimal, reproducible, runnable)

I am following the examples provided in #1714 but am running into some issues. When I run

optimum-cli export onnx -m google/gemma-2b gemma_onnx

I get the following error:

[screenshot of error]

When I execute the Python script provided there, I get the following error:

[screenshot of error]

Expected behavior

The expected behavior is for the model to export successfully and be stored in ONNX format.
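(For reference, the Python script from #1714 presumably looked something like the minimal sketch below, assuming optimum's main_export API; the exact script may differ.)

```python
# Minimal sketch of the export, equivalent to the CLI command above.
# Assumes the optimum exporter API; "text-generation-with-past" exports
# the decoder with a KV cache, plain "text-generation" exports without.
from optimum.exporters.onnx import main_export

main_export(
    "google/gemma-2b",                 # model id on the Hugging Face Hub
    output="gemma_onnx",               # directory for the exported files
    task="text-generation-with-past",
)
```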

Kaya-P added the bug label Feb 27, 2024
fxmarty (Collaborator) commented Feb 27, 2024

Answered on the other thread.


tchen48 commented Mar 5, 2024

@fxmarty can you share your versions of optimum and transformers? I am running the latest release and still get this error:

ValueError: Trying to export a gemma model, that is a custom or unsupported architecture, but no custom onnx configuration was passed as custom_onnx_configs. Please refer to https://huggingface.co/docs/optimum/main/en/exporters/onnx/usage_guides/export_a_model#custom-export-of-transformers-models for an example on how to export custom models. Please open an issue at https://github.com/huggingface/optimum/issues if you would like the model type gemma to be supported natively in the ONNX export.

I am using the same command as you.
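The error message points to custom_onnx_configs as an escape hatch. Below is a minimal sketch of that workaround, assuming Gemma is architecturally close enough to Llama that LlamaOnnxConfig can be reused; this is an untested assumption, not the documented fix, and native support (see below) is the real answer.

```python
# Sketch of the custom_onnx_configs workaround hinted at by the error.
# Assumption: Gemma's architecture is close enough to Llama's that
# LlamaOnnxConfig produces a valid export; this is untested.
from optimum.exporters.onnx import main_export
from optimum.exporters.onnx.model_configs import LlamaOnnxConfig
from transformers import AutoConfig

config = AutoConfig.from_pretrained("google/gemma-2b")
onnx_config = LlamaOnnxConfig(config=config, task="text-generation")

main_export(
    "google/gemma-2b",
    output="gemma_onnx",
    task="text-generation",
    custom_onnx_configs={"model": onnx_config},  # "model" = whole decoder
)
```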

@andyyeh75

@fxmarty I encountered the same issue in this project as well, while adding 'google/gemma-2b-it' as an LLM option:
openvino-llm-chatbot-rag

cc @tchen48

@IlyasMoutawwakil (Member)

You'll need to install from main for Gemma support: pip install optimum@git+https://github.com/huggingface/optimum.git
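Once installed from main, a quick sanity check that the export then works end to end; a minimal sketch, assuming onnxruntime is installed and the export wrote to gemma_onnx/:

```python
# Load the exported model with ONNX Runtime and generate a few tokens.
# Assumes onnxruntime is installed and gemma_onnx/ holds the export.
from optimum.onnxruntime import ORTModelForCausalLM
from transformers import AutoTokenizer

model = ORTModelForCausalLM.from_pretrained("gemma_onnx")
tokenizer = AutoTokenizer.from_pretrained("gemma_onnx")

inputs = tokenizer("Hello, my name is", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```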

fxmarty (Collaborator) commented Mar 19, 2024

Closing as supported on main. Please install from source until the next release.

fxmarty closed this as completed Mar 19, 2024