Issue with converting to ONNX format for LLM models #95
I have tried to convert a few models to ONNX format and I am facing the issues below: [...]
crystal_chat_logs.txt

Comments
Hi @ratan, thank you for trying out turnkey! We currently have a limitation with respect to auto-regressive models: when you call [...]. I took the Mistral-7B-Instruct example and replaced the [...].
Here is an additional example from turnkey's model corpus showing how to get a single ONNX file with a desired input shape: https://github.com/onnx/turnkeyml/blob/main/models/transformers/mistral_7b.py
@ratan does this workaround work for your use case? cc @danielholanda for visibility
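For reference, a rough sketch in the spirit of that corpus script (the shapes, dtypes, and other details below are assumptions, not a copy of mistral_7b.py): build the model, define fixed-shape inputs, and run a single forward pass rather than an auto-regressive decoding loop, so there is only one graph with the desired shape to export.

```python
# Rough sketch (assumed details, not the linked corpus script): a single
# forward pass with fixed-shape inputs gives the exporter one graph to capture.
import torch
import transformers

torch.manual_seed(0)

batch_size, max_seq_length = 1, 128  # assumed values; set the shape you want in the ONNX file

config = transformers.MistralConfig()            # architecture only, random weights
model = transformers.MistralForCausalLM(config)
model.eval()

inputs = {
    "input_ids": torch.ones(batch_size, max_seq_length, dtype=torch.long),
    "attention_mask": torch.ones(batch_size, max_seq_length, dtype=torch.long),
}

# One fixed-shape forward call; no auto-regressive decoding loop is traced.
with torch.no_grad():
    model(**inputs)
```

The linked corpus script is the version to follow when driving the export through turnkey itself.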
Hi @jeremyfowers, thanks for the quick reply. I tried the suggestions you mentioned and it worked! Replacing the [...]. The sample below works for Mistral-7B-Instruct-v0.2.
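A minimal sketch of what such an export can look like for the pretrained checkpoint, assuming plain Hugging Face transformers and torch.onnx.export (the checkpoint id, shapes, opset, and output file name are assumptions, not the exact sample from this comment):

```python
import torch
from transformers import AutoModelForCausalLM

checkpoint = "mistralai/Mistral-7B-Instruct-v0.2"   # assumed checkpoint id
batch_size, seq_len = 1, 128                        # assumed fixed export shape

# use_cache=False so the traced graph does not return past_key_values.
model = AutoModelForCausalLM.from_pretrained(checkpoint, use_cache=False)
model.eval()

dummy = {
    "input_ids": torch.ones(batch_size, seq_len, dtype=torch.long),
    "attention_mask": torch.ones(batch_size, seq_len, dtype=torch.long),
}

# Export a single traced forward pass; no generation loop is involved.
torch.onnx.export(
    model,
    (dummy,),                        # dict as last element -> keyword arguments
    "mistral_7b_instruct_v0_2.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    opset_version=17,
)
```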
The sample below works for the LLM360/CrystalChat, microsoft/phi-2, and adept/fuyu-8b models.
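A rough sketch of the same idea pointed at those checkpoints, with only the identifier swapped (the flags and loading details are assumptions, not the poster's exact sample; some of these repos ship custom modeling code, hence trust_remote_code=True):

```python
from transformers import AutoModelForCausalLM

for checkpoint in ("LLM360/CrystalChat", "microsoft/phi-2", "adept/fuyu-8b"):
    model = AutoModelForCausalLM.from_pretrained(
        checkpoint, use_cache=False, trust_remote_code=True
    )
    model.eval()
    # ...then reuse the fixed-shape inputs and torch.onnx.export call from above...
```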
You may close this issue. Thanks again.
I'm glad to have been able to help! Please reach out if you have any more questions :)