Llama.cpp has proven quite convenient for many of us; I wish you could add the ability to take those models as inputs. Many thanks!
@rcontesti - we have a work-in-progress prototype that converts fp16/fp32 GGUF versions of llama:
https://github.com/pytorch/executorch/tree/main/extension/gguf_util
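For anyone curious what the first step of such a conversion looks like, below is a minimal sketch (not ExecuTorch's actual converter API) that uses the `gguf` pip package to read a GGUF checkpoint and pull its fp16/fp32 tensors into a PyTorch state dict. The file path is hypothetical, and quantized tensor types are simply skipped here:

```python
# Sketch only: inspect a GGUF file and collect fp16/fp32 tensors as torch
# tensors. Assumes `pip install gguf torch`; the .gguf path is hypothetical.
import torch
from gguf import GGUFReader

reader = GGUFReader("llama-2-7b.fp16.gguf")  # hypothetical path

# GGUF stores model hyperparameters as key/value metadata fields.
for name in reader.fields:
    print(name)

# Each tensor record carries a name, shape, and quantization type.
# fp16/fp32 data can be wrapped directly via numpy; quantized types
# (Q4_0, Q8_0, ...) would need dequantization first, which is the
# harder part a real converter has to handle.
state_dict = {}
for t in reader.tensors:
    if t.tensor_type.name in ("F16", "F32"):
        state_dict[t.name] = torch.from_numpy(t.data.copy())
    else:
        print(f"skipping quantized tensor {t.name} ({t.tensor_type.name})")
```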
Once again, many thanks @mergennachin!