Why only the official Llama model from Meta? #10
I am trying to use
https://github.com/chaoyi-wu/PMC-LLaMA
https://huggingface.co/axiong/PMC_LLaMA_13B
instead of the official Llama model, and the repo doesn't let me.
Is there a reason why only the official Llama model from Meta is allowed?
Thanks.

Comments

I see! Can you please share the exception raised while trying this model? The use of the official Llama model was motivated by the aim of conducting a comparative analysis with GPT models. I thought the performance comparison would be fair if we used the official Llama model rather than a quantized version of it.
The issue was with the tokenizer used to download the model. Apparently, PMC_Llama requires 'LlamaTokenizer', whereas 'AutoTokenizer' was used for the Meta Llama model. In addition, PMC_Llama requires an additional 'legacy' flag to be configured.
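
For anyone hitting the same wall, here is a minimal sketch of the two loading paths, assuming the Hugging Face transformers library (this is not the exact KG-RAG code). The PMC_Llama repo ID is taken from the link above; the Meta repo ID and the value of the legacy flag are assumptions, since the thread does not spell them out.

```python
# A minimal sketch of the two loading paths, assuming the Hugging Face
# transformers library. The PMC_Llama repo ID comes from the link in this
# thread; the Meta repo ID and the value of `legacy` are assumptions.
from transformers import AutoTokenizer, LlamaTokenizer

# Meta Llama: AutoTokenizer resolves the correct tokenizer class on its
# own from the repo's config. (Gated repo; shown for contrast only.)
# meta_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-13b-chat-hf")

# PMC_Llama: the tokenizer class must be named explicitly, and the
# `legacy` flag must be configured when instantiating it.
pmc_tokenizer = LlamaTokenizer.from_pretrained(
    "axiong/PMC_LLaMA_13B",
    legacy=False,  # assumption: the flag's value is not stated in the thread
)
print(pmc_tokenizer.tokenize("KG-RAG with PMC_LLaMA_13B"))
```

With AutoTokenizer, the tokenizer class is inferred from the repo's configuration files, which is why the Meta model loads without naming LlamaTokenizer explicitly; PMC_Llama's repo does not resolve the same way, hence the explicit class and flag.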
All video demos in the README have been updated based on these changes, and I have cut a new release of KG-RAG to reflect them. I am closing this issue, since this should address the download of PMC_Llama. Feel free to re-open it if you hit any wall.