
release of LLAMA-I #9

Closed
bino282 opened this issue Feb 25, 2023 · 2 comments

Comments

@bino282

bino282 commented Feb 25, 2023

Do you have plans to release the instruction model LLAMA-I?

@glample
Contributor

glample commented Feb 27, 2023

LLAMA-I was only a very experimental model, and we have barely started working on instruction fine-tuning, so it will not be released in the short term. We will consider it in the future once we have made more progress in this direction.

@Franck-Dernoncourt

https://crfm.stanford.edu/2023/03/13/alpaca.html :

We introduce Alpaca 7B, a model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. Alpaca behaves similarly to OpenAI’s text-davinci-003, while being surprisingly small and easy/cheap to reproduce (<$600).

Weights aren't released yet either, though.

Liyang90 pushed a commit to Liyang90/llama that referenced this issue Jul 20, 2023
Add int8 weight-only quantized Linear Layer