Leveraging Llama 2 #75
I don't see any existing discussion about leveraging Meta's new Llama 2 model. Curious if you have any plans in the making for using this new base model in Gorilla.

Comments
Don't we have all the training data to just do that on our own? The fine-tuning shouldn't be that hard to get running.
Ugh, maybe not; see #46. I haven't read the Self-Instruct paper. Isn't it just doing inference to generate more training data? Maybe jsonformer is involved, I don't know. Edit: okay, so no jsonformer; GPT-4 was used for self-instruct.
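For readers unfamiliar with it, self-instruct in this setting just means prompting a strong model (GPT-4, per the paper) with a few seed examples and asking it to synthesize more instruction-to-API training pairs. Here is a minimal sketch using the openai Python client; the prompt template, seed example, and JSON output handling are assumptions for illustration, not the actual Gorilla pipeline:

```python
# Sketch of self-instruct-style data generation: show GPT-4 a few seed
# (instruction, api_call) pairs and ask it to synthesize new ones.
# The prompt wording and output schema are assumptions, not Gorilla's
# actual pipeline.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SEED_EXAMPLES = [
    {
        "instruction": "Classify the sentiment of a movie review.",
        "api_call": "pipeline('sentiment-analysis')",
    },
]

def generate_pairs(n: int = 5) -> list[dict]:
    prompt = (
        "Here are example (instruction, api_call) pairs:\n"
        + json.dumps(SEED_EXAMPLES, indent=2)
        + f"\n\nGenerate {n} new, diverse pairs as a JSON list "
        "in the same format. Output only the JSON."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    # The model may still wrap the list in prose; a production pipeline
    # would need a more robust parser (or constrained decoding) here.
    return json.loads(resp.choices[0].message.content)

if __name__ == "__main__":
    print(generate_pairs())
```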
Maybe this code will be shared? Given the nuances described in the paper, it should be relatively trivial to reproduce with some tinkering anyway.
Hey @TomExMachina, all the training data is at https://github.com/ShishirPatil/gorilla/tree/main/data/apibench. All files with the …
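For anyone who wants to poke at those files, a minimal loading sketch; the example filename and the one-JSON-record-per-line assumption are inferred from the repo layout rather than a documented schema:

```python
# Minimal sketch: load one of the APIBench training files. Assumes each
# non-empty line is a standalone JSON record; field names vary by file.
import json
from pathlib import Path

def load_apibench(path: str) -> list[dict]:
    records = []
    with Path(path).open() as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines
                records.append(json.loads(line))
    return records

# e.g. records = load_apibench("data/apibench/huggingface_train.json")
```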
Yes! We will release a LLaMA v2 version as soon as we can get our hands on some compute!
Hi @ShishirPatil, is the training code being released as well? Thanks!
Is there any how-to guide for fine-tuning/training, for those who are unfamiliar with the topic but would like to contribute?
@tonxxd There is a community-contributed PR in the works here: #59. Thanks for your interest @yordis! If you are interested in contributing APIs, we have a README: https://github.com/ShishirPatil/gorilla/tree/main/data#how-to-contribute. Let me know if you have any follow-up questions!
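Until an official guide lands, one common community recipe for this kind of fine-tuning is LoRA on a LLaMA-family checkpoint via Hugging Face transformers and peft. A sketch under stated assumptions: the model id, hyperparameters, prompt format, and dataset shape below are illustrative, not Gorilla's actual training setup:

```python
# Sketch of LoRA fine-tuning for a LLaMA-family model with transformers
# + peft. Model id, hyperparameters, and dataset shape are illustrative
# assumptions, not Gorilla's training configuration.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model
from datasets import Dataset

model_id = "meta-llama/Llama-2-7b-hf"  # gated; requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_id)

# Wrap the base model with low-rank adapters on the attention projections.
lora = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Toy dataset: each example pairs an instruction with its API call.
examples = [{"text": "### Instruction: detect objects in an image\n### API: ..."}]
ds = Dataset.from_list(examples).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gorilla-llama2-lora",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The appeal of LoRA here is that the base weights stay frozen and only small adapter matrices are trained, so this can fit on a single GPU where full fine-tuning of a 7B model would not.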