Training recipe?? #14

Closed
milsun opened this issue Mar 14, 2023 · 4 comments

Comments

milsun commented Mar 14, 2023

The blog says the training recipe is also released in the code, but I cannot find it. Could you update the repo with the code used for training the model, along with the required dependencies and a guide, so we can do the same, maybe with bigger models?
Thanks for this awesome repo.

@lxuechen (Collaborator)

Hi, thanks for your interest!

We will release the training code once the Hugging Face interface to LLaMA becomes stable (merged into main).

Our fine-tuning procedure is standard and was performed with Hugging Face's Trainer. You can see our hyperparameters here.
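For reference, here is a minimal sketch of what that standard setup looks like with Hugging Face's Trainer. The checkpoint path, data file, and hyperparameter values below are illustrative placeholders, not the exact configuration used for Alpaca (those are listed in the repo's README):

```python
# Minimal sketch of standard supervised fine-tuning with Hugging Face's Trainer.
# Checkpoint path, data file, and hyperparameters are placeholders, not the
# exact values used by the Alpaca authors.
import transformers
from datasets import load_dataset

model_path = "path/to/llama-7b-hf"  # hypothetical local checkpoint
tokenizer = transformers.AutoTokenizer.from_pretrained(model_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = transformers.AutoModelForCausalLM.from_pretrained(model_path)

# Assume each training example has already been rendered into a single "text"
# string (prompt template + response).
dataset = load_dataset("json", data_files="train.json")["train"]
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

args = transformers.TrainingArguments(
    output_dir="alpaca-sft",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    bf16=True,
    logging_steps=10,
)

trainer = transformers.Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    # mlm=False gives plain causal-LM (next-token) labels copied from input_ids
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```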

Hiusam commented Mar 15, 2023

Hi, do you fine-tune the model with next-token prediction, as in pre-training? And how much CUDA memory does training take? Is it possible to train with only one 80 GB A100 GPU?
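A rough back-of-envelope for full fine-tuning of a 7B-parameter model with standard mixed-precision Adam (ignoring activations, sharding, and offloading, all of which change the picture):

```python
# Rule-of-thumb memory for full fine-tuning with mixed-precision Adam:
# bf16 weights + bf16 grads + fp32 master weights + fp32 Adam moments.
params = 7e9
weights_bf16 = 2 * params              # ~14 GB
grads_bf16 = 2 * params                # ~14 GB
optimizer_fp32 = (4 + 4 + 4) * params  # master copy + momentum + variance, ~84 GB
total_gb = (weights_bf16 + grads_bf16 + optimizer_fp32) / 1e9
print(f"~{total_gb:.0f} GB before activations")  # > 80 GB, so a single A100 needs sharding or offload
```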

rtaori (Contributor) commented Mar 15, 2023

Hi all,

We have released the training code, see https://github.com/tatsu-lab/stanford_alpaca#fine-tuning. Please open a new issue for any further questions/concerns.

pGit1 commented Mar 25, 2023

@rtaori As far as I can tell from your code, it looks like standard teacher forcing (i.e. next-token prediction). Is that accurate?
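In the Hugging Face causal-LM setup, that objective is usually expressed by passing the input ids as the labels; the model shifts them one position internally when computing cross-entropy, and prompt tokens can be masked with -100 so the loss only covers the response. A sketch (details may differ from the repo's train.py):

```python
# Teacher-forcing / next-token prediction with a Hugging Face causal LM:
# labels are the input ids themselves; the model shifts them internally.
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("gpt2")  # small stand-in model
model = transformers.AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Instruction: say hello.\nResponse:"
response = " Hello!"
enc = tokenizer(prompt + response, return_tensors="pt")

labels = enc["input_ids"].clone()
prompt_len = len(tokenizer(prompt)["input_ids"])
labels[:, :prompt_len] = -100  # ignore prompt tokens in the loss

out = model(**enc, labels=labels)
print(out.loss)  # cross-entropy over next-token predictions for the response
```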
