
Code for training llama pro? #2

Open
yhyu13 opened this issue Jan 7, 2024 · 8 comments

Comments

@yhyu13

yhyu13 commented Jan 7, 2024

Hi,

If I understand correctly, you used the code from https://github.com/allenai/open-instruct as a base.

Would you release the full code for reproducing LLaMA2 Pro 8B?

Thanks!

@hills-code
Collaborator

Yes, of course. I will organize the code soon. Thanks for your interest.

@raghavgarg97

raghavgarg97 commented Jan 9, 2024

Hey @hills-code, could you also add the code for converting a model by adding identity blocks for training?
I am excited to use similar techniques on other open-source models like Qwen!
Thanks! :)

@hills-code
Collaborator


I have added the block expansion script under the scripts folder. You can check it for reference. Hope it helps!
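For anyone looking for the gist of that step in the meantime: below is a minimal sketch of LLaMA Pro-style block expansion, assuming a Hugging Face LlamaForCausalLM checkpoint. The function name, model path, output path, and group count are illustrative assumptions, not taken from the released script.

```python
# Minimal sketch: expand a LLaMA checkpoint by interleaving zero-initialized
# copies of existing decoder layers (identity blocks at initialization).
# Paths and num_groups are illustrative assumptions.
import copy

import torch
from transformers import LlamaForCausalLM


def expand_blocks(model, num_groups):
    """Append one copied layer after each group of original layers.

    Zeroing the attention output projection (o_proj) and the MLP output
    projection (down_proj) makes each copied layer contribute nothing to the
    residual stream, so the expanded model initially behaves exactly like
    the original one.
    """
    layers = model.model.layers
    group_size = len(layers) // num_groups
    expanded = torch.nn.ModuleList()
    for i, layer in enumerate(layers):
        expanded.append(layer)
        if (i + 1) % group_size == 0:
            new_layer = copy.deepcopy(layer)
            new_layer.self_attn.o_proj.weight.data.zero_()
            new_layer.mlp.down_proj.weight.data.zero_()
            expanded.append(new_layer)
    # Newer transformers versions track a per-layer index for the KV cache;
    # keep it consistent with the new ordering if the attribute exists.
    for idx, layer in enumerate(expanded):
        if hasattr(layer.self_attn, "layer_idx"):
            layer.self_attn.layer_idx = idx
    model.model.layers = expanded
    model.config.num_hidden_layers = len(expanded)
    return model


model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = expand_blocks(model, num_groups=8)   # 32 layers -> 40 layers
model.save_pretrained("./llama2-7b-expanded")
```

Because the copied blocks start out as identity mappings, the expanded checkpoint should produce the same outputs as the original before any further training.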

@raghavgarg97

I will check that out, thanks!

@raghavgarg97

raghavgarg97 commented Jan 10, 2024

@hills-code I was able to get it to work! By the way, do you train only the added layers, and not even the lm_head?
Also, do you think skipping the continued pretraining and going directly to SFT would work?
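For context on the "train only the added layers" setup: a minimal sketch of what that could look like, assuming the 32 -> 40 expansion above (so the copied blocks sit at indices 4, 9, ..., 39). The index set and checkpoint path are assumptions for illustration, not taken from the released code.

```python
# Minimal sketch: freeze everything (embeddings, lm_head, original layers)
# and leave only the copied blocks trainable. The index set assumes the
# 32 -> 40 expansion above and is illustrative.
from transformers import LlamaForCausalLM

model = LlamaForCausalLM.from_pretrained("./llama2-7b-expanded")

new_layer_indices = {4, 9, 14, 19, 24, 29, 34, 39}  # positions of the copied blocks

for param in model.parameters():
    param.requires_grad = False          # freeze the whole model first

for idx, layer in enumerate(model.model.layers):
    if idx in new_layer_indices:
        for param in layer.parameters():
            param.requires_grad = True   # un-freeze only the expanded blocks

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable}")
```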

@yhyu13
Author

yhyu13 commented Jan 12, 2024

@raghavgarg97 No, I don't think you can skip the post-training of the extra blocks on a new corpus (e.g. bigcode, as mentioned in the paper) before applying instruction tuning, because the purpose of LLaMA Pro is essentially to add new abilities without forgetting by introducing more layers. The new ability comes precisely from training the new layers on the new corpus.

@Abolfazl-kr

Abolfazl-kr commented Jan 13, 2024

Does anybody know how to continue pre-training LLaMA Pro? @yhyu13 @raghavgarg97 @yxgeee @hills-code

@yhyu13
Author

yhyu13 commented Jan 16, 2024

Nah, the training code has not been released yet.
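Until it is, a rough sketch of what continued pretraining of the expanded model could look like with a standard Hugging Face Trainer; the dataset path and hyperparameters below are placeholders, not the authors' settings.

```python
# Rough sketch: continued pretraining of the expanded model on a plain-text
# corpus with a standard Hugging Face Trainer. Dataset path and
# hyperparameters are placeholders, not the authors' settings.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    LlamaForCausalLM,
    Trainer,
    TrainingArguments,
)

model = LlamaForCausalLM.from_pretrained("./llama2-7b-expanded")
# (freeze everything except the copied blocks here, as in the sketch above)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token

# Any plain-text corpus works; "corpus.txt" is a stand-in for the new-domain data.
dataset = load_dataset("text", data_files={"train": "corpus.txt"})["train"]


def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=2048)


tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="./llama-pro-cpt",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-4,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```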
