
Your shared Llama-2 model is not LoRA-trained; it's a fully fine-tuned model. #25

Closed
jason9693 opened this issue Aug 7, 2024 · 1 comment


jason9693 commented Aug 7, 2024

In your paper, you said the Llama-2 model was trained with LoRA (i.e., with fewer trainable parameters), but your shared model (princeton-nlp/AutoCompressor-Llama-2-7b-6k) appears to be just a fully fine-tuned model.
https://huggingface.co/princeton-nlp/AutoCompressor-Llama-2-7b-6k/tree/main

If the model were LoRA-trained, it would have to be loaded with the PEFT library, but that model does not use PEFT; it loads with the transformers library alone.
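
For reference, one quick way to check whether a Hub checkpoint is a LoRA adapter or a merged full model is to list the repository files: a LoRA checkpoint ships an `adapter_config.json` plus adapter weights, while a full checkpoint ships only base-model weight shards. A minimal sketch using `huggingface_hub`:

```python
from huggingface_hub import list_repo_files

# List the files in the shared checkpoint on the Hugging Face Hub.
files = list_repo_files("princeton-nlp/AutoCompressor-Llama-2-7b-6k")
print(files)

# True for a LoRA adapter checkpoint, False for a merged/full checkpoint.
print("adapter_config.json" in files)
```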

Can you explain this?

[screenshot]
CodeCreator (Member) commented

We merge the LoRA weights into the main model at the end of training and before uploading to make it easier for folks to use the model without installing PEFT. See the method here.
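
A minimal sketch of what such a merge typically looks like with the PEFT library; the base-model name and adapter path below are placeholders, and the method linked above in the AutoCompressors repository is the authoritative implementation:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, then attach the trained LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # hypothetical path

# Fold the low-rank updates into the base weights and drop the PEFT wrappers.
merged = model.merge_and_unload()

# The saved checkpoint now contains full weights and loads with transformers alone.
merged.save_pretrained("merged-model")
```

After saving, `AutoModelForCausalLM.from_pretrained("merged-model")` works without PEFT installed, which matches what you see in the Hub repo: full weight shards and no `adapter_config.json`.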
