If the model were LoRA-trained, it would need the PEFT library to load,
but this model does not use PEFT; it loads with the transformers library alone.
Can you explain this?
jason9693 changed the title from "Your shared model with LLAMA-2 is not trained on Lora, It's full-finetuned model." to "Your shared model trained on LLAMA2 is not trained on Lora, It's full-finetuned model." on Aug 7, 2024
We merge the LoRA weights into the main model at the end of training and before uploading to make it easier for folks to use the model without installing PEFT. See the method here.
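The key point is that a LoRA update is just a low-rank matrix that can be folded into the frozen base weight, after which the checkpoint is indistinguishable from a fully fine-tuned one at load time. A minimal sketch of the arithmetic (NumPy stands in for the model weights; `d`, `r`, and `alpha` are illustrative values, and PEFT itself exposes this operation as `merge_and_unload()`):

```python
import numpy as np

# LoRA merging in miniature: the adapter's low-rank update (alpha / r) * B @ A
# is added into the frozen base weight W. The merged matrix then behaves like
# an ordinary dense weight and needs no PEFT machinery to load.
rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 16          # hidden size, LoRA rank, LoRA scaling (illustrative)
W = rng.standard_normal((d, d))  # frozen base weight
A = rng.standard_normal((r, d))  # LoRA down-projection
B = rng.standard_normal((d, r))  # LoRA up-projection

W_merged = W + (alpha / r) * (B @ A)

# Applying base weight + adapter separately gives the same output
# as applying the single merged weight:
x = rng.standard_normal(d)
assert np.allclose(W_merged @ x, W @ x + (alpha / r) * (B @ (A @ x)))
```

This is why the uploaded checkpoint loads with plain `transformers`: after merging, no adapter weights remain to be attached, even though training only updated the low-rank factors.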
In your paper,
you said the model was trained on LLAMA-2 with LoRA (i.e., with fewer trainable parameters),
but the shared model (princeton-nlp/AutoCompressor-Llama-2-7b-6k) appears to be a fully fine-tuned model:
https://huggingface.co/princeton-nlp/AutoCompressor-Llama-2-7b-6k/tree/main