[Usage] Error while using finetuned model #1519
Comments
It seems like you fine-tuned using LoRA, which means you have to run the merge_lora_weights.py script (located under the scripts/ folder) before trying inference.
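A minimal sketch of the merge step described above, assuming the LLaVA repo layout and that merge_lora_weights.py accepts --model-path, --model-base, and --save-model-path flags (the checkpoint paths here are placeholders, not from this issue):

```shell
# Merge LoRA adapter weights into the base model before inference.
# --model-path:       directory containing the LoRA fine-tune output
# --model-base:       the base model the LoRA was trained on
# --save-model-path:  where to write the merged, standalone model
python scripts/merge_lora_weights.py \
    --model-path ./checkpoints/llava-v1.5-7b-lora \
    --model-base liuhaotian/llava-v1.5-7b \
    --save-model-path ./checkpoints/llava-v1.5-7b-merged
```

After merging, inference should use the merged directory as the model path, without passing a separate base model.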
@itay1542 I have run merge_lora_weights.py and created the merged model, but I am getting an error at File "/home/skadmin/cx-research/core/Llava/llava/eval/run_llava.py", line 117, in eval_model. Here is the code:
@itay1542 I have fine-tuned the model without LoRA. What should I do for inference now? Kindly guide me.
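For a full (non-LoRA) fine-tune there is no adapter to merge, so the output directory should already be a standalone model. A hedged sketch, assuming LLaVA's llava.eval.run_llava entry point with its --model-path, --image-file, and --query flags (the checkpoint path and prompt are illustrative placeholders):

```shell
# Full fine-tune: point --model-path at the training output directory
# and do NOT pass --model-base (that flag is for unmerged LoRA checkpoints).
python -m llava.eval.run_llava \
    --model-path ./checkpoints/llava-v1.5-7b-full-finetune \
    --image-file ./examples/sample.jpg \
    --query "Describe this image."
```

If this still fails, check that the output directory contains the full set of weight shards and the config.json/tokenizer files, not just optimizer state.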
Describe the issue
Issue: I have fine-tuned llava-v1.5-7b, and the output directory contains some files. I then tried inference using this folder as model_path with liuhaotian/llava-v1.5-7b as the base model, but I am getting an error.
Command:
Log: