Thanks for your excellent work! I noticed that your training scripts use the "CodeLlama-${MODEL_SIZE}-Python-hf" weights. I constructed a small dataset and instead continued finetuning from your "llm-agents/tora-code-7b-v1.0" weights, but when I run inference with the finetuned weights, something seems to go wrong and the generated output is empty (""). Should the training script be modified if I want to start from the tora-code-7b-v1.0 weights? Could you help me? Thanks a lot!
Thanks for your kind help. The cause was that the vLLM library cannot load the safetensors files saved by the accelerate library. Also, when will the training data be released? Thanks a lot!
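For anyone else hitting the same empty-output problem, a minimal sketch of one possible workaround (not the repo's official fix; the paths below are placeholders) is to reload the finetuned checkpoint with transformers and re-save it without safetensors, so that a vLLM build lacking safetensors support can read plain `pytorch_model*.bin` shards:

```python
# Hypothetical workaround: convert an accelerate-saved safetensors checkpoint
# into regular PyTorch .bin shards that older vLLM versions can load.
from transformers import AutoModelForCausalLM, AutoTokenizer

ckpt_dir = "path/to/finetuned-tora-code-7b"      # placeholder: your finetuned checkpoint
out_dir = "path/to/finetuned-tora-code-7b-bin"   # placeholder: converted output directory

model = AutoModelForCausalLM.from_pretrained(ckpt_dir, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(ckpt_dir)

# safe_serialization=False writes pytorch_model*.bin instead of *.safetensors
model.save_pretrained(out_dir, safe_serialization=False)
tokenizer.save_pretrained(out_dir)
```

Pointing vLLM at `out_dir` should then load the weights normally.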
Great to hear you were able to solve the issue! As for the full training data, we're pushing to expedite the release, but I don't have a specific timeline due to the internal review process. Please stay tuned for updates!