How to train model with databricks-dolly-15k.jsonl dataset format. #13
@TapendraBaduwal you should probably wait until the model is fully trained before asking about this, but SFT was mentioned in one of the closed issues.
Our model can largely be plugged into repos that support Llama 2 (including BitsandBytes and SFT repos like FastChat). For your case, you need to find a training script that supports the databricks-dolly-15k.jsonl dataset format and change the model name to our released checkpoint. Just make sure you have the latest version of HF transformers to support MQA. We are working on fine-tuning our model as well and will probably release something next week.
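A minimal sketch of what this could look like with HF `transformers` and TRL's `SFTTrainer` (older TRL API with `dataset_text_field`); the checkpoint name, output paths, and hyperparameters below are placeholders, not the project's official recipe:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import SFTTrainer

# Placeholder: substitute the actual released TinyLlama checkpoint you want to fine-tune.
model_name = "PY007/TinyLlama-1.1B-intermediate-checkpoint"

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# databricks-dolly-15k records have instruction / context / response fields.
dataset = load_dataset("json", data_files="databricks-dolly-15k.jsonl", split="train")

def to_text(example):
    # Flatten each record into a single prompt + response string.
    context = f"\n{example['context']}" if example["context"] else ""
    example["text"] = (
        f"### Instruction:\n{example['instruction']}{context}\n\n"
        f"### Response:\n{example['response']}"
    )
    return example

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",   # column produced by to_text above
    max_seq_length=1024,
    args=TrainingArguments(
        output_dir="tinyllama-dolly",
        per_device_train_batch_size=4,
        num_train_epochs=3,
        learning_rate=2e-5,
        logging_steps=10,
    ),
)
trainer.train()
```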
@jzhang38 For fine-tuning I am using Parameter-Efficient Fine-Tuning (PEFT). PEFT supports the QLoRA method, which fine-tunes a small fraction of the LLM parameters with 4-bit quantization and then merges the adapter weights back into the base model. Is this the right way to fine-tune this tiny model?
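For reference, a minimal QLoRA-style sketch with `peft` and `bitsandbytes`; the checkpoint name, adapter paths, and LoRA hyperparameters are assumptions for illustration only:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, PeftModel, get_peft_model

# Placeholder checkpoint name; replace with the released TinyLlama checkpoint.
model_name = "PY007/TinyLlama-1.1B-intermediate-checkpoint"

# Load the base model in 4-bit (QLoRA-style quantized base).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(model_name, quantization_config=bnb_config)

# Attach LoRA adapters to the attention projections (Llama-style module names).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# ... train (e.g. with the SFTTrainer sketch above), then save only the adapter:
# model.save_pretrained("tinyllama-dolly-qlora")

# To merge the adapter weights afterwards, reload the base model un-quantized
# and fold the LoRA weights into it.
base = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
merged = PeftModel.from_pretrained(base, "tinyllama-dolly-qlora").merge_and_unload()
merged.save_pretrained("tinyllama-dolly-merged")
```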
How can I train the model with the databricks-dolly-15k.jsonl dataset format?
Can we fine-tune using BitsandBytes and SFT?