python train_qlora.py \
  --train_args_json chatGLM_6B_QLoRA.json \
  --model_name_or_path /rainbow/zhangjunfeng/bert_models/pytorch/chatglm2-6b \
  --train_data_path /rainbow/zhangjunfeng/ChatGLM-Efficient-Tuning/data/rb.jsonl \
  --lora_rank 4 \
  --lora_dropout 0.05 \
  --compute_dtype fp32
python=3.9, peft==0.4.0, bitsandbytes==0.41.0; the remaining dependencies were installed per requirements.txt.
After a normal training run there are indeed only these two files; the checkpoint directories contain more. So did you train for very few steps, perhaps so few that the step at which a checkpoint gets saved was never reached?
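For reference, a minimal sketch of why this happens, assuming the train_args JSON maps onto Hugging Face `TrainingArguments` with a step-based save strategy (all numbers below are illustrative assumptions, not values taken from chatGLM_6B_QLoRA.json or rb.jsonl): a `checkpoint-*` directory is only written every `save_steps` optimizer steps, so a run whose total step count stays below `save_steps` ends with only the final adapter files.

```python
import math

# Illustrative numbers only -- the real values come from chatGLM_6B_QLoRA.json
# and the size of rb.jsonl, neither of which is shown in this thread.
num_examples = 50            # rows in the training jsonl (assumed)
per_device_batch_size = 4    # per_device_train_batch_size (assumed)
gradient_accumulation = 4    # gradient_accumulation_steps (assumed)
num_train_epochs = 3         # num_train_epochs (assumed)
save_steps = 500             # save_steps (assumed)

steps_per_epoch = math.ceil(num_examples / (per_device_batch_size * gradient_accumulation))
total_steps = steps_per_epoch * num_train_epochs

# The Trainer writes output_dir/checkpoint-<step> only every `save_steps` steps,
# so with a tiny dataset total_steps can stay below save_steps and no
# checkpoint directory is ever created; only the final save remains.
print(f"total optimizer steps: {total_steps}")
print(f"checkpoints written:   {total_steps // save_steps}")
```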
It was indeed the dataset being too small that led to no checkpoints being saved.
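As a follow-up, one way to make checkpoints appear even on a very small dataset is to lower `save_steps` or switch to an epoch-based save strategy. A hedged sketch, assuming the script forwards its JSON arguments into a standard `TrainingArguments` object (the field values here are hypothetical, not the repository's defaults):

```python
from transformers import TrainingArguments

# Hypothetical override -- the actual keys in chatGLM_6B_QLoRA.json are not
# shown in this thread, but if they map onto TrainingArguments, either of
# these settings makes a checkpoint-* directory appear after a handful of steps.
args = TrainingArguments(
    output_dir="saved_files",       # assumed output directory name
    save_strategy="steps",          # save every `save_steps` optimizer steps
    save_steps=10,                  # small enough to trigger on a tiny dataset
    # save_strategy="epoch",        # alternative: save at the end of every epoch
    per_device_train_batch_size=4,
    num_train_epochs=3,
)
```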