
Is there a problem with chatglm2-6b LoRA fine-tuning? #27

Closed
Alwin4Zhang opened this issue Jul 26, 2023 · 2 comments
Comments

@Alwin4Zhang

(screenshot: output directory)
LoRA fine-tuning produced only these two files, and the model shows no sign of having been fine-tuned at all.
python train_qlora.py --train_args_json chatGLM_6B_QLoRA.json --model_name_or_path /rainbow/zhangjunfeng/bert_models/pytorch/chatglm2-6b --train_data_path /rainbow/zhangjunfeng/ChatGLM-Efficient-Tuning/data/rb.jsonl --lora_rank 4 --lora_dropout 0.05 --compute_dtype fp32

python=3.9
peft==0.4.0
bitsandbytes==0.41.0
Everything else was installed per requirements.txt.
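
For reference, a minimal sketch of loading those two saved adapter files back onto the base model to check whether the tuning had any effect. The adapter directory name is a placeholder; the base model path is the one from the command above:

```python
# Load the base ChatGLM2-6B model and apply the saved LoRA adapter (peft 0.4.0).
from transformers import AutoModel, AutoTokenizer
from peft import PeftModel

base_path = "/rainbow/zhangjunfeng/bert_models/pytorch/chatglm2-6b"
adapter_path = "saved_files"  # placeholder: dir holding adapter_config.json / adapter_model.bin

tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)
model = AutoModel.from_pretrained(base_path, trust_remote_code=True).half().cuda()
model = PeftModel.from_pretrained(model, adapter_path)  # wraps the base model with the LoRA weights
model.eval()

# If the adapter actually trained, responses should differ from the raw base model's.
response, _ = model.chat(tokenizer, "your test prompt here", history=[])
print(response)
```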

@shuxueslpi
Owner

After a normal training run there are indeed only these two files; the checkpoint directories contain more files.
So is it possible you trained for very few steps, so few that the step count for saving a checkpoint was never reached?
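
For context, whether any checkpoint-* directory appears depends on the save settings passed to the HF Trainer; a sketch with hypothetical values, assuming the train_args JSON maps onto transformers.TrainingArguments:

```python
# Hypothetical values illustrating why a short run produces no checkpoint-* dirs.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="saved_files",
    save_strategy="steps",
    save_steps=500,  # a checkpoint-* directory is written every 500 optimizer steps
    num_train_epochs=3,
    per_device_train_batch_size=4,
)
# If the dataset is tiny, the total optimizer steps may never reach save_steps,
# so no intermediate checkpoints are written; only the final adapter files
# (adapter_config.json / adapter_model.bin) are saved when training ends.
```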

@Alwin4Zhang
Author

> After a normal training run there are indeed only these two files; the checkpoint directories contain more files. So is it possible you trained for very few steps, so few that the step count for saving a checkpoint was never reached?

Yes, it was indeed too little data, which is why no checkpoints were saved.
