Only 7b, 14b, and 72b are available. What should I do in this case?
You can change the model path to 32b-chat, and we will add the new model config soon.
Thanks! I converted my fine-tuned 32b Qwen model to HF format myself, then ran the following command: `python run.py --datasets cmmlu_gen --hf-path /root/autodl-tmp/Qwen1.5-32B-Chat --model-kwargs device_map='auto' --tokenizer-kwargs padding_side='left' truncation='left' trust_remote_code=True --max-seq-len 300 --max-out-len 5 --batch-size 8 --num-gpus 1`. Do I need to add any other settings, such as a prompt template?