Multi-node multi-GPU model fine-tuning #666
Comments
I haven't launched training with deepspeed myself, so I'm not sure about the specific problem. Also, your training code looks outdated; the latest code has additional model-saving handling, so I suggest pulling the latest code. Model saving is normally done on the rank-0 node, so there may be a communication problem. When it hangs, does it stay stuck indefinitely?
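The "rank-0 saves" convention mentioned above is usually implemented with a rank guard. A minimal sketch (names are illustrative; the RANK environment variable is what torchrun/deepspeed launchers set for each process):

```python
import os

def is_main_process() -> bool:
    # torchrun / deepspeed launchers export RANK for every process;
    # by convention only rank 0 writes tokenizer/config/checkpoint files.
    return int(os.environ.get("RANK", "0")) == 0

# Hypothetical usage inside a training loop:
if is_main_process():
    pass  # save tokenizer / config / final weights here
```

Note that this guard only applies to non-collective file writes; DeepSpeed's own checkpoint saving is a collective operation and must be called on every rank.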
Have you tried single-node training? Does it hang there as well?
From your error output, synchronization between nodes appears to have failed, which caused the program to crash.
It may be an inter-node communication problem; I suggest starting by checking the machines and the environment configuration.
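For checking inter-node communication, NCCL's standard debugging environment variables are a common starting point. These knobs are an assumption on my part (they are not mentioned in this thread), but they are the usual first step when multi-node collectives hang:

```shell
# Standard NCCL env vars for diagnosing multi-node hangs (set before launching):
export NCCL_DEBUG=INFO            # print NCCL connection/transport logs
export NCCL_SOCKET_IFNAME=eth0    # pin NCCL to the interface that actually routes between nodes
export NCCL_IB_DISABLE=1          # optional: rule out InfiniBand by forcing TCP sockets
```

With NCCL_DEBUG=INFO, a node stuck in a collective will typically show which peer it is waiting on.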
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.
Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.
I haven't run the latest code; I got training working by following belle's code instead.
Checklist before submitting
Issue type
Model training and fine-tuning
Base model
LLaMA-7B
Operating system
Linux
Detailed description
Fine-tuning across multiple nodes with multiple GPUs using deepspeed. Training runs normally, but the process hangs indefinitely when saving the model.
The master node saved global_step; the worker nodes have the directory structure, but no model files inside it.
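A hang where the master writes part of a checkpoint while workers produce empty directories is the classic symptom of a collective call that only some ranks enter (for example, guarding a DeepSpeed `save_checkpoint` call with a rank check, when it must be called on every rank). The deadlock mechanism can be simulated with plain Python threads standing in for ranks (a sketch, not the project's actual code):

```python
import threading

WORLD_SIZE = 2
# Stand-in for a distributed collective: all "ranks" must arrive.
barrier = threading.Barrier(WORLD_SIZE)
results = {}

def worker(rank: int, skip_collective: bool) -> None:
    # If a non-zero rank skips the collective (e.g. save guarded by
    # `if rank == 0:`), rank 0 blocks waiting for peers that never arrive.
    if skip_collective and rank != 0:
        results[rank] = "skipped"
        return
    try:
        barrier.wait(timeout=1.0)  # short timeout so the demo terminates
        results[rank] = "passed"
    except threading.BrokenBarrierError:
        results[rank] = "hang (timed out)"

threads = [threading.Thread(target=worker, args=(r, True)) for r in range(WORLD_SIZE)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(results)  # rank 0 times out because rank 1 never entered the barrier
```

In a real run there is no timeout, so rank 0 simply hangs forever, which matches the behavior described above.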
Dependencies (required for code-related issues)
No response
Logs or screenshots
Master node file directory (screenshot)
Worker node file directory (screenshot)