
Multi-node multi-GPU model fine-tuning #666

Closed
5 tasks done
STHSF opened this issue Jun 25, 2023 · 10 comments

STHSF commented Jun 25, 2023

Checklist before submitting

  • Make sure you are using the latest code from the repository (git pull); some issues have already been resolved and fixed.
  • Since the related dependencies are updated frequently, make sure you follow the relevant steps in the Wiki.
  • I have read the FAQ section and searched the issues for this problem, and found no similar issue or solution.
  • Third-party plugin issues: e.g. llama.cpp, text-generation-webui, LlamaChat, etc.; it is also recommended to look for a solution in the corresponding project.
  • Model correctness check: be sure to check the model against SHA256.md; with an incorrect model, results and normal operation cannot be guaranteed.

Issue type

Model training and fine-tuning

Base model

LLaMA-7B

Operating system

Linux

Detailed description of the problem

I am doing multi-node multi-GPU fine-tuning with deepspeed. Training runs normally, but the process hangs whenever the model is saved.
The master node saves global_step; the worker node has the output directory, but there are no model files in it.

Dependencies (must be provided for code-related issues)

No response

Logs or screenshots

[screenshot: run log]
[screenshot: master node file listing]
[screenshot: worker node file listing]

iMountTai (Collaborator) commented

I haven't launched with deepspeed, so I'm not sure about the specific problem. Also, your training code looks out of date; the latest code has additional handling for model saving, so I suggest pulling the latest code. Model saving is normally done by the rank-0 node, so there may be a communication problem. When it gets stuck, does it stay stuck indefinitely?

STHSF (Author) commented Jun 26, 2023

I re-cloned the latest code (the latest code has a small problem; I have submitted a PR) and then tried multi-node multi-GPU fine-tuning with deepspeed again.
1. The problem above still exists: when the first iteration finishes, the model save still hangs, and after a while it fails with a timeout error.
2. deepspeed also saves the global_step results on the worker node (I am new to this, so I am not sure whether that is expected).
Screenshots of the hang and the error:
[screenshot]
[screenshot]
The launch script is as follows:
```bash
lr=1e-4
lora_rank=8
lora_alpha=16
lora_trainable="q_proj,v_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05

pretrained_model=/mnt/datawarehouse/LLM/llama-hf/llama-7b-hf
chinese_tokenizer_path=/home/liyu/workshop/LLM/Chinese-LLaMA-Alpaca/scripts/merge_tokenizer/merged_tokenizer_hf
dataset_dir=/home/liyu/workshop/LLM/Chinese-LLaMA-Alpaca/data/
per_device_train_batch_size=1
per_device_eval_batch_size=1
training_steps=100
gradient_accumulation_steps=1
output_dir=/mnt/datawarehouse/llmModel/belle/llama-7b-dsp
validation_file=/home/liyu/workshop/LLM/Chinese-LLaMA-Alpaca/data/alpaca_data_zh_51k.json

deepspeed_config_file=ds_zero2_no_offload.json

torchrun --nproc_per_node=2 --nnode=2 --node_rank=1 \
    --master_addr=192.168.11.37 --master_port=9901 run_clm_sft_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --per_device_eval_batch_size ${per_device_eval_batch_size} \
    --do_train \
    --do_eval False \
    --seed $RANDOM \
    --fp16 \
    --num_train_epochs 1 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.03 \
    --weight_decay 0 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy steps \
    --save_total_limit 3 \
    --evaluation_strategy steps \
    --eval_steps 10 \
    --save_steps 10 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --max_seq_length 512 \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --modules_to_save ${modules_to_save} \
    --lora_dropout ${lora_dropout} \
    --torch_dtype float16 \
    --validation_file ${validation_file} \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False
```
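For reference, with `--nnode=2` the same command has to be launched once on each machine: only `--node_rank` differs between them, and `--master_addr` must point at the machine that runs node rank 0. A minimal sketch of the two invocations, assuming 192.168.11.37 is the rank-0 node and with the training flags from the script above collected into a hypothetical `TRAIN_ARGS` bash array:

```bash
# On the master node (192.168.11.37); TRAIN_ARGS holds the flags from the script above,
# e.g. TRAIN_ARGS=(--deepspeed "${deepspeed_config_file}" --model_name_or_path "${pretrained_model}" ...)
torchrun --nproc_per_node=2 --nnode=2 --node_rank=0 \
    --master_addr=192.168.11.37 --master_port=9901 \
    run_clm_sft_with_peft.py "${TRAIN_ARGS[@]}"

# On the worker node, the only change is --node_rank:
torchrun --nproc_per_node=2 --nnode=2 --node_rank=1 \
    --master_addr=192.168.11.37 --master_port=9901 \
    run_clm_sft_with_peft.py "${TRAIN_ARGS[@]}"
```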

airaria (Contributor) commented Jun 26, 2023

Does it also hang when you try single-node training?

STHSF (Author) commented Jun 26, 2023

Single-node multi-GPU is fine; multi-node multi-GPU is not.
Model saving on a single node:
[screenshot: single-node output directory]

X-Buring commented

Judging from your error, synchronization between the nodes hit an exception, which crashed the program.
I am also trying multi-node training right now. A 2-node run succeeded once, but a 4-node run crashed in the same way: a socket timeout during training prevented the nodes from synchronizing with each other, and I have not found the cause yet. On top of that, my multi-node training is currently even slower than single-node training, and I have not figured out why either.

airaria (Contributor) commented Jun 28, 2023

It may be a communication problem between the machines; I suggest checking the machines and the environment configuration.
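A rough checklist for that kind of debugging, as a sketch only: verify basic reachability between the nodes and pin NCCL to the correct network interface before re-running training. The master address and port below come from the script above; `eth0` is an assumption and should be replaced by whichever interface actually carries the 192.168.11.x network:

```bash
# Run from the worker node: is the rendezvous endpoint reachable at all?
ping -c 3 192.168.11.37
nc -zv 192.168.11.37 9901          # is the master port open / not firewalled?

# Find the interface that carries the inter-node network, then pin NCCL to it on every node.
ip addr
export NCCL_SOCKET_IFNAME=eth0     # assumption: eth0 is that interface
export NCCL_IB_DISABLE=1           # assumption: no InfiniBand between the nodes, force TCP sockets
export NCCL_DEBUG=INFO             # log which transport NCCL actually selects
```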

github-actions bot commented Jul 5, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.

github-actions bot added the stale label Jul 5, 2023
github-actions bot commented Jul 9, 2023

Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.

github-actions bot closed this as not planned Jul 9, 2023
ymcui closed this as completed Jul 10, 2023
ReverseSystem001 commented

> Single-node multi-GPU is fine; multi-node multi-GPU is not. Model saving on a single node: [screenshot]

Did you solve it? My multi-node multi-GPU run keeps failing with a timeout error: [E ProcessGroupNCCL.cpp:828] [Rank 0] Watchdog caught collective operation timeout: WorkNCCL(SeqNum=2, OpType=BROADCAST, Timeout(ms)=1800000) ran for 1809337 milliseconds before timing out.
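Two knobs that may help narrow this down (a sketch, not a guaranteed fix): the 1800000 ms in that message is the default 30-minute process-group timeout, which the HF Trainer exposes as `--ddp_timeout` (in seconds), and PyTorch can be asked to abort with the underlying NCCL error instead of hanging until the watchdog fires:

```bash
# Fail fast with the underlying NCCL error instead of a silent hang.
# (Newer PyTorch versions spell this TORCH_NCCL_ASYNC_ERROR_HANDLING.)
export NCCL_ASYNC_ERROR_HANDLING=1
export NCCL_DEBUG=INFO

# The HF Trainer forwards --ddp_timeout (in seconds) to init_process_group, so adding e.g.
#   --ddp_timeout 7200
# to the torchrun command gives slow collectives (such as the gathers at the first
# checkpoint save) more headroom than the 1800-second default.
```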

STHSF (Author) commented Mar 4, 2024

I have not run the latest code; I got it working by referring to the belle code.
