fix training steps #669

Merged 4 commits on Jun 26, 2023
scripts/training/run_pt.sh (7 changes: 3 additions & 4 deletions)

@@ -11,8 +11,7 @@ dataset_dir=path/to/pt/data/dir
 data_cache=temp_data_cache_dir
 per_device_train_batch_size=1
 per_device_eval_batch_size=1
-training_steps=100
-gradient_accumulation_steps=1
+gradient_accumulation_steps=8
 output_dir=output_dir
 
 deepspeed_config_file=ds_zero2_no_offload.json
@@ -29,7 +28,7 @@ torchrun --nnodes 1 --nproc_per_node 1 run_clm_pt_with_peft.py \
     --do_train \
     --seed $RANDOM \
     --fp16 \
-    --max_steps ${training_steps} \
+    --num_train_epochs 1 \
     --lr_scheduler_type cosine \
     --learning_rate ${lr} \
     --warmup_ratio 0.05 \
@@ -38,7 +37,7 @@ torchrun --nnodes 1 --nproc_per_node 1 run_clm_pt_with_peft.py \
     --logging_steps 10 \
     --save_strategy steps \
     --save_total_limit 3 \
-    --save_steps 500 \
+    --save_steps 200 \
     --gradient_accumulation_steps ${gradient_accumulation_steps} \
     --preprocessing_num_workers 8 \
     --block_size 512 \
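In both scripts, the fixed `--max_steps` budget is replaced by one full training epoch, and gradient accumulation rises from 1 to 8, which multiplies the effective batch size. A minimal sketch of that arithmetic, assuming the single-node, single-GPU torchrun launch shown above (the variable names mirror the script; nothing below is added to the script itself):

```bash
#!/usr/bin/env bash
# Effective batch size = per-device batch * accumulation steps * number of processes.
# Values copied from run_pt.sh; nproc_per_node matches the torchrun flag above.
per_device_train_batch_size=1
gradient_accumulation_steps=8
nproc_per_node=1

effective_batch=$(( per_device_train_batch_size * gradient_accumulation_steps * nproc_per_node ))
echo "effective batch size: ${effective_batch}"   # prints 8 with these settings
```

Each optimizer step therefore consumes 8 sequences of `block_size` 512 instead of 1, at the cost of proportionally fewer optimizer steps per epoch.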
scripts/training/run_sft.sh (9 changes: 4 additions & 5 deletions)

@@ -10,8 +10,7 @@ chinese_tokenizer_path=path/to/chinese/llama/tokenizer/dir
 dataset_dir=path/to/sft/data/dir
 per_device_train_batch_size=1
 per_device_eval_batch_size=1
-training_steps=100
-gradient_accumulation_steps=1
+gradient_accumulation_steps=8
 output_dir=output_dir
 peft_model=path/to/peft/model/dir
 validation_file=validation_file_name
@@ -30,7 +29,7 @@ torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
     --do_eval \
     --seed $RANDOM \
     --fp16 \
-    --max_steps ${training_steps} \
+    --num_train_epochs 1 \
     --lr_scheduler_type cosine \
     --learning_rate ${lr} \
     --warmup_ratio 0.03 \
@@ -40,8 +39,8 @@ torchrun --nnodes 1 --nproc_per_node 1 run_clm_sft_with_peft.py \
     --save_strategy steps \
     --save_total_limit 3 \
     --evaluation_strategy steps \
-    --eval_steps 250 \
-    --save_steps 500 \
+    --eval_steps 100 \
+    --save_steps 200 \
     --gradient_accumulation_steps ${gradient_accumulation_steps} \
     --preprocessing_num_workers 8 \
     --max_seq_length 512 \
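Because training now runs for `--num_train_epochs 1` rather than a fixed step count, the number of optimizer steps depends on the dataset size, and the new `--eval_steps 100` / `--save_steps 200` cadence should be judged against that. A rough sketch of the relationship, assuming a hypothetical example count (`num_examples` is a placeholder for your dataset size, not a variable in the script):

```bash
#!/usr/bin/env bash
# Optimizer steps per epoch ~= examples / effective batch size (integer division).
# num_examples is a hypothetical dataset size, used for illustration only.
num_examples=16000
per_device_train_batch_size=1
gradient_accumulation_steps=8
nproc_per_node=1

steps=$(( num_examples / (per_device_train_batch_size * gradient_accumulation_steps * nproc_per_node) ))
echo "optimizer steps per epoch: ${steps}"              # 2000 with these numbers
echo "checkpoints: $(( steps / 200 )), evals: $(( steps / 100 ))"
```

With 2,000 steps that comes to roughly 10 step-based checkpoints (only the last 3 retained under `--save_total_limit 3`) and 20 evaluations; a dataset small enough to finish in under 200 optimizer steps would write no intermediate checkpoint during training at all.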