Question
I launch training with `nohup` and redirect output to a log file, but the information I want to print does not show up in the log:
```shell
nohup deepspeed llava/train/train_xformers.py \
    --lora_enable True --lora_r 128 --lora_alpha 256 --mm_projector_lr 2e-5 \
    --deepspeed ./scripts/zero3.json \
    --model_name_or_path /home/ma-user/work/embodied_models/llava-v1.5-7b \
    --version v1 \
    --data_path ./playground/data/llava_train_test.json \
    --image_folder /home/ma-user/work/embodied_data/ \
    --vision_tower /home/ma-user/work/embodied_models/clip-vit-large-patch14-336 \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --bf16 False \
    --output_dir ./checkpoints/llava-v1.5-7b-task-lora \
    --num_train_epochs 1 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 50000 \
    --save_total_limit 1 \
    --learning_rate 2e-4 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 False \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to tensorboard \
    > logs/llava-v1.5-7b-task-lora-1.log 2>&1
```
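One common cause of this symptom (an assumption, not confirmed from the issue itself): when stdout is redirected to a file, as `> logfile 2>&1` does, CPython switches from line buffering to block buffering, so `print` output can sit in the buffer for a long time before it reaches the log. A minimal sketch of the two standard workarounds:

```python
# When stdout is a terminal, CPython line-buffers; when it is redirected
# to a file (as with `> logfile 2>&1`), it block-buffers, so printed
# lines may not appear in the log until the buffer fills or the process
# exits. Two standard workarounds:

# 1. Flush explicitly on each print:
print("model_args:", flush=True)

# 2. Or run the whole process unbuffered via the documented CPython
#    environment variable, e.g.:
#        PYTHONUNBUFFERED=1 nohup deepspeed ... > logfile 2>&1
#    (`python -u` has the same effect for direct `python` invocations;
#    whether the deepspeed launcher propagates the variable to its
#    worker processes is an assumption to verify in your setup.)
```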
In train.py (around line 793):

```python
rank0_print("model_args:")
rank0_print(model_args)
rank0_print("data_args:")
rank0_print(data_args)
rank0_print("training_args:")
rank0_print(training_args)
```