
Continue FT from stage 2 with custom data #49

Closed
adrielkuek opened this issue Apr 15, 2024 · 2 comments
@adrielkuek

Hi, I was wondering whether the stage 2 script would be applicable for further fine-tuning from stage 2 with a small custom dataset for domain transfer, or do we have to write a separate script to do this?

Thanks, and I appreciate any help given!

Regards,

Adriel

@yanwei-li
Member

Hi, thanks for your interest. You can try the script below and replace the data with your custom dataset:

FINETUNE_NAME=Mini-Gemini-7B
STAGE3_NAME=Your_prefer_name
AUX_SIZE=768
deepspeed minigemini/train/train_mem.py \
    --deepspeed ./scripts/zero2_offload.json \
    --model_name_or_path ./work_dirs/$FINETUNE_NAME \
    --version v1 \
    --data_path ./data/MiniGemini-Finetune/minigemini_instruction.json \
    --image_folder ./data/MiniGemini-Finetune \
    --vision_tower model_zoo/OpenAI/clip-vit-large-patch14-336 \
    --vision_tower_aux model_zoo/OpenAI/openclip-convnext-large-d-320-laion2B-s29B-b131K-ft-soup \
    --mm_projector_type mlp2x_gelu \
    --mm_vision_select_layer -2 \
    --mm_use_im_start_end False \
    --mm_use_im_patch_token False \
    --image_aspect_ratio pad \
    --group_by_modality_length True \
    --image_size_aux $AUX_SIZE \
    --bf16 True \
    --output_dir ./work_dirs/$STAGE3_NAME \
    --num_train_epochs 1 \
    --per_device_train_batch_size 8 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 2 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 1000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 True \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --dataloader_num_workers 4 \
    --lazy_preprocess True \
    --report_to wandb
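
For the custom dataset itself, here is a minimal sketch of how the instruction file could be assembled, assuming Mini-Gemini expects the LLaVA-style conversation JSON for --data_path (the field names and the output file name custom_instruction.json below are illustrative assumptions; check them against the released minigemini_instruction.json). You would then point --data_path at the resulting file and --image_folder at the directory holding your images.

import json
from pathlib import Path

def build_record(sample_id: str, image_name: str, question: str, answer: str) -> dict:
    # One instruction-tuning record in LLaVA-style conversation format (assumed).
    return {
        "id": sample_id,
        "image": image_name,  # path relative to --image_folder
        "conversations": [
            {"from": "human", "value": f"<image>\n{question}"},
            {"from": "gpt", "value": answer},
        ],
    }

# Hypothetical custom domain samples: replace with your own data.
records = [
    build_record(
        "custom_0001",
        "custom_0001.jpg",
        "What defect is visible on this circuit board?",
        "There is a solder bridge between pins 3 and 4.",
    ),
]

out_path = Path("./data/MiniGemini-Finetune/custom_instruction.json")
out_path.parent.mkdir(parents=True, exist_ok=True)
out_path.write_text(json.dumps(records, indent=2))
print(f"Wrote {len(records)} records to {out_path}")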

@adrielkuek
Author

Hi Yanwei, apologies for the late response. Thank you very much for providing the guidance on this. Appreciate it!
