Thank you for your interest in Bunny.
24 GB per device is enough to pretrain and finetune Bunny. However, the actual GPU memory consumption depends on your base model, image resolution, and data.
For finetuning, setting the per-device batch size to 2 or 4 may work for you. In order to use the default learning rate in finetune_lora.sh, we recommend keeping the global batch size at 128. Global batch size = num of GPUs * batch size per GPU * accumulation steps. In your case, the num of GPUs is 4. All these parameters can be set in finetune_lora.sh. Similarly, set a batch size that fits your hardware for pretraining or full-parameter tuning.
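The relation above can be sketched as a small helper; the numbers (4 GPUs, per-device batch size 4, target global batch size 128) are taken from this thread, and the function name is illustrative, not part of Bunny's scripts:

```python
def accumulation_steps(global_batch: int, num_gpus: int, per_device_batch: int) -> int:
    """Gradient accumulation steps needed so that
    num_gpus * per_device_batch * steps == global_batch."""
    per_step = num_gpus * per_device_batch
    assert global_batch % per_step == 0, "global batch must divide evenly"
    return global_batch // per_step

# With 4x4090 and per-device batch size 4, reaching a global batch of 128
# requires 128 / (4 * 4) = 8 accumulation steps.
print(accumulation_steps(128, 4, 4))  # -> 8
```

Dropping the per-device batch size to 2 would instead require 16 accumulation steps to keep the same global batch of 128.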
Feel free to further comment on this issue if you meet any problems in using Bunny.
I only have 4x4090 cards; under this circumstance, can I finetune an MLLM?
How should I train? Thanks a lot.