
Differences in the checkpoint size #33

Closed
SushantGautam opened this issue Jun 15, 2023 · 1 comment

Comments

@SushantGautam
Contributor


The checkpoint produced by training with video_llama_stage1_pretrain.yaml is far smaller than your public checkpoint: finetune-vicuna7b-v2.pth is 254 MB, but the one I get is only about 37 MB.

Are you using a different training config from the one in video_llama_stage1_pretrain.yaml? Could you share your training config? I wanted to fine-tune by loading the checkpoint from finetune-vicuna7b-v2.pth, but I get mismatch errors.
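
For reference, a minimal sketch of how I compared the two files (it assumes the checkpoints follow the repo's usual {"model": state_dict} save layout, and "my_checkpoint.pth" is a placeholder for your own file; adjust as needed):

  import torch

  # Load both checkpoints on CPU and pull out the saved state dicts
  # (assumes the usual {"model": state_dict} layout; adjust the key if needed).
  public = torch.load("finetune-vicuna7b-v2.pth", map_location="cpu")["model"]
  mine = torch.load("my_checkpoint.pth", map_location="cpu")["model"]

  # Parameter tensors present in the public checkpoint but missing from mine;
  # these are the weights my run never saved, which would explain both the
  # size gap and the mismatch errors when loading.
  for name in sorted(set(public) - set(mine)):
      print(name, tuple(public[name].shape))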

@SushantGautam
Contributor Author

Ah, I figured it out. It was due to these missing params in the training config:

  frozen_video_Qformer: False
  frozen_audio_Qformer: True

After including these, the checkpoint size was 254 MB. Apparently only trainable parameters get saved, so the weights of frozen modules are left out of the checkpoint, which is why my file was so much smaller.
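
For anyone else hitting this, a sketch of where the flags sit in the config (the surrounding keys are illustrative and may differ from your copy of video_llama_stage1_pretrain.yaml; only the two frozen_* flags are the actual fix):

  model:
    arch: video_llama
    model_type: pretrain_vicuna
    # Unfreeze the video Q-Former so its weights are trained and
    # therefore written to the checkpoint.
    frozen_video_Qformer: False
    # Keep the audio Q-Former frozen during stage-1 video pretraining.
    frozen_audio_Qformer: True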
