
How to save the VQVAE's weight separately from the whole model? #81

Open · MingCongSu opened this issue Feb 15, 2024 · 1 comment

@MingCongSu
Hi, thanks for sharing the great work.👍
I am trying to reuse the motion tokenizer (VQVAE) in MotionGPT, so I have two questions:

  1. Is there a way to save the VQVAE's weights as a separate checkpoint file, rather than saving the whole MotionGPT model?
    Because I saw there is a load_pretrained_vae() function here:

    def load_pretrained_vae(cfg, model, logger=None):

  2. I checked the pre-trained model motiongpt_s3_h3d.tar and found that the checkpoint includes several parts (metrics, vae, lm, loss).
    Why does it contain so many parameters just for metrics?
    [screenshot: checkpoint contents]

It would be a big help if someone could reply, thanks🤗
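As an illustration of the second question, the per-module breakdown of a Lightning checkpoint can be inspected by grouping the state_dict keys by their top-level prefix. The sketch below uses a toy dict in place of the real state_dict (which you would obtain via torch.load("motiongpt_s3_h3d.tar", map_location="cpu")["state_dict"]); the key names here are hypothetical, not the actual MotionGPT keys.

```python
# Sketch: count entries per top-level module in a checkpoint's state_dict.
# A toy dict stands in for the real checkpoint; key names are illustrative.
from collections import Counter

state_dict = {
    "vae.encoder.conv1.weight": None,
    "vae.decoder.conv1.weight": None,
    "lm.transformer.wte.weight": None,
    "metrics.t2m_textencoder.weight": None,
    "metrics.t2m_moveencoder.weight": None,
}

# Group keys by the part before the first dot ("vae", "lm", "metrics", ...).
counts = Counter(key.split(".")[0] for key in state_dict)
print(dict(counts))  # → {'vae': 2, 'lm': 1, 'metrics': 2}
```

Running this on the real checkpoint would show how the parameters split between the VAE, the language model, and the metric models.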

@billl-jiang
Collaborator

Hello,

Thank you for your interest and support in our work. Regarding your queries:

  1. We currently use PyTorch Lightning's checkpoint callback, which saves all modules by default. We plan to add support for saving specific components, such as the VQVAE's weights, separately in a future version.
  2. The checkpoint contains many metric-related parameters because our metrics are computed with learned models. These parameters do not participate in the network's forward pass or inference; they are only used during metric computation.

We appreciate your suggestion and are looking into enhancing our model's usability in future updates.
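Until such an option exists, one possible workaround is to filter the full checkpoint's state_dict by module prefix and re-save only the VQVAE entries. The sketch below is a minimal illustration, assuming the VQVAE's keys carry a "vae." prefix (check your checkpoint's actual key names); a toy dict stands in for the loaded checkpoint, and all key names are made up.

```python
# Minimal sketch: keep only keys under the (assumed) "vae." prefix and strip it,
# so the result could be loaded directly into a standalone VQVAE module.
# Toy dict; in practice you would start from
#   state_dict = torch.load("motiongpt_s3_h3d.tar", map_location="cpu")["state_dict"]
state_dict = {
    "vae.encoder.conv1.weight": "w0",
    "vae.quantizer.codebook": "w1",
    "lm.transformer.wte.weight": "w2",
    "metrics.t2m_textencoder.weight": "w3",
}

prefix = "vae."
vae_state = {k[len(prefix):]: v
             for k, v in state_dict.items()
             if k.startswith(prefix)}
print(sorted(vae_state))  # → ['encoder.conv1.weight', 'quantizer.codebook']

# Then, with torch available:
#   torch.save({"state_dict": vae_state}, "vqvae_only.ckpt")
#   model.vae.load_state_dict(vae_state)
```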
