
Way to generate (infer) future frames conditioned on one or a few initial frames of a video sequence #18

Closed
horn-video opened this issue Dec 6, 2022 · 2 comments

Comments

@horn-video

horn-video commented Dec 6, 2022

Hi,
First of all, congratulations on this work!
My query is simple. Take, for example, the UCF101 dataset: if I randomly select a sequence from the CleanAndJerk category, can I generate future frames for that specific sequence using your model? If so, what changes should I make in the `sample_vqgan_transformer_short_videos.py` file?
I appreciate your time and help :)

@songweige
Owner

Hi,

Thank you for your question! That's actually possible and interesting, but I have never tried it. I think it would be most straightforward to do by modifying the `sample_vqgan_transformer_long_videos.py` file.

What you would need to do is replace the first few generated latent frames with the latents of the real conditioning frames, obtained from the encoder here:
https://github.com/SongweiGe/TATS/blob/8ea1b587a74736d420b70cc2b52ac1683682ec6c/scripts/sample_vqgan_transformer_long_videos.py#L126
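In other words, instead of sampling the codes for the first few latent frames, you encode the real conditioning frames into latent codes and use them as the prefix for autoregressive sampling; the transformer then only generates the remaining codes. A minimal sketch of that idea in plain Python (the helpers `encode_frames` and `sample_next_token` are stand-ins for the VQGAN encoder and the transformer's next-code prediction, not the actual TATS API, and `codes_per_frame` is a placeholder for the real per-frame latent size):

```python
def encode_frames(real_frames, codes_per_frame=4):
    """Stand-in for the VQGAN encoder: map each real frame to a fixed
    number of discrete latent codes (here just a deterministic toy rule)."""
    return [(i * 7 + j) % 1024
            for i, _frame in enumerate(real_frames)
            for j in range(codes_per_frame)]

def sample_next_token(context):
    """Stand-in for the transformer's next-code prediction."""
    return (sum(context) + len(context)) % 1024

def sample_conditioned(real_frames, total_frames, codes_per_frame=4):
    """Generate a full latent sequence whose first frames come from
    real video instead of being sampled."""
    # 1. Encode the conditioning frames instead of sampling their codes.
    latents = encode_frames(real_frames, codes_per_frame)
    # 2. Autoregressively sample only the remaining latent codes.
    target_len = total_frames * codes_per_frame
    while len(latents) < target_len:
        latents.append(sample_next_token(latents))
    return latents

# The prefix of the output is exactly the encoded real frames; the rest
# is sampled. The final step (not shown) would decode `latents` back to
# pixels with the VQGAN decoder, as the sampling script already does.
codes = sample_conditioned(["frame0", "frame1"], total_frames=5)
```

In the actual script, the analogous change would be to overwrite the first slice of the sampled latent tensor with the encoder output before (or during) the sampling loop at the line linked above, keeping the shapes and code-vocabulary indices consistent with what the transformer expects.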

@horn-video
Author

Thanks a lot! The suggestion worked.
