
16x or higher factor trained model #32

Closed
Youngtrue opened this issue Aug 10, 2021 · 2 comments


@Youngtrue

Hi Tarun,

It is remarkable that your inference for 16x and higher factors is faster than Super SloMo while still performing well. Will you publish the trained models for 16x and higher factors and add support for them afterward?

@tarun005
Owner

For 16x, we do not train a new model due to a lack of training data. Instead, we cascade the (8x, 2x) models to form a 16x model, and the (8x, 8x) models to form a 64x model. Note that it is still entirely possible to train an end-to-end 64x interpolation model; we simply did not do so because we do not have enough data.
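The cascading idea can be sketched as follows: an 8x pass inserts 7 intermediate frames between each consecutive pair of key frames, and a subsequent 2x pass inserts one more frame in every resulting interval, yielding 16x overall. This is a minimal illustration, not the repository's actual inference code; `blend` is a hypothetical stand-in for a real interpolation network, and scalars stand in for image tensors.

```python
def interpolate(model, frames, factor):
    """Insert (factor - 1) intermediate frames between each consecutive pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        # model(a, b, t) returns the frame at time t in (0, 1);
        # a trained network would go here.
        out.extend(model(a, b, (i + 1) / factor) for i in range(factor - 1))
    out.append(frames[-1])
    return out

# Placeholder "model": a linear blend between the two input frames.
def blend(a, b, t):
    return (1 - t) * a + t * b

frames = [0.0, 1.0]                    # two key frames (scalars stand in for images)
x8 = interpolate(blend, frames, 8)     # 8x pass: 8 intervals, 9 frames
x16 = interpolate(blend, x8, 2)        # cascaded 2x pass: 16 intervals, 17 frames
```

Cascading (8x, 8x) works the same way: running the 8x pass on its own output turns each of the 8 intervals into 8 sub-intervals, giving 64x.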

@Astrostellar

Hi, could you please share the pretrained model? The current links seem to be unavailable. Thanks a lot!
