It is remarkable that your inference speed at 16x and higher factors is faster than Super SloMo while still performing well. Will you release the trained models for 16x and higher factors and add support for them afterward?
For the 16x model, we did not train a new model, due to a lack of training data. Instead, we cascade an 8x model with a 2x model to form a 16x model, and cascade two 8x models to form a 64x model. Note that it is entirely possible to train an end-to-end 64x interpolation model; we simply did not do so because we do not have enough data.
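The cascading idea can be sketched with a toy stand-in for the model: run a lower-factor interpolator, then feed its output into another interpolator, so the factors multiply. The `interpolate_2x` midpoint function below is only a hypothetical placeholder for a trained network, not the repo's actual model:

```python
def interpolate_2x(frames):
    # Toy 2x "interpolation": insert the midpoint between consecutive frames.
    # A real pipeline would instead run a trained network to predict the
    # intermediate frame from its temporal neighbors.
    out = []
    for a, b in zip(frames, frames[1:]):
        out.extend([a, (a + b) / 2])
    out.append(frames[-1])
    return out

def cascade(frames, stages):
    # Apply each interpolation stage in sequence; the slow-down factors
    # multiply, e.g. (8x, 2x) -> 16x and (8x, 8x) -> 64x.
    for stage in stages:
        frames = stage(frames)
    return frames

# Two 2x stages behave like a single 4x model on this toy signal.
frames = [0.0, 8.0]
result = cascade(frames, [interpolate_2x, interpolate_2x])
# result is [0.0, 2.0, 4.0, 6.0, 8.0]
```

The same composition applies at any factor: the output frame sequence of the first model simply becomes the input sequence of the next.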