Is it possible to adopt your temporal loss on other video tasks? #3
Thank you very much for your advice, I'll try it.
I have tried your temporal loss on video depth estimation. I apply your loss between frames i and i+3, using the model predictions and the depth ground truth, without changing any other parameters. In my case the loss does not work well: it has no obvious effect on training, i.e. the model's temporal consistency neither clearly improves nor degrades when the loss is added. I assume there might be 2 reasons:
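For reference, a common flow-free temporal loss of the kind described above penalizes the mismatch between the predicted change and the ground-truth change between two frames. This is only a minimal sketch of that idea, not necessarily the exact loss from this repository; the function name and the frame pairing (i, i+3) follow the usage described in the comment:

```python
import numpy as np

def temporal_consistency_loss(pred_i, pred_j, gt_i, gt_j):
    """Hypothetical sketch of a flow-free temporal loss.

    Penalizes the difference between how the prediction changes
    from frame i to frame j and how the ground truth changes over
    the same pair of frames (e.g. j = i + 3).
    """
    pred_change = pred_j - pred_i   # temporal change of the prediction
    gt_change = gt_j - gt_i         # temporal change of the ground truth
    return float(np.mean(np.abs(pred_change - gt_change)))
```

If the predicted depth changes exactly as the ground-truth depth does between the two frames, this loss is zero regardless of any per-frame bias, which is one possible reason such a term can have little visible effect on per-frame accuracy.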
Hi! Thanks for the great work and for releasing the code.
My question is: is it possible to apply your temporal loss to other video tasks such as video semantic segmentation and video depth estimation? In those areas, most temporal losses are based on optical-flow warping, which is quite time-consuming during training. Your temporal loss is applied to RGB outputs. Could it be extended to semantic results or depth maps?
By the way, is temporal_loss_mode == 2 worse than temporal_loss_mode == 1 in your case? What's the reason for that?
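The optical-flow-based warping loss mentioned above can be sketched as follows. This is a generic illustration, not code from this repository: it assumes a precomputed backward flow field and uses simple nearest-neighbour sampling for the warp (real implementations typically use bilinear sampling and an occlusion mask):

```python
import numpy as np

def flow_warp_loss(pred_t, pred_prev, flow):
    """Hypothetical sketch of an optical-flow warping loss.

    Warps the previous frame's prediction into the current frame
    using a precomputed backward flow field of shape (H, W, 2),
    then compares it against the current prediction.
    """
    h, w = pred_t.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Nearest-neighbour backward warping; clip to stay inside the image.
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    warped = pred_prev[src_y, src_x]
    return float(np.mean(np.abs(pred_t - warped)))
```

The training-time cost the question refers to comes from estimating the flow field itself, not from the warp or the L1 comparison, which are cheap.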