How to process the video that has more than 36 frames? #3
Comments
@jxiangli If the video is longer than 36 frames, you could split it into several clips, and then link them based on the results in the overlapping frames using traditional post-processing.
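The split-and-link idea above can be sketched in a few lines. This is only an illustration, not code from the VisTR repo: the clip length, overlap size, greedy mask-IoU matching, and all function names below are my assumptions about what "traditional post-processing" could look like.

```python
import numpy as np

def split_into_clips(num_frames, clip_len=36, overlap=6):
    """Split a frame range into fixed-length clips, each sharing
    `overlap` frames with the previous clip (illustrative values)."""
    starts, start = [], 0
    while True:
        starts.append(start)
        if start + clip_len >= num_frames:
            break
        start += clip_len - overlap
    return [(s, min(s + clip_len, num_frames)) for s in starts]

def mask_iou(a, b):
    """IoU between two boolean instance masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union > 0 else 0.0

def link_tracks(prev_masks, next_masks, iou_thresh=0.5):
    """Greedily match instance ids across two clips by mask IoU in a
    shared overlapping frame. Inputs: dict of id -> boolean mask.
    Returns a mapping from next-clip id to previous-clip id."""
    matches, used = {}, set()
    for pid, pm in prev_masks.items():
        best, best_iou = None, iou_thresh
        for nid, nm in next_masks.items():
            if nid in used:
                continue
            iou = mask_iou(pm, nm)
            if iou > best_iou:
                best, best_iou = nid, iou
        if best is not None:
            matches[best] = pid
            used.add(best)
    return matches
```

For example, a 100-frame video with 36-frame clips and 6 overlapping frames yields clips (0, 36), (30, 66), (60, 96), (90, 100); each consecutive pair shares 6 frames in which the predicted masks can be matched. A Hungarian assignment on the IoU matrix would be a more robust alternative to the greedy loop.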
Thanks for your reply~
@jxiangli That's right.
Thank you very much!
@jxiangli @noobying Thanks for pointing it out; I have not tried YouTube VIS 2021 before. In the original setting of VisTR, the number of instance queries is proportional to the number of input frames, and the instance queries are a fixed number of learned embeddings, so the number of input frames is fixed. However, in row 3 of Table 1(b) of the paper, we also experiment with instance-level queries that do not depend on the number of frames. In this way, the model can process a dynamic number of frames. But too many frames could also raise a memory issue; we leave that as future work.
Thanks for your excellent work.
I have one question about the inference process. If a video is longer than 36 frames, how do you link the tracks from different clips?
Looking forward to your reply.