
I recently started working on video inpainting. The DAVIS and YouTube-VOS test sets provide only one mask per video. How did you use these datasets to run your evaluation? #7

Open
sangruolin opened this issue May 30, 2021 · 3 comments

Comments

@sangruolin

No description provided.

@ruiliu-ai
Owner

Actually, we usually don't use object masks to evaluate the model, since there is no ground truth behind a removed object. Instead, we randomly generate a sequence of masks and compute the difference between the reconstructed output video and the original video.
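For readers new to this protocol, here is a minimal sketch in Python/NumPy of what such an evaluation can look like. Everything below (the rectangular mask generator, the PSNR routine, the shapes and seed) is a hypothetical illustration, not this repo's actual evaluation code; published protocols often use free-form brush-stroke or moving masks instead.

```python
import numpy as np

def random_mask_sequence(num_frames, height, width, seed=0):
    """Generate one random rectangular hole per frame.

    Hypothetical stand-in for the paper's mask generator; real
    protocols often use free-form strokes or a moving rectangle.
    """
    rng = np.random.default_rng(seed)
    masks = np.zeros((num_frames, height, width), dtype=np.uint8)
    for t in range(num_frames):
        h = rng.integers(height // 4, height // 2)
        w = rng.integers(width // 4, width // 2)
        y = rng.integers(0, height - h)
        x = rng.integers(0, width - w)
        masks[t, y:y + h, x:x + w] = 1  # 1 marks the region to inpaint
    return masks

def psnr(original, reconstructed, max_val=255.0):
    """PSNR between the original and the inpainted video."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Usage: corrupt the original clip with the masks, run the model on the
# corrupted frames, then score the output against the clean frames.
video = np.random.randint(0, 256, size=(10, 240, 432, 3), dtype=np.uint8)  # stand-in for a DAVIS clip
masks = random_mask_sequence(10, 240, 432, seed=2021)
corrupted = video * (1 - masks[:, :, :, None])  # zero out the masked pixels
# output = model(corrupted, masks)              # inpainting model omitted
# print(psnr(video, output))
```

Fixing `seed` makes the masks reproducible on one machine, but comparability across papers requires publishing either the masks themselves or the exact generator and seed.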

@Feynman1999
Copy link

> Actually, we usually don't use object masks to evaluate the model, since there is no ground truth behind a removed object. Instead, we randomly generate a sequence of masks and compute the difference between the reconstructed output video and the original video.

How do you select the random seed for generating the test masks? And how do you make sure that every paper uses the same setting? I can't find the test masks in the dataset. Looking forward to your reply, thanks.

@Feynman1999
Copy link

Any answer?
