The paper mentions that ImageCut2Video is used to create a synthetic video dataset from pseudo-annotations generated by MaskCut. I understand these are produced from video_imagenet_train_fixsize480_tau0.15_N3.json, but the code appears to rely on internal Detectron2 APIs.
Is there a guide or method for viewing this synthetic data?
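For context, here is the kind of inspection I have in mind. This is only a sketch under the assumption that the JSON follows a COCO-style detection layout (`images` / `annotations` keys with `image_id` links); the field names and the sample record below are my guesses, not taken from the repository:

```python
import json
from collections import defaultdict

def group_annotations_by_image(coco):
    """Index a COCO-style dict: image metadata by id, and the list of
    pseudo-annotations attached to each image, for per-frame inspection."""
    images = {img["id"]: img for img in coco["images"]}
    per_image = defaultdict(list)
    for ann in coco["annotations"]:
        per_image[ann["image_id"]].append(ann)
    return images, dict(per_image)

# Tiny hand-made stand-in for the real JSON (structure is an assumption);
# in practice one would do: coco = json.load(open("video_imagenet_train_fixsize480_tau0.15_N3.json"))
sample = {
    "images": [
        {"id": 1, "file_name": "n01440764_001.JPEG", "height": 480, "width": 480},
    ],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 1,
         "bbox": [50, 60, 120, 90], "segmentation": []},
    ],
}

images, per_image = group_annotations_by_image(sample)
print(images[1]["file_name"], len(per_image[1]))
```

If the file really is COCO-style, the per-image annotation lists could then be drawn onto the source images with any standard visualizer, without touching Detectron2's internal APIs.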
Thank you for the great work!