
HumorQA set annotation: Mismatch between ground-truth annotation timestamps and video sample timestamps #15

Open
Hasnat79 opened this issue Jun 19, 2024 · 4 comments
@Hasnat79

Hi @Nicous20 and everyone,
While going through the test samples and annotations, I found that the ground-truth start and end timestamps do not match the test sample duration.
[screenshot showing the mismatch between ground-truth timestamps and the sample video duration]

Could you please provide the annotations aligned with the sample video durations? That would make it possible to compute IoU scores for localization tasks on this valuable dataset; otherwise, we cannot match the ground-truth time spans with the predicted time spans.
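For reference, this is the kind of temporal IoU we mean (a minimal sketch; the function name and example spans are illustrative, not taken from the dataset code):

```python
def temporal_iou(gt, pred):
    """IoU of two temporal spans given as (start, end) in the same unit."""
    gt_start, gt_end = gt
    pred_start, pred_end = pred
    intersection = max(0.0, min(gt_end, pred_end) - max(gt_start, pred_start))
    union = (gt_end - gt_start) + (pred_end - pred_start) - intersection
    return intersection / union if union > 0 else 0.0

# e.g. ground truth 0-6 s vs. prediction 2-6 s -> IoU = 4/6 ≈ 0.67
print(temporal_iou((0.0, 6.0), (2.0, 6.0)))
```

For this to be meaningful, both spans must be expressed on the same time axis as the video sample, which is why the alignment above matters.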

Thanks.

@Jingkang50
Collaborator

@Nicous20 I think the output is a frame id, right?
frame_id = idx_in_second * fps
Could you confirm what fps is in use?

Jingkang50 assigned Jingkang50 and Nicous20 and unassigned Jingkang50 on Jun 19, 2024
@Nicous20
Owner

@Hasnat79
HumorQA and MagicQA annotations are frame counts, while CreativeQA annotations are in seconds.
0 200 means frames 0-200 (the whole video), and the fps setting is 30.
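So converting an annotated span to seconds would look roughly like this (a minimal sketch; only fps = 30 and the frame/second split above come from the dataset, the helper name and argument layout are illustrative):

```python
FPS = 30  # fps setting stated above

def span_to_seconds(start, end, subset):
    # HumorQA and MagicQA annotations count frames; CreativeQA is already in seconds.
    if subset in ("HumorQA", "MagicQA"):
        return start / FPS, end / FPS
    return float(start), float(end)

# "0 200" in HumorQA -> frames 0-200 -> roughly 0.0 to 6.67 seconds
print(span_to_seconds(0, 200, "HumorQA"))
```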

@Hasnat79
Author

Hi guys,
Now it makes sense: 200 frames // 30 fps ≈ 6 seconds. Can you please tell me approximately what percentage of cases have the whole video (not just a small segment) annotated as humorous/funny?

Thanks.

@Nicous20
Owner

@Hasnat79
Annotators were asked to avoid selecting the entire video as the segment as much as possible, so the proportion is very small, as shown in Figure 2(c) (the red areas indicate the frequently selected regions).
[Figure 2(c): red areas indicate the frequently selected regions]
