
Some questions about video classification. #21

Closed
RongchangLi opened this issue Feb 28, 2022 · 8 comments

@RongchangLi
Hello, this is great work, but there are some things I want to ask:

1. How do I prepare the train and val list files for the Something-Something V1 dataset? Are they in the same format as the Something-Something V2 files mentioned in dataset.md? Could you please provide them?

2. I previously used the tools in the TSM project to extract frames for Something-Something V2. The difference is that the extracted frames are sparser: according to the train.csv you provide in dataset.md, video 1 has 117 frames in total, while with the TSM tools it has 45. So why do you adopt a much denser extraction rate, and how do different rates affect the final performance?
@Andy1621 (Collaborator) commented Feb 28, 2022

Thanks for your question.

For Something-Something V1, I remember that 20BN provides the extracted frames.
For Something-Something V2, as stated in my README, you have to extract the frames at 30 FPS. Such dense extraction is a common setting in MMAction2. It will improve the performance slightly (<0.5% top-1 accuracy).
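The thread does not show the exact extraction command, but dense extraction at 30 FPS is typically done with ffmpeg. The sketch below builds such a command; the output directory layout and the `img_%05d.jpg` naming pattern are assumptions for illustration, not something this repo prescribes, so adjust them to your own dataset layout.

```python
import os


def ffmpeg_extract_cmd(video_path, out_dir, fps=30):
    """Build an ffmpeg command that dumps frames at a fixed rate.

    The frame-name pattern and directory layout are illustrative
    assumptions; match them to whatever your data loader expects.
    """
    pattern = os.path.join(out_dir, "img_%05d.jpg")
    return [
        "ffmpeg",
        "-i", video_path,   # input video (e.g. a .webm from SSV2)
        "-r", str(fps),     # resample the output to 30 FPS
        "-q:v", "2",        # high JPEG quality
        pattern,
    ]


# Example: print the command for one hypothetical video.
cmd = ffmpeg_extract_cmd("1.webm", "frames/1")
print(" ".join(cmd))
```

Run the returned command per video (e.g. via `subprocess.run`); with a fixed `-r 30`, every extraction of the same video yields the same frame count, which is why the provided frame lists then line up.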

@RongchangLi (Author) commented Feb 28, 2022

Thanks for your explanation, but I am still a little confused. So, for SS-V1, I should use files such as somesomev1_rgb_{}_split.txt in ./data_list/sthv1/ to build the dataset. For SS-V2, I should download all the annotation JSON files and the frame lists you provide to build the dataset. Is that so?

@Andy1621 (Collaborator) commented Feb 28, 2022

Actually, for SSV2, the annotation JSON files contain the object labels.
The frame lists I provided only contain paths, frame numbers, and labels. You can simply extract frames at 30 FPS, and the frame numbers will then match.
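For anyone building a dataset from these lists: assuming each line holds a frame directory, a frame count, and a label separated by spaces (the common TSM/MMAction-style layout; verify against the actual files in data_list/sthv2), a minimal parser looks like this:

```python
def parse_split_file(text):
    """Parse lines of the form '<frame_dir> <num_frames> <label>'.

    The space-separated three-column format is an assumption based on
    TSM-style frame lists; check it against data_list/sthv2 before use.
    Splitting from the right keeps frame directories containing spaces
    intact.
    """
    samples = []
    for line in text.strip().splitlines():
        path, num_frames, label = line.rsplit(" ", 2)
        samples.append((path, int(num_frames), int(label)))
    return samples


# Hypothetical lines: video 1 with 117 frames, label 27, etc.
print(parse_split_file("1 117 27\n2 45 3"))
```

Each tuple then maps directly onto a dataset sample: where to read frames from, how many there are, and the class index.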

@RongchangLi (Author)

Oh, it seems I have found what I was confused about. I just need to use the split files in ./data_list/sthv2 to build the SS-V2 dataset (if I re-extract frames at 30 FPS), and the annotation JSON files can be used to create those split files. My confusion came from the train/val files in DATASET.md: it seems that train.csv and val.csv won't be used when building the dataset object, but they are referenced when extracting frames. Is that so?

@Andy1621 (Collaborator)

I remember that all the videos are in one directory. You can just extract all the videos, and find the train/val videos according to the provided split files.

@Andy1621 (Collaborator)

Moreover, the train/val files in DATASET.md are provided by MMAction. You can simply use the file in data_list/sthv2.

@RongchangLi (Author)

OK, thanks very much!

@Andy1621 (Collaborator) commented Mar 1, 2022

As there is no more activity, I am closing the issue; don't hesitate to reopen it if necessary.

@Andy1621 Andy1621 closed this as completed Mar 1, 2022