Demo.py #56

Closed
IDayday opened this issue Jul 26, 2020 · 13 comments

@IDayday

IDayday commented Jul 26, 2020

When I try to run demo.py to test my own video, I use:
python demo/demo.py configs/recognition/slowfast/slowfast_r50_4x16x1_256e_kinetics400_rgb.py demo/checkpoints/slowfast_r50_4x16x1_256e_kinetics400_rgb_20200618-9a124260.pth demo/test1.mp4 demo/label_map.txt

but it fails with:
Traceback (most recent call last):
File "demo/demo.py", line 35, in
main()
File "demo/demo.py", line 27, in main
results = inference_recognizer(model, args.video, args.label)
File "/dat01/wangbo2/ZT/mmaction2/mmaction/apis/inference.py", line 63, in inference_recognizer
data = test_pipeline(data)
File "/dat01/wangbo2/ZT/mmaction2/mmaction/datasets/pipelines/compose.py", line 41, in call
data = t(data)
File "/dat01/wangbo2/ZT/mmaction2/mmaction/datasets/pipelines/loading.py", line 582, in call
directory = results['frame_dir']
KeyError: 'frame_dir'

My environment:
Python 3.6
PyTorch 1.3
Everything else follows the requirements.

I need your help!

@innerlee
Contributor

innerlee commented Jul 26, 2020

@dreamerlin maybe we should make a dummy run of demo.py in the test to make sure it works

@innerlee innerlee added the question Further information is requested label Jul 26, 2020
@innerlee
Contributor

Currently demo.py is written for videos, so you need to change the dataset type to video, as in https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py#L21 , and also change the testing pipeline to use a video loader such as decord, as in https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py#L70
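A minimal sketch of those two changes, mirroring the linked TSN video config (the sampling values below are illustrative and should be copied from that file):

dataset_type = 'VideoDataset'   # instead of 'RawframeDataset'

test_pipeline = [
    dict(type='DecordInit'),    # open the video file with decord
    dict(
        type='SampleFrames',
        clip_len=1,
        frame_interval=1,
        num_clips=8,
        test_mode=True),
    dict(type='DecordDecode'),  # decode only the sampled frames
    # ...the remaining resize/crop/normalize/format/collect steps stay as
    # in the original test pipeline...
]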

@IDayday
Author

IDayday commented Jul 26, 2020

Thanks! I will try that.

@dreamerlin
Collaborator

Rawframe inference in the demo script will be supported in this PR.

@IDayday
Author

IDayday commented Jul 26, 2020

Sorry, I still have a problem. I know that demo.py is written for videos, so I put my own video, named test1.mp4, in the demo folder.
I changed the command to:
python demo/demo.py configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py demo/checkpoints/tsn_r50_video_1x1x8_100e_kinetics400_rgb_20200702-568cde33.pth demo/test1.mp4 demo/label_map.txt
(I originally wanted to use the SlowFast model, but I couldn't find any checkpoints trained on videos in the Model Zoo, so I followed your advice and used TSN.)
But it still doesn't work.
I have provided the video path (demo/test1.mp4) on the command line. Should I modify the config?

@innerlee
Contributor

You can run videos on models that were trained with rawframes. Video/rawframe are input formats; they are not tied to models.

@dreamerlin could you please check why tsn_r50_video_1x1x8_100e_kinetics400_rgb.py does not work?

@IDayday
Author

IDayday commented Jul 26, 2020

This is the error message I got when I used TSN:

Traceback (most recent call last):
File "demo/demo.py", line 35, in <module>
main()
File "demo/demo.py", line 27, in main
results = inference_recognizer(model, args.video, args.label)
File "/dat01/wangbo2/ZT/mmaction2/mmaction/apis/inference.py", line 64, in inference_recognizer
data = collate([data], samples_per_gpu=1)
File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 82, in collate
for key in batch[0]
File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 82, in <dictcomp>
for key in batch[0]
File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 78, in collate
return [collate(samples, samples_per_gpu) for samples in transposed]
File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 78, in <listcomp>
return [collate(samples, samples_per_gpu) for samples in transposed]
File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 78, in collate
return [collate(samples, samples_per_gpu) for samples in transposed]
File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/site-packages/mmcv/parallel/collate.py", line 78, in <listcomp>
.....
(the pattern above repeats for nearly 2000 lines)
.....
if not isinstance(batch, Sequence):
File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/abc.py", line 184, in __instancecheck__
if subclass in cls._abc_cache:
File "/dat01/wangbo2/anaconda3/envs/zt/lib/python3.6/_weakrefset.py", line 75, in __contains__
return wr in self.data
RecursionError: maximum recursion depth exceeded in comparison

@dreamerlin
Collaborator

Quoting the comment above ("Sorry, I still have a problem... Should I modify the config?"):

You can try writing a tsn_r50_video_inference_1x1x8_100e_kinetics400_rgb.py, modeled on tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py, by reusing the test-related settings from tsn_r50_video_1x1x8_100e_kinetics400_rgb.py.

Since it is for running inference on a single video, here are some hints on params to modify (a config sketch follows the list):

  • dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]) -> dict(type='Collect', keys=['imgs'], meta_keys=[]): remove label from keys, since we don't need to calculate the top_k_accuracy.
  • Set ann_file to None.
  • Set data_prefix to None, since the path you pass (demo/test1.mp4) already includes its directory.
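A minimal sketch of the resulting test section, assuming the rest of the file is copied from tsn_r50_video_1x1x8_100e_kinetics400_rgb.py (the elided transform entries are unchanged):

test_pipeline = [
    # ...DecordInit / SampleFrames / DecordDecode / Resize / CenterCrop /
    # Normalize / FormatShape, unchanged from the original test pipeline...
    dict(type='Collect', keys=['imgs'], meta_keys=[]),  # 'label' removed
    dict(type='ToTensor', keys=['imgs']),
]
data = dict(
    test=dict(
        type='VideoDataset',
        ann_file=None,     # no annotation file for single-video inference
        data_prefix=None,  # demo/test1.mp4 already carries its directory
        pipeline=test_pipeline))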

@dreamerlin
Collaborator

Quoting the traceback above ("Traceback (most recent call last): ... RecursionError: maximum recursion depth exceeded in comparison"):

This is due to dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]); you can change it to dict(type='Collect', keys=['imgs'], meta_keys=[]) by removing the unused label. BTW, we will hardcode the label to -1 to avoid this case in #59

Thanks for your report!

@IDayday
Author

IDayday commented Jul 26, 2020

Quoting the traceback and the reply above ("This is due to dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]) ... Thanks for your report!"):

Thank you for answering my doubts. TSN currently works!
I will follow your advice and test other models.

@IDayday
Author

IDayday commented Jul 27, 2020

Another question: how can I get the output in video format, like the GIF, with the labels appearing in the video?
demo.py just returns the top-5 recognitions as text. That's useful, but the visualization is not good.

@innerlee
Contributor

One way to do it is to paint the label onto the frames using OpenCV, save the result as an mp4, and convert the mp4 to a GIF using ffmpeg or an online converter (a rough sketch follows below).

Maybe supporting mp4 output with a label overlay is an option. You may request this feature in the roadmap issue #19
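A rough sketch of that approach, assuming OpenCV (cv2) is installed; the label string and file names are placeholders:

import cv2

label = 'arm wrestling'  # placeholder: use the top-1 prediction from demo.py
cap = cv2.VideoCapture('demo/test1.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter('demo/test1_labeled.mp4',
                         cv2.VideoWriter_fourcc(*'mp4v'), fps,
                         (width, height))
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # paint the predicted label into the top-left corner of every frame
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2)
    writer.write(frame)
cap.release()
writer.release()

Then convert the mp4 to a GIF, e.g. ffmpeg -i demo/test1_labeled.mp4 demo/test1_labeled.gif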

@innerlee innerlee added the enhancement New feature or request label Jul 27, 2020
@dreamerlin
Collaborator

We have already added support for outputting a video or GIF file in this PR.
