Demo.py #56
@dreamerlin maybe we should make a dummy run of demo.py in the tests to make sure it works
Currently demo.py is written for videos, so you need to change the dataset type to video, as in https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py#L21 , and also change the testing pipeline to use a video loader such as decord, as in https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/tsn_r50_video_1x1x8_100e_kinetics400_rgb.py#L70
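For reference, a minimal sketch of the video-style settings from the linked config (a sketch only; the exact pipeline keys and sampling numbers may differ across mmaction2 versions, so check the config shipped with your install):

dataset_type = 'VideoDataset'  # instead of 'RawframeDataset'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_bgr=False)
test_pipeline = [
    dict(type='DecordInit'),       # open the video file with decord
    dict(
        type='SampleFrames',
        clip_len=1,
        frame_interval=1,
        num_clips=25,
        test_mode=True),
    dict(type='DecordDecode'),     # decode only the sampled frames
    dict(type='Resize', scale=(-1, 256)),
    dict(type='CenterCrop', crop_size=224),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='FormatShape', input_format='NCHW'),
    dict(type='Collect', keys=['imgs', 'label'], meta_keys=[]),
    dict(type='ToTensor', keys=['imgs'])
]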
Thanks! I will try that.
Rawframe inference in demo scripts will be supported in this. |
Sorry, I still ran into trouble. I know that demo.py is written for videos, so I put my own video named test1.mp4 in the demo folder.
You can run videos on models that were trained with rawframes; video/rawframe are input formats and they are not tied to the models. @dreamerlin could you please check why tsn_r50_video_1x1x8_100e_kinetics400_rgb.py does not work?
This is the error message I got when I used TSN: Traceback (most recent call last):
You can try to write a config for your own video. Since it is for inferencing a single video, there are some hints on which params to modify.
This is due to the … Thanks for your report!
Thank you for answering my doubts. TSN currently works!
Another question: how can I get the output in video format, like the GIF (with the labels shown in the video)?
One way to do it is to paint the label into the frames using OpenCV and save them to mp4, then convert the mp4 to a GIF using ffmpeg or an online converter. Maybe supporting mp4 output with a label overlay is an option; you may request this feature in the roadmap issue #19
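A rough sketch of that approach with OpenCV (the file names and label string are placeholders for your own inputs):

import cv2

label = 'some predicted label'   # placeholder for the top prediction
cap = cv2.VideoCapture('demo/test1.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
writer = cv2.VideoWriter('demo/test1_labeled.mp4',
                         cv2.VideoWriter_fourcc(*'mp4v'), fps, (width, height))
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # paint the label onto each frame before writing it out
    cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                1.0, (0, 255, 0), 2, cv2.LINE_AA)
    writer.write(frame)
cap.release()
writer.release()
# then convert to GIF, e.g.: ffmpeg -i demo/test1_labeled.mp4 demo/test1_labeled.gif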
We have already added support for outputting a video or GIF file in this PR.
When I try to run demo.py to test my own video, I use:
python demo/demo.py configs/recognition/slowfast/slowfast_r50_4x16x1_256e_kinetics400_rgb.py demo/checkpoints/slowfast_r50_4x16x1_256e_kinetics400_rgb_20200618-9a124260.pth demo/test1.mp4 demo/label_map.txt
but it fails with this error:
Traceback (most recent call last):
File "demo/demo.py", line 35, in
main()
File "demo/demo.py", line 27, in main
results = inference_recognizer(model, args.video, args.label)
File "/dat01/wangbo2/ZT/mmaction2/mmaction/apis/inference.py", line 63, in inference_recognizer
data = test_pipeline(data)
File "/dat01/wangbo2/ZT/mmaction2/mmaction/datasets/pipelines/compose.py", line 41, in call
data = t(data)
File "/dat01/wangbo2/ZT/mmaction2/mmaction/datasets/pipelines/loading.py", line 582, in call
directory = results['frame_dir']
KeyError: 'frame_dir'
My environment:
Python 3.6
PyTorch 1.3
others follow the requirements
Need your help!
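For context on the KeyError above: the slowfast config's test pipeline is written for rawframes, so its loading step looks for a directory of extracted frames, while the demo only provides a video path. A simplified sketch of the mismatch (illustrative only, not the real mmaction2 code):

# The demo describes a single video file roughly like this:
sample = dict(filename='demo/test1.mp4', label=-1, modality='RGB')
# A rawframe-style loading transform then looks up a key this sample lacks:
#     directory = sample['frame_dir']   # -> KeyError: 'frame_dir'
# A video-style pipeline (DecordInit / DecordDecode) reads sample['filename']
# instead, which is why switching to a video config, as suggested above,
# avoids the error.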