Data-preprocessing for kinetics-400 #34

Open
YongyiTang92 opened this issue Oct 11, 2018 · 17 comments

@YongyiTang92

Hi, I would like to know how to preprocess Kinetics-400 to reproduce the results. I found that extracting TV-L1 flow before rescaling the RGB images leads to worse flow recognition accuracy.
So, currently, I first resample the videos at 25 fps. Then I extract RGB frames and resize them so that the shorter side is 256 pixels. I am using the OpenCV 3.4 version of cv::cuda::OpticalFlowDual_TVL1 for flow extraction on the resized grayscale frames. All pixel values are rescaled as mentioned in the project. Are there any details I am missing in this preprocessing procedure? Or am I extracting the optical flow the right way? Thanks.
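For concreteness, here is a minimal sketch of the pipeline I describe, assuming opencv-contrib-python for the cv2.optflow module (resize_shorter_side and flow_pair are just illustrative helper names, and the clip-to-[-20, 20]-then-divide convention follows the kinetics-i3d README):

```python
import cv2
import numpy as np

def resize_shorter_side(frame, target=256):
    """Resize so the shorter side is `target` pixels, keeping aspect ratio."""
    h, w = frame.shape[:2]
    s = target / min(h, w)
    return cv2.resize(frame, (int(round(w * s)), int(round(h * s))),
                      interpolation=cv2.INTER_LINEAR)

tvl1 = cv2.optflow.createOptFlow_DualTVL1()  # CPU TV-L1, from opencv-contrib

def flow_pair(prev_bgr, curr_bgr):
    """TV-L1 flow on resized grayscale frames, rescaled to [-1, 1]."""
    prev = cv2.cvtColor(resize_shorter_side(prev_bgr), cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(resize_shorter_side(curr_bgr), cv2.COLOR_BGR2GRAY)
    flow = tvl1.calc(prev, curr, None)        # HxWx2 float32 displacement field
    return np.clip(flow, -20.0, 20.0) / 20.0  # clip and rescale to [-1, 1]
```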

@salmedina

Ping! Any update on this?

@YongyiTang92
Author

> Ping! Any update on this?

I found that the pretrained model works better with flow images that are extracted after resizing the RGB frames.
Also, I used OpenCV 3.3 (4.0 may also work) instead of 3.4, since I found some differences in cv::cuda::OpticalFlowDual_TVL1 between versions.
I have obtained comparable accuracy for the fusion results, while the flow-only results are still slightly worse.

@salmedina

Thank you very much for the feedback. This is helpful. I'll look into that.

@dreichCSL

Has anyone tried calculating optical flow using Python OpenCV? I can't seem to get good results with that preprocessing, but it might also be my lack of understanding of which parameters to use. I'm using OpenCV 4.1.0:

import cv2
import numpy as np

optical_flow = cv2.optflow.createOptFlow_DualTVL1()  # TV-L1; needs opencv-contrib-python
flow_frame = optical_flow.calc(prev, curr, None)     # prev/curr are consecutive grayscale frames
flow_frame = np.clip(flow_frame, -20, 20)            # clip displacements to [-20, 20]
flow_frame = flow_frame / 20.0                       # rescale to [-1, 1]

Thanks for any comments!

@dreichCSL

Actually, the code I used above works fine and produces good results on the example video. But it would still be nice to get pointers on whether this is missing something from the original preprocessing. Thanks

@YongyiTang92
Author

> Actually, the code I used above works fine and produces good results on the example video. But it would still be nice to get pointers on whether this is missing something from the original preprocessing. Thanks

I think your code is correct, but the Python interface is too slow. Do you have any idea how to speed it up? I actually used the C++ interface of OpenCV for flow extraction.

@dreichCSL

Yes, it's slow; on my desktop the flow calculation runs at about 4 fps. I just wanted to reproduce the results for now. I don't know if it's possible to bring it up to 25 fps with Python; I'd guess that it isn't. Is the speed fine with C++?
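
One way to get closer without leaving Python is to parallelize across videos with multiprocessing; a minimal sketch, where extract_flow_for_video is a hypothetical per-video worker that runs the TV-L1 code above (each worker process gets its own TV-L1 instance):

```python
import glob
from multiprocessing import Pool

def extract_flow_for_video(video_path):
    # Hypothetical worker: decode video_path, run TV-L1 on consecutive
    # resized grayscale frames, and write the flow images to disk.
    pass

if __name__ == "__main__":
    video_paths = sorted(glob.glob("videos/*.avi"))  # adjust to your layout
    with Pool(processes=8) as pool:                  # one worker per process
        pool.map(extract_flow_for_video, video_paths)
```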

@skaws2003

Hello, does anyone know how the frame sampling was done? Is it just nearest-neighbor sampling?

@skaws2003

> Yes, it's slow; on my desktop the flow calculation runs at about 4 fps. I just wanted to reproduce the results for now. I don't know if it's possible to bring it up to 25 fps with Python; I'd guess that it isn't. Is the speed fine with C++?

It is fine with C++: it runs at about 40-50 fps on a 2080 Ti with the OpenCV CUDA interface.
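
The CUDA path is also reachable from Python if OpenCV was built with CUDA support; a hedged sketch (the binding name cv2.cuda_OpticalFlowDual_TVL1 is what OpenCV 4.x exposes as far as I know, but it varies between builds and versions):

```python
import cv2
import numpy as np

# Assumes an OpenCV 4.x build compiled with CUDA; binding names may differ.
tvl1 = cv2.cuda_OpticalFlowDual_TVL1.create()

def gpu_flow(prev_gray, curr_gray):
    g_prev, g_curr = cv2.cuda_GpuMat(), cv2.cuda_GpuMat()
    g_prev.upload(prev_gray)   # host -> device
    g_curr.upload(curr_gray)
    flow = tvl1.calc(g_prev, g_curr, None).download()  # device -> host
    return np.clip(flow, -20.0, 20.0) / 20.0
```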

@zehzhang

> Hello, does anyone know how the frame sampling was done? Is it just nearest-neighbor sampling?

Hi, do you have any ideas? I am also wondering how the video resampling is done...

@joaoluiscarreira

joaoluiscarreira commented Sep 24, 2019 via email

@dreichCSL

Just to reply to the resampling question: this is what I did to my videos, which were originally 1280x720 at 30 fps (using ffmpeg on the Ubuntu command line):

ffmpeg -y -r 30 -i input.avi -r 25 -filter:v scale=456:256 -sws_flags bilinear output.avi

The output should be a bilinearly interpolated video at 25 fps with the smaller side at 256 px, as described in the paper and/or README file. (Note that the scale filter separates width and height with a colon, not an "x".)
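
For inputs with other resolutions, ffmpeg can compute the width automatically: scale=-2:256 keeps the aspect ratio and rounds the width to an even number, so the same command works regardless of the source size.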

@ss-github-code

Hi Joao,

The preprocessing code has been released as part of MediaPipe; see here:
https://github.com/google/mediapipe/tree/master/mediapipe/examples/desktop/media_sequence

If I follow the steps and use kinetics_dataset.py on v_CricketShot_g04_c01.avi, I do not get the rgb and flow files. Could you please elaborate on how to preprocess the .avi file to generate the rgb and flow data?

Thanks for your help

@joaoluiscarreira

joaoluiscarreira commented Nov 19, 2019 via email

@ss-github-code

Thanks Joao,
Just to clarify, I am following the steps outlined under "custom videos in the Kinetics format": I change VIDEO_PATH to point to the .avi, build media_sequence_demo, and run kinetics_dataset.py. I do see an output file, kinetics_700_custom_25fps_rgb_flow-00000-of-00001, but I am not sure about the next step.
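
In case it helps anyone else, here is how one might inspect that output shard, assuming it is a TFRecord of tf.train.SequenceExample protos (which is what the MediaPipe media_sequence pipeline writes, as far as I understand):

```python
import tensorflow as tf

path = "kinetics_700_custom_25fps_rgb_flow-00000-of-00001"
for record in tf.data.TFRecordDataset(path).take(1):
    example = tf.train.SequenceExample.FromString(record.numpy())
    print(example.context)                                   # clip-level metadata
    print(list(example.feature_lists.feature_list.keys()))   # per-frame streams
```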

@ss-github-code

I was hoping that it would generate the rgb & flow files in a format that I could then use as input to evaluate_sample. I am not sure whether that is the intent of the preprocessing-code release.

@joaoluiscarreira

joaoluiscarreira commented Nov 19, 2019 via email
