Data-preprocessing for kinetics-400 #34
Ping! Any update on this? |
I found that the pretrained models work better with flow images that are extracted after resizing the RGB frames. |
Thank you very much for the feedback. This is helpful. I'll look into that. |
Has anyone tried calculating optical flow using Python OpenCV? I can't seem to get good results with that preprocessing, but it might also be my lack of understanding about which parameters to use. I'm using OpenCV 4.1.0. Thanks for any comments! |
Actually, the code I used above works fine and produces good results on the example video. But it would still be nice to get pointers if this is missing something from the original preprocessing. Thanks. |
I think your code is correct. But the Python interface is too slow. Do you have any idea how to speed it up? Actually, I used the C++ interface of OpenCV for flow extraction. |
Yes, it's slow, on my desktop flow calc runs at about 4 fps. I just wanted to reproduce for now. Don't know if it's possible to bring it up to 25 fps with Python - I'd guess that it isn't. Is the speed fine with C++? |
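One common way to claw back throughput without moving to C++ is to parallelize across videos with a process pool, since each clip's flow extraction is independent. A generic sketch (the worker here is a hypothetical stand-in for a real per-video flow function, not code from this thread):

```python
# Parallelizing per-video flow extraction across CPU processes.
# process_video is a stand-in: replace its body with real flow extraction.
import multiprocessing as mp

def process_video(path):
    """Hypothetical worker: would read `path`, extract flow, write output.
    Here it just returns a token so the sketch is self-contained."""
    return (path, len(path))

if __name__ == "__main__":
    paths = [f"vid_{i}.avi" for i in range(8)]
    with mp.Pool(processes=4) as pool:
        results = pool.map(process_video, paths)
    print(len(results))  # 8
```

This scales roughly with core count but does not change per-frame latency, so the C++/CUDA interface mentioned below remains the faster option per video.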
Hello, does anyone know how the frame sampling was done? Is it just nearest sampling? |
It is fine with C++; it runs at about 40-50 fps on a 2080 Ti with the OpenCV CUDA interface. |
Hi do you have any ideas? I am also wondering how the video resampling is done... |
Hi,
the preprocessing code has been released as part of mediapipe, see here:
https://github.com/google/mediapipe/tree/master/mediapipe/examples/desktop/media_sequence
Best,
Joao
|
Just to reply to the resampling question: this is what I did to my videos, which were originally 1280x720 at 30 fps, using ffmpeg on the Ubuntu command line:
ffmpeg -y -r 30 -i input.avi -r 25 -filter:v scale=456x256 -sws_flags bilinear output.avi
The output should be a bilinearly interpolated video at 25 fps with the smaller side of the video at 256 px, as described in the paper and/or README file. |
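The 456x256 target above follows from scaling the shorter side of a 1280x720 video to 256 px while preserving aspect ratio (1280 * 256 / 720 ≈ 455.1, rounded up to an even number, since most codecs require even dimensions). A small helper to compute that target for arbitrary input sizes; this function is illustrative, not part of any released code:

```python
def scale_to_shorter_side(width, height, shorter=256):
    """Return (w, h) with the shorter side scaled to `shorter`,
    aspect ratio preserved, both sides rounded up to even numbers."""
    if width <= height:
        w, h = shorter, height * shorter / width
    else:
        w, h = width * shorter / height, shorter
    # Ceil to a multiple of 2: many video codecs require even dimensions.
    even = lambda x: int(-(-x // 2) * 2)
    return even(w), even(h)

print(scale_to_shorter_side(1280, 720))  # (456, 256)
```

The computed width can then be substituted into the ffmpeg scale filter; alternatively, ffmpeg can do the same thing directly with scale=-2:256 (or 256:-2 for portrait video).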
Hi Joao, the preprocessing code has been released as part of mediapipe, see here:
https://github.com/google/mediapipe/tree/master/mediapipe/examples/desktop/media_sequence
If I follow the steps and use kinetics_dataset.py on v_CricketShot_g04_c01.avi, I do not get rgb and flow files. Please elaborate on how to preprocess the avi file to generate the rgb and flow data. Thanks for your help. |
I think Jiuqiang may be able to help.
Joao
|
Thanks Joao, |
I was hoping that it would generate the rgb & flow files in a format that I can then use as input to evaluate_sample. Not sure if that is the intent of the release of the preprocessing code. |
I can successfully generate the tfrecord file for v_CricketShot_g04_c01.avi.
Please see google-ai-edge/mediapipe#257 (comment) for the details. Thanks!
|
Hi, I would like to know how to preprocess Kinetics-400 to reproduce the results. I found that extracting TV-L1 flow before rescaling the RGB images leads to worse flow recognition accuracy.
So, currently, I first resample the videos at 25 fps. Then I extract RGB frames and resize them so that the shorter side is 256 pixels. I am using the OpenCV 3.4 version of cv::cuda::OpticalFlowDual_TVL1 for flow extraction on the resized gray-scale frames. All pixel values are rescaled as mentioned in the project. Are there any details I am missing in this preprocessing procedure? Or am I extracting the optical flow the right way? Thanks.
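For reference, the kinetics-i3d README describes the value rescaling as truncating flow values to [-20, 20] and then rescaling to [-1, 1], and mapping RGB pixel values to [-1, 1]. A minimal numpy sketch of that step (function names are illustrative):

```python
# Value rescaling as described in the kinetics-i3d README:
# flow truncated to [-20, 20] then mapped to [-1, 1]; RGB mapped to [-1, 1].
import numpy as np

def rescale_flow(flow, bound=20.0):
    """Truncate flow to [-bound, bound], then rescale to [-1, 1]."""
    return np.clip(flow, -bound, bound) / bound

def rescale_rgb(frames_uint8):
    """Map uint8 RGB values in [0, 255] to floats in [-1, 1]."""
    return frames_uint8.astype(np.float32) / 127.5 - 1.0

flow = np.array([[-50.0, 0.0, 10.0, 50.0]])
print(rescale_flow(flow))  # [[-1.   0.   0.5  1. ]]
print(rescale_rgb(np.array([0, 255], np.uint8)))  # [-1.  1.]
```

If the pretrained checkpoints were trained with this convention, applying a different normalization at evaluation time would be one plausible source of an accuracy gap.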