How to get my own extracted features? #5
Hi takuyara, we extracted our features with I3D, InceptionResNetV2, and BUTD.
Thanks for your quick reply!
Hi tgc, I studied the I3D code you provided above. I am wondering how to set max_interval and overlap to get 26 equally spaced features for each video. Or is there no need to set these two parameters, and we just need to extract around 209 frames as the input to the I3D model?
We first set max_interval=64, overlap=8 to extract features and then sample 26 of them.
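A minimal sketch of what this extract-then-subsample step might look like, assuming a simple sliding window whose stride is max_interval - overlap (clip_starts and sample_k are hypothetical helper names; the thread does not show the actual extraction code):

```python
import math

def clip_starts(num_frames, max_interval=64, overlap=8):
    """Start indices of sliding windows of length `max_interval`
    whose consecutive windows overlap by `overlap` frames."""
    stride = max_interval - overlap  # 56 frames between window starts
    return list(range(0, max(num_frames - max_interval, 0) + 1, stride))

def sample_k(items, k=26):
    """Pick k roughly equally spaced elements from `items`."""
    n = len(items)
    idx = [int(math.ceil(i * n / k)) for i in range(k)]
    return [items[min(j, n - 1)] for j in idx]

# e.g. a 10-minute video at 25 FPS has 15000 frames
starts = clip_starts(15000)
sampled = sample_k(starts, 26)  # 26 equally spaced windows
```

One feature per window would then be computed by the I3D backbone, and only the 26 sampled windows kept.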
Hi tgc, thanks for your reply! Sorry to bother you again, I still have 2 questions about this.
Thanks for your explanations!
Sorry to bother you again! I am still confused about the steps to determine max_interval and overlap. Could you please give an example, say with 2 videos, one 10 minutes long at 25 FPS and the other 8 minutes long at 30 FPS? Many thanks!
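Assuming windows of max_interval frames taken with stride max_interval - overlap (a hypothetical reading of the maintainers' setup, not confirmed in the thread), the two example videos would work out as follows:

```python
# Hypothetical worked example for the two videos in the question,
# assuming full windows only and stride = max_interval - overlap.
max_interval, overlap = 64, 8
stride = max_interval - overlap  # 56

for minutes, fps in [(10, 25), (8, 30)]:
    frames = minutes * 60 * fps
    clips = (frames - max_interval) // stride + 1
    print(f"{minutes} min @ {fps} FPS: {frames} frames -> {clips} clips")
```

Both videos yield far more than 26 windows, so the same max_interval and overlap can be used for both and the 26 features obtained by subsampling afterwards.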
Hi, I have tried to extract features using I3D, IRV2, and BUTD as you mentioned, but I am not able to get the same features as you. The features I obtained seem very different from the ones in the provided h5 file. How can I reproduce the features in the h5 file? How were the equally spaced frames selected? Is it by the following method: index = [int(ceil(i*len(l)/26)) for i in range(26)]? What other steps are required during extraction?
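The selection rule quoted in this question can be checked as a small runnable snippet (equally_spaced_indices is a hypothetical wrapper name; whether the maintainers used exactly this rule is not confirmed in the thread):

```python
from math import ceil

def equally_spaced_indices(n, k=26):
    # The rule quoted above: index = [int(ceil(i*len(l)/26)) for i in range(26)]
    return [int(ceil(i * n / k)) for i in range(k)]

idx = equally_spaced_indices(100)
# The first index is always 0; indices increase monotonically
# and stay within bounds for n >= k.
```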
Hi tgc, I'd like to test this model on my own video. How can I get the extracted features to use as inputs?