This repository has been archived by the owner on Oct 1, 2019. It is now read-only.

The mean file returned by compute_volume_mean_from_list.bin is empty #38

Closed
VigneshSrinivasan10 opened this issue Oct 21, 2015 · 13 comments

Comments

@VigneshSrinivasan10

I am trying to compute the mean volume of the HMDB-51 dataset, and I run this command:

GLOG_logtostderr=1 ../../build/tools/compute_volume_mean_from_list.bin c3d_finetuning_HMDB51/train2.lst 16 128 171 1 hmdb51_train_mean.binaryproto 10

My train.lst looks like this:

c3d_finetuning_HMDB51/data/April_09_brush_hair_u_nm_np1_ba_goo_0.avi 1 0
c3d_finetuning_HMDB51/data/April_09_brush_hair_u_nm_np1_ba_goo_0.avi 17 0
c3d_finetuning_HMDB51/data/April_09_brush_hair_u_nm_np1_ba_goo_0.avi 33 0
c3d_finetuning_HMDB51/data/April_09_brush_hair_u_nm_np1_ba_goo_0.avi 49 0

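Assuming the three columns above are the video path, a 1-based start frame, and a class label, the listing (one line per non-overlapping 16-frame clip) could be generated with a small sketch like this; `n_frames=64` is just an illustrative frame count:

```python
def make_clip_entries(video_path, n_frames, label, clip_len=16):
    """Build train.lst lines of the form '<path> <start_frame> <label>',
    one per non-overlapping clip_len-frame clip (1-based start frames)."""
    return [f"{video_path} {start} {label}"
            for start in range(1, n_frames - clip_len + 2, clip_len)]

# Prints lines with start frames 1, 17, 33, 49 for a 64-frame video.
for line in make_clip_entries(
        "c3d_finetuning_HMDB51/data/April_09_brush_hair_u_nm_np1_ba_goo_0.avi",
        64, 0):
    print(line)
```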
I1021 15:24:30.223497 28438 compute_volume_mean_from_list.cpp:53] using dropping rate 10
I1021 15:24:30.223736 28438 compute_volume_mean_from_list.cpp:80] Starting Iteration
E1021 15:24:30.296300 28438 compute_volume_mean_from_list.cpp:112] Processed 2872 files.
I1021 15:24:30.296344 28438 compute_volume_mean_from_list.cpp:119] Write to hmdb51_train_mean.binaryproto

I get an output file hmdb51_train_mean.binaryproto that is only 10 bytes.

I have checked the two suggestions related to issue #30:

  1. The correctness of the relative paths
  2. That OpenCV and ffmpeg are compiled with the --enable-shared flag on.

I have tried both, and yet the same empty file is returned.

P.S.: I do not have any other problems with the code; I am able to extract the features as well.

Thanks in advance!

@dutran
Contributor

dutran commented Oct 22, 2015

Yes, your mean file is empty because of an IO error.

  1. You may want to check your relative locations (paths to files) and whether the files actually exist. If possible, convert them to absolute paths to avoid confusion.
  2. If 1 is correct, then it may be that your OpenCV is not compiled with ffmpeg, so it does not have the appropriate codec to read your video files. In that case, re-compile ffmpeg, and then OpenCV, with the --enable-shared flag ON.

@VigneshSrinivasan10
Author

I have tried both solutions!
I don't seem to find where the error could be.

@dutran
Contributor

dutran commented Oct 22, 2015

Oops, I just checked and realized the tool only takes images:
https://github.com/facebook/C3D/blob/master/tools/compute_volume_mean_from_list.cpp#L65
You can either:

  1. extract your movies into frames, or
  2. modify https://github.com/facebook/C3D/blob/master/tools/compute_volume_mean_from_list.cpp#L65 and https://github.com/facebook/C3D/blob/master/tools/compute_volume_mean_from_list.cpp#L88 to use ReadVideoToVolumeDatum (https://github.com/facebook/C3D/blob/master/src/caffe/util/image_io.cpp#L101) instead of ReadImageSequenceToVolumeDatum.
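Option 1 (extracting each movie into its own frame folder) can be sketched as a dry-run helper that only builds the ffmpeg commands. The directory layout and the `%06d.jpg` output pattern are assumptions here, and ffmpeg must be installed before actually running the commands:

```python
import glob
import os
import shlex

def ffmpeg_extract_cmds(video_dir, out_root):
    """Build one ffmpeg command per .avi file, dumping its frames into
    a folder named after the video as 000001.jpg, 000002.jpg, ...
    Dry run: commands are returned as strings, not executed."""
    cmds = []
    for vid in sorted(glob.glob(os.path.join(video_dir, "*.avi"))):
        name = os.path.splitext(os.path.basename(vid))[0]
        out_dir = os.path.join(out_root, name)
        cmds.append("mkdir -p {d} && ffmpeg -i {v} {d}/%06d.jpg".format(
            v=shlex.quote(vid), d=shlex.quote(out_dir)))
    return cmds
```

Each returned string can then be run with a shell (or the loop rewritten to call `subprocess.run`) once the paths are verified.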

@VigneshSrinivasan10
Author

Great :) Thanks for the clarification.
I will try it out and let you know.

@VigneshSrinivasan10
Author

Dear Tran,

I have written out the frames from the videos, with each folder named after the video and every frame named after its frame number, e.g. 1.jpg, 2.jpg, ...

and my train.lst looks like this now:

c3d_finetuning_HMDB51/dataFrames1/April_09_brush_hair_u_nm_np1_ba_goo_0/ 1 0
c3d_finetuning_HMDB51/dataFrames1/April_09_brush_hair_u_nm_np1_ba_goo_0/ 17 0
c3d_finetuning_HMDB51/dataFrames1/April_09_brush_hair_u_nm_np1_ba_goo_0/ 33 0
c3d_finetuning_HMDB51/dataFrames1/April_09_brush_hair_u_nm_np1_ba_goo_0/ 49 0

And I still get a mean file of 10 bytes. Am I passing the correct values?

@dutran
Contributor

dutran commented Oct 22, 2015

You should name your files as %06d.jpg.
Sorry, I did a lazy job; if I have time, I will spend some on making the IO function nicer :)
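The rename from `1.jpg`-style names to the zero-padded `%06d.jpg` pattern can be sketched in Python; `frame_dir` here stands for whichever folder holds one video's frames:

```python
import os

def pad_frame_names(frame_dir):
    """Rename 1.jpg, 2.jpg, ... to 000001.jpg, 000002.jpg, ... so that
    the filenames match the %06d.jpg pattern the tool expects."""
    for fname in os.listdir(frame_dir):
        stem, ext = os.path.splitext(fname)
        if ext == ".jpg" and stem.isdigit():
            os.rename(os.path.join(frame_dir, fname),
                      os.path.join(frame_dir, "%06d%s" % (int(stem), ext)))
```

Already-padded names are renamed to themselves, so the helper is safe to run twice on the same folder.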

@VigneshSrinivasan10
Author

Thank you. I have got the mean file now.

@dutran
Contributor

dutran commented Oct 23, 2015

Glad to hear, good luck with your experiments!

@TinkerBell123

Hi du, I use ffmpeg to extract frames from the videos. Here is an example of my frames:
master/data/users/trandu/datasets/ucf101/frm/ApplyEyeMakeup/v_ApplyEyeMakeup_g09_c04]# ls
000001.jpg 000021.jpg 000041.jpg 000061.jpg 000081.jpg 000101.jpg 000121.jpg 000141.jpg 000161.jpg 000181.jpg

And when I run the create_volume_mean script, there are some errors:

sh create_volume_mean.sh

I1202 14:35:56.476048 31451 compute_volume_mean_from_list.cpp:53] using dropping rate 10
I1202 14:35:56.476285 31451 compute_volume_mean_from_list.cpp:81] Starting Iteration
E1202 14:35:56.595129 31451 compute_volume_mean_from_list.cpp:107] Processed 10000 files.
E1202 14:35:56.599031 31451 compute_volume_mean_from_list.cpp:113] Processed 10725 files.
I1202 14:35:56.599041 31451 compute_volume_mean_from_list.cpp:120] Write to ucf101_train_mean.binaryproto
I1202 14:35:56.599046 31451 compute_volume_mean_from_list.cpp:121] sum blob num: 1
I1202 14:35:56.599051 31451 compute_volume_mean_from_list.cpp:122] sum blob channels: 3
I1202 14:35:56.599056 31451 compute_volume_mean_from_list.cpp:123] sum blob length: 16
I1202 14:35:56.599061 31451 compute_volume_mean_from_list.cpp:124] sum blob height: 0
I1202 14:35:56.599064 31451 compute_volume_mean_from_list.cpp:125] sum blob width: 0

I checked compute_volume_mean_from_list.cpp:

ReadImageSequenceToVolumeDatum(frm_dir.c_str(), start_frm, label, length, height, width, sampling_rate, &datum);
sum_blob.set_num(1);
sum_blob.set_channels(datum.channels());
sum_blob.set_length(datum.length());
sum_blob.set_height(datum.height());
sum_blob.set_width(datum.width());

I think the error happens in the function ReadImageSequenceToVolumeDatum: height=128 and width=171 are passed in, but datum.height()=0 and datum.width()=0 come back.

So, what do you think the error is? Thank you!

@Michael-Guo

@vignesh10 Hello, I want to fine-tune the C3D network on HMDB51, just like you. My loss goes down from 4.8 to 0.02, but the test accuracy is very low, only 1%. Did you meet a similar problem? And I would appreciate it if you could share your trained model and solver for HMDB51. Thanks a lot!

@VigneshSrinivasan10
Author

@Michael-Guo, please check your training accuracy. If it is saturating, then your network isn't learning anything; it has simply over-fit to your training set.

If I were you, I would look at the input data again. Good luck!

@Michael-Guo

@vignesh10 That is a good idea! But what is the meaning of the word 'ur'?

@VigneshSrinivasan10
Author

Sorry for the slang, it is your*

  • it saves two letters' worth of typing effort..
