Frequently Asked Questions

Which platform is supported?

Currently, our modified Caffe toolbox and the dense_flow toolbox only support the Linux platform. You are welcome to make a PR if you manage to run them on Mac/Windows.

Preparation

1. How can I generate the optical flow & the warped optical flow images?

Please see the README.

2. Cannot compile the dense_flow toolbox

Dense flow relies on an additional dependency: libzip. Please install it with your package manager (for example, the libzip-dev package on Debian/Ubuntu).

3. The folders holding the extracted frames are all empty.

This happens when the dense_flow tool cannot open any video. The usual cause is that the bundled OpenCV failed to build with VideoIO support. On systems with other OpenCV installations, dense_flow then gets linked against those installations, which typically lack VideoIO support.
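
To quickly check whether an OpenCV build can actually decode videos, a minimal Python sketch like the following can help. Note that it only inspects the OpenCV exposed through the cv2 Python bindings, which may or may not be the same build dense_flow was linked against; the video path is a placeholder for one of your own files.

```python
import cv2

# Look for FFMPEG/GStreamer entries under the "Video I/O" section.
print(cv2.getBuildInformation())

# Try to open and read one frame from a real video file.
cap = cv2.VideoCapture("/path/to/some_video.avi")
if not cap.isOpened():
    print("Could not open the video: VideoIO support is likely missing.")
else:
    ok, frame = cap.read()
    print("First frame read:", ok, None if frame is None else frame.shape)
    cap.release()
```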

4. Build failure! What to do?

  • Errors related to nppGraphCut: find nppGraphCut.cpp in the bundled OpenCV sources and follow the instructions in http://answers.opencv.org/question/95148/cudalegacy-not-compile-nppigraphcut-missing/

  • Link error: -lopencv_dep_cudart: update to the latest version of the TSN codebase and start the build again.

  • Undefined symbols related to OpenCV: you may have multiple versions of OpenCV installed, and the wrong one is picked up during the build. Please check your path settings.

Training TSN

1. The Caffe toolbox reports that some files are missing, what can I do?

The file lists probably need to be regenerated. Please see the README for how to do it.

2. How are frames sampled during training?

As stated in the paper, the whole video is first divided into K segments (K=3 by default). One snippet, represented by either an RGB frame or a stack of 5 consecutive optical flow fields, is randomly sampled from each segment. The K sampled snippets are then fed to the CNN.
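
For illustration, here is a rough sketch of this sampling scheme in Python. It is not the actual data-layer code; the function and parameter names (sample_snippet_indices, num_frames, flow_stack_size) are made up for this example.

```python
import random

def sample_snippet_indices(num_frames, num_segments=3, modality="RGB", flow_stack_size=5):
    """Pick one snippet start index from each of the K segments of a video."""
    # A snippet is a single frame for RGB, or a stack of 5 consecutive flow fields.
    snippet_len = 1 if modality == "RGB" else flow_stack_size
    seg_len = (num_frames - snippet_len + 1) / float(num_segments)
    starts = []
    for k in range(num_segments):
        # Uniformly random position inside the k-th segment.
        starts.append(int(k * seg_len + random.random() * seg_len))
    return starts

# Example: 3 RGB snippet indices from a 90-frame video.
print(sample_snippet_indices(90))
```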

Testing TSN

1. Caffe reports way lower accuracies in on-the-fly validation than what you guys reported, what's wrong?

Easy, there is nothing wrong. The numbers you see during training are only for monitoring the training process.

The numbers reported in the paper are video-level testing results obtained with 25 frames per video. Please follow the instructions provided to test your trained models and check the results.
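
For a rough illustration of this protocol, the sketch below samples 25 frames evenly across a video and averages the class scores. It is a sketch only, with assumed names: extract_scores is a hypothetical stand-in for running the trained network on one frame (or flow stack) and returning a class-score vector.

```python
import numpy as np

def video_level_prediction(num_frames, extract_scores, num_samples=25):
    """Evenly sample frames over the whole video and average class scores."""
    indices = np.linspace(0, num_frames - 1, num_samples).astype(int)
    # Each row is the class-score vector produced for one sampled frame.
    scores = np.stack([extract_scores(i) for i in indices])
    # Average the scores over the 25 samples, then take the top class.
    return scores.mean(axis=0).argmax()
```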

More to be added...

Contact

For other questions and inquiries, contact