
Video Segmentation and Tracking

Code for unsupervised bottom-up video motion segmentation. uNLC is a reimplementation of the NLC algorithm of Faktor and Irani, BMVC 2014, that removes the trained edge detector and makes numerous other modifications and simplifications. For additional details, see Section 5.1 of the paper. This repository also contains code for a simple video tracker that we developed.

This code was developed for, and is used in, our CVPR 2017 paper on unsupervised learning from unlabeled videos. The GitHub repository for our CVPR 2017 paper is here. If you find this work useful in your research, please cite:

```
    Author = {Pathak, Deepak and Girshick, Ross and Doll\'{a}r,
              Piotr and Darrell, Trevor and Hariharan, Bharath},
    Title = {Learning Features by Watching Objects Move},
    Booktitle = {Computer Vision and Pattern Recognition ({CVPR})},
    Year = {2017}
```

Video segmentation using low-level, unsupervised vision methods. It is largely inspired by the Non-Local Consensus method [Faktor and Irani, BMVC 2014], but removes all trained components. The segmentation pipeline includes and makes use of code for optical flow, motion saliency, appearance saliency, superpixels, and low-level descriptors.
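The consensus step at the heart of NLC-style methods can be pictured as iterative voting over a nearest-neighbor graph of regions: a region becomes salient if the regions it resembles (its nearest neighbors in descriptor space) are salient. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the function name, weighting, and data layout are ours, not the repository's API.

```python
# Simplified non-local consensus voting (illustrative only).
# saliency: one initial motion-saliency score per region.
# neighbors: for each region, the indices of its nearest neighbors
#            in appearance/descriptor space (possibly across frames).
def consensus_vote(saliency, neighbors, iters=10, keep=0.5):
    sal = list(saliency)
    for _ in range(iters):
        new = []
        for i, nbrs in enumerate(neighbors):
            # Average the neighbors' current saliency...
            avg = sum(sal[j] for j in nbrs) / len(nbrs)
            # ...and blend it with the region's own score.
            new.append(keep * sal[i] + (1.0 - keep) * avg)
        sal = new
    return sal
```

After enough iterations, saliency propagates along the neighbor graph, so regions that consistently co-occur with salient regions end up salient too.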

The video tracking code runs deepmatch followed by EpicFlow (or Farneback), then estimates a homography and performs bipartite matching to obtain foreground tracks.
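The final linking stage can be illustrated with a toy bipartite matcher: given a cost for linking each existing track to each new detection, pick mutually exclusive pairs. The greedy version below is a simplification we wrote for illustration; the actual tracker may use a different (e.g. optimal) assignment.

```python
# Toy greedy bipartite matching (illustrative, not the repo's code).
# cost[i][j]: cost of linking track i to detection j.
def greedy_bipartite(cost):
    # Enumerate all (cost, track, detection) triples, cheapest first.
    pairs = sorted(
        (cost[i][j], i, j)
        for i in range(len(cost))
        for j in range(len(cost[i]))
    )
    used_i, used_j, out = set(), set(), []
    for c, i, j in pairs:
        # Accept a link only if both endpoints are still unmatched.
        if i not in used_i and j not in used_j:
            used_i.add(i)
            used_j.add(j)
            out.append((i, j))
    return out
```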


1. Install optical flow:

   ```bash
   cd videoseg/lib/
   git clone
   cd pyflow/
   python build_ext -i
   python    # -viz option to visualize output
   ```
2. Install Dense CRF code:

   ```bash
   cd videoseg/lib/
   git clone
   cd pydensecrf/
   python build_ext -i
   PYTHONPATH=.:$PYTHONPATH python examples/ examples/im1.png examples/anno1.png examples/out_new1.png
   # compare out_new1.png and out1.png -- they should be the same
   ```
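If you prefer a programmatic check over eyeballing the two PNGs, a byte-level comparison with the standard library works. Note this is stricter than pixel equality (different PNG encoders can produce different bytes for identical pixels), so treat a mismatch as a prompt to inspect the images, not as proof the CRF output is wrong.

```python
import filecmp

def same_file(a, b):
    # Strict byte-for-byte comparison of the two output images.
    return filecmp.cmp(a, b, shallow=False)
```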
3. Install appearance saliency:

   ```bash
   cd videoseg/lib/
   git clone
   ```
4. Install kernel temporal segmentation code:

   ```bash
   # cd videoseg/lib/
   # wget
   # tar -zxvf kts_ver1.1.tar.gz && mv kts_ver1.1 kts
   # rm -f kts_ver1.1.tar.gz
   ```

   Edit kts/ to remove the weave dependency. Because of this change, we ship the
   modified library in videoseg/lib/kts/, so the commands above are commented out.
   The edit is not required if you already have weave installed (it is usually
   present by default).
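As an illustration of the kind of edit involved (hypothetical, not the actual kts code): scipy.weave compiled small inline-C loops, and such loops can usually be rewritten as plain Python or NumPy. For example, a weave-style double loop computing a 2-D inclusive prefix sum, the sort of kernel-matrix cumulative sum that KTS-style methods rely on, becomes:

```python
def cumsum2d(mat):
    """Plain-Python replacement for an inline-C (weave) double loop
    computing a 2-D inclusive prefix sum over a kernel matrix."""
    h, w = len(mat), len(mat[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # Standard inclusion-exclusion prefix-sum recurrence.
            out[i][j] = (mat[i][j]
                         + (out[i - 1][j] if i else 0.0)
                         + (out[i][j - 1] if j else 0.0)
                         - (out[i - 1][j - 1] if i and j else 0.0))
    return out
```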
5. Convert them to modules:

   ```bash
   cd videoseg/lib/
   cp mr_saliency/
   cp kts/
   ```
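The truncated `cp` commands above presumably drop a package marker into each checkout so Python can import it as a module. A generic equivalent (our sketch, not the repository's script) is to create an empty `__init__.py` in the target directory:

```python
import pathlib

def make_package(dirpath):
    # Create the directory if needed and drop an empty __init__.py
    # so `import <dirname>` works once its parent is on sys.path.
    p = pathlib.Path(dirpath)
    p.mkdir(parents=True, exist_ok=True)
    init = p / "__init__.py"
    init.touch()
    return init
```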
6. Run temporal segmentation:

   ```bash
   time python -imdir /home/dpathak/local/data/trash/my_nlc/imseq/v21/ -out /home/dpathak/local/data/trash/my_nlc/nlc_out/
   ```
7. Run NLC segmentation:

   ```bash
   cd videoseg/src/
   time python -imdir /home/dpathak/local/data/trash/my_nlc/imseq/3_tmp/ -out /home/dpathak/local/data/trash/my_nlc/nlc_out/ -maxsp 400 -iters 100
   ```
8. Run Tracker:

   ```bash
   cd videoseg/src/
   time python -fgap 2 -seed 2905 -vizTr -dmThresh 0 -shotFrac 0.2 -matchNbr 20 -postTrackHomTh -1 -preTrackHomTh 10
   ```
9. Run CRF sample:

   ```bash
   cd videoseg/src/
   time python -inIm ../lib/pydensecrf/examples/im1.png -inL ../lib/pydensecrf/examples/anno1.png -out ../lib/pydensecrf/examples/out_new2.png
   ```
10. Run Full Pipeline:

    ```bash
    cd videoseg/src/
    time python -out /home/dpathak/local/data/AllSegInit -in /home/dpathak/fbcode/experimental/deeplearning/dpathak/videoseg/datasets/imdir_Samples.txt
    ```