Clockwork Convnets for Video Semantic Segmentation

This is the reference implementation of arXiv:1608.03609:

Clockwork Convnets for Video Semantic Segmentation
Evan Shelhamer*, Kate Rakelly*, Judy Hoffman*, Trevor Darrell
arXiv:1608.03609

This project reproduces results from the arXiv paper and demonstrates how to execute staged fully convolutional networks (FCNs) on video in Caffe by controlling the net through the Python interface. In this way, these experiments are a proof-of-concept implementation of clockwork; further development is needed to achieve peak efficiency (such as pre-fetching video data layers, threshold GPU layers, and a native Caffe library edition of the staged forward pass for pipelining).
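The staged execution above follows a fixed clockwork schedule: each stage of the network is assigned a clock rate, and a stage's cached features are recomputed only on frames where its clock fires. The sketch below illustrates that scheduling logic in plain Python; the stage names and rates are illustrative assumptions, not the project's actual configuration.

```python
def stages_to_update(frame_idx, clock_rates):
    """Return the stages whose cached features should be refreshed
    on this frame under a fixed clockwork schedule.

    clock_rates maps stage name -> clock period in frames; a stage
    updates whenever the frame index is divisible by its period.
    """
    return [stage for stage, rate in clock_rates.items()
            if frame_idx % rate == 0]

# Deeper stages change more slowly across frames, so they tick less often.
clock_rates = {"early": 1, "middle": 2, "late": 4}
schedule = [stages_to_update(t, clock_rates) for t in range(4)]
# frame 0 refreshes every stage; frame 1 refreshes only the early stage; etc.
```

In a full pipeline, the persisted features of the stages that do not fire would be reused from the previous frame, which is where the compute savings come from.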

Display-only editions of the experiment notebooks are also available for quick reference.

Contents

  • notebooks: interactive code and documentation that carries out the experiments (in Jupyter/IPython format)
  • nets: the net specifications of the various FCNs in this work, plus the pre-trained weights (see the installation instructions)
  • caffe: the Caffe framework, included as a git submodule pointing to a compatible version
  • datasets: input-output for PASCAL VOC, NYUDv2, YouTube-Objects, and Cityscapes
  • lib: helpers for executing networks, scoring metrics, and plotting

License

This project is licensed for open non-commercial distribution under the UC Regents license; see LICENSE. Its dependencies, such as Caffe, are subject to their own respective licenses.

Requirements & Installation

Caffe, Python, and Jupyter are necessary for all of the experiments. Any installation or general Caffe inquiries should be directed to the caffe-users mailing list.

  1. Install Caffe. See the installation guide and try Caffe through Docker (recommended). Make sure to configure pycaffe, the Caffe Python interface, too.
  2. Install Python, and then install our required packages listed in requirements.txt. For instance, `for x in $(cat requirements.txt); do pip install $x; done` should do.
  3. Install Jupyter, the interface for viewing, executing, and altering the notebooks.
  4. Configure your PYTHONPATH as indicated by the included .envrc so that this project dir and pycaffe are included.
  5. Download the model weights for this project and place them in nets.
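Step 4 above amounts to exporting the right search path before launching Python or Jupyter. A minimal sketch of what the included .envrc sets up, assuming you run it from the repository root and built pycaffe in the caffe submodule (the project's own .envrc is authoritative):

```shell
# Make the project dir and pycaffe importable, as the included .envrc does.
# "$PWD" assumes the current directory is the repository root; caffe/python
# is where pycaffe lives inside the caffe submodule.
export PYTHONPATH="$PWD:$PWD/caffe/python:$PYTHONPATH"
```

Tools like direnv apply .envrc automatically on entering the directory; otherwise, source the equivalent exports in your shell before starting Jupyter.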

Now you can explore the notebooks by firing up Jupyter.