In this package, we provide the Python code for the following paper:
Self-taught Object Localization using Deep Networks. L. Bazzani, A. Bergamo, D. Anguelov, and L. Torresani. CoRR, 2014.
- A demo that shows how STL can be used to extract the objectness bounding boxes of an image
- The scripts to generate the plots of our paper
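At a high level, the demo proposes candidate regions for an image and ranks them by an objectness score. The sketch below illustrates that idea only; the function names and the scoring interface are assumptions for illustration, not the package's actual API.

```python
# Conceptual sketch of objectness-box extraction (hypothetical interface):
# propose(image) returns candidate bounding boxes, score(box) rates each one.
def extract_objectness_boxes(image, propose, score, top_k=10):
    """Rank candidate regions by objectness and keep the top_k best."""
    boxes = propose(image)                      # candidate bounding boxes
    ranked = sorted(boxes, key=score, reverse=True)
    return ranked[:top_k]
```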
- Python 2.7 with the NumPy, SciPy, scikit-image, and Matplotlib packages.
- Caffe, installed from gist_id c18d22eb92 (Mon Oct 20 12:57:18 2014 -0700)
- The segmentation algorithm from the Selective Search package: install it in the folder img_segmentation/ or use the provided compiled MEX files
- Download the model used in our experiments from here
- Open the file
- Edit row 19 to set the path where you downloaded the model
- Select the option "cpu" or "gpu" at row 21
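The two edits above might look like the following; the variable names here are assumptions for illustration (only the row numbers come from this README), so check the actual file for the real identifiers.

```python
# Hypothetical illustration of the two rows to edit in the demo file
# (variable names are assumed, not taken from the package).
model_path = '/path/to/downloaded/model'  # row 19: where you saved the model
device = 'cpu'                            # row 21: 'cpu' or 'gpu'
```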
To play around with the parameters of STL, open the file
stl_params.py and look at the arguments you can pass to the initialization function.
By default the code runs the unsupervised version of STL. It can be switched to the supervised version by setting
use_fullimg_GT_label=True. Note that the label(s) should then be provided along with the image at row 24; see the file
prototxt/synset_words.txt for the list of labels. For the example in the demo, use
gt_labels = ["n01744401"].
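A minimal sketch of the parameter choice described above; STLParams is a hypothetical stand-in for the real initialization in stl_params.py, and only use_fullimg_GT_label and gt_labels are taken from this README.

```python
# Hypothetical stand-in for the STL parameter object in stl_params.py.
class STLParams(object):
    def __init__(self, use_fullimg_GT_label=False, gt_labels=None):
        # False -> unsupervised STL (the default); True -> supervised STL,
        # which requires the full-image ground-truth label(s).
        self.use_fullimg_GT_label = use_fullimg_GT_label
        self.gt_labels = gt_labels if gt_labels is not None else []

# Supervised STL for the demo image (label from prototxt/synset_words.txt):
params = STLParams(use_fullimg_GT_label=True, gt_labels=["n01744401"])
```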
Generate the Plots of the Paper
Open Matlab and run the script
generate_figures.m. New curves can be added by modifying the file
The list of 200 classes randomly selected for ILSVRC2012-(val,train)-200-RND can be found in the file
L. Bazzani and A. Bergamo contributed equally to the project. For the license and usage terms, see the file LICENSE.