Commit

update code
yanxp committed May 29, 2018
1 parent 934e258 commit f232f96
Showing 247 changed files with 232,575 additions and 0 deletions.
21 changes: 21 additions & 0 deletions LICENSE
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2016 Yuwen Xiong

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
1 change: 1 addition & 0 deletions caffe
Submodule caffe added at 1a2be8
8 changes: 8 additions & 0 deletions data/.gitignore
@@ -0,0 +1,8 @@
selective_search*
imagenet_models*
fast_rcnn_models*
faster_rcnn_models*
rfcn_models*
VOCdevkit*
coco*
cache
69 changes: 69 additions & 0 deletions data/README.md
@@ -0,0 +1,69 @@
This directory holds (*after you download them*):
- Caffe models pre-trained on ImageNet
- Faster R-CNN models
- Symlinks to datasets

To download Caffe models (ZF, VGG16) pre-trained on ImageNet, run:

```
./data/scripts/fetch_imagenet_models.sh
```

This script will populate `data/imagenet_models`.
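The training scripts below look these weights up as `data/imagenet_models/${NET}.v2.caffemodel`, so a quick check after the download is (the file names shown are illustrative):

```
ls data/imagenet_models
# e.g. VGG16.v2.caffemodel  ZF.v2.caffemodel
```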

To download Faster R-CNN models trained on VOC 2007, run:

```
./data/scripts/fetch_faster_rcnn_models.sh
```

This script will populate `data/faster_rcnn_models`.
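Once the archive is unpacked, the simplest way to exercise the fetched weights is the demo script, assuming this fork keeps the upstream `tools/demo.py` (which, upstream, defaults to VGG16 on GPU 0 and runs over the images added under `data/demo/` in this commit):

```
./tools/demo.py
```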

To train and test on PASCAL VOC, you will need to create symlinks to the VOC devkits.
From the `data` directory (`cd data`):

```
# For VOC 2007
ln -s /your/path/to/VOC2007/VOCdevkit VOCdevkit2007
# For VOC 2012
ln -s /your/path/to/VOC2012/VOCdevkit VOCdevkit2012
```
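A quick sanity check that the links resolve to the standard devkit layout (directory names below follow the usual VOCdevkit structure; adjust if yours differs):

```
ls VOCdevkit2007/VOC2007
# expect Annotations/  ImageSets/  JPEGImages/  ...
```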

Install the MS COCO dataset at `/path/to/coco`, then symlink it into `data`:

```
ln -s /path/to/coco coco
```

For COCO with Fast R-CNN, place object proposals under `coco_proposals` (inside
the `data` directory). You can obtain proposals on COCO from Jan Hosang at
https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/object-recognition-and-scene-understanding/how-good-are-detection-proposals-really/.
For COCO, using MCG is recommended over selective search. MCG boxes can be downloaded
from http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/.
Use the tool `lib/datasets/tools/mcg_munge.py` to convert the downloaded MCG data
into the same file layout as those from Jan Hosang.
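A minimal sketch of that conversion step, assuming `mcg_munge.py` takes the directory of downloaded MCG `.mat` files as its only argument (check the script itself for the exact interface):

```
# Assumed invocation -- verify the arguments in lib/datasets/tools/mcg_munge.py
python lib/datasets/tools/mcg_munge.py /path/to/downloaded/MCG-boxes
```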

Annotations for the 5k image 'minival' subset of COCO val2014 that I like to use
can be found at http://www.cs.berkeley.edu/~rbg/faster-rcnn-data/instances_minival2014.json.zip.
Annotations for COCO val2014 minus minival (~35k images) can be found at
http://www.cs.berkeley.edu/~rbg/faster-rcnn-data/instances_valminusminival2014.json.zip.

Since you'll likely be experimenting with multiple installs of Fast/er R-CNN in
parallel, you'll probably want to keep all of this data in a shared place and
use symlinks. On my system I create the following symlinks inside `data`:

```
# data/cache holds various outputs created by the datasets package
ln -s /data/fast_rcnn_shared/cache
# move the imagenet_models to a shared location and symlink to them
ln -s /data/fast_rcnn_shared/imagenet_models
# move the selective search data to a shared location and symlink to it
# (only applicable to Fast R-CNN training)
ln -s /data/fast_rcnn_shared/selective_search_data
ln -s /data/VOC2007/VOCdevkit VOCdevkit2007
ln -s /data/VOC2012/VOCdevkit VOCdevkit2012
```
Binary file added data/demo/000456.jpg
Binary file added data/demo/000542.jpg
Binary file added data/demo/001150.jpg
Binary file added data/demo/001763.jpg
Binary file added data/demo/004545.jpg
3 changes: 3 additions & 0 deletions data/pylintrc
@@ -0,0 +1,3 @@
[TYPECHECK]

ignored-modules = numpy, numpy.random, cv2
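The `ignored-modules` setting keeps pylint from reporting false positives on members of C-extension modules it cannot introspect (numpy, cv2). A hedged example of pointing pylint at this rcfile (the lint target is illustrative):

```
pylint --rcfile=data/pylintrc lib/datasets
```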
34 changes: 34 additions & 0 deletions data/scripts/fetch_faster_rcnn_models.sh
@@ -0,0 +1,34 @@
#!/bin/bash

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../" && pwd )"
cd $DIR

FILE=faster_rcnn_models.tgz
URL=http://www.cs.berkeley.edu/~rbg/faster-rcnn-data/$FILE
CHECKSUM=ac116844f66aefe29587214272054668

if [ -f $FILE ]; then
  echo "File already exists. Checking md5..."
  os=`uname -s`
  if [ "$os" = "Linux" ]; then
    checksum=`md5sum $FILE | awk '{ print $1 }'`
  elif [ "$os" = "Darwin" ]; then
    checksum=`cat $FILE | md5`
  fi
  if [ "$checksum" = "$CHECKSUM" ]; then
    echo "Checksum is correct. No need to download."
    exit 0
  else
    echo "Checksum is incorrect. Need to download again."
  fi
fi

echo "Downloading Faster R-CNN demo models (695M)..."

wget $URL -O $FILE

echo "Unzipping..."

tar zxvf $FILE

echo "Done. Please run this command again to verify that checksum = $CHECKSUM."
34 changes: 34 additions & 0 deletions data/scripts/fetch_imagenet_models.sh
@@ -0,0 +1,34 @@
#!/bin/bash

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../" && pwd )"
cd $DIR

FILE=imagenet_models.tgz
URL=http://www.cs.berkeley.edu/~rbg/faster-rcnn-data/$FILE
CHECKSUM=ed34ca912d6782edfb673a8c3a0bda6d

if [ -f $FILE ]; then
  echo "File already exists. Checking md5..."
  os=`uname -s`
  if [ "$os" = "Linux" ]; then
    checksum=`md5sum $FILE | awk '{ print $1 }'`
  elif [ "$os" = "Darwin" ]; then
    checksum=`cat $FILE | md5`
  fi
  if [ "$checksum" = "$CHECKSUM" ]; then
    echo "Checksum is correct. No need to download."
    exit 0
  else
    echo "Checksum is incorrect. Need to download again."
  fi
fi

echo "Downloading pretrained ImageNet models (1G)..."

wget $URL -O $FILE

echo "Unzipping..."

tar zxvf $FILE

echo "Done. Please run this command again to verify that checksum = $CHECKSUM."
34 changes: 34 additions & 0 deletions data/scripts/fetch_selective_search_data.sh
@@ -0,0 +1,34 @@
#!/bin/bash

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../" && pwd )"
cd $DIR

FILE=selective_search_data.tgz
URL=http://www.cs.berkeley.edu/~rbg/fast-rcnn-data/$FILE
CHECKSUM=7078c1db87a7851b31966b96774cd9b9

if [ -f $FILE ]; then
  echo "File already exists. Checking md5..."
  os=`uname -s`
  if [ "$os" = "Linux" ]; then
    checksum=`md5sum $FILE | awk '{ print $1 }'`
  elif [ "$os" = "Darwin" ]; then
    checksum=`cat $FILE | md5`
  fi
  if [ "$checksum" = "$CHECKSUM" ]; then
    echo "Checksum is correct. No need to download."
    exit 0
  else
    echo "Checksum is incorrect. Need to download again."
  fi
fi

echo "Downloading precomputed selective search boxes (0.5G)..."

wget $URL -O $FILE

echo "Unzipping..."

tar zxvf $FILE

echo "Done. Please run this command again to verify that checksum = $CHECKSUM."
5 changes: 5 additions & 0 deletions experiments/README.md
@@ -0,0 +1,5 @@
Scripts are under `experiments/scripts`.

Each script saves a log file under `experiments/logs`.

Configuration override files used in the experiments are stored in `experiments/cfgs`.
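The override files in `experiments/cfgs` are not applied automatically; in the upstream Fast/er R-CNN scripts they are passed to the train/test tools with `--cfg`. A rough sketch of a direct invocation under that assumption (solver path and iteration count are illustrative):

```
./tools/train_net.py --gpu 0 \
  --solver models/pascal_voc/VGG16/faster_rcnn_end2end/solver.prototxt \
  --weights data/imagenet_models/VGG16.v2.caffemodel \
  --imdb voc_2007_trainval \
  --iters 70000 \
  --cfg experiments/cfgs/faster_rcnn_end2end.yml
```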
5 changes: 5 additions & 0 deletions experiments/cfgs/faster_rcnn_alt_opt.yml
@@ -0,0 +1,5 @@
EXP_DIR: faster_rcnn_alt_opt
TRAIN:
  BG_THRESH_LO: 0.0
TEST:
  HAS_RPN: True
11 changes: 11 additions & 0 deletions experiments/cfgs/faster_rcnn_end2end.yml
@@ -0,0 +1,11 @@
EXP_DIR: faster_rcnn_end2end
TRAIN:
  HAS_RPN: True
  IMS_PER_BATCH: 1
  BBOX_NORMALIZE_TARGETS_PRECOMPUTED: True
  RPN_POSITIVE_OVERLAP: 0.7
  RPN_BATCHSIZE: 256
  PROPOSAL_METHOD: gt
  BG_THRESH_LO: 0.0
TEST:
  HAS_RPN: True
12 changes: 12 additions & 0 deletions experiments/cfgs/rfcn_alt_opt_5step_ohem.yml
@@ -0,0 +1,12 @@
EXP_DIR: rfcn_alt_opt_5step_ohem
TRAIN:
  BG_THRESH_LO: 0.0
  RPN_PRE_NMS_TOP_N: 6000
  RPN_POST_NMS_TOP_N: 300
  AGNOSTIC: True
  BATCH_SIZE: -1
  RPN_NORMALIZE_TARGETS: True
TEST:
  PROPOSAL_METHOD: 'rpn'
  HAS_RPN: False
  AGNOSTIC: True
17 changes: 17 additions & 0 deletions experiments/cfgs/rfcn_end2end.yml
@@ -0,0 +1,17 @@
EXP_DIR: rfcn_end2end
TRAIN:
  HAS_RPN: True
  IMS_PER_BATCH: 1
  BBOX_NORMALIZE_TARGETS_PRECOMPUTED: True
  RPN_POSITIVE_OVERLAP: 0.7
  RPN_BATCHSIZE: 256
  PROPOSAL_METHOD: gt
  BG_THRESH_LO: 0.1
  BATCH_SIZE: 128
  AGNOSTIC: True
  SNAPSHOT_ITERS: 10000
  RPN_PRE_NMS_TOP_N: 6000
  RPN_POST_NMS_TOP_N: 300
TEST:
  HAS_RPN: True
  AGNOSTIC: True
20 changes: 20 additions & 0 deletions experiments/cfgs/rfcn_end2end_ohem.yml
@@ -0,0 +1,20 @@
EXP_DIR: rfcn_end2end_ohem
TRAIN:
  HAS_RPN: True
  IMS_PER_BATCH: 1
  BBOX_NORMALIZE_TARGETS_PRECOMPUTED: True
  RPN_POSITIVE_OVERLAP: 0.7
  RPN_NORMALIZE_TARGETS: True
  RPN_BATCHSIZE: 256
  PROPOSAL_METHOD: gt
  BG_THRESH_LO: 0.0
  BATCH_SIZE: -1
  AGNOSTIC: True
  SNAPSHOT_ITERS: 10000
  RPN_PRE_NMS_TOP_N: 6000
  RPN_POST_NMS_TOP_N: 300
  ASPECT_GROUPING: False
TEST:
  HAS_RPN: True
  AGNOSTIC: True
  BBOX_REG: True
1 change: 1 addition & 0 deletions experiments/logs/.gitignore
@@ -0,0 +1 @@
*.txt*
63 changes: 63 additions & 0 deletions experiments/scripts/fast_rcnn.sh
@@ -0,0 +1,63 @@
#!/bin/bash
# Usage:
# ./experiments/scripts/fast_rcnn.sh GPU NET DATASET [options args to {train,test}_net.py]
# DATASET is either pascal_voc or coco.
#
# Example:
# ./experiments/scripts/fast_rcnn.sh 0 VGG_CNN_M_1024 pascal_voc \
# --set EXP_DIR foobar RNG_SEED 42 TRAIN.SCALES "[400, 500, 600, 700]"

set -x
set -e

export PYTHONUNBUFFERED="True"

GPU_ID=$1
NET=$2
NET_lc=${NET,,}
DATASET=$3

array=( $@ )
len=${#array[@]}
EXTRA_ARGS=${array[@]:3:$len}
EXTRA_ARGS_SLUG=${EXTRA_ARGS// /_}

case $DATASET in
  pascal_voc)
    TRAIN_IMDB="voc_2007_trainval"
    TEST_IMDB="voc_2007_test"
    PT_DIR="pascal_voc"
    ITERS=40000
    ;;
  coco)
    TRAIN_IMDB="coco_2014_train"
    TEST_IMDB="coco_2014_minival"
    PT_DIR="coco"
    ITERS=280000
    ;;
  *)
    echo "No dataset given"
    exit
    ;;
esac

LOG="experiments/logs/fast_rcnn_${NET}_${EXTRA_ARGS_SLUG}.txt.`date +'%Y-%m-%d_%H-%M-%S'`"
exec &> >(tee -a "$LOG")
echo Logging output to "$LOG"

time ./tools/train_net.py --gpu ${GPU_ID} \
  --solver models/${PT_DIR}/${NET}/fast_rcnn/solver.prototxt \
  --weights data/imagenet_models/${NET}.v2.caffemodel \
  --imdb ${TRAIN_IMDB} \
  --iters ${ITERS} \
  ${EXTRA_ARGS}

set +x
NET_FINAL=`grep -B 1 "done solving" ${LOG} | grep "Wrote snapshot" | awk '{print $4}'`
set -x

time ./tools/test_net.py --gpu ${GPU_ID} \
  --def models/${PT_DIR}/${NET}/fast_rcnn/test.prototxt \
  --net ${NET_FINAL} \
  --imdb ${TEST_IMDB} \
  ${EXTRA_ARGS}
