update the code to python3 and numpy 1.12.1 #18

Open · wants to merge 40 commits into base: master

Commits (40):
e63fbc5  Update train.sh (NB-Dragon, Dec 10, 2017)
79bba6b  Delete fetch_imagenet_models.sh (NB-Dragon, Dec 10, 2017)
b0cdb6f  Delete fetch_fast_rcnn_ohem_models.sh (NB-Dragon, Dec 10, 2017)
0ec8819  Delete fetch_selective_search_data.sh (NB-Dragon, Dec 10, 2017)
2da1c3b  Add files via upload (NB-Dragon, Dec 10, 2017)
7d54830  Delete 000456.jpg (NB-Dragon, Dec 10, 2017)
db42125  Delete 000542.jpg (NB-Dragon, Dec 10, 2017)
36caef2  Delete 001150.jpg (NB-Dragon, Dec 10, 2017)
0d0d062  Delete 001763.jpg (NB-Dragon, Dec 10, 2017)
653bb81  Delete 004545.jpg (NB-Dragon, Dec 10, 2017)
c6d4f54  Add files via upload (NB-Dragon, Dec 10, 2017)
bdef9c2  Update README.md (NB-Dragon, Dec 10, 2017)
b22002c  Update fetch_selective_search_data.sh (NB-Dragon, Dec 10, 2017)
f94947e  Update setup.py (NB-Dragon, Dec 10, 2017)
bd4935e  Update mcg_munge.py (NB-Dragon, Dec 10, 2017)
38ab32f  Update voc_eval.m (NB-Dragon, Dec 10, 2017)
9aaf2bb  Update coco.py (NB-Dragon, Dec 10, 2017)
47e8536  Update factory.py (NB-Dragon, Dec 10, 2017)
d03d571  Update imdb.py (NB-Dragon, Dec 10, 2017)
7669a13  Update pascal_voc.py (NB-Dragon, Dec 10, 2017)
4d11b44  Adapted in python3 (NB-Dragon, Dec 10, 2017)
8a8e60a  Adapted in python3 (NB-Dragon, Dec 10, 2017)
64bd262  Adapted in python3 (NB-Dragon, Dec 10, 2017)
c7255b1  Adapted in python3 (NB-Dragon, Dec 10, 2017)
6f870db  Adapted in python3 (NB-Dragon, Dec 10, 2017)
f312575  Adapted in python3 (NB-Dragon, Dec 10, 2017)
eb919b1  Adapted in python3 (NB-Dragon, Dec 10, 2017)
1149f8b  Adapt to python3 and the version of numpy1.12.1 (NB-Dragon, Dec 10, 2017)
6944fd3  change the URL to download selective search data. (NB-Dragon, Dec 10, 2017)
878f64e  Update LICENSE (NB-Dragon, Dec 10, 2017)
3770764  Update LICENSE (NB-Dragon, Dec 10, 2017)
0b26891  Merge branch 'master' of https://github.com/NB-Dragon/adversarial-frcnn (NB-Dragon, Dec 10, 2017)
1a00a09  Change the style to show images from the list of dataset. (NB-Dragon, Dec 10, 2017)
c5bec6b  Update README.md (NB-Dragon, Dec 10, 2017)
aa36596  Update README.md (NB-Dragon, Dec 10, 2017)
777814b  Change the URL to download training files and logs (NB-Dragon, Dec 10, 2017)
258e765  Update fetch_selective_search_data.sh (NB-Dragon, Dec 11, 2017)
f70439b  Supplementary details and remove the old things. (NB-Dragon, Dec 11, 2017)
0151dc9  Fix the BUG in Python3 (NB-Dragon, Jan 29, 2018)
401e5fd  Fix the BUG in Python3 (NB-Dragon, Jan 29, 2018)
29 changes: 2 additions & 27 deletions LICENSE
@@ -32,7 +32,7 @@ notice and license of such Third Party Code are set out below. This
Third Party Code is licensed to you under their original license terms
set forth below.

1. Fast R-CNN (https://github.com/rbgirshick/fast-rcnn)
1.Fast R-CNN (https://github.com/rbgirshick/fast-rcnn)

Copyright (c) Microsoft Corporation

@@ -58,32 +58,7 @@ OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.


2. Faster R-CNN (https://github.com/rbgirshick/py-faster-rcnn)

The MIT License (MIT)

Copyright (c) 2015 Microsoft Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.

3. Caffe, (https://github.com/BVLC/caffe/)
2.Caffe, (https://github.com/BVLC/caffe/)

COPYRIGHT

28 changes: 10 additions & 18 deletions README.md
@@ -37,8 +37,12 @@ This implementation is built on a *fork* of the OHEM code ([here](https://github


### Installation

Please follow the exact installation steps of the Faster R-CNN Python code ([here](https://github.com/rbgirshick/py-faster-rcnn)) and download the VOC data the same way.
- Step 1:
Download the selective search data by running the script 'data/scripts/fetch_selective_search_data.sh'.
- Step 2:
Download the caffemodel by running the script 'data/scripts/fetch_imagenet_models.sh'.
- Step 3:
Create the link to VOCdevkit. You can find the details in ['data/README.md'](https://github.com/NB-Dragon/A-Fast-RCNN/blob/master/data/README.md).
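
If you prefer to drive the first two steps from one place, here is a minimal Python wrapper. It is a sketch, not part of the repository; it assumes the fetch scripts live under `data/scripts/` (as in this PR) and that `bash` is on your PATH.

```python
import subprocess

# Steps 1 and 2: fetch the selective search proposals and the ImageNet models.
for script in ('data/scripts/fetch_selective_search_data.sh',
               'data/scripts/fetch_imagenet_models.sh'):
    subprocess.run(['bash', script], check=True)  # check=True aborts on a failed download

# Step 3 is a symlink to your VOCdevkit checkout; see data/README.md, e.g.:
#   ln -s /your/path/to/VOC2007/VOCdevkit data/VOCdevkit2007
```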

### Usage

@@ -50,21 +54,9 @@ To run the code, one can simply do,
It includes three stages of training:

```Shell
./experiments/scripts/fast_rcnn_std.sh [GPU_ID] VGG16 pascal_voc
```
which trains a standard Fast R-CNN for 10K iterations; you can download my [model](https://www.dropbox.com/s/ccs7lw3gydfzgvv/fast_rcnn_std_iter_10000.caffemodel?dl=0) and [logs](https://www.dropbox.com/s/hwbag60l1gmtxbb/fast_rcnn_std.txt.2017-04-08_16-53-59?dl=0) for this step.

```Shell
./experiments/scripts/fast_rcnn_adv_pretrain.sh [GPU_ID] VGG16 pascal_voc
```
which is a pre-training stage for the adversarial network; you can download my [model](https://www.dropbox.com/s/hvqpxn3bigarhdn/fast_rcnn_adv_pretrain_iter_25000.caffemodel?dl=0) and [logs](https://www.dropbox.com/s/i79j5hd0ee4ybke/fast_rcnn_adv_pretrain.txt.2017-04-08_19-39-49?dl=0) for this step.

```Shell
./experiments/scripts/fast_rcnn_std.sh [GPU_ID] VGG16 pascal_voc
./experiments/scripts/fast_rcnn_adv_pretrain.sh [GPU_ID] VGG16 pascal_voc
./copy_model.h
./experiments/scripts/fast_rcnn_adv.sh [GPU_ID] VGG16 pascal_voc
```
where `./copy_model.h` copies the weights of the two models above to initialize the joint model.
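
For intuition about what that copy step amounts to, here is a pycaffe sketch. It is not the repo's `copy_model.h`; the prototxt path is hypothetical, while the caffemodel names come from the Dropbox links above.

```python
import caffe  # pycaffe; requires the repo's Caffe build

# copy_from() matches layers by name, so loading both caffemodels populates
# the detector layers and the adversarial layers of one joint network.
net = caffe.Net('models/joint/train.prototxt', caffe.TEST)  # hypothetical path
net.copy_from('fast_rcnn_std_iter_10000.caffemodel')            # stage-1 detector
net.copy_from('fast_rcnn_adv_pretrain_iter_25000.caffemodel')   # stage-2 adversary
net.save('joint_init.caffemodel')                               # stage-3 initialization
```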

```Shell
./experiments/scripts/fast_rcnn_adv.sh [GPU_ID] VGG16 pascal_voc
```
which jointly trains the detector and the adversarial network; you can download my [model](https://www.dropbox.com/s/5wvxh8g5n3ewvp4/fast_rcnn_adv_iter_40000.caffemodel?dl=0) and [logs](https://www.dropbox.com/s/awrdrwyfthdgba5/fast_rcnn_adv.txt.2017-04-09_22-09-57?dl=0) for this step.
All output training files and logs can be downloaded from my [Baidu cloud disk](https://pan.baidu.com/s/1nvac2Jv).
42 changes: 15 additions & 27 deletions data/README.md
@@ -1,25 +1,33 @@
This directory holds (*after you download them*):
- Fast R-CNN models trained with OHEM on VOC 2007 trainval
- Pre-computed object proposals
- Caffe models pre-trained on ImageNet
- Fast R-CNN models
- Symlinks to datasets

To download Fast R-CNN models (VGG_CNN_M_1024, VGG16) trained with OHEM on VOC 2007 trainval, run:
To download precomputed Selective Search proposals for PASCAL VOC 2007 and 2012, run:

```
./data/scripts/fetch_fast_rcnn_ohem_models.sh
./data/scripts/fetch_selective_search_data.sh
```

This script will populate `data/fast_rcnn_ohem_models` with VGG16 and VGG_CNN_M_1024 models (Fast R-CNN detectors trained with OHEM).
This script will populate `data/selective_search_data`.


To download Caffe models (ZF, VGG16) pre-trained on ImageNet, run:
To download Caffe models (CaffeNet, VGG_CNN_M_1024, VGG16) pre-trained on ImageNet, run:

```
./data/scripts/fetch_imagenet_models.sh
```

This script will populate `data/imagenet_models`.

To download Fast R-CNN models trained on VOC 2007, run:

```
./data/scripts/fetch_fast_rcnn_models.sh
```

This script will populate `data/fast_rcnn_models`.

In order to train and test with PASCAL VOC, you will need to establish symlinks.
From the `data` directory (`cd data`):

@@ -31,29 +39,10 @@ ln -s /your/path/to/VOC2007/VOCdevkit VOCdevkit2007
ln -s /your/path/to/VOC2012/VOCdevkit VOCdevkit2012
```

Install the MS COCO dataset at /path/to/coco

```
ln -s /path/to/coco coco
```

For COCO with Fast R-CNN, place object proposals under `coco_proposals` (inside
the `data` directory). You can obtain proposals on COCO from Jan Hosang at
https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/object-recognition-and-scene-understanding/how-good-are-detection-proposals-really/.
For COCO, using MCG is recommended over selective search. MCG boxes can be downloaded
from http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/mcg/.
Use the tool `lib/datasets/tools/mcg_munge.py` to convert the downloaded MCG data
into the same file layout as those from Jan Hosang.

Since you'll likely be experimenting with multiple installs of Fast/er R-CNN in
Since you'll likely be experimenting with multiple installs of Fast R-CNN in
parallel, you'll probably want to keep all of this data in a shared place and
use symlinks. On my system I create the following symlinks inside `data`:

Annotations for the 5k image 'minival' subset of COCO val2014 that I like to use
can be found at http://www.cs.berkeley.edu/~rbg/faster-rcnn-data/instances_minival2014.json.zip.
Annotations for COCO val2014 (set) minus minival (~35k images) can be found at
http://www.cs.berkeley.edu/~rbg/faster-rcnn-data/instances_valminusminival2014.json.zip.

```
# data/cache holds various outputs created by the datasets package
ln -s /data/fast_rcnn_shared/cache
@@ -62,7 +51,6 @@ ln -s /data/fast_rcnn_shared/cache
ln -s /data/fast_rcnn_shared/imagenet_models

# move the selective search data to a shared location and symlink to them
# (only applicable to Fast R-CNN training)
ln -s /data/fast_rcnn_shared/selective_search_data

ln -s /data/VOC2007/VOCdevkit VOCdevkit2007
Binary file added data/demo/000004.jpg
Binary file added data/demo/000004_boxes.mat
Binary file removed data/demo/000456.jpg
Binary file removed data/demo/000542.jpg
Binary file removed data/demo/001150.jpg
Binary file added data/demo/001551.jpg
Binary file added data/demo/001551_boxes.mat
Binary file removed data/demo/001763.jpg
Binary file removed data/demo/004545.jpg
8 changes: 4 additions & 4 deletions data/scripts/fetch_fast_rcnn_ohem_models.sh → data/scripts/fetch_fast_rcnn_models.sh
100755 → 100644
@@ -3,9 +3,9 @@
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../" && pwd )"
cd $DIR

FILE=fast_rcnn_ohem_models.tgz
URL=http://graphics.cs.cmu.edu/projects/ohem/data/$FILE
CHECKSUM=cbfd5b7ed5ec4d5cb838701cbf1f3ccb
FILE=fast_rcnn_models.tgz
URL=https://dl.dropboxusercontent.com/s/e3ugqq3lca4z8q6/fast_rcnn_models.tgz
CHECKSUM=5f7dde9f5376e18c8e065338cc5df3f7

if [ -f $FILE ]; then
echo "File already exists. Checking md5..."
@@ -23,7 +23,7 @@ if [ -f $FILE ]; then
fi
fi

echo "Downloading Fast R-CNN OHEM models (VGG16 and VGG_CNN_M_1024)(1.5G)..."
echo "Downloading Fast R-CNN demo models (0.96G)..."

wget $URL -O $FILE

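All of these fetch scripts share the same guard: if the tarball is already present and its MD5 matches `CHECKSUM`, the download is skipped. Below is a Python equivalent of that check, reusing the `FILE`/`URL`/`CHECKSUM` values from the hunk above; the Dropbox mirror is the PR author's and may go stale.

```python
import hashlib
import os

FILE = 'fast_rcnn_models.tgz'
URL = 'https://dl.dropboxusercontent.com/s/e3ugqq3lca4z8q6/fast_rcnn_models.tgz'
CHECKSUM = '5f7dde9f5376e18c8e065338cc5df3f7'

def md5_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so a ~1 GB tarball never sits in memory."""
    digest = hashlib.md5()
    with open(path, 'rb') as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b''):
            digest.update(chunk)
    return digest.hexdigest()

if os.path.isfile(FILE) and md5_of(FILE) == CHECKSUM:
    print('File already exists and checksum is correct; skipping download.')
else:
    print('Downloading {}...'.format(URL))
    # e.g. urllib.request.urlretrieve(URL, FILE), then re-check md5_of(FILE)
```
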
4 changes: 2 additions & 2 deletions data/scripts/fetch_imagenet_models.sh
100755 → 100644
@@ -4,8 +4,8 @@ DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../" && pwd )"
cd $DIR

FILE=imagenet_models.tgz
URL=http://www.cs.berkeley.edu/~rbg/faster-rcnn-data/$FILE
CHECKSUM=ed34ca912d6782edfb673a8c3a0bda6d
URL=https://dl.dropboxusercontent.com/s/riazjuizq0w7dqm/imagenet_models.tgz
CHECKSUM=8b1d4b9da0593fc70ef403284f810adc

if [ -f $FILE ]; then
echo "File already exists. Checking md5..."
8 changes: 4 additions & 4 deletions data/scripts/fetch_selective_search_data.sh
100755 → 100644
@@ -3,9 +3,9 @@
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )/../" && pwd )"
cd $DIR

FILE=selective_search_data.tgz
URL=http://www.cs.berkeley.edu/~rbg/fast-rcnn-data/$FILE
CHECKSUM=7078c1db87a7851b31966b96774cd9b9
FILE=r-cnn-release1-selective-search.tgz
URL=https://dl.dropboxusercontent.com/s/uf2i1y2oee7c6n1/r-cnn-release1-selective-search.tgz
CHECKSUM=6cf6df219c1e514f64482f11d00bd0b4

if [ -f $FILE ]; then
echo "File already exists. Checking md5..."
@@ -23,7 +23,7 @@ if [ -f $FILE ]; then
fi
fi

echo "Downloading precomputed selective search boxes (0.5G)..."
echo "Downloading precomputed selective search boxes (1.8G)..."

wget $URL -O $FILE

1 change: 1 addition & 0 deletions lib/datasets/VOCdevkit-matlab-wrapper/voc_eval.m
@@ -2,6 +2,7 @@

VOCopts = get_voc_opts(path);
VOCopts.testset = test_set;
VOCopts.detrespath=[VOCopts.resdir 'Main/%s_det_' VOCopts.testset '_%s.txt'];

for i = 1:length(VOCopts.classes)
cls = VOCopts.classes{i};
56 changes: 28 additions & 28 deletions lib/datasets/coco.py
@@ -13,7 +13,7 @@
import numpy as np
import scipy.sparse
import scipy.io as sio
import cPickle
import pickle
import json
import uuid
# COCO API
@@ -33,7 +33,7 @@ def _filter_crowd_proposals(roidb, crowd_thresh):
non_gt_inds = np.where(entry['gt_classes'] == 0)[0]
if len(crowd_inds) == 0 or len(non_gt_inds) == 0:
continue
iscrowd = [int(True) for _ in xrange(len(crowd_inds))]
iscrowd = [int(True) for _ in range(len(crowd_inds))]
crowd_boxes = ds_utils.xyxy_to_xywh(entry['boxes'][crowd_inds, :])
non_gt_boxes = ds_utils.xyxy_to_xywh(entry['boxes'][non_gt_inds, :])
ious = COCOmask.iou(non_gt_boxes, crowd_boxes, iscrowd)
@@ -59,9 +59,9 @@ def __init__(self, image_set, year):
self._COCO = COCO(self._get_ann_file())
cats = self._COCO.loadCats(self._COCO.getCatIds())
self._classes = tuple(['__background__'] + [c['name'] for c in cats])
self._class_to_ind = dict(zip(self.classes, xrange(self.num_classes)))
self._class_to_coco_cat_id = dict(zip([c['name'] for c in cats],
self._COCO.getCatIds()))
self._class_to_ind = dict(list(zip(self.classes, list(range(self.num_classes)))))
self._class_to_coco_cat_id = dict(list(zip([c['name'] for c in cats],
self._COCO.getCatIds())))
self._image_index = self._load_image_set_index()
# Default to roidb handler
self.set_proposal_method('selective_search')
@@ -76,7 +76,7 @@ def __init__(self, image_set, year):
}
coco_name = image_set + year # e.g., "val2014"
self._data_name = (self._view_map[coco_name]
if self._view_map.has_key(coco_name)
if coco_name in self._view_map
else coco_name)
# Dataset splits that have ground-truth annotations (test splits
# do not have gt annotations)
@@ -140,9 +140,9 @@ def _roidb_from_proposals(self, method):

if osp.exists(cache_file):
with open(cache_file, 'rb') as fid:
roidb = cPickle.load(fid)
print '{:s} {:s} roidb loaded from {:s}'.format(self.name, method,
cache_file)
roidb = pickle.load(fid)
print('{:s} {:s} roidb loaded from {:s}'.format(self.name, method,
cache_file))
return roidb

if self._image_set in self._gt_splits:
@@ -154,8 +154,8 @@ def _roidb_from_proposals(self, method):
else:
roidb = self._load_proposals(method, None)
with open(cache_file, 'wb') as fid:
cPickle.dump(roidb, fid, cPickle.HIGHEST_PROTOCOL)
print 'wrote {:s} roidb to {:s}'.format(method, cache_file)
pickle.dump(roidb, fid, pickle.HIGHEST_PROTOCOL)
print('wrote {:s} roidb to {:s}'.format(method, cache_file))
return roidb

def _load_proposals(self, method, gt_roidb):
@@ -177,10 +177,10 @@ def _load_proposals(self, method, gt_roidb):
'edge_boxes_70']
assert method in valid_methods

print 'Loading {} boxes'.format(method)
print('Loading {} boxes'.format(method))
for i, index in enumerate(self._image_index):
if i % 1000 == 0:
print '{:d} / {:d}'.format(i + 1, len(self._image_index))
print('{:d} / {:d}'.format(i + 1, len(self._image_index)))

box_file = osp.join(
cfg.DATA_DIR, 'coco_proposals', method, 'mat',
@@ -213,16 +213,16 @@ def gt_roidb(self):
cache_file = osp.join(self.cache_path, self.name + '_gt_roidb.pkl')
if osp.exists(cache_file):
with open(cache_file, 'rb') as fid:
roidb = cPickle.load(fid)
print '{} gt roidb loaded from {}'.format(self.name, cache_file)
roidb = pickle.load(fid)
print('{} gt roidb loaded from {}'.format(self.name, cache_file))
return roidb

gt_roidb = [self._load_coco_annotation(index)
for index in self._image_index]

with open(cache_file, 'wb') as fid:
cPickle.dump(gt_roidb, fid, cPickle.HIGHEST_PROTOCOL)
print 'wrote gt roidb to {}'.format(cache_file)
pickle.dump(gt_roidb, fid, pickle.HIGHEST_PROTOCOL)
print('wrote gt roidb to {}'.format(cache_file))
return gt_roidb

def _load_coco_annotation(self, index):
@@ -306,18 +306,18 @@ def _get_thr_ind(coco_eval, thr):
precision = \
coco_eval.eval['precision'][ind_lo:(ind_hi + 1), :, :, 0, 2]
ap_default = np.mean(precision[precision > -1])
print ('~~~~ Mean and per-category AP @ IoU=[{:.2f},{:.2f}] '
'~~~~').format(IoU_lo_thresh, IoU_hi_thresh)
print '{:.1f}'.format(100 * ap_default)
print(('~~~~ Mean and per-category AP @ IoU=[{:.2f},{:.2f}] '
'~~~~').format(IoU_lo_thresh, IoU_hi_thresh))
print('{:.1f}'.format(100 * ap_default))
for cls_ind, cls in enumerate(self.classes):
if cls == '__background__':
continue
# minus 1 because of __background__
precision = coco_eval.eval['precision'][ind_lo:(ind_hi + 1), :, cls_ind - 1, 0, 2]
ap = np.mean(precision[precision > -1])
print '{:.1f}'.format(100 * ap)
print('{:.1f}'.format(100 * ap))

print '~~~~ Summary metrics ~~~~'
print('~~~~ Summary metrics ~~~~')
coco_eval.summarize()

def _do_detection_eval(self, res_file, output_dir):
@@ -330,8 +330,8 @@ def _do_detection_eval(self, res_file, output_dir):
self._print_detection_eval_metrics(coco_eval)
eval_file = osp.join(output_dir, 'detection_results.pkl')
with open(eval_file, 'wb') as fid:
cPickle.dump(coco_eval, fid, cPickle.HIGHEST_PROTOCOL)
print 'Wrote COCO eval results to: {}'.format(eval_file)
pickle.dump(coco_eval, fid, pickle.HIGHEST_PROTOCOL)
print('Wrote COCO eval results to: {}'.format(eval_file))

def _coco_results_one_category(self, boxes, cat_id):
results = []
@@ -348,7 +348,7 @@ def _coco_results_one_category(self, boxes, cat_id):
[{'image_id' : index,
'category_id' : cat_id,
'bbox' : [xs[k], ys[k], ws[k], hs[k]],
'score' : scores[k]} for k in xrange(dets.shape[0])])
'score' : scores[k]} for k in range(dets.shape[0])])
return results

def _write_coco_results_file(self, all_boxes, res_file):
@@ -360,12 +360,12 @@ def _write_coco_results_file(self, all_boxes, res_file):
for cls_ind, cls in enumerate(self.classes):
if cls == '__background__':
continue
print 'Collecting {} results ({:d}/{:d})'.format(cls, cls_ind,
self.num_classes - 1)
print('Collecting {} results ({:d}/{:d})'.format(cls, cls_ind,
self.num_classes - 1))
coco_cat_id = self._class_to_coco_cat_id[cls]
results.extend(self._coco_results_one_category(all_boxes[cls_ind],
coco_cat_id))
print 'Writing results json to {}'.format(res_file)
print('Writing results json to {}'.format(res_file))
with open(res_file, 'w') as fid:
json.dump(results, fid)

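
Taken together, the coco.py edits above are instances of a small set of mechanical Python 2 to 3 conversions. The sketch below demonstrates each pattern in isolation; the file name and toy data are hypothetical, not from the PR.

```python
import pickle  # Python 3 replacement for the Python 2 cPickle module

cache_file = 'example_gt_roidb.pkl'        # hypothetical cache path
roidb = [{'boxes': [], 'gt_classes': []}]  # toy stand-in for a real roidb

# Pattern 1: cPickle.dump -> pickle.dump; print statements -> print() calls.
with open(cache_file, 'wb') as fid:
    pickle.dump(roidb, fid, pickle.HIGHEST_PROTOCOL)
print('wrote gt roidb to {}'.format(cache_file))

# Pattern 2: xrange() is gone; range() is already lazy in Python 3.
iscrowd = [int(True) for _ in range(3)]

# Pattern 3: dict.has_key(key) was removed; use the `in` operator.
view_map = {'minival2014': 'val2014'}
coco_name = 'minival2014'
data_name = view_map[coco_name] if coco_name in view_map else coco_name

# Pattern 4: zip() returns an iterator in Python 3, not a list.
classes = ('__background__', 'person')
class_to_ind = dict(list(zip(classes, list(range(len(classes))))))
print(data_name, iscrowd, class_to_ind)
```

Strictly speaking, `dict(zip(...))` also accepts the bare iterator, so the extra `list()` wrappers the PR adds are harmless but not required; automated converters such as 2to3 insert them conservatively.
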
4 changes: 2 additions & 2 deletions lib/datasets/factory.py
@@ -33,10 +33,10 @@

def get_imdb(name):
"""Get an imdb (image database) by name."""
if not __sets.has_key(name):
if name not in __sets:
raise KeyError('Unknown dataset: {}'.format(name))
return __sets[name]()

def list_imdbs():
"""List all registered imdbs."""
return __sets.keys()
return list(__sets.keys())
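
The `list(__sets.keys())` change matters because `dict.keys()` returns a view object in Python 3, not a list. A self-contained sketch of the registry idiom, with a toy dataset entry rather than the repo's full registry:

```python
# Toy registry mapping dataset names to constructors, mirroring factory.py.
__sets = {'voc_2007_trainval': lambda: 'imdb for voc_2007_trainval'}

def get_imdb(name):
    """Get an imdb (image database) by name."""
    if name not in __sets:            # Python 3 spelling of __sets.has_key(name)
        raise KeyError('Unknown dataset: {}'.format(name))
    return __sets[name]()

def list_imdbs():
    """List all registered imdbs."""
    return list(__sets.keys())        # keys() is a view; callers expect a list

print(list_imdbs())                   # ['voc_2007_trainval']
```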