PyTorch implementation of Faster R-CNN for real-time object detection (paper). Our detailed project report is available here.
- Abhinav Garg (garg19@illinois.edu)
- Refik Mert Cam (rcam2@illinois.edu)
- Sanyukta Deshpande (spd4@illinois.edu)
Here is an example of creating the environment from scratch with Anaconda:

```bash
# create conda env
conda create --name frcnn python=3.7
conda activate frcnn

# install pytorch
conda install pytorch torchvision cudatoolkit=10.2 -c pytorch

# install other dependencies
pip install visdom scikit-image tqdm fire ipdb pprint matplotlib torchnet
```
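To sanity-check the install, a quick optional check in Python confirms the versions and that PyTorch sees the GPU:

```python
# quick sanity check for the PyTorch + CUDA install
import torch
import torchvision

print(torch.__version__, torchvision.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```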
- Download the training, validation, and test data and the VOCdevkit:

  ```bash
  wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
  wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar
  wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCdevkit_08-Jun-2007.tar
  ```
- Extract all of these tars into one directory named `VOCdevkit`:

  ```bash
  tar xvf VOCtrainval_06-Nov-2007.tar
  tar xvf VOCtest_06-Nov-2007.tar
  tar xvf VOCdevkit_08-Jun-2007.tar
  ```
- It should have this basic structure:

  ```
  $VOCdevkit/           # development kit
  $VOCdevkit/VOCcode/   # VOC utility code
  $VOCdevkit/VOC2007    # image sets, annotations, etc.
  # ... and several other directories ...
  ```
- Modify the `voc_data_dir` and `voc_test_dir` cfg items in `config/config.py`.
Update the parameters in `config/config.py` as per the experiment. Update `save_path` to the path where model files are to be stored. Note: our implementation currently only supports `vgg16` and `resnet101` for the `pretrained_model` cfg item.
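For orientation, the relevant edits might look roughly like the sketch below. Only the item names (`voc_data_dir`, `voc_test_dir`, `save_path`, `pretrained_model`, `train`) come from this README; the class name, layout, and paths are placeholders, not the actual contents of `config/config.py`:

```python
# hypothetical sketch of the cfg items referenced above; adjust to the
# actual layout of config/config.py in this repo
class Config:
    # dataset locations (the VOC2007 folder inside the extracted VOCdevkit)
    voc_data_dir = '/path/to/VOCdevkit/VOC2007'
    voc_test_dir = '/path/to/VOCdevkit/VOC2007'

    # directory where trained model files are saved to / loaded from
    save_path = '/path/to/checkpoints/'

    # backbone: only 'vgg16' and 'resnet101' are supported
    pretrained_model = 'vgg16'

    # True for training with approx_train.py, False for inference with test.py
    train = True
```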
To train the model, run:

```bash
python approx_train.py
```
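If training visualization uses Visdom (it is listed as a dependency above), the Visdom server needs to be running in a separate terminal; it can be started with `python -m visdom.server`.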
To run inference on selected test images, set `train=False` and update `save_path` in `config/config.py` to the path where the trained model is located, then run:
```bash
python test.py
```
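To inspect detections yourself with matplotlib (installed above), a minimal sketch such as the following can be used. It assumes you already have an image and arrays of boxes, labels, and scores, with boxes in `(ymin, xmin, ymax, xmax)` order; the box format and function name are assumptions for illustration, not the repo's actual API:

```python
# hypothetical visualization helper -- assumes boxes are (ymin, xmin, ymax, xmax);
# adapt to whatever format test.py / the model actually returns
import matplotlib.pyplot as plt
import matplotlib.patches as patches

def show_detections(image, bboxes, labels, scores, score_thresh=0.7):
    """Draw bounding boxes with labels and scores on top of an HxWx3 image."""
    fig, ax = plt.subplots(1)
    ax.imshow(image)
    for (ymin, xmin, ymax, xmax), label, score in zip(bboxes, labels, scores):
        if score < score_thresh:
            continue  # skip low-confidence detections
        rect = patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                                 fill=False, edgecolor='red', linewidth=2)
        ax.add_patch(rect)
        ax.text(xmin, ymin, f"{label}: {score:.2f}",
                color='white', backgroundcolor='red', fontsize=8)
    plt.axis('off')
    plt.show()
```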