Merge pull request #74 from ucbdrive/refactor
[Refactor] Fixing refactor
xinw1012 committed Oct 24, 2020
2 parents c427db0 + 11a7821 commit 5c396e1
Showing 21 changed files with 1,168 additions and 330 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -21,6 +21,7 @@ checkpoints/

# datasets
datasets/coco
datasets/cocosplit
datasets/VOC2007
datasets/VOC2012
datasets/vocsplit
114 changes: 0 additions & 114 deletions .vscode/.ropeproject/config.py

This file was deleted.

50 changes: 0 additions & 50 deletions .vscode/settings.json

This file was deleted.

11 changes: 7 additions & 4 deletions README.md
@@ -53,6 +53,10 @@ python3 -m venv fsdet
source fsdet/bin/activate
```
You can also use `conda` to create a new environment.
```angular2html
conda create --name fsdet
conda activate fsdet
```
* Install Pytorch 1.6 with CUDA 10.2
```angular2html
pip install torch torchvision
@@ -108,7 +112,7 @@ We provide a set of benchmark results and pre-trained models available for downl
for example, `COCO-detection/faster_rcnn_R_101_FPN_ft_all_1shot.yaml`.
2. We provide `demo.py` that is able to run builtin standard models. Run it with:
```
python demo/demo.py --config-file configs/COCO-detection/faster_rcnn_R_101_FPN_ft_all_1shot.yaml \
python3 -m demo.demo --config-file configs/COCO-detection/faster_rcnn_R_101_FPN_ft_all_1shot.yaml \
--input input1.jpg input2.jpg \
[--other-options]
--opts MODEL.WEIGHTS fsdet://coco/tfa_cos_1shot/model_final.pth
@@ -146,13 +150,12 @@ For ease of training and evaluation over multiple runs, we provided several help

You can use `tools/run_experiments.py` to do the training and evaluation. For example, to experiment on 30 seeds of the first split of PascalVOC on all shots, run
```angular2html
python tools/run_experiments.py --num-gpus 8 \
python3 -m tools.run_experiments --num-gpus 8 \
--shots 1 2 3 5 10 --seeds 0 30 --split 1
```

After training and evaluation, you can use `tools/aggregate_seeds.py` to aggregate the results over all the seeds to obtain one set of numbers. To aggregate the 3-shot results of the above command, run
```angular2html
python tools/aggregate_seeds.py --shots 3 --seeds 30 --split 1 \
python3 -m tools.aggregate_seeds --shots 3 --seeds 30 --split 1 \
--print --plot
```
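Note: the aggregation command above handles one shot setting at a time; for a full sweep it is natural to loop over shots. The snippet below is only an illustrative sketch (the loop, the shot list, and the use of `subprocess` are assumptions, not part of this commit or the README), calling the documented CLI with the flags shown above.

```python
# Illustrative only: sweep the shot settings of split 1 over 30 seeds by
# calling the documented CLI; the shot values simply mirror the README example.
import subprocess

for shots in (1, 2, 3, 5, 10):
    subprocess.run(
        [
            "python3", "-m", "tools.aggregate_seeds",
            "--shots", str(shots),
            "--seeds", "30",
            "--split", "1",
            "--print",
        ],
        check=True,
    )
```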

5 changes: 3 additions & 2 deletions demo/demo.py
Expand Up @@ -6,10 +6,11 @@
import multiprocessing as mp
import os
import time
from detectron2.config import get_cfg

from demo.predictor import VisualizationDemo
from detectron2.data.detection_utils import read_image
from detectron2.utils.logger import setup_logger
from predictor import VisualizationDemo
from fsdet.config import get_cfg

# constants
WINDOW_NAME = "COCO detections"
2 changes: 1 addition & 1 deletion demo/predictor.py
Expand Up @@ -6,9 +6,9 @@
import multiprocessing as mp
from collections import deque
from detectron2.data import MetadataCatalog
from detectron2.engine.defaults import DefaultPredictor
from detectron2.utils.video_visualizer import VideoVisualizer
from detectron2.utils.visualizer import ColorMode, Visualizer
from fsdet.engine import DefaultPredictor


class VisualizationDemo(object):
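Note: the two demo files above now import from `fsdet` (`fsdet.config.get_cfg`, `fsdet.engine.DefaultPredictor`) and from the packaged `demo.predictor`. A minimal programmatic sketch of how these pieces fit together is given below; it assumes `VisualizationDemo` keeps the `run_on_image` interface of the detectron2 demo it is modeled on, and the config, weight, and image paths are just the examples from the README.

```python
# Sketch only, not part of this commit. Assumes VisualizationDemo.run_on_image
# mirrors the detectron2 demo API; paths are placeholders taken from the README.
from detectron2.data.detection_utils import read_image

from demo.predictor import VisualizationDemo
from fsdet.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("configs/COCO-detection/faster_rcnn_R_101_FPN_ft_all_1shot.yaml")
cfg.MODEL.WEIGHTS = "fsdet://coco/tfa_cos_1shot/model_final.pth"
cfg.freeze()

demo = VisualizationDemo(cfg)
image = read_image("input1.jpg", format="BGR")  # BGR, as demo.py reads its inputs
predictions, vis_output = demo.run_on_image(image)
vis_output.save("output1.jpg")
```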
12 changes: 6 additions & 6 deletions docs/TRAIN_INST.md
@@ -8,7 +8,7 @@ TFA is trained in two stages. We first train the entire object detector on the d

First train a base model. To train a base model on the first split of PASCAL VOC, run
```angular2html
python tools/train_net.py --num-gpus 8 \
python3 -m tools.train_net --num-gpus 8 \
--config-file configs/PascalVOC-detection/split1/faster_rcnn_R_101_base1.yaml
```

@@ -22,7 +22,7 @@ After training the base model, run ```tools/ckpt_surgery.py``` to obtain an init

To randomly initialize the weights corresponding to the novel classes, run
```angular2html
python tools/ckpt_surgery.py \
python3 -m tools.ckpt_surgery \
--src1 checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_base1/model_final.pth \
--method randinit \
--save-dir checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_all1
@@ -33,22 +33,22 @@ The resulting weights will be saved to `checkpoints/voc/faster_rcnn/faster_rcnn_

To use novel weights, fine-tune a predictor on the novel set. We reuse the base model trained in the previous stage but retrain the last layer from scratch. First remove the last layer from the weights file by running
```angular2html
python tools/ckpt_surgery.py \
python3 -m tools.ckpt_surgery \
--src1 checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_base1/model_final.pth \
--method remove \
--save-dir checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_all1
```

Next, fine-tune the predictor on the novel set by running
```angular2html
python tools/train_net.py --num-gpus 8 \
python3 -m tools.train_net --num-gpus 8 \
--config-file configs/PascalVOC-detection/split1/faster_rcnn_R_101_FPN_ft_novel1_1shot.yaml \
--opts MODEL.WEIGHTS checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_all1/model_reset_remove.pth
```

Finally, combine the base weights from the base model with the novel weights by running
```angular2html
python tools/ckpt_surgery.py \
python3 -m tools.ckpt_surgery \
--src1 checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_base1/model_final.pth \
--src2 checkpoints/voc/faster_rcnn/faster_rcnn_R_101_FPN_ft_novel1_1shot/model_final.pth \
--method combine \
@@ -60,7 +60,7 @@ The resulting weights will be saved to `checkpoints/voc/faster_rcnn/faster_rcnn_

We will then fine-tune the last layer of the full model on a balanced dataset by running
```angular2html
python tools/train_net.py --num-gpus 8 \
python3 -m tools.train_net --num-gpus 8 \
--config-file configs/PascalVOC-detection/split1/faster_rcnn_R_101_FPN_ft_all1_1shot.yaml \
--opts MODEL.WEIGHTS WEIGHTS_PATH
```
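Note: the command changes above only switch the invocation to `python3 -m tools.ckpt_surgery` and friends; the surgery step itself still manipulates a trained checkpoint before fine-tuning. The sketch below is a rough illustration of the `remove` and `randinit` ideas described in the doc; the state-dict keys, tensor shapes, novel-class count, and the `model_reset_surgery.pth` output name are assumptions, not code from `tools/ckpt_surgery.py`.

```python
# Rough illustration only -- see tools/ckpt_surgery.py for the real logic.
# State-dict keys, shapes, and output file names here are assumptions.
import torch

ckpt = torch.load("model_final.pth", map_location="cpu")
state = ckpt["model"]

# "remove": drop the last predictor layer so it can be retrained from scratch.
remove_state = {
    k: v for k, v in state.items() if not k.startswith("roi_heads.box_predictor")
}
torch.save({"model": remove_state}, "model_reset_remove.pth")

# "randinit": keep the base-class rows of the classifier and append randomly
# initialized rows for the novel classes (background stays last in detectron2).
cls_key = "roi_heads.box_predictor.cls_score.weight"
base_w = state[cls_key]                       # (num_base_classes + 1, feat_dim)
num_novel = 20                                # placeholder; the script infers this
novel_w = torch.randn(num_novel, base_w.shape[1]) * 0.01
state[cls_key] = torch.cat([base_w[:-1], novel_w, base_w[-1:]], dim=0)
torch.save({"model": state}, "model_reset_surgery.pth")
```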
3 changes: 3 additions & 0 deletions fsdet/checkpoint/__init__.py
@@ -1 +1,4 @@
from . import catalog as _UNUSED # register the handler
from .detection_checkpoint import DetectionCheckpointer

__all__ = ["DetectionCheckpointer"]
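Note: the new `catalog` import is there for its side effect of registering the checkpoint path handler (per the inline comment), which is presumably what lets `MODEL.WEIGHTS` values such as `fsdet://coco/tfa_cos_1shot/model_final.pth` resolve. A minimal usage sketch follows, assuming `DetectionCheckpointer` keeps the fvcore/detectron2 `resume_or_load` interface; the use of detectron2's generic `build_model` and the paths are illustrative stand-ins, not taken from this diff.

```python
# Minimal sketch, not from this commit. Assumes the fvcore-style Checkpointer API
# and that the registered handler resolves the fsdet:// prefix.
from detectron2.modeling import build_model  # generic builder, used here only for illustration

from fsdet.checkpoint import DetectionCheckpointer
from fsdet.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("configs/COCO-detection/faster_rcnn_R_101_FPN_ft_all_1shot.yaml")
model = build_model(cfg)  # builds the detector described by the config

checkpointer = DetectionCheckpointer(model, save_dir="checkpoints/demo")
checkpointer.resume_or_load("fsdet://coco/tfa_cos_1shot/model_final.pth", resume=False)
```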
