Initial commit
xuyinda committed Dec 31, 2019
Commit 42cd519 (root commit, no parents)
Showing 105 changed files with 6,909 additions and 0 deletions.
69 changes: 69 additions & 0 deletions .gitignore
@@ -0,0 +1,69 @@
model_store
*.swp
*.swo
*.pyc
*.png
*.jpg
*.pkl
core
*.out
__pycache__
*DS_STORE
res.cvs
*.csv
*.cvs
*.json
*.so
*.o
*.c
*~
train_log
weights
*.odgt
*.odgt.refine
*.bak
pred
epoch*
lastest
detetction
output_dir
conf_res
__pycache__
localib
model.dump.py
mAP.txt
.nfs*
*.refine

# TensorRT output
caffemodel
calibration_cache
engines

# face_evaluate_tools output
result

# IPython
.ipynb_*

# checkpoints
*.brainmodel
*.pkl
*epoch*

# input files
*.jpg
*.avi
*.mp4

configs/lizuoxin

log.txt
.idea/
tmp/
*.protobuf

/logs
/debug
/datasets
/models
9 changes: 9 additions & 0 deletions .isort.cfg
@@ -0,0 +1,9 @@
# Documentation: https://github.com/timothycrosley/isort/wiki/isort-Settings
# Tutorial: https://simpleisbetterthancomplex.com/packages/2016/10/08/isort.html

[isort]
# default_section = THIRDPARTY
known_first_party = videoanalyst # change it to the name of your project's package
known_pytorch = torch
sections = FUTURE,STDLIB,THIRDPARTY,PYTORCH,FIRSTPARTY,LOCALFOLDER
# sections = FIRSTPARTY,FUTURE,STDLIB,PYTORCH,THIRDPARTY,LOCALFOLDER
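For reference, with the section order configured above, isort would arrange an import block roughly as follows (an illustrative sketch; the specific module and symbol names below are placeholders, not files from this repository):
```python
from __future__ import division         # FUTURE

import os                               # STDLIB
from collections import OrderedDict     # STDLIB

import numpy as np                      # THIRDPARTY

import torch                            # PYTORCH (known_pytorch = torch)
import torch.nn as nn                   # PYTORCH

from videoanalyst.utils import Timer    # FIRSTPARTY (known_first_party = videoanalyst)

from .local_module import helper_fn     # LOCALFOLDER
```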
17 changes: 17 additions & 0 deletions .travis.yaml
@@ -0,0 +1,17 @@
dist: bionic # ubuntu 18.04
language: python

python:
- "3.5"
- "3.6"
- "3.7"

env: UBUNTU_VERSION=ubuntu1804

before_script:
- yapf -p -r -d --style='{COLUMN_LIMIT:80}' ./
- isort -rc -w 80 -d ./
- autoflake -r ./

after_success:
- coverage report
70 changes: 70 additions & 0 deletions README.md
@@ -0,0 +1,70 @@
# Video Analyst
This is the implementation of a series of basic algorithms that are useful for video understanding, including Single Object Tracking (SOT), Video Object Segmentation (VOS), etc.

Currently implemented:
* SOT
* [SiamFC++: Towards Robust and Accurate Visual Tracking with Target Estimation Guidelines](https://arxiv.org/abs/1911.06188)


## Quick start
### Setup
Please refer to [SETUP.md](docs/SETUP.md)

### Test on VOT
```
python3 ./main/test.py --config 'experiments/siamfcpp/siamfcpp_googlenet.yaml'
```
Check the corresponding _exp_save_ path specified in the _.yaml_ config for the results and raw result data, both named after _exp_name_ in the _.yaml_ config.
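For example (a sketch only; the exact sub-directory layout is created by the tester and may differ), with `exp_save: "logs"` and `exp_name: "siamfcpp_googlenet"` the results are collected under a path combining the two fields:
```python
import os

# Illustrative only: where to look for results, given the two yaml fields.
exp_save = "logs"                 # exp_save value from the .yaml
exp_name = "siamfcpp_googlenet"   # exp_name value from the .yaml
result_dir = os.path.join(exp_save, exp_name)
print(result_dir)  # -> logs/siamfcpp_googlenet
```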

#### Test all experiments
```
bash ./tools/test_VOT.sh
```

## Repository structure (in progress)
```
├── experiments # experiment configurations, in yaml format
├── main
│ ├── train.py # training entry point
│ └── test.py # test entry point
├── videoanalyst
│ ├── data # modules related to data
│ │ ├── dataset # data fetcher of each individual dataset
│ │ ├── sampler # data sampler, including intra-dataset and inter-dataset sampling procedures
│ │ ├── dataloader.py # data loading procedure
│ │ └── transformer # data augmentation
│ ├── engine # procedure controller, including training control / hyper-parameter & model loading
│ │ ├── hook # hook for tasks during training, including visualization / logging / benchmarking
│ │ ├── trainer.py # train for one epoch
│ │ ├── tester.py # test a model on a benchmark
│ ├── model # model builder
│ │ ├── backbone # backbone network builder
│ │ ├── common_opr # shared operator (e.g. cross-correlation)
│ │ ├── task_model # holistic model builder
│ │ ├── task_head # head network builder
│ │ └── loss # loss builder
│ ├── pipeline # pipeline builder (tracking / vos)
│ │ ├── segmenter # segmenter builder for vos
│ │ ├── tracker # tracker builder for tracking
│ │ └── utils # pipeline utils
│ ├── config # configuration manager
│ ├── evaluation # benchmark
│ ├── optimize # optimization-related module (learning rate, gradient clipping, etc.)
│ │ ├── lr_schedule # learning rate scheduler
│ │ ├── optimizer # optimizer
│ │ └── grad_modifier # gradient-related operation (parameter freezing)
│ └── utils # useful tools
└── README.md
```

## Model ZOO
Please refer to [MODEL_ZOO.md](docs/MODEL_ZOO.md)

## TODO
* [ ] Training code
* [ ] Test code for OTB, GOT-10k, LaSOT, TrackingNet

## Acknowledgement
* videoanalyst/evaluation/vot_benchmark and other related code have been borrowed from [PySOT](https://github.com/STVIR/pysot)
* videoanalyst/evaluation/got_benchmark and other related code have been borrowed from the [GOT-10k toolkit](https://github.com/got-10k/toolkit.git)
11 changes: 11 additions & 0 deletions check_format.sh
@@ -0,0 +1,11 @@
#!/bin/bash

DIFF=`yapf -p -r -d --style='{COLUMN_LIMIT:80}' ./`
if [ ! -z "$DIFF" ]
then
echo "yapf format check failed"
printf '%s\n' "$DIFF"
false
else
echo "yapf format check succeeded"
fi
4 changes: 4 additions & 0 deletions compile.sh
@@ -0,0 +1,4 @@
# compile the evaluation toolkit
pushd videoanalyst/evaluation/vot_benchmark
bash make.sh
popd
24 changes: 24 additions & 0 deletions docs/FORMATTING_INSTRUCTIONS.md
@@ -0,0 +1,24 @@
# Formatting
It is recommended to format the code before committing it. Here are some useful commands for code formatting (yapf / isort / autoflake need to be installed).
* _check_ means "only show the changes, without applying them".
* _apply_ means "apply the changes directly".

## yapf
bash path_to_project/check_format.sh
### check
yapf -p -r -d --style='{COLUMN_LIMIT:80}' ./
### apply
yapf -p -r -i --style='{COLUMN_LIMIT:80}' ./

## isort
Order is defined in _video_analyst/.isort.cfg_
### check
isort -rc -w 80 -d ./
### apply
isort -rc -w 80 ./

## flake
### check
autoflake -r ./
### apply
autoflake -r ./ -i
18 changes: 18 additions & 0 deletions docs/MODEL_ZOO.md
@@ -0,0 +1,18 @@
## Download links
Models & Raw results:
* [Google Drive](https://drive.google.com/open?id=1XhWIU1KIt9wvFpzZqEDaX-GrgZ9AVcOC)
* [BaiduYun](https://pan.baidu.com/s/19GhRrv2RcEQBFAJ-TNs8mg), code: qvfq

## Models
| Backbone | Pipeline | Dataset | A (Accuracy) | R (Robustness) | EAO | FPS | Config. filename | Model filename |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| AlexNet | Single template | VOT2018 |0.588 | 0.243 | 0.373| ~200 | siamfcpp_alexnet.yaml | siamfcpp-alexnet-vot-md5_18fd31a2f94b0296c08fff9b0f9ad240.pkl|
| AlexNet | Simple multi-template strategy| VOT2018 | 0.597 | 0.215 | 0.370 | ~90 | siamfcpp_alexnet-multi_temp.yaml | siamfcpp-alexnet-vot-md5_18fd31a2f94b0296c08fff9b0f9ad240.pkl|
| GoogLeNet | Single template | VOT2018 | 0.583 | 0.173 | 0.426 | ~80 | siamfcpp_googlenet.yaml | siamfcpp-googlenet-vot-md5_f2680ba074213ee39d82fcb84533a1a6.pkl |
| GoogLeNet | Simple multi-template strategy | VOT2018 | 0.587 | 0.150 | 0.467 | ~50 | siamfcpp_googlenet-multi_temp.yaml | siamfcpp-googlenet-vot-md5_f2680ba074213ee39d82fcb84533a1a6.pkl |

#### Remarks
* The results reported in our paper were produced by the implementation under our internal deep learning framework. We have since reimplemented the tracking method in PyTorch, so there may be some differences between the reported results (internal framework) and the results reproduced here (PyTorch).
* Differences in hardware configuration (e.g. CPU / GPU model) may influence some metrics (e.g. FPS).
* Results here have been produced on a shared computing node equipped with an _Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz_ and an _Nvidia GeForce RTX 2080Ti_.
* For the VOT benchmark, models have been trained on ILSVRC-VID/DET, YouTube-BB, COCO, LaSOT, and GOT-10k (as described in our paper).
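Since each model filename above embeds its md5 digest (the part after `md5_`), a downloaded checkpoint can be verified with a short sketch like the following (illustrative only; the helper function is ours, not part of the toolkit):
```python
import hashlib
import os


def model_md5_matches(path):
    """Check a downloaded .pkl against the md5 digest embedded in its filename."""
    expected = os.path.basename(path).rsplit("md5_", 1)[-1].split(".")[0]
    with open(path, "rb") as f:
        actual = hashlib.md5(f.read()).hexdigest()
    return actual == expected


# e.g. model_md5_matches("models/siamfcpp/siamfcpp-alexnet-vot-md5_18fd31a2f94b0296c08fff9b0f9ad240.pkl")
```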
67 changes: 67 additions & 0 deletions docs/SETUP.md
@@ -0,0 +1,67 @@
## Setup
### Install requirements
- Linux or MacOS
- Python >= 3.5
- GCC >= 4.9
```
git clone https://github.com/MegviiDetection/video_analyst.git
cd video_analyst
```
You can either use the native Python environment (with pip/pip3) or a virtual environment (with conda).
```
pip3 install -U -r requirements.txt
```

### Compile evaluation toolkit
```
bash compile.sh
```

### Set datasets
Set a soft link to the dataset directory (see the [config example](../experiments/siamfcpp/siamfcpp_alexnet.yaml)).
```
ln -s path_to_datasets datasets
```

At _path_to_datasets_:
```
path_to_datasets
└── VOT # root directory of the VOT datasets
├── vot2018
│ ├── VOT2018
│ │ ├── ...
│ │ └── list.txt
│ └── VOT2018.json
└── vot2019
├── VOT2019
│ ├── ...
│ └── list.txt
└── VOT2019.json
```
Auxiliary files (list.txt / VOTXXXX.json) are located at _videoanalyst/evaluation/vot_benchmark/vot_list_.
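A minimal sanity-check sketch (ours, not part of the toolkit) to confirm that the soft link and the layout above are in place before running the tester:
```python
import os

# Illustrative check of the VOT dataset layout described above,
# assuming the "datasets" soft link exists at the repository root.
for year in ("2018", "2019"):
    base = os.path.join("datasets", "VOT", "vot" + year)
    for path in (os.path.join(base, "VOT" + year, "list.txt"),
                 os.path.join(base, "VOT" + year + ".json")):
        status = "ok" if os.path.exists(path) else "MISSING"
        print("{:7s} {}".format(status, path))
```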

#### Download
We provide download links for VOT2018 / VOT2019:
* [Google Drive](https://drive.google.com/open?id=18vaGhvrr_rt70sZr_TisrWl7meO9NE0J)
* [Baidu Disk](https://pan.baidu.com/s/1HZkbWen4mEkxaJL3Rj9pig), code: xg4q

__Acknowledgement__: The following datasets have been downloaded with [TrackDat](https://github.com/jvlmdr/trackdat):
* VOT2018
* VOT2019

### Set models
Set a soft link to the model directory
```
ln -s path_to_models models
```

At _path_to_models_:
```
path_to_models
└── siamfcpp
├── alexnet
│ └── epoch-19.pkl
└── googlenet
└── epoch-15.pkl
```
38 changes: 38 additions & 0 deletions experiments/siamfcpp/siamfcpp_alexnet-multi_temp.yaml
@@ -0,0 +1,38 @@
task_name: "track"
track:
  exp_name: "siamfcpp_alexnet_multi_temp"
  exp_save: "logs"
  model:
    backbone:
      name: "AlexNet"
      AlexNet:
        pretrain_model_path: ""
    losses:
      names: []
    task_head:
      name: "DenseboxHead"
      DenseboxHead:
        total_stride: 8
        score_size: 17
        x_size: 303
        num_conv3x3: 3
        head_conv_bn: [False, False, True]
    task_model:
      name: "SiamTrack"
      SiamTrack:
        pretrain_model_path: "models/siamfcpp/siamfcpp-alexnet-vot-md5_18fd31a2f94b0296c08fff9b0f9ad240.pkl"
  pipeline:
    name: "SiamFCppMultiTempTracker"
    SiamFCppMultiTempTracker:
      test_lr: 0.52
      window_influence: 0.21
      penalty_k: 0.04
      num_conv3x3: 3
      mem_step: 5
      mem_len: 5
      st_mem_coef: 0.5
  tester:
    names: ["VOTTester",]
    VOTTester:
      device_num: 1
      dataset_names: ["VOT2018"]
35 changes: 35 additions & 0 deletions experiments/siamfcpp/siamfcpp_alexnet.yaml
@@ -0,0 +1,35 @@
task_name: "track"
track:
  exp_name: "siamfcpp_alexnet"
  exp_save: "logs"
  model:
    backbone:
      name: "AlexNet"
      AlexNet:
        pretrain_model_path: ""
    losses:
      names: []
    task_head:
      name: "DenseboxHead"
      DenseboxHead:
        total_stride: 8
        score_size: 17
        x_size: 303
        num_conv3x3: 3
        head_conv_bn: [False, False, True]
    task_model:
      name: "SiamTrack"
      SiamTrack:
        pretrain_model_path: "models/siamfcpp/siamfcpp-alexnet-vot-md5_18fd31a2f94b0296c08fff9b0f9ad240.pkl"
  pipeline:
    name: "SiamFCppTracker"
    SiamFCppTracker:
      test_lr: 0.52
      window_influence: 0.21
      penalty_k: 0.04
      num_conv3x3: 3
  tester:
    names: ["VOTTester",]
    VOTTester:
      device_num: 1
      dataset_names: ["VOT2018"]
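As a quick illustration of how such a configuration could be inspected outside the tracker (a sketch using plain PyYAML rather than the project's own configuration manager; it assumes the nesting shown above):
```python
import yaml  # PyYAML; not the project's configuration manager

# Load an experiment config and print the tracking hyper-parameters.
with open("experiments/siamfcpp/siamfcpp_alexnet.yaml") as f:
    cfg = yaml.safe_load(f)

pipeline_cfg = cfg["track"]["pipeline"]
tracker_cfg = pipeline_cfg[pipeline_cfg["name"]]  # e.g. "SiamFCppTracker"
print(pipeline_cfg["name"], tracker_cfg["test_lr"],
      tracker_cfg["window_influence"], tracker_cfg["penalty_k"])
```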
