Depth Adaptive Visual Tracking

The first open-source implementation of a computation-adaptive Siamese network for visual tracking.

Introduction

The following is an unofficial implementation of Depth-Adaptive Computational Policies for Efficient Visual Tracking by Chris Ying and Katerina Fragkiadaki.

The project covers the following topics:

  • Data Preprocessing. Key and search frame extraction from the ImageNet 2017 VID dataset.
  • Intermediate Supervision VGG Model. Built with intermediate supervision as described in the paper.
  • Budgeted Gating Loss. Implements the g* function from the paper together with a shallow feature extractor.
  • Hard Gating for Evaluation. Stops computation once the confidence score exceeds a threshold.
  • Readability. The code is clear, well documented, and consistent.
Search Frame and Cross Correlation Frame (example images)

Model Keys

Model Structure

  • Build Key & Search Inputs
  • Build VGG nets for each input
  • Build 5 blocks of Cross-Corr & FLOPs for each
  • Build a non-differentiable Shallow Feature Extractor from the Cross-Corr maps
  • Build Confidence Score (g-function)
  • Build Intermediate Supervision Block Loss
  • Build Budgeted Gates & Gate Loss
  • Build Hard Gates for Evaluation
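
Putting these steps together, a minimal NumPy sketch of the per-block flow looks roughly like this (conv_block, cross_correlate, confidence, and NUM_BLOCKS are illustrative placeholders, not this repository's identifiers):

import numpy as np

NUM_BLOCKS = 5

def conv_block(x):
    # Stand-in for one VGG block; the real model applies convolutions + pooling.
    return np.maximum(x - 0.1, 0)

def cross_correlate(key_feat, search_feat):
    # Toy dense cross-correlation: slide the key patch over the search map.
    kh, kw = key_feat.shape
    sh, sw = search_feat.shape
    out = np.zeros((sh - kh + 1, sw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(key_feat * search_feat[i:i + kh, j:j + kw])
    return out

def confidence(score_map):
    # Stand-in for the shallow feature extractor feeding the g-function.
    return float(score_map.max() / (np.abs(score_map).sum() + 1e-8))

key, search = np.random.rand(8, 8), np.random.rand(16, 16)
for block in range(NUM_BLOCKS):
    key, search = conv_block(key), conv_block(search)
    score_map = cross_correlate(key, search)
    print("block %d: confidence %.4f" % (block, confidence(score_map)))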

Block Loss is implemented in model/compAdaptiveSiam/block_loss(). This loss can cause exploding gradients, so L2 regularization and gradient clipping are applied to prevent this.

BlockLoss
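
A toy TensorFlow 1.x-style graph showing the two stabilization tricks mentioned above, L2 regularization plus gradient clipping by global norm; the weight, input, and loss here are stand-ins, not the tensors built by block_loss():

import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Self-contained stand-in for the network weights and the block loss.
w = tf.get_variable("w", shape=[32, 1],
                    initializer=tf.random_normal_initializer())
x = tf.random_normal([8, 32])
block_loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))

# L2 regularization over trainable weights damps exploding gradients.
l2_reg = 1e-4 * tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])
total_loss = block_loss + l2_reg

# Clip gradients by global norm before applying the update.
opt = tf.train.AdamOptimizer(1e-3)
grads, variables = zip(*opt.compute_gradients(total_loss))
clipped, _ = tf.clip_by_global_norm(grads, clip_norm=5.0)
train_op = opt.apply_gradients(zip(clipped, variables))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    loss_value, _ = sess.run([total_loss, train_op])
    print(loss_value)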

Budgeted Gates are implemented in model/compAdaptiveSiam/gStarFunc()

BudgetedGate
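
A hedged NumPy sketch of one way a g*-style gate target could be computed: fire the gate at the earliest block whose score-map quality is within a tolerance of the deepest block. The quality measure, tolerance, and the name g_star_targets are assumptions for illustration; the actual logic lives in gStarFunc():

import numpy as np

def g_star_targets(block_qualities, tolerance=0.05):
    # block_qualities: per-block quality of the cross-correlation map
    # (e.g. peak sharpness), ordered from shallowest to deepest block.
    final_quality = block_qualities[-1]
    targets = np.zeros(len(block_qualities))
    for i, q in enumerate(block_qualities):
        if q >= final_quality - tolerance:
            targets[i] = 1.0  # good enough: the gate may stop here
            break
    # Every block after the chosen one also counts as "stop allowed".
    targets[np.argmax(targets):] = 1.0
    return targets

print(g_star_targets([0.40, 0.62, 0.78, 0.81, 0.82]))
# -> [0. 0. 1. 1. 1.]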

Gate Loss is implemented in model/compAdaptiveSiam/gateLoss

GateLoss
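
A hedged NumPy sketch of a gate loss of this kind: binary cross-entropy pushing the predicted gate confidences toward the g* targets, plus a FLOPs-weighted budget penalty. The weighting and cost values are illustrative; the actual formulation is in gateLoss:

import numpy as np

def gate_loss(gate_probs, g_star, flops_per_block, budget_weight=1e-3):
    gate_probs = np.clip(gate_probs, 1e-7, 1 - 1e-7)
    # Supervise each gate toward its g* target.
    bce = -np.mean(g_star * np.log(gate_probs)
                   + (1 - g_star) * np.log(1 - gate_probs))
    # Penalize expected computation: probability of still running times cost.
    expected_flops = np.sum((1 - gate_probs) * flops_per_block)
    return bce + budget_weight * expected_flops

probs = np.array([0.1, 0.3, 0.7, 0.9, 0.95])
g_star = np.array([0.0, 0.0, 1.0, 1.0, 1.0])
flops = np.array([1.0, 2.0, 4.0, 8.0, 16.0])  # relative cost per block
print(gate_loss(probs, g_star, flops))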

Cropped Section of TensorBoard Graph

Prerequisites

The main requirements can be installed by:

pip install -r requirements.txt

Data Collection and Preprocessing

The ImageNet VID dataset can be downloaded from the link

The data can be preprocessed into key frames and search frames with the script below.

Change the dataset location in the script's main function before running:

python scripts/preprocess_VID_data.py

Finally, the data can be split into training and validation sets and pickled with:

python scripts/build_VID2015_imdb.py

Credit for the dataset-preprocessing scripts goes to Huazhong University of Science and Technology.

Training

Training alternates iteratively: the VGG weights are first trained with intermediate supervision, and those weights are then used to train the gate weights.

python main.py train
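
Conceptually, the alternation looks like the loop below; the two step functions are placeholder stubs standing in for the training ops that main.py actually builds:

def train_vgg_step():
    print("update VGG weights with the intermediate-supervision block loss")

def train_gate_step():
    print("update gate weights with the budgeted gate loss")

NUM_ROUNDS = 3  # illustrative; the real schedule is set inside main.py
for _ in range(NUM_ROUNDS):
    train_vgg_step()
    train_gate_step()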

Evaluation

Hard gating stops the computation once the confidence score exceeds the threshold. It returns the cross-correlation map, the FLOPs computed, and the index of the block where computation stopped.

python main.py eval
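
A hedged sketch of hard gating at evaluation time; the function names and the toy block are illustrative, not the repository's code:

import numpy as np

def hard_gated_eval(run_block, num_blocks=5, threshold=0.8):
    # run_block(i) -> (score_map, confidence, flops) for block i.
    total_flops = 0.0
    score_map = None
    for i in range(num_blocks):
        score_map, conf, flops = run_block(i)
        total_flops += flops
        if conf > threshold:
            return score_map, total_flops, i  # early exit
    return score_map, total_flops, num_blocks - 1  # ran the full network

# Toy example: confidence grows with depth, cost doubles per block.
def toy_block(i):
    return np.random.rand(17, 17), 0.5 + 0.1 * i, 2.0 ** i

score_map, flops, stop_block = hard_gated_eval(toy_block)
print("stopped at block %d after %.1f relative FLOPs" % (stop_block, flops))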

Pretrained Model

If you are training from scratch, use the pretrained VGG model provided here: link

The VGG and gate weights pretrained by me are available here
