MDNet PyTorch implementation


by Hyeonseob Nam and Bohyung Han at POSTECH

Update (April 2019)

  • Migration to Python 3.6 & PyTorch 1.0
  • Efficiency improvement (~5 fps)
  • ImageNet-VID pretraining
  • Code refactoring


PyTorch implementation of MDNet, which runs at ~5fps with a single CPU core and a single GPU (GTX 1080 Ti).
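The core idea of MDNet is multi-domain learning: shared layers are trained across many tracking videos (domains), while each domain gets its own binary (target vs. background) classification branch. A minimal sketch of that design, as an illustration only (the layer sizes and class below are stand-ins, not the repository's actual model definition):

```python
import torch
import torch.nn as nn

class MultiDomainNet(nn.Module):
    """Illustrative multi-domain network: shared layers + per-domain heads."""

    def __init__(self, num_domains, feat_dim=512):
        super().__init__()
        # Shared layers (a stand-in for MDNet's conv1-3 and fc4-5).
        self.shared = nn.Sequential(
            nn.Linear(feat_dim, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
        )
        # One domain-specific binary branch (target vs. background) per video.
        self.branches = nn.ModuleList(
            nn.Linear(512, 2) for _ in range(num_domains)
        )

    def forward(self, x, k):
        # k selects the branch of the domain (video) currently being trained.
        return self.branches[k](self.shared(x))

net = MultiDomainNet(num_domains=3)
scores = net(torch.randn(8, 512), k=1)  # 8 candidate samples, domain 1
print(scores.shape)  # torch.Size([8, 2])
```

At tracking time the pretrained branches are discarded and a fresh binary branch is trained online for the new sequence.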

[Project] [Paper] [Matlab code]

If you're using this code for your research, please cite:

 @InProceedings{nam2016mdnet,
   author    = {Nam, Hyeonseob and Han, Bohyung},
   title     = {Learning Multi-Domain Convolutional Neural Networks for Visual Tracking},
   booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
   month     = {June},
   year      = {2016}
 }

Results on OTB


Prerequisites

  • Python 3.6+
  • OpenCV 3.0+
  • PyTorch 1.0+ and its dependencies
  • for GPU support: a GPU with ~3 GB of memory



Tracking

 python tracking/ -s DragonBaby [-d (display fig)] [-f (save fig)]

  • You can provide a sequence configuration in two ways (see the code under tracking/):
    • python tracking/ -s [seq name]
    • python tracking/ -j [json path]
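The -j option points the tracker at a JSON sequence configuration file. A minimal sketch of what such a file could contain, built and serialized in Python; the key names and the bounding box value below are assumptions for illustration only, so check the configuration-parsing code under tracking/ for the exact schema:

```python
import json

# Hypothetical sequence configuration for the -j option.
# All key names here are assumptions, not the repo's verified schema.
config = {
    "seq_name": "DragonBaby",                   # sequence identifier
    "img_dir": "datasets/OTB/DragonBaby/img",   # directory with the frames
    "init_bbox": [10, 10, 56, 65],              # [x, y, w, h] in the first frame
}

with open("DragonBaby.json", "w") as f:
    json.dump(config, f, indent=2)
```

A round trip through json.dump/json.load should reproduce the same dictionary, which is a quick sanity check before passing the file to the tracker.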


Pretraining

  • Download the VGG-M (MatConvNet) model and save it as "models/imagenet-vgg-m.mat"
  • Pretraining on VOT-OTB
    • Download the VOT datasets into "datasets/VOT/vot201x"

       python pretrain/
       python pretrain/ -d vot

  • Pretraining on ImageNet-VID
    • Download the ImageNet-VID dataset into "datasets/ILSVRC"

       python pretrain/
       python pretrain/ -d imagenet
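Pretraining starts from MatConvNet's VGG-M weights, which use a different filter layout than PyTorch: MatConvNet stores convolution filters as (H, W, in_ch, out_ch), while PyTorch expects (out_ch, in_ch, H, W). A sketch of that axis conversion, using VGG-M's 7x7, 96-filter conv1 shape as the example (loading the actual imagenet-vgg-m.mat file would additionally involve something like scipy.io.loadmat, omitted here):

```python
import numpy as np

def matconvnet_to_pytorch(w):
    """Reorder a MatConvNet conv filter (H, W, in, out) to PyTorch (out, in, H, W)."""
    return np.transpose(w, (3, 2, 0, 1))

# VGG-M conv1: 7x7 filters, 3 input channels, 96 output channels.
w_mat = np.zeros((7, 7, 3, 96), dtype=np.float32)
w_pt = matconvnet_to_pytorch(w_mat)
print(w_pt.shape)  # (96, 3, 7, 7)
```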