Class-Agnostic Counting

This repo contains a Keras implementation of the paper Class-Agnostic Counting (Lu et al., ACCV 2018). It includes code for training the GMN (Generic Matching Network) and adapting it to specific datasets.

Dependencies

Conda users can create a new environment with conda env create -f environment.yml.

Pretrained Models

The pretrained GMN weights are available here.

Demo

The GMN is trained on ImageNet Video data.

To run the pretrained GMN on the example cell image (a class unseen during training), first download the model weights and save them as ./checkpoints/pretrained_gmn.h5.

Then run:

python demo.py --im images/cells.jpg --exemplar images/exemplar_cell.jpg

The predicted heatmap visualization will be saved as heatmap_vis.jpg.
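
Under the hood, demo.py performs a single forward pass of the two-input matching network. The following is a minimal, hypothetical sketch of that kind of inference, using generic Keras/NumPy/PIL code rather than the repo's actual script; it assumes the checkpoint deserializes with load_model and that the network takes [image, exemplar] inputs sized per the curation settings in the Data section below.

    # Hypothetical sketch of the inference demo.py performs; NOT the repo's code.
    import numpy as np
    from keras.models import load_model
    from PIL import Image

    def load_patch(path, size):
        """Load an RGB image, resize to (size, size), and add a batch dimension."""
        img = Image.open(path).convert('RGB').resize((size, size))
        return np.asarray(img, dtype=np.float32)[None]

    # Sizes follow the curation settings used in this README (255 / 63).
    image = load_patch('images/cells.jpg', 255)
    exemplar = load_patch('images/exemplar_cell.jpg', 63)

    # Assumes the checkpoint loads with stock Keras layers; the real demo.py may
    # instead rebuild the architecture and call load_weights().
    gmn = load_model('checkpoints/pretrained_gmn.h5')
    heatmap = gmn.predict([image, exemplar])[0]  # predicted similarity map

    # Normalize to 0-255 and save a simple visualization.
    vis = (255 * (heatmap - heatmap.min()) / (heatmap.ptp() + 1e-8)).astype(np.uint8)
    Image.fromarray(np.squeeze(vis)).save('heatmap_vis.jpg')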

For improved performance, the GMN can be adapted to any new dataset (see Adapting the GMN below).

Data

Download and preprocess the data for training the GMN following the instructions at https://github.com/bertinetto/siamese-fc/tree/master/ILSVRC15-curation [1]. Before preprocessing the dataset, change the following variables in the curation scripts:

    exemplar_size = 63;
    instance_size = 255;
    context_amount = 0.1;

The adaptation experiments use the counting datasets referenced below ([2]-[5]).

Labels should be in the form of dot-annotation images (one dot per object center); a minimal sketch of converting dot annotations into density-map training targets follows.
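
The sketch below is not the repo's preprocessing code, just an illustration of the standard conversion: treat each non-zero pixel in the dot image as an object center and smooth it with a Gaussian so the target sums to the object count.

    # Minimal sketch (not the repo's code): dot-annotation image -> density target.
    import numpy as np
    from PIL import Image
    from scipy.ndimage import gaussian_filter

    def dots_to_density(dot_image_path, sigma=3.0):
        """Non-zero pixels mark object centers; after Gaussian smoothing the
        density map sums (approximately) to the number of annotated objects."""
        dots = np.asarray(Image.open(dot_image_path).convert('L'), dtype=np.float32)
        dots = (dots > 0).astype(np.float32)          # one "1" per annotated point
        density = gaussian_filter(dots, sigma=sigma)  # spread each dot into a blob
        return density

    # Example with a hypothetical file name:
    # density = dots_to_density('vgg_cells/001dots.png')
    # print('estimated count:', density.sum())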

Training the GMN

To train the Generic Matching Network (GMN) on the ImageNet video data, run

python src/main.py --mode pretrain --data_path /path/to/ILSVRC2015_crops/train/

The code expects ImageNet-pretrained ResNet50 weights at

models/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5
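
As a quick sanity check that this weights file is in place and compatible, you can build a headless ResNet50 in Keras and load it directly. This is a verification sketch only, not part of src/main.py.

    # Sanity-check sketch: build a no-top ResNet50 and load the expected weights.
    from keras.applications.resnet50 import ResNet50

    backbone = ResNet50(include_top=False, weights=None, input_shape=(255, 255, 3))
    backbone.load_weights('models/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5')
    print('loaded', len(backbone.weights), 'weight tensors')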

Adapting the GMN

To adapt a trained GMN to a specific dataset, e.g. the VGG cells dataset, run

python src/main.py --mode adapt --dataset vgg_cell --data_path /path/to/data --gmn_path /path/to/pretrained_gmn_model
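
The command above handles adaptation end to end. Conceptually, adaptation amounts to loading the pretrained GMN and fine-tuning a small part of it on the target dataset's density maps while keeping the generic matching backbone fixed. The sketch below illustrates that idea with hypothetical layer names and generic Keras calls; it is not the repo's actual adapt code.

    # Conceptual sketch of adaptation (hypothetical; not the repo's implementation).
    from keras.models import load_model
    from keras.optimizers import Adam

    gmn = load_model('/path/to/pretrained_gmn_model')  # assumes stock Keras layers

    for layer in gmn.layers:
        # Keep the ImageNet-video-trained backbone fixed; train only layers whose
        # names mark them as adaptation/prediction layers (naming is hypothetical).
        layer.trainable = layer.name.startswith('adapt') or layer.name.startswith('predict')

    gmn.compile(optimizer=Adam(1e-5), loss='mean_squared_error')
    # gmn.fit([images, exemplars], density_maps, batch_size=8, epochs=20)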

References

[1] L. Bertinetto, J. Valmadre, J. F. Henriques, A. Vedaldi, and P. H. S. Torr. "Fully-Convolutional Siamese Networks for Object Tracking." In ECCV Workshops, 2016.
[2] V. Lempitsky and A. Zisserman. "Learning to Count Objects in Images." In NIPS, 2010.
[3] C. Arteta, V. Lempitsky, J. A. Noble, and A. Zisserman. "Learning to Detect Cells Using Non-overlapping Extremal Regions." In MICCAI, 2012.
[4] M. Hsieh, Y. Lin, and W. Hsu. "Drone-based Object Counting by Spatially Regularized Regional Proposal Networks." In ICCV, 2017.
[5] W. Xie, J. A. Noble, and A. Zisserman. "Microscopy Cell Counting with Fully Convolutional Regression Networks." In MICCAI Workshop, 2016.

Citation

@InProceedings{Lu18,
  author       = "Lu, E. and Xie, W. and Zisserman, A.",
  title        = "Class-agnostic Counting",
  booktitle    = "Asian Conference on Computer Vision",
  year         = "2018",
}
