
# MegaDetector overview

This page hosts a model we’ve trained to detect (but not identify) animals in camera trap images, using several hundred thousand bounding boxes from a variety of ecosystems. The current model is based on Faster-RCNN with an InceptionResNetv2 base network, and was trained with the TensorFlow Object Detection API. We use this model as our first stage for classifier training and inference.
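Detectors trained with the TensorFlow Object Detection API emit boxes as normalized `[ymin, xmin, ymax, xmax]` coordinates. As a minimal sketch of turning those into pixel coordinates for display (the helper name and sample values are our own illustration, not part of this repo):

```python
def to_pixel_boxes(norm_boxes, image_width, image_height):
    """Convert normalized [ymin, xmin, ymax, xmax] boxes (the TF Object
    Detection API convention) to (left, top, right, bottom) pixel tuples."""
    pixel_boxes = []
    for ymin, xmin, ymax, xmax in norm_boxes:
        pixel_boxes.append((
            round(xmin * image_width),
            round(ymin * image_height),
            round(xmax * image_width),
            round(ymax * image_height),
        ))
    return pixel_boxes

# One detection covering the center of a 1920x1080 frame
print(to_pixel_boxes([[0.25, 0.25, 0.75, 0.75]], 1920, 1080))
# → [(480, 270, 1440, 810)]
```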

## Downloading the model(s)

### MegaDetector v3, 2019.05.30

#### Release notes

In addition to incorporating additional data, this release adds a preliminary “human” class. Our animal training data is still far more comprehensive than our humans-in-camera-traps data, so if you’re interested in using our detector but find that it works better on animals than people, stay tuned.

#### Download links

### MegaDetector v2, 2018

#### Release notes

First MegaDetector release!

#### Download links

## Using the models

We provide three ways to apply this model to new images:

* To “test drive” this model on small sets of images and get super-satisfying visual output, we provide `run_tf_detector.py`, an example script for invoking this detector on new images. This script doesn’t depend on anything else in our repo, so you can download it and give it a try. Let us know how it works on your images!
* To apply this model to larger image sets on a single machine, we recommend a slightly different script, `run_tf_detector_batch`. This outputs data in the same format as our batch processing API, so you can leverage all of our post-processing tools.
* Speaking of which, when we process loads of images from collaborators, we use our batch processing API, which we can make available externally on request. Email us for more information.
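To give a flavor of the kind of post-processing the batch output format enables, here is a sketch of filtering detections by confidence. The record structure and field names below are illustrative assumptions, not the actual API schema:

```python
def filter_detections(results, confidence_threshold=0.8):
    """Keep only detections at or above the threshold, and drop images
    with no remaining detections. Field names are illustrative only."""
    filtered = []
    for record in results:
        detections = [d for d in record["detections"]
                      if d["confidence"] >= confidence_threshold]
        if detections:
            filtered.append({"file": record["file"],
                             "detections": detections})
    return filtered

# Hypothetical per-image records with per-detection confidence scores
sample = [
    {"file": "img_001.jpg",
     "detections": [{"confidence": 0.97}, {"confidence": 0.31}]},
    {"file": "img_002.jpg",
     "detections": [{"confidence": 0.12}]},
]
print(filter_detections(sample))
# → [{'file': 'img_001.jpg', 'detections': [{'confidence': 0.97}]}]
```

A threshold around 0.8 is a common starting point for this kind of filtering, but the right value depends on your tolerance for false positives versus missed animals.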

## Pretty picture

Here’s a “teaser” image of what detector output looks like:


Image credit University of Washington.

## Mesmerizing video

Here’s a neat video of our v2 detector running in a variety of ecosystems, on locations that were unseen during training.


Image credit eMammal.

## Can you share the training data?

This model is trained on bounding boxes from a variety of ecosystems, and many of the images we use in training are not publicly shareable for license reasons. We do train in part on bounding boxes from two public data sets:

...so if our detector performs really well on those data sets, that’s great, but it’s a little bit cheating, because we haven’t published the set of locations from those data sets that we use during training.