Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation

Update highlights (Version 1.0.2)

  • Added Snapshot Serengeti classification model into the model zoo.
  • Added Classification fine-tuning module.

Future feature highlights

  • Direct Timelapse format outputs for both detection and classification.
  • A detection fine-tuning module to fine-tune your own detection models for Pytorch-Wildlife.
  • Direct LILA connection for more training/validation data.
  • More pretrained detection and classification models to expand the current model zoo.
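
Until native Timelapse export lands, detection results can already be serialized in the MegaDetector batch-output style that Timelapse and related tools consume. The sketch below is an illustration only: the `results` list is a hypothetical stand-in for detections collected from the model, and the field names follow the publicly documented MegaDetector batch output format, not a Pytorch-Wildlife API.

```python
import json

# Hypothetical detections, e.g. collected from single_image_detection() calls.
# Boxes are [x_min, y_min, width, height], normalized to [0, 1] as in the
# MegaDetector batch output format.
results = [
    {"file": "cam01/IMG_0001.JPG",
     "detections": [{"category": "1", "conf": 0.92,
                     "bbox": [0.12, 0.34, 0.25, 0.40]}]},
]

output = {
    "images": results,
    "detection_categories": {"1": "animal", "2": "person", "3": "vehicle"},
    "info": {"format_version": "1.2"},
}

# Write a Timelapse-compatible results file.
with open("detections.json", "w") as f:
    json.dump(output, f, indent=1)
```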


At the core of our mission is the desire to create a harmonious space where conservation scientists from all over the globe can unite, share, and grow. We are expanding the CameraTraps repo to introduce Pytorch-Wildlife, a Collaborative Deep Learning Framework for Conservation, where researchers can come together to share and use datasets and deep learning architectures for wildlife conservation.

We've been inspired by the potential and capabilities of Megadetector, and we deeply value its contributions to the community. As we forge ahead with Pytorch-Wildlife, under which Megadetector now resides, please know that we remain committed to supporting, maintaining, and developing Megadetector, ensuring its continued relevance, expansion, and utility.

To use the newest version of MegaDetector with all the existing functionalities, you can use our newly developed user interface or simply load the model with Pytorch-Wildlife; the weights will be downloaded automatically:

from PytorchWildlife.models import detection as pw_detection
detection_model = pw_detection.MegaDetectorV5()

If you'd like to learn more about Pytorch-Wildlife, please continue reading.

For those interested in accessing the previous MegaDetector repository, which utilizes the same MegaDetector v5 model weights and was primarily developed by Dan Morris during his time at Microsoft, please visit the archive directory, or you can visit this forked repository that Dan Morris is actively maintaining.

If you have any questions regarding MegaDetector and Pytorch-Wildlife, please email us!

Table of contents

Welcome to Pytorch-Wildlife Version 1.0

PyTorch-Wildlife is a platform to create, modify, and share powerful AI conservation models. These models can be used for a variety of applications, including camera trap images, overhead images, underwater images, or bioacoustics. Your engagement with our work is greatly appreciated, and we eagerly await any feedback you may have.

The Pytorch-Wildlife library allows users to directly load the MegaDetector v5 model weights for animal detection. We've fully refactored our codebase, prioritizing ease of use in model deployment and expansion. In addition to MegaDetector v5, Pytorch-Wildlife also accommodates a range of classification weights, such as those derived from the Amazon Rainforest dataset and the Opossum classification dataset. Explore the codebase and functionalities of Pytorch-Wildlife through our interactive Gradio web app and detailed Jupyter notebooks, designed to showcase the practical applications of our enhancements at PyTorchWildlife. You can find more information in our documentation.

Here is a brief example of how to perform detection and classification on a single image using PyTorch-Wildlife:

import torch
from PytorchWildlife.models import detection as pw_detection
from PytorchWildlife.models import classification as pw_classification

img = torch.randn((3, 1280, 1280))

# Detection
detection_model = pw_detection.MegaDetectorV5() # Model weights are automatically downloaded.
detection_result = detection_model.single_image_detection(img)

# Classification
classification_model = pw_classification.AI4GAmazonRainforest() # Model weights are automatically downloaded.
classification_results = classification_model.single_image_classification(img)
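
In a typical detect-then-classify pipeline, the classifier is fed each detected region rather than the full frame. Here is a minimal sketch of that cropping step, assuming detections come back as normalized `[x_min, y_min, x_max, y_max]` boxes; the actual result schema of `single_image_detection` may differ, and `crop_detection` is a hypothetical helper, not part of the Pytorch-Wildlife API.

```python
import torch

def crop_detection(img, box):
    """Crop a (C, H, W) image tensor to a normalized [x_min, y_min, x_max, y_max] box."""
    _, h, w = img.shape
    x0, y0, x1, y1 = box
    return img[:, int(y0 * h):int(y1 * h), int(x0 * w):int(x1 * w)]

img = torch.randn((3, 1280, 1280))
box = [0.25, 0.25, 0.75, 0.75]  # hypothetical detection covering the image center
crop = crop_detection(img, box)
print(crop.shape)  # torch.Size([3, 640, 640])
```

Each such crop could then be passed to `single_image_classification` in place of the full image.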

Explore Pytorch-Wildlife and MegaDetector with Our User Interface

If you want to directly try Pytorch-Wildlife with the AI models available, including MegaDetector v5, you can use our Gradio interface. This interface allows users to directly load the MegaDetector v5 model weights for animal detection. In addition, Pytorch-Wildlife also has two classification models in our initial version: one trained on an Amazon Rainforest camera trap dataset and the other on a Galapagos opossum classification dataset (more details of these datasets will be published soon). To start, please follow the installation instructions on how to run the Gradio interface! We also provide multiple Jupyter notebooks for demonstration.

Core Features of Pytorch-Wildlife

What are the core components of Pytorch-Wildlife?

  • Unified Framework: Pytorch-Wildlife integrates four pivotal elements:

    • Machine Learning Models
    • Pre-trained Weights
    • Datasets
    • Utilities
  • Our work: In the provided graph, boxes outlined in red represent elements that will remain fixed, while those in blue will be part of our development.

  • Inaugural Model: We're kickstarting with YOLO as our first available model, complemented by pre-trained weights from MegaDetector v5. This is the same MegaDetector v5 model from the previous repository.

  • Expandable Repository: As we move forward, our platform will welcome new models and pre-trained weights for camera traps and bioacoustic analysis. We're excited to host contributions from global researchers through a dedicated submission platform.

  • Datasets from LILA: Pytorch-Wildlife will also incorporate the vast datasets hosted on LILA, making it a treasure trove for conservation research.

  • Versatile Utilities: Our set of utilities spans from visualization tools to task-specific utilities, many inherited from Megadetector.

  • User Interface Flexibility: While we provide a foundational user interface, our platform is designed to inspire. We encourage researchers to craft and share their unique interfaces, and we'll list both existing and new UIs from other collaborators for the community's benefit.

Let's shape the future of wildlife research, together!

Progress on the core tasks

Below you can find our progress in these core tasks:

Development roadmap

Here you can find details of the milestone roadmap for Pytorch-Wildlife:

We are working on a new roadmap for 2024! To check the previous roadmap and finished tasks for 2023, please click here.


Image detection using MegaDetector v5

Credits to Universidad de los Andes, Colombia.

Image classification with MegaDetector v5 and AI4GAmazonRainforest

Credits to Universidad de los Andes, Colombia.

Opossum ID with MegaDetector v5 and AI4GOpossum

Credits to the Agency for Regulation and Control of Biosecurity and Quarantine for Galápagos (ABG), Ecuador.


This project welcomes contributions and suggestions; a contribution guideline for pull requests will be published soon.

This project has adopted the Microsoft Open Source Code of Conduct. For more information, see the Code of Conduct FAQ or contact us with any additional questions or comments.

Citing PyTorchWildlife

If you find this code useful, please consider citing:

    title={Pytorch-Wildlife: A Collaborative Deep Learning Framework for Conservation},
    author={Miao, Z. and Hernández, A. and Vargas, L. and Nasir, M. and Gupta, A. and Dodhia, R. and Ferres, J.},

We are currently working on a technical report to make PyTorchWildlife a citable paper! We will update this readme once it is published.

Existing Collaborators

Here are a few of the organizations that have used MegaDetector. We're only listing organizations who (a) we know about and (b) have given us permission to refer to them here (or have posted publicly about their use of MegaDetector).

The extensive collaborative efforts of Megadetector have genuinely inspired us, and we deeply value its significant contributions to the community. As we continue to advance with Pytorch-Wildlife, our commitment to delivering technical support to our existing partners on MegaDetector remains the same.

If you have any questions regarding MegaDetector, want to become a collaborator, or would like to be added to this list, please email us!


This repository is licensed with the MIT license.