This repository contains the training code for the Microsoft AI for Earth Species Classification API, along with the code for our API demo page. The API classifies handheld photos of roughly 5,000 plant and animal species. The repo also includes a pipeline for training detectors, and an API layer that simplifies running inference with an existing model, on either whole images or detected crops.
The training data is not included in this repo, so think of this repo as a set of tools for training fine-grained classifiers. If you want lots of animal-related data to experiment with, check out our open data repository at lila.science, which also hosts LILA's list of other conservation-related data sets.
## I don't want to train anything, I just want your model
No problem! The model is publicly available:
- PyTorch model file
- ONNX model file
- TFLite model file
- Class list (scientific names)
- Class list (common names)
- Taxonomy file used for Latin → common mapping
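The taxonomy file drives the Latin → common mapping. As a minimal sketch of how such a lookup can be built — assuming a simple two-column CSV of `scientific_name,common_name`, which may not match the real file's layout — you might do:

```python
import csv
import io

def load_common_names(taxonomy_csv_text):
    """Build a Latin-name -> common-name lookup from two-column CSV text.

    The two-column layout (scientific_name,common_name) is an assumption;
    adapt the column handling to the actual taxonomy file's format.
    """
    mapping = {}
    for row in csv.reader(io.StringIO(taxonomy_csv_text)):
        if len(row) >= 2:
            # Normalize the scientific name so lookups are case-insensitive
            mapping[row[0].strip().lower()] = row[1].strip()
    return mapping

# Illustrative data only, not taken from the real taxonomy file
sample = "puma concolor,Cougar\nbos taurus,Cattle\n"
names = load_common_names(sample)
print(names["puma concolor"])  # -> Cougar
```

In practice you would read the repo's taxonomy file from disk instead of an inline string; the normalization step matters because class lists and taxonomy files often disagree on capitalization.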
Your one-stop-shop for learning how to run this model is the classify_images.py script in the root of this repo.
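A typical invocation might look like the following sketch; every flag name here is an illustrative assumption, so check `classify_images.py` itself (or its `--help` output) for the actual arguments:

```shell
# Hypothetical invocation -- flag names are placeholders, not the script's real arguments
python classify_images.py \
    --model species_classification.onnx \
    --classlist classes.txt \
    --images ./photos \
    --output predictions.csv
```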
Thanks to Joe Syzmanski for converting the model to TFLite.
## Getting started with model training
See the README in the PyTorchClassification directory to get started training your own classification models with this PyTorch-based framework.
## And if you love snakes...
This repository is licensed under the MIT license.
The FasterRCNNDetection directory is based on https://github.com/chenyuntc/simple-faster-rcnn-pytorch.
The PyTorchClassification directory is based on the ImageNet example from the PyTorch codebase.
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.