
AI-Census: Automated Census and Biodiversity Monitoring System using Deep Learning

AI-Census Logo

Description

The "Automated Census and Biodiversity Monitoring System using Deep Learning" is an ongoing project focused on developing an advanced protocol and workflow for wildlife monitoring using camera trapping, citizen science, and deep learning techniques. The primary objective of this project is to create a powerful Neural Network for species classification using camera-trap images obtained from Doñana National Park.

By leveraging cutting-edge deep learning methods, the project aims to achieve unbiased estimates of species and community dynamics. This will enable cost-effective and prompt responses to ecological changes, ultimately contributing to the conservation and understanding of biodiversity in the region.

Features

  • Utilization of camera-trap images from Doñana National Park.
  • Generation of bounding boxes and JSON annotations for images in COCO format.
  • Development of various datasets for training, validation and evaluation to check for data poisoning effects and to try to eliminate them.
  • Development of a comprehensive dataset for training, validation and testing.
  • Creation of a state-of-the-art Neural Network for species detection and classification.
  • Modification of YOLOv8 scripts to obtain activation scores.
  • Creation of a script that performs hierarchical post-processing of the classifications, with configurable variables to tune performance (see the sketch below).
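
A minimal sketch of what that hierarchical post-processing could look like. The taxonomy mapping, class names, threshold value, and function name below are illustrative assumptions, not the repository's actual script.

```python
# Illustrative hierarchical post-processing: if the species-level confidence
# is below a threshold, fall back to a coarser taxonomic group label.
# The taxonomy, class names, and threshold are hypothetical examples.

SPECIES_TO_GROUP = {
    "red_deer": "ungulate",
    "fallow_deer": "ungulate",
    "wild_boar": "ungulate",
    "red_fox": "carnivore",
    "iberian_lynx": "carnivore",
}

def hierarchical_label(species: str, confidence: float, threshold: float = 0.5) -> str:
    """Return the species name if the prediction is confident enough, otherwise its group."""
    if confidence >= threshold:
        return species
    return SPECIES_TO_GROUP.get(species, "animal")

# Example: a low-confidence "iberian_lynx" detection is reported as "carnivore".
print(hierarchical_label("iberian_lynx", 0.32))  # -> "carnivore"
```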

Project Structure

The repository is organized as follows:

  • AI_Census/: Contains the main code related to the model.
  • Assignments/: Contains workshop assignments completed before the workshop started.
    • Assignment0_DataExploration/: Data Visualization from COCO file.
  • configs/: Contains model configurations.
  • Data/: Contains the dataset annotations in different formats:
    • CSVs/: Annotations in CSV format.
    • JSONs/: Annotations in JSON format.
    • TXTs/: Annotations in TXT format.
    • YAMLs/: Annotations in YAML format.
  • Dataset/: Contains the dataset:
    • images/: Contains the dataset images.
    • labels/: Contains the dataset labels (class + bounding box). It mirrors the images/ directory structure, with each image replaced by a TXT file containing the corresponding annotations (the label format is sketched after this list).
    • multispecies.jpeg: Multispecies image not present in the main dataset, for experimental purposes.
    • test.txt: Test Dataset TXT file.
    • train.txt: Train Dataset TXT file.
    • validation.txt: Validation Dataset TXT file.
    • val_unique_locations.txt: Validation Dataset TXT file, including only images from unique locations.
  • runs/detect: Contains the prediction and validation folders generated by YOLOv8.
  • Scripts/: Includes scripts for data preprocessing, bounding-box generation, conversion between file formats, image visualization with bounding boxes, data splitting, dataset distribution visualization, and more.
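
For reference, each TXT file in labels/ follows the standard YOLO convention: one line per object, containing a class index and a bounding box normalized to the image size. Below is a minimal parsing sketch; the file path and image dimensions are placeholders, and the helper is not an actual repository script.

```python
# Parse a YOLO-format label file: each line is
# "<class_id> <x_center> <y_center> <width> <height>",
# with all coordinates normalized to [0, 1] relative to the image size.

def read_yolo_labels(path, img_width, img_height):
    boxes = []
    with open(path) as f:
        for line in f:
            cls, xc, yc, w, h = line.split()
            xc, yc, w, h = float(xc), float(yc), float(w), float(h)
            # Convert normalized center/size to pixel corner coordinates.
            x1 = (xc - w / 2) * img_width
            y1 = (yc - h / 2) * img_height
            x2 = (xc + w / 2) * img_width
            y2 = (yc + h / 2) * img_height
            boxes.append((int(cls), x1, y1, x2, y2))
    return boxes

# Example usage with a placeholder label file and image size:
# boxes = read_yolo_labels("Dataset/labels/example.txt", 1920, 1080)
```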

Model Information

The model is YOLOv8 trained on custom data for detection and classification; a minimal training sketch is shown below.
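
A minimal sketch of training and running inference with the Ultralytics YOLOv8 API. The base weights, dataset YAML path, and hyperparameters are placeholders, not the exact configuration used in this project.

```python
from ultralytics import YOLO

# Fine-tune pretrained YOLOv8 weights on the custom camera-trap dataset.
# The weights file, YAML path, and hyperparameters are placeholders.
model = YOLO("yolov8n.pt")
model.train(data="Data/YAMLs/dataset.yaml", epochs=100, imgsz=640)

# Run detection + classification on new images and inspect the predictions.
results = model.predict("Dataset/images", conf=0.25)
for r in results:
    for box in r.boxes:
        print(int(box.cls), float(box.conf), box.xyxy.tolist())
```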

If you want more information about the decisions I made for the project, you can check this.

Current best model:

Cool stuff related to this workshop

  • Contributed a Bug Fix to YOLOv8 that was merged into the main repository.
