
A lightweight audio classification model that uses transfer learning, with minimal data and training, to distinguish between healthy and degraded reefs.

CS Project – Reef Soundscape Classification

This project explores the use of underwater acoustics and deep learning to classify reef health states from audio recordings.
It builds a pipeline covering data exploration, model training, and lightweight model deployment.


Project Overview

Healthy coral reefs are vibrant acoustic environments, whereas degraded reefs are quieter and less diverse.
By training models on reef soundscapes, we can automatically classify reef condition and potentially monitor ecosystem health at scale.

This repository contains three main stages:

  1. Exploratory Data Analysis

    • data.ipynb
      Explore reef audio datasets and visualize sound representations (e.g., spectrograms).
  2. Baseline Model Training

    • AudioClassification_all.ipynb
      Train a deep learning classifier on a large, labeled dataset of reef audio recordings from multiple reef sites.
  3. Lightweight Transfer Learning Model

    • AudioClassification_Final.ipynb
      Fine-tune a compact model on 24 hours of data from a single site (healthy vs degraded reef),
      initializing from the baseline model’s weights for efficient transfer learning.
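The spectrogram representations explored in stage 1 can be sketched with a plain NumPy short-time Fourier transform. This is only an illustration; the notebook itself may use librosa or `tf.signal`, and all names below are made up for this example:

```python
import numpy as np

def log_spectrogram(audio, frame_len=1024, hop=512):
    """Log-magnitude STFT spectrogram of a mono signal."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(audio) - frame_len) // hop
    frames = np.stack([audio[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Real FFT of each windowed frame -> (n_frames, frame_len // 2 + 1) bins
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag)

# One second of synthetic "reef" audio at 16 kHz: a 400 Hz tone plus noise
sr = 16_000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 400 * t) + 0.1 * np.random.randn(sr)
spec = log_spectrogram(audio)
print(spec.shape)  # (30, 513): time frames x frequency bins
```

Matrices of this shape (time frames by frequency bins) are what image-style audio classifiers typically take as input.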

Requirements

  • Linux (Ubuntu 20.04+ recommended) or WSL2 with GPU support
  • Conda
  • NVIDIA GPU drivers + CUDA/cuDNN compatible with TensorFlow

Setup

Clone this repository:

git clone https://github.com/Olli365/CS-project.git
cd CS-project

1. Create Conda environment

cd setup/env_setup
conda env create -f cs_conda_env.yml

2. Activate environment

conda activate cs_project

3. Verify TensorFlow & GPU

python tf_test.py

Expected output includes TensorFlow version and available GPUs.
If no GPU is detected, check CUDA paths and run:

export NVIDIA_DIR=$(dirname $(dirname $(python -c "import nvidia.cudnn; print(nvidia.cudnn.__file__)")))
export LD_LIBRARY_PATH=$(echo ${NVIDIA_DIR}/*/lib/ | sed -r 's/\s+/:/g')${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
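The contents of tf_test.py are not reproduced in this README; a check along these lines (an illustrative stand-in, not the script's actual code) would print the TensorFlow version and any visible GPUs:

```python
# Minimal TensorFlow/GPU sanity check. This is an assumed sketch of what
# tf_test.py does, not the script itself.
try:
    import tensorflow as tf
    print("TensorFlow:", tf.__version__)
    gpus = tf.config.list_physical_devices("GPU")
    print("GPUs:", gpus or "none detected")
except ImportError:
    gpus = []
    print("TensorFlow is not installed in this environment")
```

An empty GPU list after the exports above usually points at a CUDA/cuDNN version mismatch with the installed TensorFlow build.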

Usage

Pipeline:

  1. Data exploration:

    jupyter notebook data.ipynb
  2. Baseline model training:

    jupyter notebook AudioClassification_all.ipynb
  3. Final lightweight model (for new locations):

    jupyter notebook AudioClassification_Final.ipynb
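Step 3 fine-tunes from the baseline weights, keeping the learned feature extractor fixed and training only a small classification head. The idea can be illustrated with a pure-NumPy toy (the notebooks use TensorFlow; the dataset, dimensions, and names here are all invented for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Baseline" feature extractor: stands in for the pretrained network's
# frozen layers. It is never updated during fine-tuning.
W_base = rng.normal(size=(513, 32))

def features(x):
    return np.tanh(x @ W_base)  # fixed embedding

# Toy dataset: spectral vectors for "degraded" (0) vs "healthy" (1) samples,
# separated by a small mean shift.
X = rng.normal(size=(200, 513)) + np.linspace(0, 1, 200)[:, None]
y = (np.arange(200) >= 100).astype(float)

# Frozen base: embeddings can be computed once, up front
F = features(X)

# Train only the small head (logistic regression) on top of the embeddings
w, b = np.zeros(32), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(F @ w + b)))  # sigmoid head
    grad = p - y
    w -= 0.1 * F.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = (((F @ w + b) > 0) == y).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

Because only the head's parameters are trained, this approach needs far less data and compute than training the full network, which is what makes the 24-hour single-site dataset workable.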

