
VocalMat - Dietrich Lab
Analysis of ultrasonic vocalizations from mice using computer vision and machine learning

If you use VocalMat or any part of it in your own work, please cite Fonseca et al.:

@article{Fonseca2021AnalysisOU,
  title={Analysis of ultrasonic vocalizations from mice using computer vision and machine learning},
  author={Antonio H. O. Fonseca and Gustavo Madeira Santana and Gabriela M Bosque Ortiz and Sergio Bampi and Marcelo O. Dietrich},
  journal={eLife},
  year={2021},
  volume={10}
}

For more information, visit our website: VocalMat - Dietrich Lab

Dataset and audio files used in the paper are available at: OSF Repository

Table of Contents

  • Description
  • Features
  • Getting Started
  • Usage
  • Requirements
  • FAQ

Description

VocalMat is an automated tool that identifies and classifies mouse vocalizations.

VocalMat is divided into two main components: the VocalMat Identifier and the VocalMat Classifier.

VocalMat Workflow

VocalMat Identifier detects vocalization candidates in the audio file. Vocalization candidates are detected through a series of image processing operations and differential geometry analysis over spectrogram information. The VocalMat Identifier optionally outputs a MATLAB-formatted file (.MAT) with information about the spectral content of detected vocalizations (e.g., frequency, intensity, timestamp), which is later used by the VocalMat Classifier.
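If you save the Identifier output, one way to see what was stored is to load the .MAT file and list its variables in MATLAB. This is only a sketch: the file path below is a placeholder, and the exact variable layout depends on your VocalMat version, so inspect it rather than assuming specific names.

% Sketch: inspect the Identifier's optional .MAT output.
% The path is a placeholder; point it at the file your run produced.
S = load('path/to/identifier_output.mat');   % hypothetical file name
disp(fieldnames(S))                          % list the saved variables
% Expect per-candidate spectral information (e.g., frequency, intensity,
% timestamps), as described above.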

VocalMat Classifier uses a Convolutional Neural Network (CNN) to classify each vocalization candidate into 12 distinct labels: short, flat, chevron, reverse chevron, downward frequency modulation, upward frequency modulation, complex, multi steps, two steps, step down, step up, and noise.

VocalMat labels

Features

  • 11 Classification Classes: VocalMat is able to distinguish between 11 classes of vocalizations (see Figure above), according to definitions adapted from Grimsley et al., 2011.
  • Noise Detection: eliminates vocalization candidates associated with mechanical or segmentation noise.
  • Harmonic Detection: detects vocalizations with components overlapping in time.
  • Manifold Visualization and Alignment: visualize the vocal repertoire using Diffusion Maps and align manifolds to compare different animals.
  • Fast Performance: optimized versions for personal computers and high-performance computing (clusters).

Getting Started


You must have Git LFS installed to fully clone the repository. Download Git LFS

If in doubt, proceed to the Manual Download section.

Latest Stable Version

$ git clone https://github.com/ahof1704/VocalMat.git

Latest (Unreleased) Version

$ git clone -b VocalMat_RC --single-branch https://github.com/ahof1704/VocalMat.git

Using a Git client

You can use a Git client to clone our repository; we recommend GitHub's own client:

Download at: https://desktop.github.com

Manual download

You can download VocalMat by using GitHub's Download Zip option. However, since we use Git LFS (Git Large File Storage), two necessary files will not be downloaded automatically. Follow these instructions if downloading manually:

1. Download this repository as a zip file: Download Zip
2. Extract the .zip file. This is the VocalMat directory.
3. Download the example audio file: Download Audio
4. Place the audio file in the audios folder inside the VocalMat directory.
5. Download the neural network model file: Download Model
6. Place the model file in the vocalmat_classifier folder inside the VocalMat directory (a quick check that both files are in place is sketched below).
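To confirm both files ended up in the right folders, you can run a quick check from MATLAB inside the VocalMat directory. The file names below are placeholders; substitute the actual names of the files you downloaded.

% Sketch: confirm the manually downloaded LFS files are in place.
% Run from inside the VocalMat directory; file names are placeholders.
audioOK = exist(fullfile('audios', 'example_audio.wav'), 'file') == 2;
modelOK = exist(fullfile('vocalmat_classifier', 'classifier_model.mat'), 'file') == 2;
fprintf('Audio file present: %d\nModel file present: %d\n', audioOK, modelOK);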

Directory Structure

  • vocalmat_identifier: Directory with all files and scripts related to the VocalMat Identifier
  • vocalmat_classifier: Directory with all files and scripts related to the VocalMat Classifier
  • audios: Directory where you place the audio files you want to analyze

Usage

VocalMat Manual Execution

Navigate to the VocalMat directory in MATLAB and run VocalMat.m by either opening the file or typing VocalMat in MATLAB's command window. Once VocalMat is running, choose the audio file you want to analyze. An example audio file is provided, and you can use it to test VocalMat.
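For example, from MATLAB's command window (the path below is a placeholder for wherever you cloned or extracted VocalMat):

% Move into the VocalMat directory and launch the tool;
% a file-selection dialog will then ask for the audio file to analyze.
cd('path/to/VocalMat')   % placeholder: your local VocalMat directory
VocalMat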

VocalMat Output Files

VocalMat outputs a directory with the same name as the audio file that was analyzed. Inside that directory there will be two directories (All, and All_axes if save_plot_spectrograms=1) and two Microsoft Excel (.xlsx) files. Inside All_axes you will find one image for each detected vocalization candidate, with the resulting segmentation illustrated by blue circles; the raw original images are available inside All. The main Excel file has the same name as the audio file analyzed (audio_file_name.xlsx). It contains information on each vocalization, such as start and end time, duration, frequency (minimum, mean, and maximum), bandwidth, intensity (minimum, mean, maximum, and corrected based on the background), existence of harmonic components or distortions (noisy), and call type. The second Excel file, named audio_file_name_DL.xlsx, shows the probability distribution over the vocal classes for each vocalization candidate.
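As an illustration, the main spreadsheet can be loaded back into MATLAB with readtable for downstream analysis. This is only a sketch: use the file name VocalMat produced for your recording, and check the actual column headers before relying on specific names (the class column in the commented line is an assumption).

% Sketch: load the main output spreadsheet for further analysis.
T = readtable('audio_file_name.xlsx');      % file produced for your audio
disp(T.Properties.VariableNames)            % inspect the actual column names

% Example (assumed column name): count vocalizations per call type.
% Requires R2019a or newer for groupcounts.
% counts = groupcounts(T, 'Class');         % uncomment once the class
%                                           % column name is confirmed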

Requirements

Recordings
  • Recording protocol: follow the protocol established by Ferhat et al., 2016.
  • Sampling rate: we recommend a sampling rate of 250 kHz (Fmax = 125 kHz, the Nyquist limit).
Software Requirements
  • MATLAB: versions R2017a through R2019b. For other versions, refer to the FAQ.
  • MATLAB Add-Ons (a quick check that these are installed is sketched after this list):
    • Signal Processing Toolbox
    • Deep Learning Toolbox
    • Image Processing Toolbox
    • Statistics and Machine Learning Toolbox
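As a quick sanity check, you can list the installed toolboxes with MATLAB's built-in ver and compare against this list. Note that in releases before R2018b the Deep Learning Toolbox was named Neural Network Toolbox, so adjust the names accordingly.

% Sketch: verify the required Add-Ons are installed using the built-in `ver`.
v = ver;                                    % struct array of installed products
installed = {v.Name};
required = {'Signal Processing Toolbox', 'Deep Learning Toolbox', ...
            'Image Processing Toolbox', 'Statistics and Machine Learning Toolbox'};
for k = 1:numel(required)
    if any(strcmp(required{k}, installed))
        fprintf('%s: installed\n', required{k});
    else
        fprintf('%s: MISSING\n', required{k});
    end
end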

FAQ

  • Will VocalMat work with my MATLAB version?

VocalMat was developed and tested using MATLAB versions R2017a through R2019b. We cannot guarantee that it will work in other versions of MATLAB. If your MATLAB version supports all the required Add-Ons, VocalMat should work.

  • What are the hardware requirements to run VocalMat?

The duration of the audio files that can be processed in VocalMat is limited by the amount of RAM your computer has. We estimate around 1 GB of RAM for every minute of recording when using one-minute segments; for a 10-minute recording, your computer should have at least 10 GB of RAM available. RAM usage will vary depending on your MATLAB version and computer, so these numbers are just estimates.

  • Will VocalMat work with my HPC Cluster?

For our scripts to work on your cluster, it must have Slurm support. Minor changes might be needed to adapt the scripts to your cluster configuration.

  • I want a new feature for VocalMat, can I contribute?

Yes! If you like VocalMat and want to help us add new features, please create a pull request!
