VAME workflow

VAME in a Nutshell

VAME (Variational Animal Motion Embedding) is a framework for clustering behavioral signals obtained from pose-estimation tools. It is a PyTorch-based deep learning framework that leverages recurrent neural networks (RNNs) to model sequential data. To learn the underlying complex data distribution, we use the RNN in a variational autoencoder setting to extract the latent state of the animal at every time step.
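The exact model is defined in the VAME codebase; the sketch below only illustrates the general idea of an RNN variational autoencoder: an RNN encoder summarizes a window of pose trajectories into the parameters of a latent distribution, a sample is drawn with the reparameterization trick, and an RNN decoder reconstructs the input from that sample. All names and dimensions here are illustrative assumptions, not VAME's actual implementation.

    import torch
    import torch.nn as nn

    class RNNVAE(nn.Module):
        """Minimal RNN-VAE sketch; illustrative only, not VAME's actual model."""

        def __init__(self, n_features=12, hidden=64, latent=10):
            super().__init__()
            self.encoder = nn.GRU(n_features, hidden, batch_first=True)
            self.to_mu = nn.Linear(hidden, latent)      # mean of q(z | x)
            self.to_logvar = nn.Linear(hidden, latent)  # log-variance of q(z | x)
            self.from_z = nn.Linear(latent, hidden)
            self.decoder = nn.GRU(hidden, hidden, batch_first=True)
            self.readout = nn.Linear(hidden, n_features)

        def forward(self, x):
            # x: (batch, time, n_features) pose trajectories
            _, h = self.encoder(x)  # final hidden state summarizes the window
            mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
            h0 = self.from_z(z).unsqueeze(0).contiguous()  # latent state seeds the decoder
            dec_in = self.from_z(z).unsqueeze(1).repeat(1, x.size(1), 1)
            out, _ = self.decoder(dec_in, h0)
            return self.readout(out), mu, logvar

    def vae_loss(x, x_hat, mu, logvar):
        # reconstruction error plus KL divergence to the standard-normal prior
        recon = ((x - x_hat) ** 2).mean()
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

The time series of latent states z is what downstream clustering operates on to group behavioral signals into motifs.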


The VAME workflow consists of five steps, which are explained in detail here.

Installation

To get started we recommend using Anaconda with Python 3.6 or higher. With Anaconda you can create a virtual environment that holds all the dependencies necessary for VAME.

  • Install the current stable PyTorch release using the OS-dependent instructions from the PyTorch website. Currently, VAME is tested on PyTorch 1.5.
  • Go to the locally cloned VAME directory and run python setup.py install to install VAME in your active Python environment (a quick sanity check is sketched below).
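To verify the installation, you can run a short check in Python. This is a minimal sketch, assuming a standard PyTorch build and that setup.py installs VAME under the package name vame; the exact version string depends on your install.

    import torch
    import vame  # assumed package name; should import cleanly after `python setup.py install`

    print(torch.__version__)  # VAME is currently tested on PyTorch 1.5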

Getting Started

First, make sure that you have a GPU powerful enough to train deep learning networks. In our paper, we used a single Nvidia GTX 1080 Ti to train our network. A hardware guide can be found here. Once your hardware is ready, try VAME by following the workflow guide.
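If you are unsure what GPU is visible to PyTorch, the standard CUDA query calls give a quick overview; as a reference point, the GTX 1080 Ti used in the paper has roughly 11 GB of memory. A minimal check:

    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        # a GTX 1080 Ti reports roughly 11 GB here
        print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB")
    else:
        print("No CUDA device found; training on the CPU will be very slow.")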

News

  • November 2020: We uploaded an egocentric alignment script to allow more researchers to use VAME.
  • October 2020: We updated our manuscript on bioRxiv.
  • May 2020: Our preprint "Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion" is out! Read it on bioRxiv!

Authors and Code Contributors

VAME was developed by Kevin Luxem and Pavol Bauer.

The development of VAME is heavily inspired by DeepLabCut. As such, the VAME project management codebase has been adapted from the DeepLabCut codebase. The DeepLabCut 2.0 toolbox is © A. & M. Mathis Labs (www.deeplabcut.org) and is released under LGPL v3.0.

References

VAME preprint: Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion

License: GPLv3

See the LICENSE file for the full statement.
