Behavior Pipeline UI

A GUI application for YOLO Pose inference and behavior analysis.

Installation

Create an environment:

conda create -n lab_env python=3.10
conda activate lab_env

git clone https://github.com/Lostbelt/behaviour_analysis.git
cd behaviour_analysis
pip install -r requirements.txt
# GPU inference requires a CUDA build of PyTorch (choose the wheel appropriate for your system/driver)
# CUDA example:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126

Windows tip: Install PySide6 via pip (not conda) to avoid Qt DLL conflicts.
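To confirm that the CUDA wheel is usable before selecting the cuda device, a quick check from Python (a hypothetical helper, not part of the app):

```python
def cuda_available() -> bool:
    # Returns False when PyTorch is missing or was built without CUDA support.
    try:
        import torch
    except ImportError:
        return False
    return torch.cuda.is_available()

print(cuda_available())
```

If this prints False after installing the CUDA wheel, the usual culprits are a driver that is too old for the chosen CUDA version or a CPU-only wheel shadowing the GPU one.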


Getting Started

Run the app:

python behav_ano.py

Typical workflow:

  1. Pick model (.pt, Ultralytics YOLO keypoints model) and select device (cuda/cpu).
  2. Add groups → choose folders with videos.
  3. Open Videos tab to play annotated outputs.
  4. Open the Table tab for per-video metrics. Adjust conf, rear ratio threshold, research displacement threshold, and trim duration as needed, then click Rebuild table (no inference rerun needed). Use Load JSON to import cached predictions and Export table to save the sheet.
  5. In Classifier & SHAP, select two groups → Train RF + SHAP.
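As a rough illustration of how a rear-ratio style metric in step 4 might be computed from per-frame pose data (the function name, input, and threshold here are assumptions, not the app's actual code):

```python
def rear_ratio(height_ratios, ratio_threshold=0.6):
    """Fraction of frames counted as rearing, i.e. frames where the
    per-frame keypoint height ratio exceeds the chosen threshold."""
    if not height_ratios:
        return 0.0
    rearing = sum(1 for h in height_ratios if h > ratio_threshold)
    return rearing / len(height_ratios)

# made-up per-frame values for one video
frames = [0.2, 0.7, 0.8, 0.3, 0.9]
print(rear_ratio(frames))  # 3 of 5 frames above 0.6 -> 0.6
```

Because metrics like this are recomputed from cached predictions, changing the threshold only requires rebuilding the table, not rerunning inference.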

Model weights and example videos can be downloaded from the Google Drive link.


Creating Custom Datasets

To create custom datasets or fine-tune the model with your own data, we recommend our annotation tool:

Pose Annotator
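Once annotations are exported in the Ultralytics pose dataset format, fine-tuning could look roughly like this (a sketch; the dataset YAML name and hyperparameters are placeholders, and the `ultralytics` package must be installed):

```python
def finetune(weights="yolov8n-pose.pt", data_yaml="custom_pose.yaml"):
    # Hypothetical wrapper; `data_yaml` points at your exported pose dataset.
    from ultralytics import YOLO  # imported here so the sketch loads without the package
    model = YOLO(weights)                              # pretrained keypoints checkpoint
    model.train(data=data_yaml, epochs=100, imgsz=640) # fine-tune on custom keypoints
```

The resulting best.pt checkpoint can then be selected as the model in step 1 of the workflow above.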
