A GUI application for YOLO Pose inference and behavior analysis.
Create an environment:
conda create -n lab_env python=3.10
conda activate lab_env
git clone https://github.com/Lostbelt/behaviour_analysis.git
cd behaviour_analysis
pip install -r requirements.txt
# GPU inference needs a CUDA build of PyTorch (choose the wheel appropriate for your system/driver).
# CUDA example:
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu126

Windows tip: install PySide6 via pip (not conda) to avoid Qt DLL conflicts.
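Before launching the app you can check whether the installed PyTorch build actually sees a GPU; a minimal sketch (the helper name `pick_device` is illustrative, not part of the app):

```python
def pick_device() -> str:
    """Return "cuda" if a CUDA-enabled PyTorch build sees a GPU, else "cpu"."""
    try:
        import torch  # only present after the pip install step above
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass  # CPU-only environments fall back gracefully
    return "cpu"

print(pick_device())
```

If this prints `cpu` on a machine with an NVIDIA GPU, the installed wheel is likely the CPU-only build.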
Run the app:
python behav_ano.py

Typical workflow:
- Pick a model (.pt, Ultralytics YOLO keypoints model) and select a device (cuda/cpu).
- Add groups → choose folders with videos.
- Open Videos tab to play annotated outputs.
- See the Table tab for per-video metrics. Adjust conf, the rear ratio threshold, the research displacement threshold, and the trim duration as needed, then click Rebuild table (no inference rerun needed). Use Load JSON to import cached predictions and Export table to save the sheet.
- In Classifier & SHAP, select two groups → Train RF + SHAP.
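Rebuild table works because the per-frame predictions are cached, so changing a threshold only re-aggregates stored values. A minimal sketch of that idea for the rear ratio metric (the per-frame ratio list and the function name `rear_ratio` are illustrative assumptions, not the app's actual JSON layout):

```python
def rear_ratio(frames, rear_threshold):
    """Fraction of frames whose height ratio exceeds the rearing threshold.

    `frames` stands in for per-frame ratio values cached in the predictions
    JSON; the real file layout may differ.
    """
    if not frames:
        return 0.0
    rearing = sum(1 for r in frames if r > rear_threshold)
    return rearing / len(frames)

# Changing the threshold only re-aggregates cached values -- no new inference.
ratios = [0.4, 0.9, 1.2, 0.3, 1.1]
print(rear_ratio(ratios, rear_threshold=1.0))  # 2 of 5 frames exceed 1.0 -> 0.4
```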
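The Train RF + SHAP step fits a random forest on the per-video metric vectors of the two selected groups and then attributes the classifier's decisions to individual metrics with SHAP. A hedged sketch with toy data (the feature values are fabricated stand-ins for the real table rows, and the SHAP call is shown only as a comment):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Toy per-video metric vectors for two groups (stand-ins for real table rows).
group_a = rng.normal(0.0, 1.0, size=(20, 4))
group_b = rng.normal(0.8, 1.0, size=(20, 4))
X = np.vstack([group_a, group_b])
y = np.array([0] * 20 + [1] * 20)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the toy data

# In the app, per-metric attribution would then come from SHAP, e.g.:
#   explainer = shap.TreeExplainer(clf)
#   shap_values = explainer.shap_values(X)
```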
Model weights and example videos can be downloaded from the Google Drive link.
For creating custom datasets or fine-tuning the model with your own data, we recommend using our annotation tool.
