GazeAnnot


Interactive desktop application for labeling eye movement events (saccades, microsaccades, post-saccadic oscillations) and training U'n'Eye deep learning models for automatic saccade detection.

Built with PyQt6 and pyqtgraph.


Features

Labeling

  • Load eye position data from CSV, MAT (v7.3), or NumPy files
  • Interactive time series viewer with synchronized eye position (X+Y overlaid) and velocity plots
  • Click-drag to select time regions, press Space/Enter to apply labels
  • User-definable event classes with custom names, colors, and keyboard shortcuts
  • Right-click on labeled segments to change class
  • Double-click to erase a labeled segment
  • Undo/redo for all label edits
  • Labels auto-save to CSV alongside your data files
  • Gaussian smoothing and 5-point median filter toggles
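
The two smoothing toggles can be sketched with standard SciPy filters. This is an illustrative preprocessing pipeline, not GazeAnnot's exact implementation; the Gaussian sigma and the velocity computation are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d, median_filter

def preprocess_trial(eye_x, eye_y, fs=1000.0, sigma_samples=2.0):
    """Smooth one trial of eye position and derive radial velocity.

    sigma_samples and the velocity formula are illustrative
    assumptions, not GazeAnnot's exact settings.
    """
    x = gaussian_filter1d(eye_x, sigma_samples)  # "G" toggle: Gaussian smooth
    y = gaussian_filter1d(eye_y, sigma_samples)
    x = median_filter(x, size=5)                 # "M" toggle: 5-point median
    y = median_filter(y, size=5)
    # Radial velocity in deg/s from sample-to-sample position differences
    vel = np.hypot(np.gradient(x), np.gradient(y)) * fs
    return x, y, vel
```

A filtered trace like this is what the synchronized position and velocity plots would display.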

Training

  • Full Training: train on all labeled data, save weights
  • K-fold Cross-Validation: honest generalization metrics with per-fold loss curves
  • Live loss curve visualization (overlaid per fold in CV mode)
  • Per-class metrics: F1, precision, recall, and support for each event class
  • Option to collapse multi-class labels to binary (saccade vs fixation) for training
  • Configurable: learning rate, max epochs, data augmentation, min saccade duration/distance
  • "Retrain on All Data" button after CV for final deployment model
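
The cross-validation split can be sketched with scikit-learn's KFold. Splitting at the trial level (so no trial appears in both train and validation folds) is the natural choice for trials x timepoints data, though the fold count and seed here are assumed defaults, not GazeAnnot's.

```python
import numpy as np
from sklearn.model_selection import KFold

def kfold_trial_indices(n_trials, n_splits=5, seed=0):
    """Return (train, val) trial-index pairs for cross-validation.

    Trial-level splitting keeps every timepoint of a trial in the
    same fold; n_splits and seed are illustrative assumptions.
    """
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    return list(kf.split(np.arange(n_trials)))
```

Training once per fold and overlaying the loss curves yields the per-fold visualization described above.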

Evaluation

  • Run predictions with any trained weights file
  • Per-class metrics table (F1, precision, recall per class)
  • Overall metrics: Cohen's kappa, accuracy, weighted F1
  • Onset/offset error histograms
  • Confusion browser: navigate trials with prediction errors
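
All of the listed metrics are available in scikit-learn, which GazeAnnot depends on. The sketch below computes them over flattened per-timepoint labels; the exact averaging GazeAnnot applies is an assumption.

```python
from sklearn.metrics import (accuracy_score, cohen_kappa_score, f1_score,
                             precision_recall_fscore_support)

def evaluate_labels(y_true, y_pred):
    """Compute overall and per-class metrics for timepoint labels.

    Assumes labels are flattened to one integer per timepoint;
    weighted-F1 averaging is an illustrative choice.
    """
    prec, rec, f1, support = precision_recall_fscore_support(
        y_true, y_pred, zero_division=0)
    return {
        "kappa": cohen_kappa_score(y_true, y_pred),
        "accuracy": accuracy_score(y_true, y_pred),
        "weighted_f1": f1_score(y_true, y_pred, average="weighted"),
        "per_class": list(zip(prec, rec, f1, support)),  # one row per class
    }
```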

Navigation

  • Sidebar with three views: Label, Train, Evaluate
  • Keyboard shortcuts: Left/Right arrows for trial navigation, 1-9 for class selection
  • Trackpad support: horizontal swipe to pan, vertical scroll to zoom time axis
  • Auto-loads last dataset on startup

Installation

1. Create a conda environment

conda create -n gazeannot python=3.11 -y
conda activate gazeannot

2. Install dependencies

pip install PyQt6 pyqtgraph numpy scipy pandas mat73 torch scikit-learn scikit-image matplotlib

3. Clone and run

git clone https://github.com/timnaher/gazeannot.git
cd gazeannot
python main.py

Quick Start

  1. Launch: python main.py
  2. Load data: File > Open Data (Ctrl+O) — select your eye X file; the Y file is auto-detected
  3. Label: Drag on the plot to select a time range, press Space to apply the active class
  4. Switch classes: Press 1 (Saccade), 2 (Microsaccade), 3 (PSO), or click the buttons
  5. Navigate: Left/Right arrow keys to browse trials
  6. Train: Click "Train Model" or switch to Train view via the sidebar

Data Format

GazeAnnot expects eye position data as:

  • CSV: rows = trials, columns = timepoints. Provide separate X and Y files (e.g., eye_x.csv, eye_y.csv); the Y file is auto-detected from the X filename.
  • MAT: MATLAB v7.3 files. A variable picker dialog lets you select which variables are X and Y.
  • NPY/NPZ: NumPy arrays with the same structure as CSV (trials x timepoints).
  • Sampling frequency: 1000 Hz (default)
  • Units: degrees of visual angle
  • Labels: integer per timepoint (0 = unlabeled/fixation, 1 = saccade, 2 = microsaccade, etc.)

Auto-detected label files

When loading data, GazeAnnot searches for label files in the same directory:

  • <prefix>_labels.csv or <prefix>_binary_labels.csv
  • labels.csv or binary_labels.csv

Labels are auto-saved to CSV whenever you navigate between trials.

Keyboard Shortcuts

Key                     Action
Ctrl+O                  Open data
Ctrl+S                  Save labels
Ctrl+Z / Ctrl+Shift+Z   Undo / Redo
Left / Right            Previous / Next trial
1-9                     Select event class
Space / Enter           Apply label to selection
Escape                  Clear selection
G                       Toggle Gaussian smooth
M                       Toggle median filter
Ctrl+1 / 2 / 3          Switch to Label / Train / Eval view

Mouse Controls

Action                        Effect
Left-drag                     Create selection region
Single click                  Clear selection
Double-click on label         Erase that labeled segment
Right-click on label          Change class (context menu)
Scroll up/down                Zoom time axis
Two-finger swipe left/right   Pan in time

Sample Data

The data/ directory contains sample eye tracking data (54 trials at 1000 Hz) for testing.

Citation

If you use GazeAnnot in your research, please cite the U'n'Eye model:

Bellet, M.E., Bellet, J., Nienborg, H., Hafed, Z.M., & Berens, P. (2019). Human-level saccade detection performance using deep neural networks. Journal of Neurophysiology, 121(2), 646-661.

License

MIT License. See LICENSE.

The bundled U'n'Eye model code is by Bellet et al. — see uneye/LICENSE_NOTICE.md.
