
EyeLoop


EyeLoop is a Python 3-based eye-tracker tailored specifically to dynamic, closed-loop experiments on consumer-grade hardware. The software is actively maintained, and users are encouraged to contribute to its development.


  • High-speed eye-tracking on non-specialized hardware (no dedicated processing units necessary).
  • Modular, readable, customizable.
  • Open-source, and entirely Python 3.
  • Works on any platform, easy installation.
  • Actively maintained.


How it works

EyeLoop consists of two functional domains: the engine and the optional modules. The engine performs the eye-tracking, whereas the modules perform optional tasks, such as:

  • Experiments
  • Data acquisition
  • Importing video sequences to the engine

The modules import data to or extract data from the engine, and are therefore called Importers and Extractors, respectively.

One of EyeLoop's most appealing features is its modularity: Experiments are built simply by combining modules with the core Engine. Thus, the Engine has one task only: to compute eye-tracking data based on an imported sequence, and offer the generated data for extraction.

How does the Engine work?
How does the Importer work?
How does the Extractor work?
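The division of labor described above can be sketched in a few lines. Note that every class, method, and attribute name below (Importer.route, Engine.iterate, engine.dataout) is illustrative only, not EyeLoop's actual API:

```python
# Illustrative sketch of EyeLoop's modular wiring: an importer feeds frames
# to the engine, which computes tracking data and offers it to extractors.
# All names here are hypothetical, chosen for the example.

class Importer:
    """Feeds frames (here: a canned list) to the engine."""
    def __init__(self, frames):
        self.frames = frames

    def route(self, engine):
        for frame in self.frames:
            engine.iterate(frame)

class PrintExtractor:
    """Receives the engine after every tracking step."""
    def fetch(self, engine):
        print(engine.dataout)

class Engine:
    """Computes (stub) tracking data and offers it for extraction."""
    def __init__(self, extractors):
        self.extractors = extractors
        self.dataout = {}

    def iterate(self, frame):
        # A real engine would fit a pupil model here; we just average.
        self.dataout = {"frame_mean": sum(frame) / len(frame)}
        for extractor in self.extractors:
            extractor.fetch(self)

engine = Engine(extractors=[PrintExtractor()])
Importer(frames=[[0, 50, 100], [10, 20, 30]]).route(engine)
```

Because the engine only ever calls `fetch()` on whatever extractors it was given, experiments can be swapped without touching the tracking code.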

Getting started


Install EyeLoop simply by cloning the repository:

git clone

Dependencies: python -m pip install -r requirements.txt

Using pip: pip install .

You may want to use a Conda or Python virtual environment when installing eyeloop, to avoid conflicts with your system dependencies.

Using pip and a virtual environment:

python -m venv venv
source venv/bin/activate
(venv) pip install .


  • numpy: pip install numpy
  • opencv: pip install opencv-python

To download full examples with footage, check out EyeLoop's playground repository:

git clone


EyeLoop is initiated through the command-line utility eyeloop.


To access the video sequence, EyeLoop must be connected to an appropriate importer class module. Usually, the default opencv importer class (cv) is sufficient. For some machine vision cameras, however, a vimba-based importer (vimba) is necessary.

eyeloop --importer cv/vimba

Click here for more information on importers.

To perform offline eye-tracking, we pass the video argument --video with the path of the video sequence:

eyeloop --video [file]/[folder]

EyeLoop can be used on a multitude of eye types, including those of rodents and of human and non-human primates. Users can tailor their eye-tracking session to a given species using the --model argument.

eyeloop --model ellipsoid/circular

In general, the ellipsoid pupil model is best suited for rodents, whereas the circular model is best suited for primates.
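To illustrate what a pupil model involves, here is a minimal least-squares circle fit (the Kåsa method) to sampled pupil-edge points. This is a generic sketch of the math underlying a circular model, not EyeLoop's actual implementation:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit: returns center (a, b) and radius r.

    Solves x^2 + y^2 = 2ax + 2by + c, where c = r^2 - a^2 - b^2.
    """
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return (a, b), np.sqrt(c + a**2 + b**2)

# Synthetic pupil edge: a circle of radius 3 centered at (10, 5).
theta = np.linspace(0, 2 * np.pi, 50, endpoint=False)
edge = np.column_stack([10 + 3 * np.cos(theta), 5 + 3 * np.sin(theta)])
center, radius = fit_circle(edge)
```

An ellipse fit adds parameters for elongation and orientation, which helps when the eye is imaged off-axis (common in rodent setups) and the pupil appears foreshortened; the circle's fewer parameters make it more robust when the pupil stays near-circular, as in frontal primate recordings.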

To see all command-line arguments, pass:

eyeloop --help

Designing your first experiment

In EyeLoop, experiments are built by stacking modules. For your first experiment, we'll need a data acquisition class to log the data, and an experiment class to produce a stimulus. Both classes are passed to the engine as an extractor array:

extractors = [Experiment(), DAQ()]

Extractors are instantiated at start-up. At every subsequent time-step, the extractor's fetch() function is called by the engine.

class Extractor:
    def __init__(self) -> None:
        ...  # set up the module; runs once at start-up

    def fetch(self, engine) -> None:
        ...  # called by the engine at every time-step

fetch() gains access to all eye-tracking data in real-time via the engine pointer.

Click here for more information on extractors.
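The closed-loop access that fetch() provides can be put to work directly, for instance to trigger an event when the tracked pupil dilates past a threshold. In this sketch, engine.dataout and its "pupil_area" key are hypothetical names used for illustration, not EyeLoop's real API:

```python
# Sketch of a closed-loop extractor: it watches a (hypothetical)
# "pupil_area" value on the engine and fires a callback on each
# upward threshold crossing.

class ThresholdExtractor:
    """Runs a callback whenever pupil area rises across a threshold."""
    def __init__(self, threshold, on_cross) -> None:
        self.threshold = threshold
        self.on_cross = on_cross
        self.above = False          # were we above threshold last step?

    def fetch(self, engine) -> None:
        area = engine.dataout.get("pupil_area", 0.0)
        if area > self.threshold and not self.above:
            self.on_cross(area)     # e.g., trigger a stimulus here
        self.above = area > self.threshold
```

Because the extractor keeps its own state between time-steps, it reacts once per crossing rather than on every frame above threshold.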

Data acquisition class

For our data acquisition class, we define the directory path for our log using the instantiator:

class DAQ:
    def __init__(self) -> None:
        self.log_path = ...

Then, we extract the eye-tracking data via fetch() and save it in JSON format.

    def fetch(self, engine) -> None:
        ...  # serialize the engine's data and append it to the log

Note: EyeLoop includes a standard data-parser utility that can convert the JSON log into CSV. With it, users can easily compute and plot the eye's angular coordinates and the size of the pupil.

Experiment class

For our experiment class, we'll design a simple open-loop where the brightness of a PC monitor is linked to the phase of the sine function. We define the sine frequency and phase using the instantiator:

class Experiment:
    def __init__(self) -> None:
        self.frequency = ...
        self.phase = 0

Using fetch(), we advance the phase of the sine at every time-step, and use it to control the brightness of an OpenCV-rendered window.

    def fetch(self, engine) -> None:
        self.phase += self.frequency
        # Map sin(phase) from [-1, 1] to a [0, 1] brightness value.
        sine = numpy.sin(self.phase) * .5 + .5
        # height and width set the render's resolution (defined elsewhere).
        brightness = numpy.ones((height, width), dtype=float) * sine
        cv2.imshow("Experiment", brightness)
        cv2.waitKey(1)  # give OpenCV a moment to draw the frame
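The brightness mapping above can be checked in isolation; this is plain NumPy arithmetic, independent of EyeLoop:

```python
import numpy

# Simulate 100 time-steps of the phase update used by the experiment.
frequency, phase = 0.1, 0.0
brightness_trace = []
for _ in range(100):
    phase += frequency
    # sin(phase) lies in [-1, 1], so the mapped value stays in [0, 1].
    brightness_trace.append(numpy.sin(phase) * .5 + .5)
```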

That's it! Test your experiment by launching the eyeloop command-line utility.


See Examples for demo recordings and experimental designs.

Graphical user interface

The default graphical user interface in EyeLoop is minimum-gui.

EyeLoop is compatible with custom graphical user interfaces through its modular logic. Click here for instructions on how to build your own.

Running unit tests

Install testing requirements by running in a terminal:

pip install -r requirements_testing.txt

Then run tox: tox

Reports and results will be output to /tests/reports.

Known issues

  • Respawning/freezing windows when running minimum-gui in Ubuntu.


If you use any of this code or data, please cite [Arvin et al. 2020] (preprint).

@article{Arvin2020.07.03.186387,
	author = {Arvin, Simon and Rasmussen, Rune and Yonehara, Keisuke},
	title = {EyeLoop: An open-source, high-speed eye-tracker designed for dynamic experiments},
	elocation-id = {2020.07.03.186387},
	year = {2020},
	doi = {10.1101/2020.07.03.186387},
	publisher = {Cold Spring Harbor Laboratory},
	URL = {},
	eprint = {},
	journal = {bioRxiv}
}

This project is licensed under the GNU General Public License v3.0. Note that the software is provided "as is", without warranty of any kind, express or implied.


Lead Developer: Simon Arvin


Corresponding Author: Keisuke Yonehara


