OWLET

About The Project

Thanks for checking out our software! OWLET is designed to process infant gaze and looking behavior using webcam videos recorded on laptops or smartphones. If you use this software in your research, please cite as:

Werchan, D. M., Thomason, M. E., & Brito, N. H. (2022). OWLET: An Automated, Open-Source Method for Infant Gaze Tracking using Smartphone and Webcam Recordings. Behavior Research Methods.

Instructions for downloading and running the source code for OWLET are below. In addition, a beta version of a macOS app that runs OWLET through a user interface can be found at: https://denisewerchan.com/owlet

(back to top)

User Guide

A user guide for OWLET, which describes options for processing gaze data with OWLET in more detail, can be found at: https://denisewerchan.com/owlet


OWLET was built using Python v. 3.8.8.

(back to top)

How it works

For a given, pre-recorded webcam/smartphone video, OWLET will:

  • Calibrate the subject's gaze using default settings (determined using generalized estimates from prior data, see Werchan et al., 2023) or by using a custom calibration video of the subject looking to the left, right, top, and bottom of the smartphone/computer screen (if available)
  • Determine the subject's point-of-gaze (x/y coordinate estimates of where they were fixating on the screen) for each frame of the video
  • Save a CSV file with the frame-by-frame x/y point-of-gaze coordinates and the screen region that the subject's point-of-gaze fell within (left, right, or away from the screen); a sketch for reading this output appears after the next list

OWLET also provides the following additional options:

  • Automatically determine the time that the task begins in the subject's pre-recorded video by matching the auditory pattern of the subject's video with the auditory pattern of the video shown to the subject
  • Integrate the frame-by-frame CSV output with information on the trial timings
  • Specify custom areas of interest (AOIs) for tagging the region of the subject's point-of-gaze in the CSV output
  • Combine the subject's pre-recorded video with a video of the subject's point-of-gaze overlaid on the task video
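
As a rough illustration of working with the frame-by-frame CSV output, the sketch below tallies the proportion of frames falling in each screen region. It uses only the Python standard library; the file name and the "region" column label are assumptions, so check the header of your own output file for the actual column names.

   import csv
   from collections import Counter

   # Hypothetical file name and column label -- inspect the header of your
   # own OWLET output CSV and substitute the actual names.
   with open("subject01.csv", newline="") as f:
       regions = Counter(row["region"] for row in csv.DictReader(f))

   total = sum(regions.values())
   for region, count in regions.items():
       print(f"{region}: {count / total:.1%} of frames")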

Getting Started

1. Install Miniconda following the directions in the Miniconda documentation

2. Install OWLET by cloning the GitHub repository:

git clone https://github.com/denisemw/OWLET.git

3. Navigate to the OWLET directory and install the required dependencies:

cd /path/to/OWLET
conda env create -n owlet_env -f owlet_environment.yml
conda activate owlet_env

(back to top)

Setting up your experiment for OWLET

1. Create a directory with the subject videos

Create a directory that contains your subject video(s) and the optional corresponding calibration video(s).

  • If calibration files are included in the directory, they should have the same name as the corresponding subject video with ‘_calibration’ added at the end (see the example layout below)

  • If calibration files are not included in the same directory, OWLET will process the videos using default settings.
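
For example, a subject directory might look like this (the file names are hypothetical; only the ‘_calibration’ suffix matters):

   subject_videos/
      participant01.mp4
      participant01_calibration.mp4
      participant02.mov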

2. (optional) Create a directory with the task information

This step is optional, but it allows you to automatically link the frame-by-frame eye-tracking data with information about the task. To do this, create a folder for each task containing any of the optional files below (an example layout follows these items):

A video of the task in .mov or .mp4 format (maximum frame rate of 30 fps)

  • This will save a video of the subject's point-of-gaze overlaid on the task video.

A CSV file with trial timings

  • This will tag the start of each trial in the frame-by-frame CSV output

A CSV file with x/y areas of interest (AOIs)

  • This will tag which custom AOI the child's point-of-gaze fell within for each video frame in the CSV output
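
As a purely illustrative example, a task folder might contain something like the following; all file names here are hypothetical, and the expected formats of the trial-timing and AOI CSV files are described in the user guide linked above:

   experiment_folder/
      task_video.mp4
      trial_timings.csv
      aois.csv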

(back to top)

Running OWLET using Terminal commands

Before running OWLET, navigate to the directory where you installed OWLET and make sure the virtual environment is activated (if used):

   cd /path/to/OWLET
   conda activate owlet_env

To analyze a child's frame-by-frame gaze coordinates for the entire video recording, use the following:

   python owlet.py /path/to/subject/video.mp4

To automatically link the frame-by-frame gaze output with information about the task, include the '--experiment_info' option:

   python owlet.py /path/to/subject/video --experiment_info /path/to/experiment/folder
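
If you have many subject videos, the same command can be run in a loop. Below is a minimal Python sketch, intended to be run from the OWLET directory with the conda environment active; the directory paths and the .mp4 extension are assumptions, and the command itself simply mirrors the one above.

   import subprocess
   from pathlib import Path

   # Hypothetical locations -- adjust to your own layout.
   video_dir = Path("/path/to/subject/videos")
   experiment = Path("/path/to/experiment/folder")

   for video in sorted(video_dir.glob("*.mp4")):
       # Calibration videos are matched by their '_calibration' file name,
       # so skip them rather than processing them as subject videos.
       if video.stem.endswith("_calibration"):
           continue
       subprocess.run(
           ["python", "owlet.py", str(video), "--experiment_info", str(experiment)],
           check=True,
       )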

Additional tips

When task information is included using the '--experiment_info' option, OWLET will automatically find where the task began in the recorded video of the child by matching the audio patterns in the subject and task videos. This is helpful for automating processing, as it removes the need to manually trim the recordings. However, this can fail occasionally when an audio match is not found (e.g., if the subject video or task video does not contain sound). If you have issues with audio matching but still wish to automatically link the subject recordings with task information, follow these steps:

  1. Manually trim the subject recording so that only the task of interest is in the video.
  2. Run OWLET using the '--override_audio_matching' option:
    • python owlet.py /path/to/subject/video --experiment_info /path/to/experiment/folder --override_audio_matching

(back to top)

Usage

Below is an example of a Zoom video processed using OWLET.

OWLET Demo (demo video)

(back to top)

Best Practices and Helpful Tips

OWLET works best with high-quality videos; some tips are shown below. In addition, you can edit videos (e.g., in iMovie) to adjust the contrast/brightness or crop in on the subject’s face, which can improve performance for poor-quality videos.

Tips for Recording Videos: see the Best Practices figure in the repository.

(back to top)

License

Distributed under the GNU General Public License v3.0. See LICENSE for more information.

(back to top)

Contact

Denise Werchan - denisewerchan.com - @DeniseWerchan - denise.werchan@nyulangone.org

Project Link: https://github.com/denisemw/OWLET

(back to top)
