
Real-time Screen Gaze

This package allows users of Pupil Labs eye tracking hardware, especially Neon, to acquire screen-based gaze coordinates in real time without relying on Pupil Core software.

This works by identifying the image of the display as it appears in the scene camera. We accomplish this with AprilTags, 2D barcodes similar to QR codes. This package provides a marker_generator module to create AprilTag image data.

from pupil_labs.real_time_screen_gaze import marker_generator
...

# Generate pixel data for an AprilTag marker with ID 0
marker_pixels = marker_generator.generate_marker(marker_id=0)

Using more markers yields higher accuracy; we recommend a minimum of four. Each marker must be unique, which is what the marker_id parameter is for.
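To put a marker on screen you'll need to render that pixel data at a usable size. Here is a minimal sketch, assuming generate_marker returns a small grayscale numpy array (values 0-255), that uses Pillow to upscale and save it; the 128x128 output size is arbitrary:

import numpy as np
from PIL import Image

from pupil_labs.real_time_screen_gaze import marker_generator

# Pixel data for marker ID 0 (assumed to be a small grayscale array)
marker_pixels = marker_generator.generate_marker(marker_id=0)

# Upscale with nearest-neighbor resampling so the tag's square cells stay crisp
marker_image = Image.fromarray(np.asarray(marker_pixels, dtype=np.uint8))
marker_image = marker_image.resize((128, 128), Image.NEAREST)
marker_image.save("marker_0.png")

Detection also tends to work better when each tag has a white margin around it, so leave some blank space between a marker and other screen content.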

Once you've drawn the markers to the screen using your GUI toolkit of choice, you'll next need to set up a GazeMapper object. This requires calibration data for the scene camera. For Neon, this is very simple:

from pupil_labs.realtime_api.simple import discover_one_device
from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper
...

# Connect to the first device discovered on the local network
device = discover_one_device()

# The scene camera calibration is stored on the Neon device itself
calibration = device.get_calibration()
gaze_mapper = GazeMapper(calibration)

For Pupil Invisible, you'll need to extract the scene_camera.json file from the Time Series Data of a recording that has been uploaded to Pupil Cloud. This method also works with Neon recordings in a non-realtime context.

import json
from pupil_labs.real_time_screen_gaze.gaze_mapper import GazeMapper
...

with open("scene_camera.json") as calibration_file:
   calibration_data = json.load(calibration_file)
   if "dist_coefs" in calibration_data:
      calibration_data["distortion_coefficients"] = calibration_data["dist_coefs"]

   calibration = {
      "scene_camera_matrix": [calibration_data["camera_matrix"]],
      "scene_distortion_coefficients": [calibration_data["distortion_coefficients"]],
   }

gaze_mapper = GazeMapper(calibration)

Now that we have a GazeMapper object, we need to specify which AprilTag markers we're using and where they appear on the screen.

marker_verts = {
    0: [ # marker id 0
        (32, 32), # Top left marker corner
        (96, 32), # Top right
        (96, 96), # Bottom right
        (32, 96), # Bottom left
    ],
    ...
}

screen_size = (1920, 1080)

screen_surface = gaze_mapper.add_surface(
    marker_verts,
    screen_size
)

Here, marker_verts is a dictionary whose keys are the IDs of the markers we'll be drawing to the screen. The value for each key is a list of the 2D screen coordinates of the four corners of that marker, starting with the top left and going clockwise.
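If you place one marker in each corner of the screen, these vertex lists can be computed instead of hard-coded. The helper below is a hypothetical sketch, not part of the package; the marker size and margin are illustrative values:

def corner_marker_verts(screen_size, marker_size=128, margin=32):
    width, height = screen_size

    def square(left, top):
        # Corners listed top left first, then clockwise
        return [
            (left, top),
            (left + marker_size, top),
            (left + marker_size, top + marker_size),
            (left, top + marker_size),
        ]

    return {
        0: square(margin, margin),                                              # Top left
        1: square(width - margin - marker_size, margin),                        # Top right
        2: square(width - margin - marker_size, height - margin - marker_size), # Bottom right
        3: square(margin, height - margin - marker_size),                       # Bottom left
    }

marker_verts = corner_marker_verts((1920, 1080))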

With that, setup is complete and we're ready to start mapping gaze to the screen! On each iteration of our main loop we'll grab a video frame from the scene camera and gaze data from the Realtime API. We pass those along to our GazeMapper instance for processing, and it returns our gaze positions mapped to screen coordinates.

from pupil_labs.realtime_api.simple import discover_one_device
...

device = discover_one_device(max_search_duration_seconds=10)

while True:
    # Receive a scene camera frame and the gaze datum nearest to it in time
    frame, gaze = device.receive_matched_scene_video_frame_and_gaze()
    result = gaze_mapper.process_frame(frame, gaze)

    for surface_gaze in result.mapped_gaze[screen_surface.uid]:
        print(f"Gaze at {surface_gaze.x}, {surface_gaze.y}")
