Gaze tracking (where on screen am I looking) augmented using synthetic data. (PyTorch, Unity, OpenCV)

hu-po/gazenet

**NO LONGER MAINTAINED, USE AT YOUR OWN RISK**

GazeNet

GazeNet tracks the gaze location of a laptop user, i.e. where on the screen they are looking, using images from their webcam. Rather than collecting a large dataset of user data, GazeNet is trained on a combination of large synthetic ("fake") datasets and a smaller dataset of real images.

Fake Images

Fake images are generated automatically in Unity3D. I use the free MCS Male character model to generate over 10,000 synthetic training images. Simulation is an easy way to generate large datasets.
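For the synthetic images to be usable as training data, each one needs a gaze label. A minimal sketch of one common approach, assuming a hypothetical filename convention (not confirmed by this repository) in which the Unity exporter encodes the normalized gaze target in each filename:

```python
import re
from pathlib import Path

# Hypothetical convention: the Unity exporter writes each frame as
# "gaze_<x>_<y>.png", where <x> and <y> are the normalized screen
# coordinates (0..1) the synthetic character is looking at.
FILENAME_RE = re.compile(r"gaze_([01]\.\d+)_([01]\.\d+)\.png$")

def parse_gaze_label(path):
    """Extract the (x, y) gaze target encoded in a synthetic image filename."""
    match = FILENAME_RE.search(Path(path).name)
    if match is None:
        raise ValueError(f"no gaze label in filename: {path}")
    return float(match.group(1)), float(match.group(2))
```

Encoding the label in the filename keeps the synthetic dataset self-describing: no separate annotation file has to stay in sync with the rendered frames.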

(Sample synthetic training images.)

Real Images

Real images are collected using a custom utility, collect_gazedata.py. This utility uses OpenCV to plot gaze locations on the screen and takes a picture of the laptop user as they look at each location. Real data collection is expensive and time-consuming.
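The utility presumably steps through a set of on-screen target points, drawing a dot at each one before capturing a webcam frame. A minimal sketch of generating such a calibration grid (the grid size and margin are assumptions, not values from the repository):

```python
def gaze_targets(rows=3, cols=3, margin=0.1):
    """Generate normalized (x, y) screen positions for gaze calibration dots.

    Points are laid out on a rows x cols grid, inset by `margin` from the
    screen edges so the dots stay fully visible. Coordinates are in [0, 1];
    multiply by the screen resolution before drawing, e.g. with cv2.circle.
    """
    xs = [margin + i * (1 - 2 * margin) / (cols - 1) for i in range(cols)]
    ys = [margin + j * (1 - 2 * margin) / (rows - 1) for j in range(rows)]
    return [(x, y) for y in ys for x in xs]
```

Using normalized coordinates keeps the collected labels independent of the particular laptop's screen resolution, so the same dataset works across machines.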

(Sample real images captured during gaze data collection.)

Instructions

  • Use collect_gazedata.py to collect a real dataset. Follow the instructions in the pop-up.
  • Build and run the Unity simulator to create an arbitrarily large synthetic dataset.
  • Use train_gazenet.py to train a GazeNet (uses pre-trained feature extractors).
  • Use run_gazenet.py to run your GazeNet. Hopefully the green dot on the screen tracks your gaze!
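At its core, training a GazeNet is a regression problem: map a webcam frame to a normalized (x, y) screen coordinate. A hedged sketch of that setup in PyTorch, with a small stand-in backbone in place of the pre-trained feature extractor the real train_gazenet.py uses (the architecture and hyperparameters here are illustrative assumptions):

```python
import torch
import torch.nn as nn

class GazeRegressor(nn.Module):
    """Map a webcam frame to a normalized (x, y) gaze location.

    In the real project a pre-trained feature extractor would replace
    `backbone`; this tiny stand-in keeps the sketch self-contained.
    """
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Sigmoid keeps predictions in [0, 1], matching normalized screen coords.
        self.head = nn.Sequential(nn.Linear(32, 2), nn.Sigmoid())

    def forward(self, x):
        return self.head(self.backbone(x))

model = GazeRegressor()
loss_fn = nn.MSELoss()  # regression loss between predicted and true gaze
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random tensors standing in for a batch
# of webcam images and their gaze labels.
images = torch.randn(4, 3, 64, 64)
targets = torch.rand(4, 2)
loss = loss_fn(model(images), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

run_gazenet.py would then feed live webcam frames through the trained model and draw the predicted coordinate (the green dot) on screen.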

Requirements

Sources

[1] Learning from Simulated and Unsupervised Images through Adversarial Training

[2] Eye Tracking for Everyone
