
CUDA-GHR: Controllable Unsupervised Domain Adaptation for Gaze and Head Redirection

Requirements

We used Python 3.7.10 and PyTorch 1.8.1 for our experiments, running the codebase on Ubuntu 20.04.

To install all the packages:

pip install -r requirements.txt
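
Before training, it can help to confirm the environment matches the versions above; a minimal sanity check:

```python
import sys

import torch

# Quick check that the interpreter and PyTorch match the versions above.
print("python:", sys.version.split()[0])    # expect 3.7.x
print("torch:", torch.__version__)          # expect 1.8.x
print("cuda available:", torch.cuda.is_available())
```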

Usage

Data

Download the three datasets: GazeCapture, MPIIFaceGaze, Columbia.

To pre-process the datasets, please use this repository and follow the instructions provided to generate eye-strip images for FAZE. Put the resulting h5 files in the data folder.
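
To verify a preprocessed file, you can walk its contents with h5py; a minimal sketch (the file name and internal group/dataset names below are illustrative and depend on the FAZE preprocessing output):

```python
import h5py

# Print every dataset contained in a preprocessed h5 file.
with h5py.File("data/MPIIFaceGaze.h5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```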

Train

Create a config JSON file similar to configs/config_gc_to_mpii.json describing all the training parameters and paths to the input files, as sketched below.
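
A minimal illustrative sketch of writing such a config; the key names here are assumptions, so mirror the actual fields in configs/config_gc_to_mpii.json:

```python
import json

# Illustrative config only: replace these assumed keys with the real
# ones from configs/config_gc_to_mpii.json.
config = {
    "source_data_path": "data/GazeCapture.h5",
    "target_data_path": "data/MPIIFaceGaze.h5",
    "batch_size": 32,
    "learning_rate": 1e-4,
    "save_path": "outputs/gc_to_mpii",
}

with open("configs/my_config.json", "w") as f:
    json.dump(config, f, indent=2)
```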

To train the task network, run this command:

python train_tasknet.py --config_json configs/config_tasknet.json

To train and evaluate the CUDA-GHR model in the paper, run this command:

GazeCapture → MPIIGaze:

python train_cudaghr.py --config_json configs/config_gc_to_mpii.json

GazeCapture → Columbia:

python train_cudaghr.py --config_json configs/config_gc_to_col.json --columbia

The training images, losses, and evaluation metrics will be logged in TensorBoard. We also save generated images in the save folder.
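
To monitor a run, point TensorBoard at the save folder specified in your config:

tensorboard --logdir <path to save folder>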

Evaluate

To evaluate CUDA-GHR model, run this command:

python eval_cudaghr.py --model_path <path to model> --config_json <path to config file> --test_people <subset to test>

Add the '--columbia' option to test on the Columbia dataset.
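
For example, an evaluation run for the GazeCapture → MPIIGaze setting might look like the following; the checkpoint path and test-subset value here are illustrative, so use the ones from your own training run and config:

python eval_cudaghr.py --model_path outputs/gc_to_mpii/checkpoint.pt --config_json configs/config_gc_to_mpii.json --test_people test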

Pre-trained Models

You can download pretrained models here:
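
Once downloaded, a checkpoint can be inspected before evaluation; a sketch assuming a standard torch checkpoint (the file name and internal layout are assumptions, so adjust to match the released files):

```python
import torch

# Load a downloaded checkpoint on CPU and list its top-level keys.
ckpt = torch.load("pretrained/cudaghr_gc_to_mpii.pt", map_location="cpu")
if isinstance(ckpt, dict):
    print(list(ckpt.keys()))
```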

Acknowledgement

The code is adapted from FAZE and STED-Gaze. We thank the authors for their awesome work!
