# UG2 2020 Challenge 2 Development Kit

This repository contains the development kit for Challenge 2 of the CVPR 2020 UG2 Challenge.

## Sample Submissions

Submissions must be Docker images uploaded to Dockerhub. Sample submissions provided by the organizers for each of the subchallenges can be found in the following Dockerhub repository:

https://hub.docker.com/repository/docker/tanjasper/ug2_2020_samples

We also provide the source Dockerfiles and accompanying scripts in `sample_submissions`. To build a Docker image, simply navigate to the relevant subchallenge directory in `sample_submissions` and run `docker build -t <image_name> .`

## Sample Evaluation Code

We also provide sample evaluation code in `sample_evaluation`, based on the code we will use to grade your submissions. You can use it to ensure your submission follows the expected format.

For subchallenges 1 and 2, we will run off-the-shelf face verification algorithms on the images your submission produces. The sample evaluation code instead uses a dummy face verification algorithm.
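For intuition, a dummy verifier can be as simple as the following hypothetical sketch. The function name and embedding scheme here are illustrative, not the kit's actual dummy code; it only exercises the input/output plumbing that the real face verification models will use.

```python
import numpy as np

def dummy_verification_score(img_a, img_b, dim=128, seed=0):
    """Hypothetical stand-in for an off-the-shelf face verifier.

    Projects both images (assumed to have the same shape) through a
    fixed random embedding and returns the cosine similarity of the
    embeddings. The grading code uses real verification models; this
    only checks that inputs and outputs flow through correctly.
    """
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((dim, img_a.size))
    ea = W @ img_a.ravel().astype(float)
    eb = W @ img_b.ravel().astype(float)
    return float(ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb)))
```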

The sample evaluation data is a subset of the validation set provided with the FlatCam Face Dataset (FCFD), based on subjects 61–87. You may use this data to validate your methods. The actual test data consists of images of subjects not included in the FCFD, captured in the same manner.

To run the sample evaluation code, follow these steps:

1. Choose one of the following two ways to download the sample test data:
   1. Install gdown (`pip install gdown`), then navigate to `sample_evaluation` and run `./download_evaluation_data.sh` in a terminal.
   2. Manually download and unzip the following two directories from Google Drive and place them in `sample_evaluation`.
2. Navigate to `sample_evaluation/challenge2-#`, where `#` is the subchallenge number.
3. Set the `DOCKER_IMAGE` variable in `evaluate_challenge2-#.sh` to the name of your Docker image submission on Dockerhub.
4. If you would like to use a GPU for your submission, either replace the first `docker run` in `evaluate_challenge2-#.sh` with `nvidia-docker run` or add the `--gpus` flag, according to your Docker and nvidia-docker versions.
5. Run `evaluate_challenge2-#.sh`. The outputs will be placed in `sample_evaluation/challenge2-#/outputs`. Check that `verification_scores.txt` was saved in the outputs folder to see whether the evaluation ran all the way through; a quick check is sketched below.
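For example, a minimal sketch to confirm the run completed (the path assumes subchallenge 1; substitute the subchallenge number you ran):

```python
from pathlib import Path

# Path assumes subchallenge 1; adjust challenge2-# as needed.
scores = Path("sample_evaluation/challenge2-1/outputs/verification_scores.txt")
if scores.exists():
    print(scores.read_text())  # scores written by the evaluation run
else:
    print("Evaluation did not finish: verification_scores.txt is missing.")
```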

## Tikhonov Reconstruction Code

We provide code to perform Tikhonov reconstruction in `tikhonov_reconstruction_code`. Both Python and Matlab functions are provided. Examples are available in `tikhonov_reconstruction_code/matlab/demo.m` and `tikhonov_reconstruction_code/python/demo.py`.

For details on the reconstruction procedure, see the following paper (in particular, Sections III and IV):

https://ieeexplore.ieee.org/document/7517296 (pre-print: https://arxiv.org/abs/1509.00116)

We now describe the reconstruction pipeline performed in the code at a high level. The following image illustrates the pipeline:

*Figure: the FlatCam reconstruction pipeline.*

The FlatCam sensor output is in a Bayer-filtered format: each pixel measures only one of the three RGB colors, and the pixels are grouped into 2×2 blocks of four, one blue, two green, and one red. The FlatCam model is that the measurement for each of these four channels is a separable linear function of the corresponding color channel of the scene. In particular, the measurement for a channel is P1 X Q1^T, where X is that channel of the scene and P1 and Q1 (Phi_L and Phi_R in Asif et al., 2017) are matrices obtained via calibration for each color channel. These matrices are saved in `flatcam_calibdata.mat`. Tikhonov reconstruction (Eq. 7 in Asif et al., 2017) is then performed on each color channel, and the per-channel reconstructions are merged to form the RGB image.
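As a concrete illustration, here is a minimal NumPy sketch of the separable Tikhonov step for one color channel. It assumes `P1` and `Q1` for that channel have already been loaded from `flatcam_calibdata.mat`; the function name and the regularization weight `lam` are illustrative rather than the kit's exact values (see `tikhonov_reconstruction_code/python/demo.py` for the actual implementation).

```python
import numpy as np

def tikhonov_reconstruct(Y, P1, Q1, lam=3e-4):
    """Separable Tikhonov reconstruction of one color channel.

    Solves  min_X ||Y - P1 @ X @ Q1.T||_F^2 + lam * ||X||_F^2
    in closed form via the SVDs of the calibration matrices
    (cf. Eq. 7 of Asif et al.). `lam` is an illustrative value.
    """
    Up, sp, VpT = np.linalg.svd(P1, full_matrices=False)
    Uq, sq, VqT = np.linalg.svd(Q1, full_matrices=False)

    # Rotate the measurement into the SVD bases, where the separable
    # operator acts elementwise with gains sp[i] * sq[j].
    Y_tilde = Up.T @ Y @ Uq
    S = np.outer(sp, sq)

    # Elementwise Tikhonov shrinkage, then rotate back to the scene domain.
    X_tilde = S * Y_tilde / (S**2 + lam)
    return VpT.T @ X_tilde @ VqT
```

Running this once per Bayer channel (averaging the two green reconstructions) and stacking the results gives the merged RGB image described above.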