UG2 2020 Challenge 2 Development Kit
This repository contains the development kit for Challenge 2 of the CVPR 2020 UG2 Challenge.
Submissions must be Docker images uploaded to Dockerhub. Sample submissions provided by the organizers for each of the subchallenges can be found in the following Dockerhub repository:
We also provide the source Dockerfiles and accompanying scripts in sample_submissions. To build a Docker image, simply navigate to the appropriate subchallenge directory in sample_submissions and run `docker build -t <image_name> .` (note the trailing dot, which sets the build context to the current directory).
Sample Evaluation Code
We also provide sample evaluation code in sample_evaluation, based on the code we will use to grade your submissions. You can use it to ensure your submission follows the expected format.
For subchallenges 1 and 2, we will run off-the-shelf face verification algorithms on your provided images. For the sample evaluation code, we simply use a dummy face verification algorithm.
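To make the grading interface concrete, here is a minimal sketch of what a dummy face verification scorer could look like. This is purely illustrative: the function name and the choice of cosine similarity over raw pixels are assumptions, not the organizers' actual dummy algorithm.

```python
import numpy as np

def dummy_verification_score(img_a, img_b):
    """Hypothetical stand-in for a face verification model.

    Returns the cosine similarity between the flattened images as the
    match score: 1.0 for identical images, near 0 for unrelated ones.
    A real verifier would compare face embeddings instead of raw pixels.
    """
    a = img_a.astype(np.float64).ravel()
    b = img_b.astype(np.float64).ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

Such a scorer only checks that your pipeline produces images in the expected format and location; the actual evaluation will substitute an off-the-shelf verification model.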
The sample evaluation data is a subset of the validation set provided with the FlatCam Face Dataset (based on subjects 61-87). You may use this data to validate your methods. The actual test data consists of images of subjects not included in the FCFD (but captured in the same manner).
To run the sample evaluation code, follow the corresponding steps:
- Choose one of the following two ways to download the sample test data:
  - Install gdown (`pip install gdown` in a terminal), and then navigate to
  - Manually download and unzip the following two directories from Google Drive and place them in
- Navigate to `sample_evaluation/challenge2-#`, where `#` is the subchallenge number.
- Set the `DOCKER_IMAGE` variable in `evaluate_challenge2-#.sh` to the name of your Docker image submission on Dockerhub.
- If you would like to use a GPU for your submission, either replace the first `docker run` in `evaluate_challenge2-#.sh` with `nvidia-docker run` or add the `--gpus` flag, according to your Docker and nvidia-docker versions.
- Run `evaluate_challenge2-#.sh`. The outputs will be placed in `sample_evaluation/challenge2-#/outputs`. Check that `verification_scores.txt` was saved in the outputs folder to confirm that the evaluation ran all the way through.
Tikhonov Reconstruction Code
We provide code to perform Tikhonov reconstruction in tikhonov_reconstruction_code. Both Python and Matlab functions are provided. Examples are available in tikhonov_reconstruction_code/matlab/demo.m and tikhonov_reconstruction_code/python/demo.py.
For details on the reconstruction procedure, see the following paper (in particular, Sections III and IV):
We now describe the reconstruction pipeline performed in the code at a high level. The following image illustrates the pipeline:
The FlatCam sensor output is in Bayer-filtered format: each pixel measures only one of the three RGB colors, and the pixels are grouped into 2x2 blocks of four: 1 blue, 2 green, and 1 red. The FlatCam model assumes that the measurement for each of these four channels is a separable linear function of the corresponding color channel of the scene. In particular, the measurement is modeled as P1 X Q1^T, where P1 and Q1 (Phi_L and Phi_R in Asif, et al. 2017) are matrices obtained via calibration for each color channel. These matrices are saved in flatcam_calibdata.mat. Tikhonov reconstruction (Eq. 7 in Asif, et al. 2017) is then performed on each of the color channels, and the per-channel reconstructions are merged to form the RGB image.