The subdirectories in this directory contain evaluation scripts, sample submission data, sample ground truth data, and sample evaluation script output for each of the three challenge problems. Each subdirectory also includes a README.md with instructions for running the evaluation scripts and a description of the expected output.
Note that the sample submission and sample ground truth data are provided solely for (a) testing the output of the evaluation script and (b) serving as a template against which each team can check the format of its own submission. Team submissions must conform to the submission data template provided for each challenge problem; deviations from the template may cause errors in the evaluation workflow, resulting in a failed evaluation.
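Since format deviations can fail the evaluation outright, it may be worth checking a submission's structure against the sample data before submitting. The sketch below is one possible pre-check, assuming the submission and template are JSON-like nested dictionaries; the function name and the idea of a local pre-check are illustrative, not part of the official workflow.

```python
def check_structure(submission: dict, template: dict, path: str = "") -> list:
    """Recursively compare a submission's keys against the sample template.

    Returns a list of human-readable mismatch descriptions; an empty list
    means the submission has the same key structure as the template.
    """
    problems = []
    # Keys the template expects but the submission lacks.
    for key in sorted(template.keys() - submission.keys()):
        problems.append(f"missing key: {path}{key}")
    # Keys the submission has that the template does not.
    for key in sorted(submission.keys() - template.keys()):
        problems.append(f"unexpected key: {path}{key}")
    # Recurse into sub-dictionaries shared by both.
    for key in template.keys() & submission.keys():
        if isinstance(template[key], dict) and isinstance(submission[key], dict):
            problems.extend(check_structure(submission[key], template[key],
                                            f"{path}{key}."))
    return problems


if __name__ == "__main__":
    # Hypothetical template and submission; in practice these would be
    # loaded with json.load() from the sample data and the team's file.
    template = {"team_id": "", "results": {"score": 0.0, "labels": []}}
    submission = {"team_id": "team42", "results": {"labels": []}}
    for problem in check_structure(submission, template):
        print(problem)
```

A pre-check like this only catches structural mismatches (missing or extra fields); value types and contents would still need to match whatever the evaluation script expects.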