TRAIN (2,304 images of 166 subjects) 1,500 annotated images (150 subjects) from part A and all unannotated images from part B (804 images of 16 subjects).
TEST (9,500 images of 3,540 subjects) 1,800 unannotated images (180 subjects) from part A and all unannotated impostor images from part C (7,700 images of 3,360 subjects).
PART A The main dataset of 3,300 ear images belonging to 330 distinct identities (10 images per subject), used for the recognition experiments (training and testing). All of these images are annotated, but the annotations were removed for the 1,800 images that form the test dataset.
PART B A set of 804 ear images of 16 subjects (with a variable number of images per subject) that is used for the recognition experiments (training).
PART C An additional set of 7,700 ear images of 3,360 identities that is used to test the scalability of the submitted algorithms.
The annotated images come with annotations.json files containing various labels, such as the level of occlusion, rotation (yaw, roll and pitch angles), presence of accessories, gender and so on. This information is available only during training and can be exploited to build specialized recognition techniques.
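As a minimal sketch of how the training annotations might be used, the snippet below loads an annotations.json file and filters images by occlusion level. The field names (`occlusion`, and the image-name-to-metadata mapping) are assumptions for illustration; the actual schema is defined by the files shipped with the dataset.

```python
import json

def load_annotations(path="annotations.json"):
    """Load the per-image annotation dictionary from a JSON file.

    Assumed (hypothetical) layout: {"image_name": {"occlusion": int, ...}, ...}.
    """
    with open(path) as f:
        return json.load(f)

def filter_by_occlusion(annotations, max_occlusion=0):
    """Keep only images whose occlusion level does not exceed max_occlusion.

    Images missing the (assumed) "occlusion" field are treated as unoccluded.
    """
    return {
        name: meta
        for name, meta in annotations.items()
        if meta.get("occlusion", 0) <= max_occlusion
    }
```

Such filtering could, for example, be used to build a cleaner training subset before fitting a recognition model.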