GTSRB dataset issue #156
Comments
Thanks for catching this. We were not aware of the distinction, and it seems we happened to use the training images from the competition and the testing images from the "official" version. We'd expect the linear probes (for all models compared) to perform slightly better when trained on the larger official training set, so the reported accuracies can still serve as a "baseline" for future studies. We also note that the same set of training images is used across all models, so the comparisons in the paper can still be considered "fair". The zero-shot evaluations are not affected by this, since they only use the test split.
We recently uploaded the labels at https://github.com/openai/CLIP/blob/main/data/prompts.md#gtsrb
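For reference, the labels and templates in that file are meant to be expanded into zero-shot prompts, one text string per (class, template) pair. The sketch below shows the general pattern; the class names and template strings here are illustrative placeholders, not the exact contents of the linked file.

```python
# Illustrative sketch of expanding GTSRB class names with prompt templates
# for zero-shot classification. Class names and templates are placeholders;
# see prompts.md in the CLIP repo for the actual lists.

classes = [
    "red and white circle 20 kph speed limit",
    "stop",
    "no entry",
]

templates = [
    'a zoomed in photo of a "{}" traffic sign.',
    'a centered photo of a "{}" traffic sign.',
]

def build_prompts(classes, templates):
    """Expand every class name with every template; CLIP's zero-shot
    evaluation averages the text embeddings over all prompts per class."""
    return {c: [t.format(c) for t in templates] for c in classes}

prompts = build_prompts(classes, templates)
```

Each class then maps to a list of prompt strings, which are encoded with the text encoder and averaged into a single per-class embedding.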
Thanks! Just found it there as well :-D
According to the official website, there are two versions of GTSRB: the IJCNN 2011 competition version and the final "official" version.
The dataset stats (Table 9, Page 39) seem to suggest it uses the train set from the IJCNN 2011 version but the test set from the official version.
Given that Official-Train = IJCNN-Train + IJCNN-Test (Source), is CLIP using the IJCNN train set as its train set, the IJCNN test set as a validation set for tuning hyper-parameters, and the official test set as its test set? Thanks!
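The relationship between the splits above can be stated as a small consistency check. The image counts in the usage line are assumptions for illustration (they satisfy the identity arithmetically but should be verified against the dataset stats), not figures taken from the paper.

```python
# Hypothetical sanity check for the split relationship discussed above:
# Official-Train = IJCNN-Train + IJCNN-Test.

def official_train_is_union(ijcnn_train: int, ijcnn_test: int,
                            official_train: int) -> bool:
    """Return True if the official training set size equals the combined
    size of the IJCNN 2011 competition train and test sets."""
    return official_train == ijcnn_train + ijcnn_test

# Placeholder counts (assumed, not from the paper) -- replace with the
# real dataset stats before drawing conclusions:
assert official_train_is_union(26_640, 12_569, 39_209)
```

If the check fails with the real counts, the splits in use are not the ones the identity assumes.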