How to create, train and evaluate DTD dataset #5
And the same question for the steel dataset in the appendix; it looks like this didn't make it into the code?
Hi! Thank you again for your interest! Before answering the question: we found a minor value mistake in Table 6 (DTD to ImageNet detection). Even after fixing this minor bug, our message does not change.
To use DTD as inliers, you should first divide the DTD dataset into train/test sets. The following code is the one I implemented to divide the set; run this code at the
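The original split code was not captured above. A minimal sketch of such a division follows; the directory layout (`images/<class>/*.jpg` under the DTD root), the 80/20 ratio, the fixed seed, and the `train`/`test` output folder names are all assumptions for illustration, not the exact settings used in the paper.

```python
import os
import random
import shutil

def split_class(files, train_ratio=0.8, seed=0):
    """Deterministically shuffle one class's file list and split it."""
    files = sorted(files)              # fix order before shuffling
    random.Random(seed).shuffle(files) # seeded shuffle for reproducibility
    k = int(len(files) * train_ratio)
    return files[:k], files[k:]

def split_dtd(root, train_ratio=0.8, seed=0):
    """Copy root/images/<class>/* into root/train/<class> and root/test/<class>."""
    for cls in sorted(os.listdir(os.path.join(root, "images"))):
        src = os.path.join(root, "images", cls)
        train, test = split_class(os.listdir(src), train_ratio, seed)
        for split, names in (("train", train), ("test", test)):
            dst = os.path.join(root, split, cls)
            os.makedirs(dst, exist_ok=True)
            for name in names:
                shutil.copy(os.path.join(src, name), os.path.join(dst, name))
```

The per-class seeded shuffle keeps the split reproducible across runs, so train and evaluation always see the same partition.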
After dividing the set, you can use DTD as a training dataset with some modification on your
For CSI training, we have used unlabeled multi-class training for DTD.
For CSI evaluation:
For the steel dataset, we didn't release the code, since it shows similar results to the DTD dataset. Of course, you can download the dataset and run the code yourself: https://www.kaggle.com/c/severstal-steel-defect-detection/data
Thank you again for your interest, and feel free to ask if you have any questions!
Thank you again for your great support and responsiveness, and for your great work!
Hi, would you be so kind as to explain how to do the DTD training and evaluation?
In the paper you mention that DTD images are the inliers and ImageNet-30 images the outliers. What is the folder structure of "~/data/dtd/" supposed to look like?
For training, am I correct in assuming unlabeled multi-class? I.e.
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node=4 train.py --dataset dtd --model resnet18 --mode simclr_CSI --shift_trans_type rotation --batch_size 32 --one_class_idx None
```
And for evaluation, how do I specify the out-distribution? I only see the "dataset" flag, but I would need to specify both the in-distribution and out-of-distribution datasets, right?
Thank you again for your help!