Validation wrongly done on test_loader. Unfair evaluation. #27
Comments
Thanks for your question! Yes, you can split the dataset into three subsets: train, val, and test, and then change this dataloader to a valid_loader. In our code, the valid loss only affects early stopping.
Thanks for your prompt reply! In your code, you split the dataset into three subsets. My concern is that the validation, the testing, and the threshold selection are all evaluated on the test set. Please correct me if I misunderstood. Thanks.
Yes, you are right. It would be better if the validation and the threshold selection were evaluated on the validation set. Since our paper focuses on the unsupervised setting, we merge train and valid at the end to enlarge the dataset.
Thanks for your explanation.
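The maintainer's reply notes that the valid loss only affects early stopping. A minimal, self-contained sketch of that kind of validation-loss-driven early stopping is below; the patience value and the loss sequence are hypothetical and are not taken from the Anomaly-Transformer code base.

```python
# Minimal early-stopping sketch driven by per-epoch validation loss.
# All values here are illustrative, not from the Anomaly-Transformer repo.

def train_with_early_stopping(valid_losses, patience=3):
    """Return the epoch index at which training would stop.

    valid_losses: per-epoch validation losses (iterable of floats).
    Training stops once the loss has not improved for `patience` epochs.
    """
    best = float("inf")
    bad_epochs = 0
    for epoch, loss in enumerate(valid_losses):
        if loss < best:
            best = loss
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                return epoch  # early stop triggered here
    return len(valid_losses) - 1  # ran all epochs without triggering

# Losses improve, then plateau: the stop triggers at epoch 6.
print(train_with_early_stopping([0.9, 0.7, 0.5, 0.4, 0.41, 0.42, 0.43]))  # 6
```

Note that only the validation split feeds this decision; the test split plays no role in training, which is the separation the issue asks for.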
It seems like the validation is run on `test_loader` rather than `vali_loader`, which is unfair to some extent and would make the results a little bit different.

Anomaly-Transformer/solver.py, line 196 in 72a71e5

Moreover, directly using `thre_loader` to find thresholds would cause test-dataset leakage, since `thre_loader` is built on `test_data` rather than `valid_data`.

Anomaly-Transformer/solver.py, line 254 in 72a71e5
Anomaly-Transformer/data_factory/data_loader.py, lines 66 to 69 in 72a71e5
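The leakage concern above can be sketched with a toy example: choosing an anomaly-score percentile threshold on the test scores themselves leaks test statistics, whereas choosing it on train + validation scores does not. The scores, splits, and 90th-percentile rule here are hypothetical illustrations, not the Anomaly-Transformer code.

```python
# Toy illustration of the threshold-leakage concern (hypothetical data,
# not the Anomaly-Transformer implementation).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical anomaly scores for three chronological splits.
train_scores = rng.normal(0.0, 1.0, 1000)
valid_scores = rng.normal(0.0, 1.0, 200)
test_scores = np.concatenate([
    rng.normal(0.0, 1.0, 180),   # normal points
    rng.normal(5.0, 1.0, 20),    # injected anomalies
])

# Leaky: threshold computed on the test scores themselves.
leaky_threshold = np.percentile(test_scores, 90)

# Leak-free: threshold computed on train + validation scores only.
fair_threshold = np.percentile(
    np.concatenate([train_scores, valid_scores]), 90
)

test_pred = test_scores > fair_threshold
print(f"fair threshold: {fair_threshold:.3f}, flagged {test_pred.sum()} points")
```

The leak-free variant never touches `test_scores` when picking the threshold, which is the separation the issue argues `thre_loader` should respect.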