
Validation Set. #2

Open
Hui-design opened this issue Aug 15, 2023 · 2 comments

Comments

@Hui-design

Hi, thanks for your great work!
It seems that you directly use the test set as the validation set in the code:

FOD/trainer.py, line 207 (commit d97b44b):

`img_auc, pix_auc = self.test(vis=False)`

However, I find that there is indeed no explicit validation set in the MvTec dataset. As I am not familiar with the field of anomaly detection, I would like to know whether using the test set as the validation set is common practice for this task.

@xcyao00
Owner

xcyao00 commented Aug 16, 2023

Yes. In anomaly detection, evaluating directly on the test set is common practice.

@Hui-design
Author

Hui-design commented Sep 15, 2023

Thanks for your prompt reply last time! I have another question about the use of the test set.
In Table 8 you reproduced DRAEM on the MvTec3d dataset and obtained only 75.7% I-AUROC. However, EasyNet [https://arxiv.org/pdf/2307.13925.pdf], another work built on DRAEM, reports 90.4% on MvTec3d with RGB input, and my own reproduction also reaches about 90%.
I suspect the reason is the following: the DRAEM code does not use the test set to select the best epoch, but simply takes the last epoch. Perhaps you did the same? EasyNet and my reproduction, on the other hand, used the test set for model selection.
In fact, I think DRAEM's synthetic-anomaly approach is very unstable: it generates different noise at every epoch, so model performance varies greatly from epoch to epoch, and only selecting the model with the test set gives good results.
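The difference described above (keeping the checkpoint with the best test-set I-AUROC versus simply taking the last epoch) can be sketched as follows. This is a minimal illustration, not the repository's actual code: `evaluate` is a hypothetical stand-in for a real test-set evaluation such as `self.test(vis=False)`, and its fluctuating scores merely simulate DRAEM-style epoch-to-epoch instability.

```python
import random

def evaluate(epoch):
    """Hypothetical stand-in for a per-epoch test-set evaluation.

    Returns an image-level AUROC; real code would run inference on
    the test split. The seeded noise simulates unstable training.
    """
    random.seed(epoch)
    return 0.75 + random.uniform(-0.05, 0.15)

def train_with_selection(num_epochs):
    """Train for num_epochs, tracking both the best and the last epoch."""
    best_auc, best_epoch = float("-inf"), -1
    last_auc = None
    for epoch in range(num_epochs):
        # ... one epoch of training would happen here ...
        img_auc = evaluate(epoch)
        last_auc = img_auc
        if img_auc > best_auc:  # test-set-based model selection
            best_auc, best_epoch = img_auc, epoch
    return best_auc, best_epoch, last_auc

best_auc, best_epoch, last_auc = train_with_selection(20)
```

When per-epoch scores fluctuate strongly, `best_auc` can exceed `last_auc` by a wide margin, which would explain a gap like 75.7% (last epoch) versus ~90% (selected epoch).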
