Why did you set the epoch to 3 when training the MVTec dataset, but to 15 when training the Visa dataset? I noticed that the loss on MVTec was still decreasing after the third epoch
We hope the added linear layers learn a general mapping rather than one specific to a particular dataset. Past a certain point in training, further reduction in the training loss may therefore come at the cost of generalization, which is not desirable.
It is worth noting that determining the best time to stop training is genuinely challenging; this is a problem currently faced by all fine-tuned zero-shot methods (e.g., AnomalyCLIP, CLIP-AD). As an intuitive solution, I suggest using three different datasets for training, validation, and testing respectively. I also look forward to future work addressing this issue.
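The train/valid/test split suggested above can be paired with early stopping on the held-out validation dataset. Below is a minimal, hypothetical sketch (not the repository's actual code) of an early-stopping helper: training stops once the validation metric, e.g. AUROC on a dataset disjoint from the training one, fails to improve for `patience` epochs. The class name, the metric values, and the `patience` setting are all illustrative assumptions.

```python
# Hypothetical early-stopping sketch: monitor a validation metric on a
# held-out dataset (e.g. train on MVTec, validate on VisA) and stop when
# it stops improving. Not the authors' actual training code.

class EarlyStopping:
    """Track the best validation metric and signal when to stop."""

    def __init__(self, patience=2, min_delta=0.0):
        self.patience = patience    # epochs to wait after the last improvement
        self.min_delta = min_delta  # minimum change that counts as improvement
        self.best = None            # best validation metric seen so far
        self.bad_epochs = 0         # consecutive epochs without improvement

    def step(self, val_metric):
        """Return True when training should stop (higher metric = better)."""
        if self.best is None or val_metric > self.best + self.min_delta:
            self.best = val_metric
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience


# Simulated per-epoch validation AUROC: it improves early, then degrades
# as the added layers start to overfit the training dataset.
val_auroc = [0.80, 0.85, 0.88, 0.87, 0.86, 0.85]

stopper = EarlyStopping(patience=2)
stop_epoch = None
for epoch, auroc in enumerate(val_auroc, start=1):
    if stopper.step(auroc):
        stop_epoch = epoch
        break
```

With `patience=2`, training stops two epochs after the peak validation AUROC, so the checkpoint from the best epoch (here, epoch 3) would be the one to keep. This avoids picking the epoch count by hand per dataset.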