Describe the bug
Hello,
Having tested a few methods using the changes made in PR #1378, I have noticed that the segmentation results do not correspond to the classification results.
In the following screenshots, you will see that images classified as "normal" may still contain predicted defective areas: this is particularly prevalent with DRAEM, but it also happens with other methods, such as PaDiM and EfficientAD.
EfficientAD:
PaDiM:
DRAEM:
Dataset
MVTec
Model
N/A
Steps to reproduce the behavior
git clone anomalib
cd anomalib
build container via VSCode
pip install -e .
checkout phcarval:more_segmentation_info
python3 tools/train.py --config src/anomalib/models/{model}/config.yaml
OS information
OS: Ubuntu 20.04
Expected behavior
Predicted masks should only appear when the classification result is "anomalous".
Pip/GitHub
GitHub
What version/branch did you use?
phcarval:more_segmentation_info
Logs
I have lost them but can try to provide them if needed.
I think this might come from the way the image and pixel thresholds are calculated. Since these are independent thresholds, calculated on different (yet not entirely independent) data, it can happen that you get some anomalous pixels in the segmentation map while the entire image is still classified as normal.
This most likely happens because, in most models, the anomaly score is produced by taking the maximum of the anomaly map, rather than by a separate process that actually computes the score in its own right (as in PatchCore, for example, or other models that use a sort of sub-network to derive the score from the anomaly map and other features).
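To make the failure mode concrete, here is a minimal numpy sketch (toy map and threshold values, not anomalib's actual ones; anomalib fits each adaptive threshold separately on validation data) of how two independently tuned thresholds can disagree for a max-based model:

```python
import numpy as np

# Toy anomaly map for one "normal" test image (values in [0, 1]).
rng = np.random.default_rng(0)
anomaly_map = rng.uniform(0.0, 0.5, size=(8, 8))
anomaly_map[3, 4] = 0.62  # a single high-scoring pixel

# For max-based models (PaDiM, DRAEM, ...) the image score
# is just the anomaly map's maximum.
image_score = anomaly_map.max()

# Hypothetical thresholds, each tuned independently on validation data.
image_threshold = 0.70
pixel_threshold = 0.55

pred_label = image_score >= image_threshold  # False -> image is "normal"
pred_mask = anomaly_map >= pixel_threshold   # ...but the mask is non-empty

print(f"image classified anomalous: {pred_label}")      # False
print(f"anomalous pixels in mask:   {pred_mask.sum()}")  # 1
```

Whenever the pixel threshold ends up below the image threshold, any pixel falling between the two produces exactly the reported inconsistency: a defect region drawn on an image that is classified as normal.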
I understand that if the anomaly score is calculated differently from the anomaly map (as in PatchCore), then this behavior is expected. However, for methods that simply take the anomaly map's maximum value, wouldn't it make more sense to tie the choice of the pixel-level threshold to the image score threshold?
I'm not really sure; right now these are separate values, calculated separately to maximize performance on each task. I assume the right choice would depend on the model used, but I can't say anything for sure.
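Even if the two thresholds are kept separate, the expected behavior from the issue could be enforced with a simple gate in post-processing. A hypothetical sketch (`gated_prediction` is an invented helper, not an anomalib API):

```python
import numpy as np

def gated_prediction(anomaly_map: np.ndarray,
                     image_threshold: float,
                     pixel_threshold: float):
    """Only emit a segmentation mask when the image itself is flagged.

    Hypothetical post-processing illustrating the suggestion above,
    not anomalib's current behavior.
    """
    is_anomalous = float(anomaly_map.max()) >= image_threshold
    if is_anomalous:
        pred_mask = anomaly_map >= pixel_threshold
    else:
        # Suppress the mask entirely for images classified as normal.
        pred_mask = np.zeros(anomaly_map.shape, dtype=bool)
    return is_anomalous, pred_mask
```

Alternatively, for max-based models, reusing the image threshold as the pixel threshold makes the two outputs consistent by construction: no pixel can exceed it unless the map's maximum, and hence the image score, does too.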