I tried to train and test the PatchCore model with the WOOD Anomaly Detection dataset, which can be found here: https://euhubs4data.eu/datasets/iti-wood-anomaly-detection-one-class-classification/ . The dataset ships with train, validation, test, and masks folders, but I rearranged the files to match the folder structure required by the .yaml file for a custom (folder-format) dataset, i.e. normal, abnormal, normal_test, and masks folders, as sketched below.
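For reference, the rearranged layout looks roughly like this (a sketch; only the folder names come from the config below, the file names themselves are placeholders):

datasets/WOOD_dataset/
├── normal/        # defect-free training images
├── abnormal/      # defective images used at test time
├── normal_test/   # defect-free images reserved for testing
└── masks/         # ground-truth segmentation masks for the abnormal images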
Training completed without problems, but an error is raised as soon as testing starts.
My code:
from anomalib.config import get_configurable_parameters
from anomalib.data import get_datamodule
from anomalib.models import get_model
from anomalib.utils.callbacks import LoadModelCallback, get_callbacks
from pytorch_lightning import Trainer

# Path to the model configuration file
CONFIG_PATH = "datasets/config_patchcore.yaml"
# Get configurable parameters from yaml file
config = get_configurable_parameters(config_path=CONFIG_PATH)
# Load and prepare dataset
datamodule = get_datamodule(config)
datamodule.prepare_data()
datamodule.setup()
# Model definition
model = get_model(config)
# Callbacks
callbacks = get_callbacks(config)
# Trainer definition
trainer = Trainer(**config.trainer, callbacks=callbacks)
# Training
trainer.fit(model=model, datamodule=datamodule)
# Testing
trainer.test(datamodule=datamodule, model=model)
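LoadModelCallback is imported above but never used. For a test-only run on a previously saved checkpoint, my understanding is that it can be prepended to the callback list roughly like this (a sketch; the weights path below is just a placeholder, and the weights_path argument is assumed from anomalib's test entrypoint):

# Test-only run from a saved checkpoint (sketch; the path is a placeholder).
load_model_callback = LoadModelCallback(weights_path="results/patchcore/WOOD_dataset/weights/model.ckpt")
callbacks.insert(0, load_model_callback)
trainer = Trainer(**config.trainer, callbacks=callbacks)
trainer.test(model=model, datamodule=datamodule)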
My yaml configuration file:
dataset:
  name: WOOD_dataset
  format: folder
  path: datasets/WOOD_dataset
  normal_dir: normal # name of the folder containing normal images.
  abnormal_dir: abnormal # name of the folder containing abnormal images.
  normal_test_dir: normal_test # name of the folder containing normal test images.
  task: segmentation # classification or segmentation
  mask: datasets/WOOD_dataset/masks/ # optional
  extensions: null
  split_ratio: # ratio of the normal images that will be used to create a test split
  image_size: 224
  train_batch_size: 32
  test_batch_size: 1
  num_workers: 0
  transform_config:
    train: null
    val: null
  create_validation_set: false
  tiling:
    apply: false
    tile_size: null
    stride: null
    remove_border_count: 0
    use_random_tiling: False
    random_tile_count: 16

model:
  name: patchcore
  backbone: wide_resnet50_2
  pre_trained: true
  layers:
    - layer2
    - layer3
  coreset_sampling_ratio: 0.1
  num_neighbors: 9
  normalization_method: min_max # options: [null, min_max, cdf]

metrics:
  image:
    - Precision
    - Recall
    - F1Score
    - AUPR
  pixel:
    #- F1Score
    - AUPR
  threshold:
    image_default: 0
    pixel_default: 0
    adaptive: true

visualization:
  show_images: False # show images on the screen
  save_images: True # save images to the file system
  log_images: True # log images to the available loggers (if any)
  image_save_path: null # path to which images will be saved
  mode: full # options: ["full", "simple"]

project:
  seed: 0
  path: ./results

logging:
  logger: [] # options: [comet, tensorboard, wandb, csv] or combinations.
  log_graph: false # Logs the model graph to respective logger.

optimization:
  export_mode: null # options: onnx, openvino

# PL Trainer Args. Don't add extra parameter here.
trainer:
  accelerator: auto # <"cpu", "gpu", "tpu", "ipu", "hpu", "auto">
  accumulate_grad_batches: 1
  amp_backend: native
  auto_lr_find: false
  auto_scale_batch_size: false
  auto_select_gpus: false
  benchmark: false
  check_val_every_n_epoch: 10 # Don't validate before extracting features.
  default_root_dir: null
  detect_anomaly: false
  deterministic: false
  devices: 1
  enable_checkpointing: true
  enable_model_summary: true
  enable_progress_bar: true
  fast_dev_run: false
  gpus: null # Set automatically
  gradient_clip_val: 0
  ipus: null
  limit_predict_batches: 1.0
  limit_test_batches: 1.0
  limit_train_batches: 1.0
  limit_val_batches: 1.0
  log_every_n_steps: 25
  log_gpu_memory: null
  max_epochs: 1
  max_steps: -1
  max_time: null
  min_epochs: null
  min_steps: null
  move_metrics_to_cpu: false
  multiple_trainloader_mode: max_size_cycle
  num_nodes: 1
  num_processes: null
  num_sanity_val_steps: 0
  overfit_batches: 0.0
  plugins: null
  precision: 32
  profiler: null
  reload_dataloaders_every_n_epochs: 0
  replace_sampler_ddp: true
  strategy: null
  sync_batchnorm: false
  tpu_cores: null
  track_grad_norm: -1
  val_check_interval: 1.0 # Don't validate before extracting features.
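As a quick sanity check of the data pipeline before testing, I also inspected one batch from the test dataloader. This is only a sketch: it assumes anomalib's folder datamodule returns dictionaries with "image", "label", and "mask" entries for the segmentation task, as in the MVTec examples.

# Inspect one test batch (sketch; key names assumed from anomalib's folder datamodule).
batch = next(iter(datamodule.test_dataloader()))
print(batch.keys())
print(batch["image"].shape)      # expected [1, 3, 224, 224] with test_batch_size=1
if "mask" in batch:
    print(batch["mask"].shape)   # expected [1, 224, 224] for the segmentation task
print(batch["label"])            # 0 = normal, 1 = abnormal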
All library versions are as required by anomalib.
Thanks for the help!
@aleperat97 Apologies for the delay in responding. Are you still seeing this problem with the latest version of anomalib? We recently made some changes to the PatchCore score computation, so it's possible they also resolved the issue you reported. Please let us know if the problem persists so we can take a look.