I am writing to you today to express my admiration for your recent article on segmentation evaluation. I have read your article and I found your approach to be highly innovative and insightful.
I am currently working on a project involving the evaluation of a segmentation dataset that is not included in the mmsegmentation library. Additionally, both the prediction and ground truth images in my dataset are binary. I believe your method could be extremely valuable in this context, and I was wondering if you might have any suggestions or guidance on how I could adapt it for my specific use case.
From a student’s perspective, any insights you could offer would be greatly appreciated. Thank you for your time and consideration.
Here's a snippet showing how you can use our method on your binary dataset:
from beyond_iou import Evaluator, build_segmentation_loader, Result

evaluator = Evaluator(
    class_names=['background', 'foreground'],
    ignore_index=255,     # optionally modify for your case if there are ignore pixels
    boundary_width=0.01,  # choose reasonably for your case
)
loader = build_segmentation_loader(
    pred_dir='path/to/your/predictions',  # as 0-1 images, otherwise use pred_label_map to map label ids accordingly
    gt_dir='path/to/your/groundtruth',    # as 0-1 images, otherwise use gt_label_map to map label ids accordingly
    gt_label_map=None,
    pred_label_map=None,
    num_workers=0,
    pred_suffix='',
    gt_suffix='',
)
result = evaluator.evaluate(loader)
result.create_report('your/output/directory', exist_ok=True)
This treats background (0) and foreground (1) as separate classes and computes the metrics as the mean over the two classes. If you are interested in the results for the foreground class only, you can access them through result.dataframe.
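As a minimal sketch of that last step: assuming result.dataframe is a pandas DataFrame with one row per class (the exact columns and index labels come from the actual report output, so treat the names below as placeholders), selecting the foreground row could look like this:

```python
# Hypothetical sketch: a stand-in DataFrame shaped like result.dataframe
# might be, with one row per class and one column per metric. The real
# column names are produced by the evaluator and may differ.
import pandas as pd

df = pd.DataFrame(
    {'IoU': [0.95, 0.72]},
    index=['background', 'foreground'],
)

foreground = df.loc['foreground']  # metrics for the foreground class only
mean_over_classes = df['IoU'].mean()  # what the reported mean would average
```

If the real index uses different labels, df.index will show what to pass to .loc.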