
Can I use it to evaluate other segmentation datasets that are not included in mmsegmentation? #2

Closed
yanhaijun11902 opened this issue Apr 21, 2024 · 2 comments



yanhaijun11902 commented Apr 21, 2024

I am writing to express my admiration for your recent article on segmentation evaluation. I have read it carefully and found your approach highly innovative and insightful.
I am currently working on a project that involves evaluating a segmentation dataset that is not included in the mmsegmentation library. Additionally, both the prediction and ground-truth images in my dataset are binary. I believe your method could be extremely valuable in this context, and I was wondering whether you might have any suggestions or guidance on how I could adapt it to my specific use case.
As a student, I would greatly appreciate any insights you could offer. Thank you for your time and consideration.
mxbh (Owner) commented May 6, 2024

Hi,
thanks for your interest.

Here's a snippet showing how you can use our method on your binary dataset:

from beyond_iou import Evaluator, build_segmentation_loader, Result

evaluator = Evaluator(
    class_names=['background', 'foreground'],
    ignore_index=255, # optionally modify for your case if there are ignore pixels
    boundary_width=0.01, # choose reasonably for your case
)
loader = build_segmentation_loader(
    pred_dir='path/to/your/predictions', # as 0-1 images, otherwise use pred_label_map to map label ids accordingly (see the sketch after this snippet)
    gt_dir='path/to/your/groundtruth', # as 0-1 images, otherwise use gt_label_map to map label ids accordingly
    gt_label_map=None, # no remapping needed if the masks already use 0/1
    pred_label_map=None,
    num_workers=0, # worker processes for loading; increase to parallelize I/O
    pred_suffix='', # filename suffix of prediction files, if any
    gt_suffix='' # filename suffix of ground-truth files, if any
)
result = evaluator.evaluate(loader)
result.create_report('your/output/directory', exist_ok=True)
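
If your masks are not stored as 0/1 images on disk (e.g. binary PNGs with values 0 and 255), the gt_label_map/pred_label_map arguments mentioned in the comments above can remap the stored pixel values to class indices. The following is only a minimal sketch, assuming the label maps accept a plain dict from stored pixel value to class index; please check the signature/docstring of build_segmentation_loader for the exact format it expects.

binary_label_map = {0: 0, 255: 1}  # hypothetical: map stored values 0/255 to class indices 0/1

loader = build_segmentation_loader(
    pred_dir='path/to/your/predictions',
    gt_dir='path/to/your/groundtruth',
    gt_label_map=binary_label_map,   # remap ground-truth pixel values
    pred_label_map=binary_label_map, # remap prediction pixel values
    num_workers=0,
    pred_suffix='',
    gt_suffix=''
)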

This setup treats background (0) and foreground (1) as separate classes and computes the metrics as the mean over the two classes. If you are interested in the results for the foreground class only, you can access them through result.dataframe.
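
For example, assuming result.dataframe is a pandas DataFrame with one row per class (inspect the object to confirm the exact layout), the foreground row could be pulled out like this:

# Sketch, assuming result.dataframe is a pandas DataFrame indexed by class name.
df = result.dataframe
print(df)  # check the actual index and metric columns first
foreground_metrics = df.loc['foreground']  # select the foreground row
print(foreground_metrics)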

yanhaijun11902 (Author) commented:
Got it. I am deeply indebted to you for your help.

mxbh closed this as completed May 14, 2024