Merged
@@ -538,7 +538,6 @@ def post_process_panoptic_segmentation(
# create the area, since bool we just need to sum :)
mask_k_area = mask_k.sum()
# this is the area of all the stuff in query k
- # TODO not 100%, why are the taking the k query here????
original_area = (mask_probs[k] >= 0.5).sum()

mask_does_exist = mask_k_area > 0 and original_area > 0
@@ -565,5 +564,5 @@ def post_process_panoptic_segmentation(
)
if is_stuff:
stuff_memory_list[pred_class] = current_segment_id
-            results.append({"segmentation": segmentation, "segments": segments})
+        results.append({"segmentation": segmentation, "segments": segments})
return results
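For reference, the area check in the hunk above can be sketched in isolation. This is a numpy stand-in for the actual torch tensors; the helper name `mask_exists` and the `overlap_threshold` ratio test are assumptions based on typical panoptic post-processing, not the exact MaskFormer implementation:

```python
import numpy as np

def mask_exists(mask_k, mask_probs_k, overlap_threshold=0.8):
    """Decide whether query k produced a usable segment (hypothetical helper)."""
    # area of query k after the per-pixel argmax over all queries;
    # the mask is boolean, so summing counts pixels
    mask_k_area = int(mask_k.sum())
    # area where query k's own probability clears 0.5 (the "original" mask)
    original_area = int((mask_probs_k >= 0.5).sum())
    if mask_k_area == 0 or original_area == 0:
        return False
    # drop segments that kept too small a fraction of their original mask
    return mask_k_area / original_area > overlap_threshold

# a query that wins the whole region it was originally confident about
probs = np.array([[0.9, 0.9], [0.1, 0.2]])
winner = probs >= 0.5
print(mask_exists(winner, probs))  # True: ratio is 1.0 > 0.8
```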
20 changes: 20 additions & 0 deletions tests/maskformer/test_modeling_maskformer.py
@@ -404,3 +404,23 @@ def test_with_annotations_and_loss(self):
outputs = model(**inputs)

self.assertTrue(outputs.loss is not None)

def test_panoptic_segmentation(self):
model = MaskFormerForInstanceSegmentation.from_pretrained(self.model_checkpoints).to(torch_device).eval()
feature_extractor = self.default_feature_extractor

inputs = feature_extractor(
[np.zeros((3, 384, 384)), np.zeros((3, 384, 384))],
annotations=[
{"masks": np.random.rand(10, 384, 384).astype(np.float32), "labels": np.zeros(10).astype(np.int64)},
{"masks": np.random.rand(10, 384, 384).astype(np.float32), "labels": np.zeros(10).astype(np.int64)},
],
return_tensors="pt",
)

with torch.no_grad():
outputs = model(**inputs)

panoptic_segmentation = feature_extractor.post_process_panoptic_segmentation(outputs)

self.assertTrue(len(panoptic_segmentation) == 2)
@Narsil (Contributor), Mar 3, 2022:
Not sure what the best approach is, but testing against real values seems better if you can easily craft some.

Like maybe we can craft (1, 2, 2) tensors that simulate a model output on 2x2 images and check everything, including the masks. (Without ever loading the model — we're testing the feature extractor here, is what I am implying.)

Something else, if crafting the values is too tedious/brittle: in the pipeline tests I use nested_simplify to enable doing that:

self.assertEqual(panoptic_segmentation, [{"segmentation": [[1, 2, 3]], "segments": [{"id": 1, "category_id": ...}, {...}, ...]},
                                         {"segmentation": [[1, 2, 3]], "segments": [{"id": 1, "category_id": ...}, {...}, ...]}])

It's currently better than no test, and if the testing is hard to make more complete, we don't necessarily have to.
Tests are relevant as long as they are readable IMO, so a complex test loses value.
This one is simple, and that's good.

Not sure if this applies here, but I think the test could check a little more than the outer length/format, is what I was trying to convey.
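The crafted-tensor idea sketched concretely (shapes and values are made up for illustration, and this exercises only the per-pixel argmax step of post-processing, not the actual feature extractor):

```python
import numpy as np

# two queries over a 2x2 image; query 0 should win the top row,
# query 1 the bottom row
mask_probs = np.array([
    [[0.9, 0.8],
     [0.1, 0.2]],
    [[0.1, 0.2],
     [0.9, 0.8]],
])

# per-pixel winning query: the segmentation map post-processing would build
segmentation = mask_probs.argmax(axis=0)
assert (segmentation == np.array([[0, 0], [1, 1]])).all()
```

With inputs this small, the expected segmentation (and even the per-segment masks) can be written out by hand and compared exactly.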
@FrancescoSaverioZuppichini (Contributor, Author), Mar 3, 2022:
You are definitely correct; I'll have to manually send an image through MaskFormer and record some of the outputs. Unfortunately, the input size can't be that small. I can probably get away without using MaskFormer outputs at all.
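The record-and-compare approach the author describes could look like this (the helper name and the recorded values are hypothetical; a real test would commit a small slice of actual MaskFormer outputs):

```python
import numpy as np

def check_against_recorded(actual, recorded, atol=1e-4):
    # compare a committed slice of real outputs with a tolerance,
    # so the test catches regressions without hard-coding exact floats
    np.testing.assert_allclose(actual, recorded, atol=atol)

# made-up stand-ins for a recorded slice of model logits
recorded = np.array([0.1234, -0.4567, 0.7890])
check_against_recorded(np.array([0.1234, -0.4567, 0.7890]), recorded)
```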