
Fix cropping overflow issue #1187

Merged — 2 commits merged into mikel-brostrom:master on Nov 13, 2023
Conversation

@Justin900429 (Contributor) commented Nov 12, 2023

Add clipping to the coordinates to ensure the cropping results are correct.
The problem can occur when coordinates are negative or larger than the image size.

Reference: BoT-SORT.
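Roughly, the fix looks like this (a minimal sketch, assuming NumPy images in HxWxC layout and (x1, y1, x2, y2) boxes; `safe_crop` is an illustrative name, not the actual function in the diff):

```python
import numpy as np

def safe_crop(img, box):
    """Crop a (x1, y1, x2, y2) box after clipping it to the image bounds."""
    h, w = img.shape[:2]
    x1, y1, x2, y2 = box
    x1, x2 = np.clip([x1, x2], 0, w).astype(int)  # keep x in [0, w]
    y1, y2 = np.clip([y1, y2], 0, h).astype(int)  # keep y in [0, h]
    return img[y1:y2, x1:x2]
```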

@mikel-brostrom (Owner) commented Nov 13, 2023

Great!! It should not be needed if the detector is trained following best practices, but I have seen that this is not always the case... Which models generated the incorrect outputs?

mikel-brostrom merged commit bfcd197 into mikel-brostrom:master on Nov 13, 2023
3 checks passed
@Justin900429 (Contributor, Author):

Actually, the YOLOX models used by all the trackers (ByteTrack, OC-SORT, BoT-SORT) on MOT17 have this problem. For pure trackers without a ReID model (e.g., ByteTrack, OC-SORT) it may not matter, but it can break the program whenever cropping is needed (see the toy example below).

I guess this is caused by the overflowing coordinates in the ground-truth annotations.
In fact, if we keep the overflowing coordinates as given in the GT annotations, without clipping, we get a higher DetA score (and therefore a higher HOTA).
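To illustrate why this breaks the program when cropping is needed (a toy example, not the tracker code): NumPy treats a negative slice index as counting from the end of the axis, so an unclipped box silently produces an empty crop, and the resize step of the ReID preprocessing then fails.

```python
import numpy as np

img = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy frame

# x, y, w, h = -24, 74, 116, 338, as in the GT rows quoted further down
x1, y1, x2, y2 = -24, 74, -24 + 116, 74 + 338
crop = img[y1:y2, x1:x2]
print(crop.shape)  # (338, 0, 3): x1 wraps to column 616, past x2, so the crop is empty
```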

@mikel-brostrom (Owner) commented Nov 13, 2023

I guess this is caused by the overflowing coordinates in the ground-truth annotations.

I did not know there were such issues in MOT17. I thought all those issues were gone after the MOT16 refinement that led to MOT17... Good to know!

@Justin900429 (Contributor, Author):

I see. Maybe this is an issue on my side?
For example, I checked the annotation file gt_val_half.txt generated by convert_mot17_to_coco.py for MOT17-05-FRCNN and found that lines 1563~1565 have negative coordinates.

It would be great if you could share whether you have seen this issue as well!
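A quick way to reproduce the check (a sketch, assuming the standard MOT GT layout frame,id,x,y,w,h,... so the top-left corner sits in columns 3 and 4):

```python
# List GT rows whose top-left corner (x, y) is negative.
with open("gt_val_half.txt") as f:
    bad = [(i + 1, line.strip()) for i, line in enumerate(f)
           if any(float(v) < 0 for v in line.split(",")[2:4])]
print(len(bad), "rows with negative coordinates")
```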

@mikel-brostrom (Owner) commented Nov 13, 2023

I see this:

227,14,484,183,35,97,1,1,1
228,14,484,183,35,97,1,1,1
229,14,485,183,35,97,1,1,0.97222

in MOT17-05-FRCNN gt.txt, lines 1563~1565. Shouldn't lead to negative values...

@Justin900429 (Contributor, Author):

I mean gt_val_half.txt:

291,81,-24,74,116,338,1,1,0.786320
292,81,-43,60,124,364,1,1,0.648000
293,81,-72,35,138,399,1,1,0.474820

or in gt.txt, lines 4305~4307:

710,81,-24,74,116,338,1,1,0.78632
711,81,-43,60,124,364,1,1,0.648
712,81,-72,35,138,399,1,1,0.47482

@mikel-brostrom (Owner) commented Nov 13, 2023

Oooh, you are right.

710,81,-24,74,116,338,1,1,0.78632
711,81,-43,60,124,364,1,1,0.648
712,81,-72,35,138,399,1,1,0.47482

So yes, there are still issues in MOT17... Actually quite a lot. I get 2000+ rows with negative values just in MOT17-05-FRCNN.

@mikel-brostrom (Owner):

I would just clean up the GT. It is obviously wrong and detrimental to your detector's performance.

@Justin900429 (Contributor, Author):

I would just clean up the GT. It is obviously wrong and detrimental to your detector's performance.

Maybe clipping is enough for training, but the current research community seems to follow the original settings without any preprocessing. It would still be better to keep the original values for comparison with the current SOTAs. A sketch of the clipping preprocessing is shown below.
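For anyone who does want to clip the GT before training, it could be as simple as this (a sketch; the per-sequence image size would come from seqinfo.ini and is hard-coded here for illustration):

```python
# Clip MOT GT boxes (x, y, w, h) to the image bounds, keeping the rest of each row intact.
W, H = 640, 480  # image size, from the sequence's seqinfo.ini

with open("gt.txt") as f, open("gt_clipped.txt", "w") as out:
    for line in f:
        p = line.strip().split(",")
        x, y, w, h = map(float, p[2:6])
        x2, y2 = min(x + w, W), min(y + h, H)  # clip right/bottom edges
        x, y = max(x, 0.0), max(y, 0.0)        # clip left/top edges
        p[2:6] = [f"{v:g}" for v in (x, y, x2 - x, y2 - y)]
        out.write(",".join(p) + "\n")
```

(Boxes lying fully outside the frame would end up with non-positive width or height and would need to be dropped as well.)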

@mikel-brostrom (Owner):

Maybe clipping is enough for training, but the current research community seems to follow the original settings without any preprocessing.

It would be interesting to check the metric difference between a detector trained on clipped values and one trained on unclipped values...

@mikel-brostrom (Owner) commented Nov 13, 2023

In the 04 sequence there are 14498 lines with negative values in the GT. Maybe all these trackers are better than reported: they are just trained sub-optimally and evaluated on misleading GT.
