
Add detection_threshold arg for all Lightning adapters #987

Closed
Anjum48 opened this issue Dec 8, 2021 · 12 comments
Labels: enhancement (New feature or request), help wanted (Extra attention is needed)

Comments

@Anjum48 commented Dec 8, 2021

🚀 Feature

Is your feature request related to a problem? Please describe.
At the moment, the detection threshold used inside validation_step is at or near zero for the Lightning adapters. This results in pessimistic validation performance due to false positives.


Describe the solution you'd like
Add a detection_threshold kwarg to the init of the Lightning adapter (see the sketch under Additional context below).


Additional context
Happy to submit a PR if you guys agree
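
To make the request concrete, here is a rough sketch of what exposing the kwarg could look like. The class name, batch layout, torchvision-style prediction dicts, and the metric accumulate interface are illustrative assumptions, not the actual IceVision adapter code:

```python
import pytorch_lightning as pl


class LightningAdapter(pl.LightningModule):
    """Illustrative stand-in for an IceVision Lightning adapter, not the real class."""

    def __init__(self, model, metrics=None, detection_threshold: float = 0.0):
        super().__init__()
        self.model = model
        self.metrics = metrics or []
        # 0.0 preserves the current behaviour (keep every detection); a user
        # could raise it when a custom metric expects thresholded detections.
        self.detection_threshold = detection_threshold

    def validation_step(self, batch, batch_idx):
        images, _targets = batch
        # Assume a torchvision-style detector that, in eval mode, returns one
        # dict per image with "boxes", "scores" and "labels" tensors.
        raw_preds = self.model(images)
        preds = []
        for pred in raw_preds:
            keep = pred["scores"] >= self.detection_threshold
            preds.append({name: values[keep] for name, values in pred.items()})
        for metric in self.metrics:
            metric.accumulate(preds)  # hypothetical metric interface
```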

Anjum48 added the enhancement and help wanted labels on Dec 8, 2021
@FraPochetti (Contributor)
@potipot @fstroth @ai-fast-track wdyt?

@fstroth (Contributor) commented Dec 19, 2021

Sounds good to me, we should go forward with this.

@FraPochetti (Contributor)
@Anjum48 would you like to submit a PR?
We'd love that!

@potipot (Contributor) commented Dec 19, 2021

Ah, this is actually the way it should be. When calculating COCOMetric (or mAP), we don't apply a confidence threshold to the detections; instead we keep the best 100 of them for each image.
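
For illustration, that behaviour looks roughly like this (a sketch assuming a torchvision-style prediction dict; COCO evaluation's standard setting is maxDets=100):

```python
import torch

def keep_top_k(pred: dict, k: int = 100) -> dict:
    """Keep the k highest-scoring detections instead of applying a score cutoff."""
    order = torch.argsort(pred["scores"], descending=True)[:k]
    return {name: values[order] for name, values in pred.items()}
```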

@FraPochetti (Contributor) commented Dec 19, 2021

Ah wait, so are you saying that we should keep the threshold at 0?
I think the PR doesn't change this (the threshold is still kept at 0; we just provide the possibility to change it if we ever wanted to), but can you take a look too?

@Anjum48 (Author) commented Dec 20, 2021

Thanks for the clarification @potipot! I guess I was coming from the angle that a custom metric may need a confidence threshold, hence the need for this (in my case, I implemented the F2 score from this ongoing competition).

I guess there might be some complexity in the future if you are using something like:

metrics = [COCOMetric(), F2Score()]

where COCOMetric does not require a threshold but F2Score does. In this particular case, would it be more sensible to put the threshold within the custom metric also?

@potipot (Contributor) commented Dec 20, 2021

Yes, exactly. I think it should be up to the metric to handle the predictions.

For running inference here we use the same underlying convert_raw_predictions function and explicitly specify the confidence_threshold we want to use.
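
Putting the threshold inside the metric could look something like this. It is a sketch only; the accumulate/finalize interface and the record fields ("scores", "matched", "num_gt") are assumptions for illustration, not IceVision's actual Metric API or prediction format:

```python
class F2ScoreAtThreshold:
    """Sketch of a custom metric that owns its own confidence cutoff."""

    def __init__(self, detection_threshold: float = 0.5):
        self.detection_threshold = detection_threshold
        self.tp = self.fp = self.fn = 0

    def accumulate(self, preds):
        for pred in preds:
            # the metric itself decides which detections are confident enough
            kept = [m for s, m in zip(pred["scores"], pred["matched"])
                    if s >= self.detection_threshold]
            tp = sum(kept)
            self.tp += tp
            self.fp += len(kept) - tp
            self.fn += pred["num_gt"] - tp

    def finalize(self) -> float:
        # F-beta with beta = 2 (beta**2 = 4), so recall is weighted more heavily
        beta2 = 4
        denom = (1 + beta2) * self.tp + beta2 * self.fn + self.fp
        return (1 + beta2) * self.tp / denom if denom else 0.0
```

With something like this, metrics = [COCOMetric(), F2ScoreAtThreshold(0.5)] lets each metric pick its own cutoff, and the adapter can keep passing unthresholded predictions.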

@Anjum48 (Author) commented Dec 20, 2021

Ok @FraPochetti, @fstroth, @ai-fast-track, what do you guys think? I'm happy to close the PR, as @potipot's suggestion makes a lot of sense.

@potipot (Contributor) commented Dec 20, 2021

As a counterargument to exposing that variable in the model (or rather LightningModel) init: when a user sets self.detection_threshold to 0.5, they might observe a decrease in the mAP metric, since in its formula the penalty for a false negative is higher than for a false positive.
I was trying to find a link explaining that phenomenon in more detail but couldn't right now. I remember reading about it in a couple of discussions.
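
For intuition, a toy example (made-up scores and a simple all-point-interpolated AP rather than COCO's exact computation): thresholding at 0.5 removes one false positive but also turns two true positives into false negatives, capping recall at 1/3 and dropping AP from about 0.83 to about 0.33.

```python
def average_precision(detections, num_gt):
    """detections: list of (score, matched_a_ground_truth_box); all-point interpolated AP."""
    dets = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    recalls, precisions = [], []
    for _, is_tp in dets:
        tp += 1 if is_tp else 0
        fp += 0 if is_tp else 1
        recalls.append(tp / num_gt)
        precisions.append(tp / (tp + fp))
    ap, prev_recall = 0.0, 0.0
    for i, recall in enumerate(recalls):
        # interpolated precision: best precision achieved at this recall or higher
        ap += (recall - prev_recall) * max(precisions[i:])
        prev_recall = recall
    return ap

# one image, 3 ground-truth boxes, 4 detections
dets = [(0.9, True), (0.8, False), (0.4, True), (0.3, True)]
print(average_precision(dets, num_gt=3))                              # ~0.83
print(average_precision([d for d in dets if d[0] >= 0.5], num_gt=3))  # ~0.33
```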

@FraPochetti (Contributor)
I don't think there is a need to close the PR.
Your PR does not change the default metrics-calculation behavior, given the threshold is still set to 0.
You are just exposing the parameter, which is a good thing for potential future use cases.
No?
As far as I am concerned we could merge it.

@FraPochetti (Contributor)
> Ok @FraPochetti, @fstroth, @ai-fast-track, what do you guys think? I'm happy to close the PR, as @potipot's suggestion makes a lot of sense.

Hi @Anjum48, first of all, thanks again for all the effort here. We truly appreciate that.

We have been discussing internally and, eventually, we have come to the conclusion that @potipot's idea makes the most sense.
Predictions, hence thresholds, are best handled at the metrics level.
Therefore, I am afraid we can't incorporate your PR into IceVision.
Would you mind closing it, please?

Once again, I hope you understand, and thanks for the exchange!
We hope you'll keep hanging out in our community and, most importantly, keep contributing!

@Anjum48 (Author) commented Dec 22, 2021

No problems at all, I totally agree with you guys. Thanks for the great discourse - I learnt a lot! 😊

I'll go ahead and close the PR
