Hello, and many thanks for your work and for sharing your code.
I have a question regarding the way you compute your IoU metric and how it compares against Lift-Splat-Shoot (LSS).
You use `stat_scores_multiple_classes` from PyTorch Lightning metrics to compute the IoU. Correct me if I am wrong, but by default the threshold of this method is 0.5.
On the other hand, in `get_batch_iou` of LSS they use a threshold of 0: `pred = (preds > 0)` (see https://github.com/nv-tlabs/lift-splat-shoot/blob/master/src/tools.py).
Wouldn't this have an impact on the evaluation results, and thus on how you compare to them?
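To make the comparison concrete, here is a minimal, hypothetical sketch of the two conventions with dummy tensors; `iou_at_half` and `iou_lss_style` are my own illustrative names, not functions from either repo:

```python
import torch

def iou_at_half(probs, target):
    # IoU with a 0.5 threshold on probability-like inputs (the default
    # threshold this issue attributes to stat_scores_multiple_classes).
    pred = probs > 0.5
    tgt = target.bool()
    inter = (pred & tgt).sum().float()
    union = (pred | tgt).sum().float()
    return (inter / union).item() if union > 0 else 1.0

def iou_lss_style(preds, target):
    # Threshold at 0 on raw outputs, paraphrasing the `pred = (preds > 0)`
    # line in LSS's get_batch_iou (not a copy of that function).
    pred = preds > 0
    tgt = target.bool()
    inter = (pred & tgt).sum().float()
    union = (pred | tgt).sum().float()
    return (inter / union).item() if union > 0 else 1.0

logits = torch.randn(4, 200, 200)         # dummy BEV logits, hypothetical shape
target = torch.rand(4, 200, 200) > 0.7    # dummy binary ground truth
print(iou_lss_style(logits, target))
print(iou_at_half(logits, target))        # differs if logits are treated as probabilities
```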
My bad: LSS uses `preds > 0` where `preds` is the output before the sigmoid, which corresponds to a threshold of 0.5 after the sigmoid.
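For reference, a quick sanity check (with dummy logits) that thresholding logits at 0 is the same as thresholding sigmoid outputs at 0.5:

```python
import torch

logits = torch.tensor([-2.3, -0.1, 0.0, 0.4, 1.7])  # dummy raw outputs, pre-sigmoid
probs = torch.sigmoid(logits)

# sigmoid is monotonic and sigmoid(0) == 0.5, so both masks are identical
assert torch.equal(logits > 0, probs > 0.5)
print(probs)       # tensor([0.0911, 0.4750, 0.5000, 0.5987, 0.8455])
print(logits > 0)  # tensor([False, False, False,  True,  True])
```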