
An interesting experiment on how the thres_map affects results #349

Open · leidahhh opened this issue Sep 28, 2022 · 4 comments

@leidahhh

Hello author, I'm interested in the adaptive threshold module. I found an interesting phenomenon in my experiments: after deleting the threshold loss and the binary loss, keeping only the loss on the model's predicted probability map, the model's overall performance improved substantially. I'd like to discuss this phenomenon with you, along with the ideas behind your original design of this module. Looking forward to your reply.

Results before removal:
2022-09-14 07:58:42,295 DBNet.pytorch INFO: [287/1200], train_loss: 0.4967, time: 133.9059, lr: 0.0007836829637320193
2022-09-14 07:58:45,779 DBNet.pytorch INFO: FPS:30.785972625664495
2022-09-14 07:58:45,780 DBNet.pytorch INFO: test: recall: 0.458333, precision: 0.964912, f1: 0.621469

Results after removal:
2022-09-28 09:09:11,810 DBNet.pytorch INFO: [287/1200], train_loss: 0.1195, time: 145.5552, lr: 0.0007836829637320193
2022-09-28 09:09:33,585 DBNet.pytorch INFO: FPS:34.50438997451759
2022-09-28 09:09:33,589 DBNet.pytorch INFO: test: recall: 0.762254, precision: 0.931540, f1: 0.838438

Both models were trained for 287 epochs.
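
For context on what was removed: the DBNet paper combines three terms as L = Ls + α·Lb + β·Lt (probability/shrink-map loss, approximate-binary-map loss, and threshold-map loss, with α = 1.0 and β = 10). Below is a minimal sketch of the experiment, assuming a simplified DBLoss-style module; the class name, the plain BCE/L1 stand-ins, and the `use_thresh` flag are illustrative, not the repo's actual implementation:

```python
import torch
import torch.nn as nn

class DBStyleLoss(nn.Module):
    """Sketch of a DBNet-style combined loss, L = Ls + alpha*Lb + beta*Lt.
    Plain BCE/L1 stand in for the balanced-BCE, Dice, and masked-L1 losses
    of the real implementation; use_thresh=False reproduces the experiment."""

    def __init__(self, alpha: float = 1.0, beta: float = 10.0,
                 use_thresh: bool = True):
        super().__init__()
        self.alpha, self.beta, self.use_thresh = alpha, beta, use_thresh
        self.bce = nn.BCELoss()  # stand-in for balanced BCE / Dice
        self.l1 = nn.L1Loss()    # stand-in for masked L1

    def forward(self, pred: dict, gt: dict) -> torch.Tensor:
        # Ls: loss on the predicted probability (shrink) map -- always kept
        loss = self.bce(pred['shrink'], gt['shrink'])
        if self.use_thresh:
            # Lb and Lt: the two terms the experiment deletes
            loss = loss + self.alpha * self.bce(pred['binary'], gt['shrink'])
            loss = loss + self.beta * self.l1(pred['thresh'], gt['thresh'])
        return loss
```

Note that deleting Lb and Lt also explains the lower train_loss in the second log: the remaining objective simply has fewer terms, so the raw loss values are not directly comparable between the two runs.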

@suhas004

Hello,
The training loss is computed by combining the threshold loss and the binary loss with the probability-map loss. So if you delete those terms, the loss value the model reports goes down, but a lower loss does not necessarily mean better performance (you can compute precision & recall metrics to validate the results).

@leidahhh (Author)

Hello, I'm glad to receive your reply. As shown above, after I removed the threshold loss and the binary loss, the model became a plain semantic segmentation model, yet the results turned out to be better than before.
Here are the results of the original model, trained for 287 epochs: recall: 0.458333, precision: 0.964912, f1: 0.621469
Here are those of the modified model, also trained for 287 epochs: recall: 0.762254, precision: 0.931540, f1: 0.838438
Maybe you can also run a test to see how the threshold module affects the results.
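
One possible reason the modified model still evaluates normally: in standard DBNet post-processing, the threshold branch and differentiable binarization are used only during training; at inference the probability map is binarized with a fixed constant threshold (0.3 in the paper) before box extraction. A rough sketch of that inference step (the function name and default value here are illustrative):

```python
import numpy as np

def binarize_prob_map(prob_map: np.ndarray, thresh: float = 0.3) -> np.ndarray:
    """Fixed-threshold binarization used at inference time. The learned
    threshold map is not consulted here, which is why dropping its loss
    leaves this code path (and the downstream box extraction) unchanged."""
    return (prob_map > thresh).astype(np.uint8)

# The binary mask's contours are then dilated (e.g. with pyclipper) into
# final text boxes, exactly as in the unmodified pipeline.
```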

@oszn

oszn commented Oct 27, 2022


Maybe your dataset is simply better suited to the segmentation-only case.

@WongVi

WongVi commented Feb 2, 2023

@leidahhh could you please share the code changes needed to reproduce your approach?
