[Feature] Support RLE for COCO #1424
Conversation
Codecov Report
@@ Coverage Diff @@
## dev-0.28 #1424 +/- ##
============================================
+ Coverage 84.35% 84.44% +0.09%
============================================
Files 232 233 +1
Lines 19279 19402 +123
Branches 3468 3504 +36
============================================
+ Hits 16262 16384 +122
+ Misses 2152 2151 -1
- Partials 865 867 +2
@Indigo6 Thank you very much! We will add other experiment settings (res101, res152) in this PR.
You're welcome. I may also open other PRs for coordinate regression with HRNet and the Uncertainty L1/L2 losses.
I finished experiments on res101_256x192 but got a worse result than res50.
Thanks a lot for sharing these! May I ask for your settings and logs?
For the three models mentioned above, we use the config you provided. The config files and logs are uploaded to Google Drive. Further results will also be saved in this folder.
Thanks a lot! I'll try again on my devices.
In the paper, even without pretraining with the heatmap loss, the APs of res50/res152 are 70.5/74.2, respectively, which are higher than our results (69.3/72.0). The official implementation uses hyper-parameters different from the default DeepPose settings in MMPose, which might be the cause of the performance gap.
You're right, especially the end epoch (210 vs. 270).
In our experiments, the model gains negligible performance improvement after epoch 210.
We also tried the settings used in the official implementation (batch size, data augmentation). The training loss is lower due to the simpler data augmentation, but the final performance drops with this setting.
Two experimental results on my devices:
configs/body/2d_kpt_sview_rgb_img/deeppose/coco/res50_coco_256x192_rle.py
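The new RLE config is not reproduced in this thread. Purely as a rough illustration (field names are assumed from typical MMPose top-down configs, not copied from this PR), an RLE variant mainly swaps in the RLE loss and asks the regression head to also predict per-keypoint sigmas:

```python
# Illustrative MMPose-style config fragment -- NOT the actual file contents
# of res50_coco_256x192_rle.py; head/loss names here are assumptions.
model = dict(
    type='TopDown',
    backbone=dict(type='ResNet', depth=50),
    keypoint_head=dict(
        type='DeepposeRegressionHead',  # assumed head name, for illustration
        in_channels=2048,
        num_joints=17,
        out_sigma=True,                 # predict (x, y, sigma_x, sigma_y)
        loss_keypoint=dict(type='RLELoss', use_target_weight=True),
    ),
)
```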
@@ -0,0 +1,80 @@
<!-- [ALGORITHM] -->
This markdown file cannot be rendered. Could you please check whether there is something wrong with the format? @Ben-Louis
Sure, it has been fixed.
* [rle_fix]add kpt_scores calculation
* [rle_fix]add rle_coco cfgs
* [rle_fix]update rle_coco cfgs
* [rle_fix]fix inference bug with out_sigma
* add unittest for deeppose head decode interface
* mend test pipeline for rle
* update rle configs
* remove unused module
* add rle results on coco
* fix resnet_rle_coco.md format error
* update unittest for rle test pipeline

Co-authored-by: lupeng <penglu2097@gmail.com>
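Among the fixes above, the kpt_scores calculation with `out_sigma` can be sketched roughly as follows. This is a hedged illustration, not code from this PR: the function name and the `1 - mean(sigma)` scoring rule are assumptions, chosen because a low predicted uncertainty should map to a high keypoint confidence.

```python
import numpy as np

def decode_with_sigma(output, img_size):
    """Split a (N, K, 4) regression output into pixel coords and scores.

    Hypothetical decode step: the head predicts normalized (x, y) plus a
    per-axis sigma, and confidence is derived as 1 - mean(sigma). Details
    are illustrative assumptions, not the exact MMPose implementation.
    """
    coords = output[..., :2] * img_size           # normalized -> pixels
    sigma = np.clip(output[..., 2:4], 0.0, 1.0)   # keep sigma in [0, 1]
    scores = 1.0 - sigma.mean(axis=-1, keepdims=True)
    return coords, scores

# Dummy batch: 1 image, 17 COCO keypoints, channels (x, y, sigma_x, sigma_y)
out = np.random.rand(1, 17, 4)
coords, scores = decode_with_sigma(out, img_size=np.array([192.0, 256.0]))
```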
Motivation
Support RLE for COCO and add pre-trained models & logs to the model zoo.
Log and pre-trained model: Google Drive
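For context, RLE (Residual Log-likelihood Estimation) trains a regression head to maximize the log-likelihood of the ground-truth keypoints under a learned error density. The full method models that density with a normalizing flow; the sketch below shows only the simple Laplace baseline that the residual term refines, as a self-contained illustration of why the head predicts a sigma per coordinate:

```python
import numpy as np

def laplace_nll(mu, sigma, target):
    """Per-coordinate Laplace negative log-likelihood with a learned scale.

    Simplified baseline only (the flow-based residual term of RLE is
    omitted): nll = log(2 * sigma) + |mu - target| / sigma.
    A confident head (small sigma) is rewarded when it is right and
    penalized heavily when it is wrong.
    """
    sigma = np.maximum(sigma, 1e-9)  # avoid division by zero
    return (np.log(2 * sigma) + np.abs(mu - target) / sigma).mean()

# Perfect prediction with sigma = 0.5: nll = log(2 * 0.5) + 0 = 0
mu = target = np.zeros((4, 17, 2))
sigma = np.full((4, 17, 2), 0.5)
print(laplace_nll(mu, sigma, target))
```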
Modification
BC-breaking (Optional)
Use cases (Optional)
Checklist
Before PR:
After PR: