dynamic reweighting causes performance degradation in reproducing #4
Command (exactly the same as the one provided in the README, except the output_dir):

python3 -m torch.distributed.launch \
--nproc_per_node=8 \
--use_env \
main.py \
--pretrained params/detr-r50-pre-2stage-q64.pth \
--output_dir logs/base \
--dataset_file hico \
--hoi_path data/hico_20160224_det \
--num_obj_classes 80 \
--num_verb_classes 117 \
--backbone resnet50 \
--num_queries 64 \
--dec_layers_hopd 3 \
--dec_layers_interaction 3 \
--epochs 90 \
--lr_drop 60 \
--use_nms_filter
python3 -m torch.distributed.launch \
--nproc_per_node=8 \
--use_env \
main.py \
--pretrained logs/base/checkpoint_last.pth \
--output_dir logs/base \
--dataset_file hico \
--hoi_path data/hico_20160224_det \
--num_obj_classes 80 \
--num_verb_classes 117 \
--backbone resnet50 \
--num_queries 64 \
--dec_layers_hopd 3 \
--dec_layers_interaction 3 \
--epochs 10 \
--freeze_mode 1 \
--obj_reweight \
--verb_reweight \
--use_nms_filter
echo "base"
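The second command runs the decoupled fine-tuning stage with --freeze_mode 1, which presumably freezes most of the model while only the reweighted heads/decoder are updated. As a generic illustration (the module names below are hypothetical and do not match CDN's actual code), freezing for decoupled fine-tuning typically looks like this:

```python
import torch.nn as nn

def freeze_for_decoupled_finetune(model: nn.Module,
                                  trainable_prefixes=("interaction_decoder",)):
    """Freeze all parameters except those whose name starts with a trainable prefix.

    Illustrative sketch only: "interaction_decoder" is a hypothetical module
    name, not necessarily what CDN's --freeze_mode 1 actually keeps trainable.
    """
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(trainable_prefixes)
    # return the names that remain trainable, for sanity checking
    return [n for n, p in model.named_parameters() if p.requires_grad]
```

Only the parameters left with requires_grad=True receive gradient updates, so the first-stage representation is preserved while the fine-tuned part adapts to the reweighted loss.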
Corresponding result (log): 31.5 after the 1st training stage and 31.0 after the 2nd.
For the 2nd run:
Corresponding result (log): 31.2 after the 1st training stage and 30.4 after the 2nd.
This module is implemented by @zhangaixi2008, and he will reply to you later.
Aha, 31.5%, a new SOTA with CDN-S.
You can have a try with the following command.
@zhangaixi2008 It does not work for me either. The reweighting retraining leads to a performance drop.
CDN-S:
CDN-B:
I'm using the above script to re-run CDN-S fine-tuning.
@Haak0 Please upload your model here and let me have a look.
@zhangaixi2008 Hi, some of my checkpoints were overwritten. I am re-running the experiments.
Hi, I made a mistake in the previous README instructions for running the fine-tune process. Please use the script I provided above under this issue. As we claimed in the paper, we use a small learning rate to fine-tune the first model. Thus, we set lr to 5e-6 and lr_backbone to 5e-7 for bs=8, or lr to 1e-5 and lr_backbone to 1e-6 for bs=16. Please try again and let us see the results.
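The two learning-rate settings above follow the usual linear scaling rule (double the batch size, double the learning rate). A minimal sketch, assuming linear scaling from the bs=8 baseline stated in the comment:

```python
def scaled_lrs(batch_size, base_lr=5e-6, base_backbone_lr=5e-7, base_bs=8):
    """Linearly scale the fine-tune learning rates with batch size.

    At bs=8 this yields lr=5e-6, lr_backbone=5e-7; at bs=16 it yields
    lr=1e-5, lr_backbone=1e-6, matching the maintainer's two settings.
    """
    scale = batch_size / base_bs
    return base_lr * scale, base_backbone_lr * scale
```

For any other per-GPU batch size, the same rule gives a reasonable starting point, though small fine-tuning runs are often tolerant of modest deviations.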
@zhangaixi2008 Hi, I reproduced the fine-tune result following your script, and the result is reasonable.
BTW, what's the meaning of "vis_tag" in hico_eval.py?
For CDN-base, you have already surpassed the results reported in our paper (official MATLAB 31.78, Python 31.86). Good job ^^
The issue about the re-weight module seems to be solved. If there are any other issues, feel free to open a new one.
Hi,
thanks for sharing the code! Great work!
I have a small question in reproducing your result.
I ran the CDN-S model (ResNet-50, 3+3). It gave a result of about 31.5 or 31.2 (I ran it twice) after the first training stage (training the whole model with the regular loss). But after the second training stage (decoupled training) finished, the performance dropped to 31.0 and 30.4 for these two runs respectively. For full mAP, rare mAP, and non-rare mAP, this trick seems not to help.
So I wonder what could have gone wrong during my reproduction, or what the reason might be. I will paste the commands and logs below. Thanks. Nice day :3
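For context, the --obj_reweight / --verb_reweight flags apply class-frequency-based loss weights so that rare classes contribute more to the loss. As a generic sketch of such a scheme (an inverse-log-frequency weighting, not necessarily the exact formula CDN uses):

```python
import math

def log_freq_weights(class_counts, eps=1.0):
    """Inverse-log-frequency class weights: rarer classes get larger weights.

    A common frequency-based reweighting sketch, NOT necessarily the exact
    scheme behind CDN's --obj_reweight / --verb_reweight flags.
    """
    weights = [1.0 / math.log(c + eps + math.e) for c in class_counts]
    # normalize so the mean weight is 1, keeping the overall loss scale stable
    mean = sum(weights) / len(weights)
    return [w / mean for w in weights]
```

Schemes like this trade a little accuracy on frequent (non-rare) classes for gains on rare ones, which is why reweighted fine-tuning is sensitive to the learning rate used in the second stage.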