
Questions about code and dataset #7

Closed
SherlockHolmes221 opened this issue Mar 25, 2020 · 12 comments
Labels: V-COCO

@SherlockHolmes221

  1. PPDM achieves high mAP on HICO-DET and HOI-A, which are both large datasets. How does it perform on a smaller dataset like V-COCO?
  2. What is the effect of the affine_transform function when producing the ground-truth heatmaps?
    Thanks
@YueLiao
Owner

YueLiao commented Mar 25, 2020

  1. We have only evaluated PPDM with our rewritten evaluation script for V-COCO; Hourglass-104 achieved about 59 mAP.
  2. Because we follow Objects as Points and apply data augmentation during training, the affine_transform function is used to generate the corresponding GT heatmap for the transformed input, e.g. scale/crop augmentation, flipping, etc. (see the sketch below).
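
Not the repo's actual code, but a minimal numpy sketch of that idea, assuming a CenterNet-style ("Objects as Points") pipeline: the GT center is pushed through the same 2x3 affine matrix used to augment and downsample the input, then rendered as a Gaussian peak on the output heatmap. The function names, matrix values, and radius here are illustrative.

```python
import numpy as np

def affine_transform_point(pt, trans):
    """Apply a 2x3 affine matrix to a single (x, y) point."""
    homog = np.array([pt[0], pt[1], 1.0], dtype=np.float32)
    return trans @ homog  # -> transformed (x, y)

def draw_gaussian(heatmap, center, radius):
    """Splat a 2D Gaussian of the given radius onto `heatmap` at `center`."""
    diameter = 2 * radius + 1
    sigma = diameter / 6.0
    ax = np.arange(diameter) - radius
    gaussian = np.exp(-(ax[None, :] ** 2 + ax[:, None] ** 2) / (2 * sigma ** 2))

    cx, cy = int(center[0]), int(center[1])
    h, w = heatmap.shape
    left, right = min(cx, radius), min(w - cx, radius + 1)
    top, bottom = min(cy, radius), min(h - cy, radius + 1)
    if left + right <= 0 or top + bottom <= 0:
        return  # the center fell outside the output map after augmentation
    patch = heatmap[cy - top:cy + bottom, cx - left:cx + right]
    np.maximum(patch, gaussian[radius - top:radius + bottom,
                               radius - left:radius + right], out=patch)

# Hypothetical 2x3 matrix produced by the scale/crop/flip augmentation,
# already composed with the downsampling to the output resolution.
trans_output = np.array([[0.25, 0.0, 10.0],
                         [0.0, 0.25, 5.0]], dtype=np.float32)

heatmap = np.zeros((128, 128), dtype=np.float32)      # one class channel
center = affine_transform_point((200.0, 150.0), trans_output)
draw_gaussian(heatmap, center, radius=4)
```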

@SherlockHolmes221
Author

Is the V-COCO mAP comparable to numbers from the official evaluation script?

@YueLiao
Owner

YueLiao commented Mar 25, 2020

We are not sure. We haven't run the official evaluation yet; it is on the to-do list, but probably not until next month. You can try it yourself, since we have provided the corresponding training code for V-COCO.

@SherlockHolmes221
Author

Thanks, closing it

@SherlockHolmes221
Author

> 1. We have only evaluated PPDM with our rewritten evaluation script for V-COCO; Hourglass-104 achieved about 59 mAP.
> 2. Because we follow Objects as Points and apply data augmentation during training, the affine_transform function is used to generate the corresponding GT heatmap for the transformed input, e.g. scale/crop augmentation, flipping, etc.

Could you share the V-COCO (Hourglass-104) training script or hyperparameters? I tried but could not reach 59.

@YueLiao
Owner

YueLiao commented Aug 4, 2020

All hyperparameters are the same as for HICO-DET. Also note that we only evaluated with our non-official evaluation script, so the number cannot be directly compared to methods evaluated with the official script.

@SherlockHolmes221
Author

Thanks

@SherlockHolmes221
Author

> All hyperparameters are the same as for HICO-DET. Also note that we only evaluated with our non-official evaluation script, so the number cannot be directly compared to methods evaluated with the official script.

It is strange: I used the same hyperparameters (lr 3e-4, batch size 31) as in the HICO-DET training script to train on V-COCO, but I still cannot reach 59 with your vcoco_eval.py test script. I only get about 54.
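
For reference, a minimal sketch of the training setup discussed in this thread; the values are the ones quoted above, but the dictionary keys are hypothetical placeholders, not PPDM's actual option names.

```python
# Hypothetical summary of the V-COCO training setup discussed above.
# The keys below are illustrative placeholders, not PPDM's actual CLI options.
vcoco_train_config = {
    "backbone": "hourglass-104",  # backbone behind the reported ~59 result
    "learning_rate": 3e-4,        # same as the HICO-DET schedule
    "batch_size": 31,             # same as the HICO-DET schedule
    "dataset": "vcoco",
}
```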

@YueLiao
Owner

YueLiao commented Aug 14, 2020

> All hyperparameters are the same as for HICO-DET. Also note that we only evaluated with our non-official evaluation script, so the number cannot be directly compared to methods evaluated with the official script.

> It is strange: I used the same hyperparameters (lr 3e-4, batch size 31) as in the HICO-DET training script to train on V-COCO, but I still cannot reach 59 with your vcoco_eval.py test script. I only get about 54.

I tried to find the checkpoint, but all of them were deleted due to space limitations. The 59 was obtained with an older version of the code; you can try it based on this repo. I hope you can reproduce it.

@YueLiao
Owner

YueLiao commented Aug 14, 2020

> I tried to find the checkpoint, but all of them were deleted due to space limitations. The 59 was obtained with an older version of the code; you can try it based on this repo. I hope you can reproduce it.

Aha, I have just found the checkpoint (the only one preserved). OneDrive or BaiduDrive? However, I don't think the current training and test strategy fits the official V-COCO evaluation protocol. It may be better to apply a score threshold to remove non-interaction predictions, or to include the non-interaction images in training (a sketch of the thresholding idea follows).
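
A minimal sketch of the thresholding suggestion above, assuming predictions are a list of dicts with an interaction "score" field; the field names and the 0.1 cutoff are assumptions, not PPDM's actual output format.

```python
def filter_non_interactions(predictions, score_thresh=0.1):
    """Drop HOI predictions whose interaction score is below the threshold.

    `predictions` is assumed to be a list of dicts with a 'score' field;
    this is an illustrative format, not PPDM's actual output structure.
    """
    return [p for p in predictions if p["score"] >= score_thresh]

# Toy usage: low-scoring pairs are treated as non-interactions and removed
# before handing the results to the official V-COCO evaluation.
preds = [
    {"verb": "hold", "score": 0.83},
    {"verb": "hold", "score": 0.04},
]
kept = filter_non_interactions(preds)   # -> only the 0.83 prediction remains
```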

@SherlockHolmes221
Author

Thanks, either OneDrive or BaiduDrive is fine.

@YueLiao
Owner

YueLiao commented Aug 14, 2020

> Thanks, either OneDrive or BaiduDrive is fine.

OneDrive.

@YueLiao added the V-COCO label on Sep 18, 2020