
About the accuracy of tgif-qa #7

Closed
Wanan-ni opened this issue Jul 28, 2020 · 2 comments

@Wanan-ni

Hi,
I downloaded the code, features, and pre-trained models, but I got a Count accuracy of about 4.05/4.04/4.05 on the test set. When I train the model myself, I get 4.0639/4.0802/4.0599 on the Count test and 0.7476/0.7454/0.7449 on the Action test. I wonder whether the parameters in configs/tgif_qa_xx.yml need to be adjusted, or whether I need to change other settings.

@thaolmk54
Owner

Hi,

I'm sorry about the issue you ran into.
I'm not really sure what the problem is here. I have asked someone to independently reproduce the results from the pretrained models, and they are pretty much in line with what is reported in the paper.

As for the configuration files, I don't think you need any adjustments. The issue possibly comes from the feature joining step. Could you please remove the current features from your local storage, then re-download and join them all together again to see if that helps? Also, please note that if you train the network yourself, the results may vary a bit, as PyTorch is not deterministic across different GPUs.
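
As a quick sanity check after re-joining, something like the following can confirm the feature files open cleanly and have the expected shapes (a minimal sketch, assuming the features are stored as HDF5 files; the path below is a hypothetical placeholder for whichever appearance/motion file you downloaded):

```python
# Quick sanity check on a re-joined feature file.
# The path is a hypothetical placeholder; substitute your actual .h5 file.
import h5py

feat_path = "data/tgif-qa/appearance_feat.h5"

with h5py.File(feat_path, "r") as f:
    # Print every dataset and its shape; a truncated or badly joined file
    # usually shows up as a missing key, a wrong first dimension, or an
    # error when opening the file.
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```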
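On the non-determinism point, fixing the random seeds narrows the run-to-run variation when training yourself, although results can still differ across GPU models even then (a minimal sketch using standard PyTorch flags, not taken from this repo's training script):

```python
# Reduce (but not eliminate) run-to-run variation in PyTorch training.
import random
import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # cuDNN: trade some speed for deterministic kernels where available.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False

set_seed(42)
```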

@Wanan-ni
Author

OK, I'll try again. Thanks a lot :)
