
Code for calculating accuracy metric #27

Closed
erobic opened this issue Jun 26, 2018 · 4 comments

Comments


erobic commented Jun 26, 2018

The README file says that accuracy was calculated with the VQA metric. Is the code for that calculation in the repo? I am unsure how to use the upper bound score alongside the actual score to get to this metric.


jnhwkim commented Jun 26, 2018

GT-Vision-Lab/VQA#1
#18


erobic commented Jun 27, 2018

I see now that the hard-coded scores make sense.
Should the final accuracy then be score/upper_bound? (I am running it on a different dataset, and the upper bound is much lower than 100.)
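For context, the soft VQA score being discussed can be sketched as below. This is an assumption based on the simplified soft-target form of the VQA metric commonly used with these labels (each answer scores min(#matching human answers / 3, 1)), not necessarily this repo's exact implementation:

```python
def vqa_soft_score(pred_answer, human_answers):
    """Soft VQA accuracy for one question: a predicted answer that matches
    at least 3 of the 10 human annotations counts as fully correct.

    Sketch of the simplified soft-target metric, not this repo's exact code.
    """
    matches = sum(1 for a in human_answers if a == pred_answer)
    return min(matches / 3.0, 1.0)
```

Under this scheme the "upper bound" is the score obtained by always predicting the best label available in the answer vocabulary, so score/upper_bound measures accuracy relative to what the vocabulary can express at all.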


ZhuFengdaaa commented Jun 28, 2018

@erobic Perhaps the answer labels are so dispersed that, for many questions, even the most frequent answer occurs <= 2 times? You can try some of the text preprocessing tricks in compute_softscore.py.
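The preprocessing in question is answer-text normalization, which merges surface variants of the same answer and so concentrates the label counts. A hypothetical sketch of the usual tricks (lowercasing, punctuation stripping, article removal); this is not the exact code in compute_softscore.py:

```python
import re

# Articles commonly dropped during answer normalization.
ARTICLES = {"a", "an", "the"}

def normalize_answer(ans):
    """Normalize an answer string so variants like "The dog!" and "dog"
    count as the same label. Hypothetical sketch, not the repo's exact code."""
    ans = ans.lower().strip()
    ans = re.sub(r"[^\w\s]", "", ans)  # drop punctuation
    words = [w for w in ans.split() if w not in ARTICLES]
    return " ".join(words)
```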


erobic commented Jun 28, 2018

Right, it was a rough idea I wanted to try, but I am discarding it for now. Thanks for the help!

@erobic erobic closed this as completed Jun 28, 2018