answer normalization #5

Closed
elbaro opened this issue Nov 24, 2017 · 3 comments

elbaro commented Nov 24, 2017

The answer normalization code is different from the official VQA code.

  1. Can we assume the following?
     "Normalization is not needed, assuming that the human answers are already normalized."

  2. The official code removes the articles (a, the), but this code doesn't.
     Some answers do in fact contain "the".

Cyanogenoid (Owner) commented Nov 24, 2017

You are correct, my evaluation code is slightly different. After looking into this, it seems to me that their evaluation specification is badly designed.

As far as I can tell, in the official code the machine-given answers are normalized in several ways (including the removal of articles, as you mention), but the ground-truth answers they are compared against only have their punctuation normalized. I made the assumption you mention because the evaluation would clearly be wrong if the ground truth is not already normalized: if an unnormalized ground-truth answer would be changed by normalization when given as a machine answer, then no machine-given answer can ever be considered equal to that ground-truth answer. For example, if all ground-truth answers are "the cat" and the machine predicts "the cat", normalization changes only the machine answer, turning it into "cat"; this is unequal to "the cat", so the prediction receives a score of 0 in the official code.
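
To make the asymmetry concrete, here is a minimal Python sketch. It is not the official evaluation script (which also handles contractions, digit words and more); the function names are made up and the normalization is only illustrative, but it shows how normalizing only the machine answer makes a match impossible:

```python
import re

ARTICLES = {"a", "an", "the"}

def machine_normalize(answer):
    # Illustrative machine-answer normalization: lowercase, strip punctuation, drop articles.
    answer = re.sub(r"[^\w\s]", "", answer.lower())
    return " ".join(w for w in answer.split() if w not in ARTICLES)

def ground_truth_normalize(answer):
    # In the official script, the ground truth only has its punctuation cleaned up.
    return re.sub(r"[^\w\s]", "", answer.lower())

prediction = machine_normalize("the cat")                  # -> "cat"
ground_truths = [ground_truth_normalize("the cat")] * 10   # ten annotators all said "the cat"

# VQA-style accuracy: an answer scores min(number of agreeing humans / 3, 1).
matches = sum(gt == prediction for gt in ground_truths)
print(min(matches / 3, 1.0))  # 0.0 -- the prediction can never match "the cat"
```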

As you correctly point out, the ground-truth answers are indeed not normalized. This should mean that my code evaluates slightly more answers as correct (the ones where normalization would not have turned a correct answer into a wrong one), and it might be worth filing an issue in their repository. A quick search through the data suggests that about 1750 questions in the training set are affected (most of them have answers containing the word "one"). I would rather not introduce this "bug" into my code just to make it match theirs.
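
For anyone who wants to reproduce a count along these lines, here is a rough, hypothetical sketch. The file name and the assumed JSON layout of the VQA annotations are placeholders, and the normalization is only a crude approximation of the official script:

```python
import json

# Placeholder path; point this at the VQA training annotations
# (assumed layout: {"annotations": [{"answers": [{"answer": ...}, ...]}, ...]}).
ANNOTATION_FILE = "mscoco_train2014_annotations.json"

ARTICLES = {"a", "an", "the"}
NUMBER_WORDS = {"one": "1"}  # the official script maps many more number words to digits

def rough_normalize(answer):
    # Rough stand-in for the official machine-answer normalization.
    words = answer.lower().split()
    return " ".join(NUMBER_WORDS.get(w, w) for w in words if w not in ARTICLES)

with open(ANNOTATION_FILE) as fd:
    annotations = json.load(fd)["annotations"]

# Count questions where at least one human answer would be changed by normalization,
# i.e. questions whose ground truth cannot be matched exactly once only the
# machine answer is normalized.
affected = sum(
    any(ans["answer"] != rough_normalize(ans["answer"]) for ans in entry["answers"])
    for entry in annotations
)
print(affected, "training questions affected")
```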

elbaro commented Nov 30, 2017

I agree that the official method is broken. My guess is that the original paper still used the official evaluation method; otherwise its results could not be compared to other papers.

You might want to list in the README the differences from the paper (or any nontrivial assumptions).

elbaro closed this as completed Nov 30, 2017

Cyanogenoid (Owner) commented

I'll do that, thanks for flagging up these issues!
