inconsistency in predictions #33

Closed
nehaboob opened this issue Jul 2, 2018 · 4 comments
nehaboob commented Jul 2, 2018

We have trained QANet on our own question-and-answer data. But when we run it in demo mode for prediction, it gives different results for the same question.

Sometimes it picks the correct answer for the same question and sometimes it does not, but ideally it should pick the same answer, right? Any ideas what could be the reason for this behaviour of the trained model?

I have commented out the below section from the test/demo code:

```python
"""
if config.decay < 1.0:
    sess.run(model.assign_vars)
"""
```

localminimum (Owner) commented
Hi @nehaboob, as far as I know, there shouldn't be any nondeterminism at inference time. Could you explain the issue in more detail? Are the questions exactly identical? (Note that lower- and upper-case letters are treated differently in QANet.)


nehaboob commented Jul 9, 2018

Yes, the questions are exactly identical, but the same model is giving different predictions on different runs.

theSage21 (Contributor) commented

Hmm. Dropout could be the cause of this. I'm not able to find where dropout is forced to 0 for the demo.
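Dropout left active at inference is a classic source of this symptom: each forward pass samples a fresh random mask, so the identical input can score differently on every run. A minimal pure-Python sketch of the effect (not the repo's actual code; `dropout` and `predict` here are illustrative stand-ins), showing why forcing the keep probability to 1.0 for the demo restores determinism:

```python
import random

def dropout(xs, keep_prob):
    # Inverted dropout: randomly zero units, scale survivors by 1/keep_prob.
    if keep_prob >= 1.0:
        return list(xs)  # dropout disabled: deterministic pass-through
    return [x / keep_prob if random.random() < keep_prob else 0.0
            for x in xs]

def predict(features, keep_prob):
    # Toy "model": the score is just the sum of (possibly dropped) features.
    return sum(dropout(features, keep_prob))

features = [0.5, 1.0, 1.5, 2.0]

# Dropout still active (training behaviour): repeated runs on the
# identical input can produce different scores.
train_mode = {predict(features, keep_prob=0.5) for _ in range(20)}

# keep_prob forced to 1.0 (inference behaviour): every run agrees.
demo_mode = {predict(features, keep_prob=1.0) for _ in range(20)}
print(len(demo_mode))  # always exactly one distinct prediction
```

In TF 1.x-style code the usual fix is to make the dropout rate a placeholder and feed it 0 (or keep probability 1.0) in the demo session, rather than baking the training value into the graph.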

nehaboob (Author) commented

Sorry, I was making a mistake in graph initialisation, which was causing the different predictions.
This can be closed now.
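For anyone hitting the same symptom: a common way this happens is running the variable initializer in the demo session instead of (or after) restoring the trained checkpoint, so each run starts from fresh random weights. A toy pure-Python sketch of the effect (`CHECKPOINT`, `build_weights`, and `predict` are hypothetical stand-ins, not the repo's API):

```python
import random

# Stands in for the weights saved to disk after training.
CHECKPOINT = [0.2, -0.1, 0.4]

def build_weights(restore_from=None):
    # Restoring the saved weights gives every session the same model;
    # re-initialising instead gives each run fresh random weights, and
    # therefore different predictions for the same input.
    if restore_from is not None:
        return list(restore_from)
    return [random.gauss(0.0, 1.0) for _ in range(3)]

def predict(weights, features):
    # Toy "model": a single dot product.
    return sum(w * f for w, f in zip(weights, features))

features = [1.0, 2.0, 3.0]

# Ten "sessions" that all restore the checkpoint agree exactly.
restored = {predict(build_weights(CHECKPOINT), features) for _ in range(10)}
print(len(restored))  # one distinct prediction: the restored model is fixed
```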
