inconsistency in predictions #33
Comments
Hi @nehaboob, as far as I'm concerned, there shouldn't be any randomness at inference time. Could you explain the issue in more detail? Are the questions exactly identical? (Note that lower- and upper-case letters are treated differently in QANet.)
Yes, the questions are exactly identical, but the same model gives different predictions on different runs.
Hmm. Dropout could be the cause of this. I'm not able to find where dropout is forced to 0 for the demo.
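For context on the dropout hypothesis: if dropout stays active at inference, each forward pass randomly zeroes different units, so the same question can yield different answers. A minimal NumPy sketch (illustrative only, not QANet's actual code) showing how a `training` flag makes inference deterministic:

```python
import numpy as np

def dropout(x, rate, training, rng):
    """Inverted dropout: active only during training."""
    if not training or rate == 0.0:
        return x  # identity at inference -> deterministic outputs
    mask = rng.random(x.shape) >= rate  # randomly keep units
    return x * mask / (1.0 - rate)     # rescale to preserve expectation

x = np.ones(5)
rng = np.random.default_rng(0)

# With training=False, repeated calls give identical results.
out1 = dropout(x, 0.5, training=False, rng=rng)
out2 = dropout(x, 0.5, training=False, rng=rng)
assert np.array_equal(out1, out2)
```

If the demo path leaves `training=True` (or feeds a nonzero dropout rate), the mask line above is the source of run-to-run variation.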
Sorry, I was making a mistake in graph initialisation, which was causing the different predictions.
We have trained QANet on our own question-and-answer data. But when we run it in demo mode for prediction, it gives different results for the same question.
Sometimes it picks the correct answer for the same question and sometimes it does not, but ideally it should pick the same answer every time, right? Any ideas what could cause this behaviour in a trained model?
I have commented out the section below in the test/demo code:

```python
if config.decay < 1.0:
    sess.run(model.assign_vars)
```
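For reference, the commented-out `assign_vars` step is how QANet copies exponential-moving-average (EMA) shadow weights into the live variables before evaluation when `decay < 1.0`; skipping it means evaluating with the raw training weights. A minimal NumPy sketch of the EMA idea (variable names are illustrative, not the repo's API):

```python
import numpy as np

def ema_update(shadow, weights, decay):
    """shadow <- decay * shadow + (1 - decay) * weights"""
    return decay * shadow + (1.0 - decay) * weights

weights = np.array([1.0, 2.0])
shadow = weights.copy()  # shadow copy tracked alongside training

for step in range(100):
    weights = weights + 0.01             # pretend optimizer update
    shadow = ema_update(shadow, weights, decay=0.999)

# At evaluation time, assign the smoother shadow values into the
# model (this is what `sess.run(model.assign_vars)` accomplishes).
eval_weights = shadow.copy()
```

The shadow values lag the raw weights, giving a smoothed snapshot that typically evaluates more stably.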