
A whole solution for question-answering? #5

Closed
guotong1988 opened this issue Mar 12, 2018 · 4 comments

guotong1988 commented Mar 12, 2018

I have not found the natural language questions in the datasets.
So I think I should first convert the natural-language questions into a logical form like (e1, r, ?).
Am I right? Thank you very much!

shehzaadzd (Owner) commented:

Hi,
We used the WikiMovies dataset, where the questions are of the form (e1, r_text, ?); r_text is the question text that acts as the relation.
E.g.: (one crazy summer, who starred in PLACE_HOLDER, ?)

Hope this helps!
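As a rough sketch, a WikiMovies question can be mapped to the (e1, r_text, ?) form along these lines (the `Query` type, field names, and `to_query` helper here are illustrative assumptions, not part of the MINERVA code):

```python
from typing import NamedTuple


class Query(NamedTuple):
    """Illustrative container for a (e1, r_text, ?) query."""
    e1: str       # source entity mentioned in the question
    r_text: str   # question text with the entity replaced by PLACE_HOLDER
    answer: str   # "?" stands in for the unknown target entity


def to_query(question: str, entity: str) -> Query:
    # Replace the entity mention with a placeholder so the remaining
    # text serves as the query relation.
    r_text = question.replace(entity, "PLACE_HOLDER")
    return Query(e1=entity, r_text=r_text, answer="?")


q = to_query("who starred in one crazy summer", "one crazy summer")
# q.r_text is now "who starred in PLACE_HOLDER"
```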

guotong1988 (Author) commented:

Thank you for your quick reply!

guotong1988 commented Mar 12, 2018

I have not found the WikiMovies dataset in this project.
So should I do it myself?

rajarshd reopened this Mar 12, 2018
shehzaadzd (Owner) commented:

Hi,
The code for running MINERVA on WikiMovies is not yet public. The basic mechanism was to simply sum up the embeddings of each word in the query relation. This naive approach worked for WikiMovies largely because its questions, although in natural language, are very structured. (You can check out the dataset to get a sense of the question templates and types.)
I hope this gives you some idea of how to use our code for your problem!
Do ask me if you need any help :)
-S
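
The summing mechanism described above can be sketched roughly as follows (the vocabulary and embedding table here are toy placeholders; in the actual system the embeddings would be learned during training):

```python
import numpy as np

# Toy vocabulary and embedding table for illustration only; a real model
# would learn these embeddings rather than sample them randomly.
EMBED_DIM = 4
rng = np.random.default_rng(0)
vocab = {"who": 0, "starred": 1, "in": 2, "PLACE_HOLDER": 3}
embeddings = rng.standard_normal((len(vocab), EMBED_DIM))


def relation_embedding(r_text: str) -> np.ndarray:
    """Embed a textual query relation by summing its word embeddings."""
    tokens = r_text.split()
    return np.sum([embeddings[vocab[t]] for t in tokens], axis=0)


vec = relation_embedding("who starred in PLACE_HOLDER")
# vec is a single EMBED_DIM-dimensional vector representing the relation
```

This bag-of-words treatment ignores word order, which is why it only works well when the questions follow a small set of fixed templates.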
