
Bad predictions of the models #26

Closed
404akhan opened this issue Mar 25, 2019 · 1 comment
@404akhan

Try inputting the sentence "trump is good, but obama is bad", first with trump as the target, and then a second time with obama as the target.
None of the models achieve the results they claim, because they output the same sentiment for both inputs.
(Sorry if I tested this incorrectly, but I don't think I did.)
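For reference, a minimal sketch of the test being described. `predict_sentiment(sentence, aspect)` is a hypothetical stand-in for the repository's actual inference code, not part of its API:

```python
def predict_sentiment(sentence: str, aspect: str) -> str:
    # Hypothetical stub: replace with the real model's forward pass
    # (tokenize sentence + aspect, run the trained model, argmax over logits,
    # map the index to "negative" / "neutral" / "positive").
    raise NotImplementedError

sentence = "trump is good, but obama is bad"

# Same sentence, two different targets: a target-aware model should flip its
# prediction between the two calls, but this issue reports it does not.
for aspect in ("trump", "obama"):
    print(aspect, "->", predict_sentiment(sentence, aspect))
```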

@GeneZC
Collaborator

GeneZC commented Mar 25, 2019

I think you are doing the right thing; you can also refer to issue #25.
Here are two reasons why most models suffer from the problem you mention:
First, you are testing the models on sentences from a domain they never saw in the training data, which covers only the laptop and restaurant domains.
Second, models occasionally make wrong predictions on such samples because they do not consider the relationships between different aspects within one sentence.
In my view, these two flaws are the directions we should concentrate on in later work.
