How to analyse sentiment according to the aspect term using infer_example.py? #25
Comments
Maybe you should share how you combined them, so that we can take a guess at the cause.
Attached is the file I used. Thanks.
Generally, it's because both 'good' and 'bad' could describe 'battery', and most models concentrate only on semantic correlations, so they misidentify the correct modifier.
Thanks. I found that if I input 'good laptop, bad battery', the sentiment predictions are correct for both aspect terms, 'laptop' and 'battery'. Is it true that the context words after the aspect word carry more weight in the sentiment prediction?
Actually, prior works mainly focus on how to properly obtain an aspect-specific context representation, so that we can easily judge which parts of the context are more important than the rest.
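As a rough illustration of that idea, here is a minimal sketch (toy NumPy code, not from this repo; all names and data are hypothetical) of aspect-conditioned attention over context words, in the spirit of aspect-specific context representations used by models like AOA:

```python
import numpy as np

np.random.seed(0)

def aspect_attention(context_vecs, aspect_vec):
    """Weight context words by their dot-product similarity to the aspect,
    then return the attention-weighted context representation."""
    scores = context_vecs @ aspect_vec               # one score per context word
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over context words
    return weights @ context_vecs                    # weighted sum of word vectors

# Toy embeddings: 5 context words, 4-dimensional vectors
context = np.random.randn(5, 4)
aspect = np.random.randn(4)

rep = aspect_attention(context, aspect)
print(rep.shape)  # → (4,)
```

With different aspect vectors for the same context, the attention weights shift toward different context words, which is how the same sentence can yield different sentiment for 'laptop' versus 'battery'.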
OK, thanks for your answers.
Hi,
In the file infer_example.py, I have the following code
```python
def evaluate(self, raw_texts):
    ......
    aspect_seqs = [self.tokenizer.text_to_sequence('battery')] * len(raw_texts)
    ......
```

and I call it with:

```python
t_probs = inf.evaluate(['laptop is good but battery is bad'])
print(t_probs.argmax(axis=-1) - 1)
```
Why is the sentiment score 1?
I tested the sentences 'laptop is good' and 'battery is bad', and the outputs are 1 and -1 respectively. But when I combine the sentences, the output is always 1 no matter which aspect term I use.
The model I am using is AOA, trained on laptop reviews with an accuracy of 0.7304.
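A note on interpreting `t_probs.argmax(axis=-1) - 1`: assuming the usual three-class ABSA encoding (class index 0 = negative, 1 = neutral, 2 = positive), subtracting 1 maps the predicted class index to the sentiment labels {-1, 0, 1}. A standalone sketch of that mapping, with made-up probabilities:

```python
import numpy as np

# Hypothetical class probabilities for one input sentence;
# columns are [negative, neutral, positive]
t_probs = np.array([[0.1, 0.2, 0.7]])

# argmax picks the most probable class index (here 2, "positive"),
# and subtracting 1 maps indices {0, 1, 2} to labels {-1, 0, 1}
labels = t_probs.argmax(axis=-1) - 1
print(labels)  # → [1]
```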