Try inputting the sentence "trump is good, but obama is bad", first with "trump" as the target, then with "obama" as the target.
None of the models achieve the result they claim to achieve: they output the same sentiment for both targets.
(Sorry if I tested this wrongly, but I don't think I did.)
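The behaviour a target-aware model should show on this sentence can be sketched with a toy clause-splitting baseline (purely illustrative, not any model from this repo): split on the contrastive "but" and score only the clause that mentions the target, so the two targets get opposite labels.

```python
# Toy target-aware sentiment baseline (illustration only, not a repo model).
# Assumes a tiny hand-made sentiment lexicon and simple "but"-clause splitting.
SENTIMENT = {"good": "positive", "bad": "negative"}

def predict(sentence, aspect):
    # Split on the contrastive conjunction and score only the clause
    # containing the target aspect, so each target gets its own label.
    for clause in sentence.lower().replace(",", "").split(" but "):
        words = clause.split()
        if aspect in words:
            for w in words:
                if w in SENTIMENT:
                    return SENTIMENT[w]
    return "neutral"

print(predict("trump is good, but obama is bad", "trump"))  # positive
print(predict("trump is good, but obama is bad", "obama"))  # negative
```

A model that ignores the target collapses both cases to a single prediction, which is exactly the failure reported above.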
I think you are doing the right thing, and you could refer to issue #25.
Here are two reasons why most models suffer from the problem you mentioned:
Firstly, you are testing the models on sentences from a domain they never encountered in the training data (i.e., the laptop or restaurant review domains);
Secondly, models occasionally make wrong predictions on such samples because they don't consider the relationships between different aspects within one sentence.
From my perspective, these two flaws are the directions we should concentrate on in later work.