
Weightage given to smileys (when negated) #28

Open
sourcedexter opened this issue Apr 4, 2017 · 6 comments


@sourcedexter

So, Vader gives 1.0 positive for " :) " and 1.0 negative for " :( " and with that I know that the smileys are being detected correctly. However, it fails to identify the polarity correctly for this particular case:

sentence = "nothing for redheads :("
polarity got: {'neg': 0.0, 'neu': 0.555, 'pos': 0.445, 'compound': 0.3412}

It is surprising that this sentence is tipping towards the positive polarity while the negative remains at 0.0.
Now if I remove the smiley and find the polarity, this is what I get:

sentence = "nothing for redheads"
polarity got: {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}

And this result is absolutely correct; it is a neutral statement. So why is a negative lexicon entry tipping the sentence towards a positive outcome? I wanted to know whether I can adjust the weight given to smileys to reduce such errors. Since VADER is capable of handling many tricky sentences, this should not have been an issue, right? Or is it just an outlier condition?

@Hiestaa

Hiestaa commented May 9, 2017

The problem comes from the word "nothing". It is treated as a negating word, and there is a rule that 'flips' the valence of a token when such a word is found shortly before it.

If you try the same case with a slightly longer sentence, VADER won't make the link between the negating word and the smiley, and you will get the expected weightage:

sentence: nothing at all for redheads :(
polarity got: {'neg': 0.367, 'neu': 0.633, 'pos': 0.0, 'compound': -0.44}
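The distance effect can be sketched in pure Python. This is a toy model of the look-back rule, not VADER's real implementation: the ':(' valence of -2.0 is an assumed value for illustration, and the -0.74 multiplier mirrors the N_SCALAR constant in vaderSentiment's source.

```python
# Toy model of the negation look-back rule described above (not VADER's real
# implementation). The ':(' valence of -2.0 is assumed for illustration;
# -0.74 mirrors the N_SCALAR constant in vaderSentiment's source.
NEGATORS = {"nothing", "not", "never", "no"}
N_SCALAR = -0.74  # multiplier applied to a token's valence when it is negated

def negated_valence(tokens, i, valence, lookback=3):
    """Flip `valence` if a negator occurs within `lookback` tokens before index i."""
    window = tokens[max(0, i - lookback):i]
    return valence * N_SCALAR if any(t in NEGATORS for t in window) else valence

short = "nothing for redheads :(".split()
longer = "nothing at all for redheads :(".split()

# "nothing" is within 3 tokens of ":(", so the smiley's negative valence flips:
print(negated_valence(short, short.index(":("), -2.0))   # 1.48 (now positive)
# In the longer sentence "nothing" falls outside the window, so ":(" stays negative:
print(negated_valence(longer, longer.index(":("), -2.0)) # -2.0
```

This also shows why "nothing at all for redheads :(" scores negative: the two extra tokens push the negator out of the look-back window.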

@ltbringer

Would it be better if negating words didn't negate emoticons? Unlike other words in a sentence, emoticons carry a meaning that summarises the emotion felt while writing the text, so perhaps they should be scored as if they were a separate sentence.

example: "nothing for redheads :(" -> "nothing for redheads. sad."

I don't mean that the emoticon should be swapped for a word, only that the scoring could be done this way.
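One way to prototype that idea is to strip emoticons out before scoring and score them on their own, so sentence-level negation can't reach them. Everything below is illustrative: the regex and the valence table are my assumptions, not anything from VADER itself.

```python
import re

# Illustrative preprocessing for the idea above: pull emoticons out of the
# sentence and score them separately. The regex and valences are assumptions,
# not taken from VADER.
EMOTICON_RE = re.compile(r"[:;][-~]?[()DPp]")
EMOTICON_VALENCE = {":(": -2.0, ":)": 2.0}  # assumed values

def split_emoticons(text):
    """Return (text with emoticons removed, list of emoticons found)."""
    found = EMOTICON_RE.findall(text)
    stripped = EMOTICON_RE.sub("", text).strip()
    return stripped, found

words, emoticons = split_emoticons("nothing for redheads :(")
print(words)                                                 # nothing for redheads
print(sum(EMOTICON_VALENCE.get(e, 0.0) for e in emoticons))  # -2.0
```

The word part would then go through the normal scorer, and the emoticon score would be combined afterwards, untouched by negation.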

@Hiestaa

Hiestaa commented Sep 14, 2017

This makes sense for sure, but to say whether this is better would require implementing the rule, measuring the accuracy of the new algorithm, and comparing it to the current one. The repo contains quite a lot of human-scored sentences, so that shouldn't be an issue if you want to spend the time to look into this idea 😃

@ltbringer

@Hiestaa I do want to try out a few things, but I don't yet understand how to go about the values in the vader_lexicon.txt.

Going through the source code, I inferred that only the word and (it seems) the valence are taken into consideration when scoring sentences. So if I need to add more words to the list, do the other two values not matter?

@Hiestaa

Hiestaa commented Sep 14, 2017

@CodeWingX The README provides a description of the values in the lexicon:

We collected intensity ratings on each of our candidate lexical features from ten independent human raters (for a total of 90,000+ ratings). Features were rated on a scale from "[–4] Extremely Negative" to "[4] Extremely Positive", with allowance for "[0] Neutral (or Neither, N/A)".

We kept every lexical feature that had a non-zero mean rating, and whose standard deviation was less than 2.5 as determined by the aggregate of ten independent raters.

I assume based on this information that vader_lexicon.txt holds the following format:

Token      Valence  Std. Dev.  Human Ratings
(:<        -0.2     2.03961    [-2, -3, 1, 1, 2, -1, 2, 1, -4, 1]
amorphous  -0.2     0.4        [0, 0, 0, 0, 0, 0, -1, 0, 0, -1]

If you want to follow the same rigorous process as the author of the study, you should find 10 independent humans to evaluate each word you want to add to the lexicon, make sure the standard deviation doesn't exceed 2.5, and take the average rating for the valence. This will keep the file consistent.
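That vetting process can be sketched directly. One hint about the file's statistic: the 2.03961 shown for `(:<` above is exactly the population standard deviation of its ten ratings, which suggests (but doesn't prove) that's what the file stores.

```python
from statistics import mean, pstdev

# Sketch of vetting a new lexicon entry the way the README describes: the
# valence is the mean of ten human ratings, and the entry is kept only if
# the ratings' standard deviation is below 2.5. Population stddev is used
# because it reproduces the 2.03961 shown for "(:<" above.
def lexicon_entry(token, ratings, max_sd=2.5):
    sd = pstdev(ratings)
    if sd >= max_sd:
        return None  # raters disagree too much; drop the feature
    return f"{token}\t{round(mean(ratings), 5)}\t{round(sd, 5)}\t{ratings}"

# Reproduces the "(:<" row quoted above:
print(lexicon_entry("(:<", [-2, -3, 1, 1, 2, -1, 2, 1, -4, 1]))
```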

Now, if you just want to make the algorithm work on these new cases quickly, the standard deviation and human ratings are indeed not necessary; only the token and the valence are used.
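So a loader only needs the first two tab-separated columns. A minimal sketch, with the file layout assumed from the rows quoted above (with the real file you would read vader_lexicon.txt instead of the inline sample):

```python
# Minimal parse of the lexicon's tab-separated layout, keeping only the two
# columns the scorer uses. The sample lines are the rows quoted above.
sample = (
    "(:<\t-0.2\t2.03961\t[-2, -3, 1, 1, 2, -1, 2, 1, -4, 1]\n"
    "amorphous\t-0.2\t0.4\t[0, 0, 0, 0, 0, 0, -1, 0, 0, -1]\n"
)

lexicon = {}
for line in sample.splitlines():
    token, valence = line.split("\t")[:2]
    lexicon[token] = float(valence)

print(lexicon)  # {'(:<': -0.2, 'amorphous': -0.2}
```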

@cjhutto cjhutto changed the title Weightage given to smileys. Weightage given to smileys (when negated) Apr 19, 2018
@cjhutto
Owner

cjhutto commented Mar 20, 2020

Is there a study that shows empirical effects of emoticons and emojis in negated sentences? I've seen papers showing emojis/emoticons as sentence negations themselves... e.g., "I love my job 👎 ". But I haven't (yet) found anything describing a negation effect on the emoji/emoticon... e.g., your example "nothing for redheads :(". My intuition is that the general rule (in most cases) is that sentence negations (not, isn't, nothing, ain't) don't affect emoji/emoticon, and that in most cases the emoji/emoticon is what people actually key in on for judging overall sentiment.
