Weightage given to smileys (when negated) #28
The problem comes from the negating word "nothing". If you try the same case with a slightly longer sentence, VADER won't make the link between the negating word and the smiley, and you will get the expected weightage. Sentence: "nothing at all for redheads :("
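To see the contrast, here is a minimal sketch using the vaderSentiment package (the reference implementation looks back up to three tokens for a negator, which is why the extra words break the link):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()

# "nothing" sits within three tokens of ":(", so the negation rule
# flips the smiley's valence and the sentence scores positive.
print(analyzer.polarity_scores("nothing for redheads :("))

# With "at all" inserted, "nothing" falls outside the three-token
# lookback and the smiley keeps its negative valence.
print(analyzer.polarity_scores("nothing at all for redheads :("))
```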
Would it be better if negating words didn't negate emoticons? Emoticons, unlike other words in a sentence, carry a meaning that summarises the emotion felt while writing the text, so they could be treated as a separate sentence. Example: "nothing for redheads :(" -> "nothing for redheads. sad." I don't mean to say the emoticon should be swapped with a word, only that the scoring should be done this way.
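One rough way to prototype this idea would be to peel the emoticons off and score them separately. In the sketch below, the emoticon set is a tiny hypothetical subset (the real lexicon covers far more), and averaging the two compound scores is just one of several possible ways to recombine them:

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

# Hypothetical subset, purely for illustration.
EMOTICONS = {":)", ":(", ":D", ":/", ":'(", ":P"}

def split_emoticons(sentence):
    """Split a sentence into (words, emoticons) so negators cannot reach the emoticons."""
    tokens = sentence.split()
    words = " ".join(t for t in tokens if t not in EMOTICONS)
    emoticons = " ".join(t for t in tokens if t in EMOTICONS)
    return words, emoticons

analyzer = SentimentIntensityAnalyzer()
words, emoticons = split_emoticons("nothing for redheads :(")
parts = [p for p in (words, emoticons) if p]
compounds = [analyzer.polarity_scores(p)["compound"] for p in parts]
print(sum(compounds) / len(compounds))  # emoticon scored out of the negator's reach
```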
This makes sense for sure. To tell whether it is actually better, though, we'd need to implement the rule, measure the accuracy of the new algorithm, and compare it to the current one. The repo contains quite a lot of human-scored sentences, so that shouldn't be an issue if you want to spend the time to look into this idea 😃
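A sketch of such a comparison, assuming a tab-delimited ground-truth file of id, mean human rating, and text (the repo's additional resources include files roughly in this shape; the file name and the -4..4 rating scale below are assumptions):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

def mean_absolute_error(path, score_fn):
    """Compare compound scores against human ratings rescaled to [-1, 1]."""
    errors = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            _id, human, text = line.rstrip("\n").split("\t", 2)
            errors.append(abs(float(human) / 4.0 - score_fn(text)))
    return sum(errors) / len(errors)

analyzer = SentimentIntensityAnalyzer()
baseline = mean_absolute_error("tweets_GroundTruth.txt",
                               lambda t: analyzer.polarity_scores(t)["compound"])
# Run the same metric with the modified negation rule and compare the two errors.
```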
@Hiestaa I do want to try out a few things, but I don't yet understand the values in vader_lexicon.txt. Going through the source code, I inferred that only the word and (what I assume is) the valence are taken into consideration when scoring sentences. So if I need to add more words to the list, do the other two values not matter?
@CodeWingX The README provides a description of the values in the lexicon: vader_lexicon.txt is tab-delimited, with each line holding the token, its mean sentiment rating (valence), the standard deviation across raters, and the list of raw human sentiment ratings.
I assume based on this information that vader_lexicon.txt holds the following format: `TOKEN<TAB>MEAN-SENTIMENT-RATING<TAB>STANDARD-DEVIATION<TAB>[RAW-HUMAN-SENTIMENT-RATINGS]`.
If you want to follow the same rigorous process as the authors of the study, you should have 10 independent humans rate each word you want to add to the lexicon, make sure the standard deviation doesn't exceed 2.5, and take the average rating as the valence. This will keep the file consistent. If you just want to make the algorithm work on these new cases quickly, the standard deviation and raw human ratings are indeed not necessary: only the token and the valence are used.
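As a sanity check, the loop below is a sketch of what vaderSentiment's lexicon loader does, keeping only the first two tab-separated fields of each line:

```python
# Only the token and the mean valence ever reach the scoring code;
# the standard deviation and raw ratings columns are ignored at runtime.
lexicon = {}
with open("vader_lexicon.txt", encoding="utf-8") as f:
    for line in f:
        if not line.strip():
            continue
        token, valence = line.strip().split("\t")[0:2]
        lexicon[token] = float(valence)
```

So a new entry only strictly needs those two columns to affect scoring, though filling in all four keeps the file consistent.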
Is there a study that shows empirical effects of emoticons and emojis in negated sentences? I've seen papers showing emojis/emoticons acting as sentence negations themselves, e.g., "I love my job 👎". But I haven't (yet) found anything describing a negation effect on the emoji/emoticon itself, e.g., your example "nothing for redheads :(". My intuition is that, as a general rule, sentence negators (not, isn't, nothing, ain't) don't affect the emoji/emoticon, and that in most cases the emoji/emoticon is what people actually key in on when judging overall sentiment.
So, VADER gives 1.0 positive for ":)" and 1.0 negative for ":(", which tells me the smileys are being detected correctly. However, it fails to identify the polarity correctly for this particular case:
sentence = "nothing for redheads :("
polarity got: {'neg': 0.0, 'neu': 0.555, 'pos': 0.445, 'compound': 0.3412}
It is surprising that this sentence tips towards positive polarity while the negative score stays at 0.0.
Now if I remove the smiley and find the polarity, this is what I get:
sentence = "nothing for redheads"
polarity got: {'neg': 0.0, 'neu': 1.0, 'pos': 0.0, 'compound': 0.0}
And this result is absolutely correct: it is a neutral statement. So why does a negative lexicon token tip the sentence towards a positive outcome? I wanted to know if I can manipulate the weight of smileys to reduce such errors. Since VADER is capable of handling many tricky sentences, this should not have been an issue, right? Or is it just an outlier condition?
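A minimal script reproducing these observations (assuming the vaderSentiment PyPI package):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
for sentence in (":)", ":(", "nothing for redheads :(", "nothing for redheads"):
    print(repr(sentence), analyzer.polarity_scores(sentence))
```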