Twitter-Hate-Speech-Combat

Twitter Hate Speech Combat Using NLP and Machine Learning

Description

Using NLP and ML, build a model to identify hate speech (racist or sexist tweets) on Twitter.

Problem Statement

Twitter is the biggest platform where anybody and everybody can have their views heard. Some of these voices spread hate and negativity, and Twitter is wary of its platform being used as a medium to spread hate. You are a data scientist at Twitter, and you will help Twitter identify the tweets containing hate speech and remove them from the platform. You will use NLP techniques, perform cleanup specific to tweet data, and build a robust model.

Domain: Social Media

Analysis to be done: Clean up the tweets and build a classification model using NLP techniques, cleanup specific to tweet data, and regularization and hyperparameter tuning with stratified k-fold cross-validation to get the best model.

Content:

  - id: identifier number of the tweet
  - label: 0 (non-hate) / 1 (hate)
  - tweet: the text of the tweet

Tasks (each group of steps below is followed by a short illustrative sketch):

Load the data

  1. Load the tweets file using the read_csv function from the pandas package.
  2. Get the tweets into a list for easy text cleanup and manipulation.
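
A minimal sketch of the loading steps, assuming the columns are named id, label, and tweet as in the data dictionary above (the file name is an assumption):

```python
import pandas as pd

# Assumed file name; adjust to wherever the dataset lives.
df = pd.read_csv("TwitterHate.csv")

# Pull the raw tweet text into a plain Python list for cleanup.
tweets = df["tweet"].tolist()
```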

Clean up the tweets

  1. Normalize the casing.
  2. Using regular expressions, remove user handles; these begin with '@'.
  3. Using regular expressions, remove URLs.
  4. Using TweetTokenizer from NLTK, tokenize the tweets into individual terms.
  5. Remove stop words.
  6. Remove redundant terms like 'amp', 'rt', etc.
  7. Remove '#' symbols from the tweets while retaining the terms.
  8. As extra cleanup, remove terms with a length of 1.
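
One way to implement the cleanup pipeline (a sketch; the exact regular expressions and the redundant-term set are assumptions):

```python
import re
from nltk.corpus import stopwords  # requires nltk.download("stopwords") once
from nltk.tokenize import TweetTokenizer

tokenizer = TweetTokenizer()
stop_words = set(stopwords.words("english"))
redundant = {"amp", "rt"}  # assumed set of redundant terms

def clean_tweet(text):
    text = text.lower()                           # normalize the casing
    text = re.sub(r"@\w+", "", text)              # remove user handles
    text = re.sub(r"http\S+|www\.\S+", "", text)  # remove URLs
    tokens = tokenizer.tokenize(text)
    tokens = [t.lstrip("#") for t in tokens]      # drop '#' but keep the term
    return [t for t in tokens
            if t not in stop_words
            and t not in redundant
            and len(t) > 1]                       # drop single-character terms

cleaned = [clean_tweet(t) for t in tweets]
```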

Check out the top terms in the tweets

  1. First, get all the tokenized terms into one large list.
  2. Use a Counter to find the 10 most common terms.
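
A sketch using Counter from the standard library, continuing from the cleaned token lists above:

```python
from collections import Counter

# Flatten the per-tweet token lists into one large list of terms.
all_terms = [term for tokens in cleaned for term in tokens]

# The 10 most common terms across all tweets.
print(Counter(all_terms).most_common(10))
```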

Format the data for predictive modeling

  1. Join the tokens back into strings; this is required for the vectorizers.
  2. Assign X and y.
  3. Perform a train/test split using sklearn.
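
A sketch of the formatting steps; the test size, random state, and use of stratification are assumptions:

```python
from sklearn.model_selection import train_test_split

# Vectorizers expect whole strings, so join the tokens back together.
X = [" ".join(tokens) for tokens in cleaned]
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y  # assumed split settings
)
```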

Vectorize with TF-IDF

  1. Use TF-IDF values for the terms as features to get into a vector space model.
  2. Import the TF-IDF vectorizer from sklearn.
  3. Instantiate it with a maximum of 5000 terms in your vocabulary.
  4. Fit and apply it on the train set.
  5. Apply it on the test set.
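
A sketch of the vectorization steps using sklearn's TfidfVectorizer:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Cap the vocabulary at 5000 terms, per the task.
vectorizer = TfidfVectorizer(max_features=5000)

# Fit on the train set only, then apply the learned vocabulary to the test set.
X_train_tfidf = vectorizer.fit_transform(X_train)
X_test_tfidf = vectorizer.transform(X_test)
```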

Model building: ordinary logistic regression

  1. Instantiate LogisticRegression from sklearn with default parameters.
  2. Fit it on the train data.
  3. Make predictions for the train and the test set.
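
A sketch of the baseline model, continuing from the TF-IDF matrices above:

```python
from sklearn.linear_model import LogisticRegression

# Plain logistic regression with default parameters.
clf = LogisticRegression()
clf.fit(X_train_tfidf, y_train)

y_train_pred = clf.predict(X_train_tfidf)
y_test_pred = clf.predict(X_test_tfidf)
```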

Model evaluation: accuracy, recall, and F1 score

  1. Report the accuracy on the train set.
  2. Report the recall on the train set: is it decent, high, or low?
  3. Get the F1 score on the train set.
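
A sketch of the train-set evaluation; with 0/1 labels, sklearn's recall_score and f1_score report the hate class (label 1) by default:

```python
from sklearn.metrics import accuracy_score, f1_score, recall_score

print("Accuracy:", accuracy_score(y_train, y_train_pred))
print("Recall:  ", recall_score(y_train, y_train_pred))
print("F1:      ", f1_score(y_train, y_train_pred))
```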

Adjust for class imbalance

  1. It looks like you need to adjust for the class imbalance, as the model seems to focus on the 0s.
  2. Adjust the class weight appropriately in the LogisticRegression model.
  3. Train the model again on the train set with the adjustment.
  4. Evaluate the predictions on the train set: accuracy, recall, and F1 score.
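
One way to make the adjustment (a sketch; class_weight="balanced" is sklearn's standard mechanism, and treating it as the intended adjustment here is an assumption):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, recall_score

# Weight classes inversely to their frequency so the minority
# (hate) class is not drowned out by the 0s.
clf_balanced = LogisticRegression(class_weight="balanced")
clf_balanced.fit(X_train_tfidf, y_train)

y_train_pred = clf_balanced.predict(X_train_tfidf)
print("Accuracy:", accuracy_score(y_train, y_train_pred))
print("Recall:  ", recall_score(y_train, y_train_pred))
print("F1:      ", f1_score(y_train, y_train_pred))
```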

Regularization and hyperparameter tuning

  1. Import GridSearchCV and StratifiedKFold (stratified because of the class imbalance).
  2. Provide a parameter grid of choices for the 'C' and 'penalty' parameters.
  3. Use a balanced class weight while instantiating the logistic regression.
  4. Choose 'recall' as the scoring metric, so cross-validation finds the parameters with the best recall.
  5. Choose a stratified 4-fold cross-validation scheme.
  6. Fit the grid search on the train set.
  7. What are the best parameters?
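
A sketch of the grid search; the candidate values for C and penalty, and the choice of the liblinear solver, are assumptions:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Assumed grid of candidate values.
param_grid = {
    "C": [0.01, 0.1, 1, 10, 100],
    "penalty": ["l1", "l2"],
}

grid = GridSearchCV(
    # liblinear supports both the l1 and l2 penalties.
    LogisticRegression(class_weight="balanced", solver="liblinear"),
    param_grid,
    scoring="recall",                # optimize cross-validated recall, per the task
    cv=StratifiedKFold(n_splits=4),  # stratified 4-fold cross-validation
)
grid.fit(X_train_tfidf, y_train)

print("Best parameters:", grid.best_params_)
```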

Predict and evaluate using the best estimator

  1. Use the best estimator from the grid search to make predictions on the test set.
  2. What is the recall on the test set for the hate-speech tweets?
  3. What is the F1 score?
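
A sketch of the final evaluation; with default settings, GridSearchCV's best_estimator_ is already refit on the full train set:

```python
from sklearn.metrics import f1_score, recall_score

best_clf = grid.best_estimator_
y_test_pred = best_clf.predict(X_test_tfidf)

print("Test recall:", recall_score(y_test, y_test_pred))  # recall on the hate class
print("Test F1:    ", f1_score(y_test, y_test_pred))
```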
