# 🛑 Toxic Comment Classification

## 🧩 Problem

Online platforms receive thousands of comments, and some of them are harmful.
This project detects toxic behavior in text using machine learning.


๐Ÿท๏ธ Labels

Each comment can belong to one or more categories:

- `toxic`
- `severe_toxic`
- `obscene`
- `threat`
- `insult`
- `identity_hate`

👉 This is a **multi-label** classification problem: a single comment can trigger several of these labels at once.
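As a quick illustration of the multi-label setup, the annotations can be turned into a binary indicator matrix (one column per label). This is a minimal sketch with made-up example comments, not data from the repository:

```python
from sklearn.preprocessing import MultiLabelBinarizer

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

# Hypothetical annotations: each comment carries zero or more labels.
annotations = [
    ["toxic", "insult"],             # comment 0
    [],                              # comment 1 (clean)
    ["toxic", "obscene", "insult"],  # comment 2
]

mlb = MultiLabelBinarizer(classes=LABELS)
Y = mlb.fit_transform(annotations)  # shape: (n_comments, 6)
print(Y[0])  # → [1 0 0 0 1 0]
```

Each row of `Y` is the target vector a multi-label classifier must predict.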


## 🧠 Approach

- Convert text into numerical features
- Train models to predict multiple labels
- Use a One-vs-Rest strategy to handle multi-label outputs
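The three steps above can be sketched as a single scikit-learn pipeline. This is a toy example under assumed data and labels, not the repository's actual training code:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline

# Toy corpus standing in for the real comment data.
texts = ["you are awful", "have a nice day", "awful and stupid", "nice work"]
# Hypothetical targets; columns: toxic, insult.
Y = np.array([[1, 1], [0, 0], [1, 1], [0, 0]])

clf = make_pipeline(
    TfidfVectorizer(),                        # step 1: text → numerical features
    OneVsRestClassifier(LogisticRegression()),  # steps 2–3: one binary model per label
)
clf.fit(texts, Y)
pred = clf.predict(["awful day"])  # shape (1, 2): one prediction per label
```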

## 🤖 Models Used

- Logistic Regression
- Multinomial Naive Bayes
- Both wrapped with `OneVsRestClassifier`
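Both base models plug into the same wrapper, which fits one binary classifier per label column. A minimal comparison loop, assuming TF-IDF features and hypothetical toy data:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.naive_bayes import MultinomialNB

# Toy corpus; columns of Y: toxic, insult (hypothetical labels).
texts = ["idiot", "hello friend", "stupid idiot", "good morning"]
Y = np.array([[1, 0], [0, 0], [1, 1], [0, 0]])

X = TfidfVectorizer().fit_transform(texts)

preds = {}
for name, base in [("LogisticRegression", LogisticRegression()),
                   ("MultinomialNB", MultinomialNB())]:
    # OneVsRestClassifier trains one copy of `base` per label column.
    clf = OneVsRestClassifier(base).fit(X, Y)
    preds[name] = clf.predict(X)  # shape (4, 2)
```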

## 📊 Evaluation

- Accuracy score
- ROC-AUC score
- Classification report
- ROC curve
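For multi-label output, accuracy is the strict subset accuracy (every label in a row must match) and ROC-AUC is usually averaged across labels. A small sketch with made-up predictions, not the project's actual results:

```python
import numpy as np
from sklearn.metrics import accuracy_score, classification_report, roc_auc_score

# Hypothetical ground truth and predicted probabilities for two labels.
y_true = np.array([[1, 0], [0, 0], [1, 1], [0, 1]])
y_prob = np.array([[0.9, 0.2], [0.1, 0.3], [0.8, 0.7], [0.2, 0.6]])
y_pred = (y_prob >= 0.5).astype(int)

# Subset accuracy: a row counts only if every label matches.
acc = accuracy_score(y_true, y_pred)
# Macro ROC-AUC: per-label AUC, then the unweighted mean.
auc = roc_auc_score(y_true, y_prob, average="macro")
print(classification_report(y_true, y_pred, target_names=["toxic", "insult"]))
```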

๐Ÿ› ๏ธ Tools

- Python
- Scikit-learn
- NLP preprocessing (tokenization, cleaning)
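The preprocessing step might look like the following minimal cleaning pass (`clean_text` is an illustrative helper, not a function from this repository):

```python
import re

def clean_text(text: str) -> str:
    """Lowercase, strip URLs and non-letter characters: a minimal cleaning pass."""
    text = text.lower()
    text = re.sub(r"https?://\S+", " ", text)  # drop URLs
    text = re.sub(r"[^a-z\s]", " ", text)      # keep letters and spaces only
    return re.sub(r"\s+", " ", text).strip()   # collapse whitespace

tokens = clean_text("Check THIS out!! http://x.co/abc 123").split()
# → ['check', 'this', 'out']
```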

โ–ถ๏ธ Run

```bash
git clone https://github.com/your-username/toxic-comment-classification.git
cd toxic-comment-classification
pip install -r requirements.txt
python app.py
```
