Toxic_Comment_Classification

This project aims to classify comments into various toxicity categories using advanced machine learning models. We compare the performance of BERT (Bidirectional Encoder Representations from Transformers) and LSTM (Long Short-Term Memory) models to determine the best approach for detecting toxic content in text.

Models

BERT: Utilizes bidirectional attention to capture nuanced contextual information for precise classification of toxic comments.
LSTM: Processes text sequentially to understand context over time, aiming to predict comment toxicity effectively.
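Both models end in the same kind of multi-label classification head: one sigmoid output per toxicity category, each thresholded independently so a single comment can trigger several categories at once. A minimal sketch of that final step, using the standard Jigsaw label set and made-up logits (the category names, threshold, and values here are illustrative assumptions, not the repository's actual code):

```python
import math

# Assumed label set (the usual Jigsaw toxicity categories).
CATEGORIES = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

def sigmoid(x: float) -> float:
    """Map a raw logit to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits, threshold=0.5):
    """Turn per-category logits into (category, probability, flagged) triples.

    Multi-label: each category is decided independently of the others,
    so a comment can be both "toxic" and "insult" at the same time.
    """
    return [
        (cat, sigmoid(z), sigmoid(z) >= threshold)
        for cat, z in zip(CATEGORIES, logits)
    ]

# Example logits a BERT or LSTM head might emit for one comment (made up).
for cat, prob, flagged in predict_labels([2.1, -3.0, 0.4, -4.2, 1.2, -2.5]):
    print(f"{cat:>13}: {prob:.3f} {'FLAGGED' if flagged else ''}")
```

The independent thresholds are what distinguish this from ordinary single-label classification with a softmax.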

Streamlit Application

The project includes a user-friendly Streamlit application that provides:

Real-Time Toxicity Detection -> Input a comment and receive immediate feedback on its toxicity level.
Interactive Interface -> A clean, intuitive UI for interacting with the model.
Per-Category Breakdown -> Displays detailed feedback for each toxicity category.
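A Streamlit front end for this flow could look like the sketch below. The helper `predict_toxicity` (assumed to return a category-to-probability dictionary) and its module path are hypothetical stand-ins for the repository's actual model code:

```python
import streamlit as st

# Hypothetical helper, assumed to return {category: probability} for a comment.
from model import predict_toxicity

st.title("Toxic Comment Classifier")

comment = st.text_area("Enter a comment to analyse:")

if st.button("Classify") and comment.strip():
    # e.g. {"toxic": 0.91, "insult": 0.77, ...}
    scores = predict_toxicity(comment)
    st.subheader("Toxicity feedback")
    for category, prob in scores.items():
        # One progress bar per category gives the per-category breakdown.
        st.progress(min(prob, 1.0), text=f"{category}: {prob:.0%}")
```

Run with `streamlit run app.py`; Streamlit re-executes the script on every interaction, which is what makes the feedback feel real-time.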

Results

The BERT model shows superior performance, with higher accuracy and precision in distinguishing toxic from non-toxic comments, while the LSTM struggles to capture more complex textual nuances.
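The comparison above rests on standard metrics. As a self-contained illustration, per-category precision (true positives over predicted positives) can be computed as below; the labels and predictions are invented for the example and are not the project's actual results:

```python
def precision(y_true, y_pred):
    """Precision = true positives / predicted positives (0.0 if none predicted)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    predicted_positives = sum(y_pred)
    return tp / predicted_positives if predicted_positives else 0.0

# Illustrative binary labels for one category (e.g. "toxic") on six comments.
truth      = [1, 0, 1, 1, 0, 0]
bert_preds = [1, 0, 1, 1, 0, 0]   # hypothetical BERT output
lstm_preds = [1, 1, 1, 0, 0, 0]   # hypothetical LSTM output

print(f"BERT precision: {precision(truth, bert_preds):.2f}")  # -> 1.00
print(f"LSTM precision: {precision(truth, lstm_preds):.2f}")  # -> 0.67
```

In the multi-label setting this is computed per category and then averaged (macro or micro), so a model can look strong on the frequent "toxic" label while still missing rare categories such as "threat".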
