This project aims to classify comments into various toxicity categories using advanced machine learning models. We compare the performance of BERT (Bidirectional Encoder Representations from Transformers) and LSTM (Long Short-Term Memory) models to determine the best approach for detecting toxic content in text.
BERT: Utilizes bidirectional attention to capture nuanced contextual information for precise classification of toxic comments.
LSTM: Processes text sequentially to understand context over time, aiming to predict comment toxicity effectively.
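As a sketch of what the LSTM branch might look like in PyTorch (the vocabulary size, embedding dimension, and the six Jigsaw-style toxicity labels here are illustrative assumptions, not taken from the project code):

```python
import torch
import torch.nn as nn

# Illustrative label set (assumption: Jigsaw-style multi-label toxicity categories)
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

class LSTMToxicityClassifier(nn.Module):
    """Bidirectional LSTM over token embeddings, one sigmoid output per label."""

    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64,
                 num_labels=len(LABELS)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)               # (batch, seq, embed_dim)
        _, (hidden, _) = self.lstm(embedded)               # hidden: (2, batch, hidden_dim)
        pooled = torch.cat([hidden[0], hidden[1]], dim=1)  # join both directions
        return torch.sigmoid(self.classifier(pooled))      # per-label probabilities

model = LSTMToxicityClassifier()
dummy_batch = torch.randint(1, 20000, (4, 32))  # 4 comments, 32 token ids each
probs = model(dummy_batch)
print(probs.shape)  # one probability per label for each comment
```

The BERT variant replaces the embedding + LSTM stack with a pretrained bidirectional encoder (e.g. via the `transformers` library) and keeps the same multi-label sigmoid head.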
Real-Time Toxicity Detection -> Input a comment and receive immediate feedback on its toxicity level.
Interactive Interface -> A clean and intuitive UI to interact with the model.
Per-Category Feedback -> Displays a detailed score for each toxicity category, not just a single toxic/non-toxic verdict.
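The per-category feedback could be rendered from the model's per-label probabilities along these lines; a minimal sketch, where the label names and the 0.5 decision threshold are assumptions rather than the project's actual configuration:

```python
# Illustrative labels and threshold (assumptions, not from the project code)
LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
THRESHOLD = 0.5

def format_feedback(probabilities):
    """Turn per-label probabilities into a human-readable feedback dict."""
    feedback = {}
    for label, p in zip(LABELS, probabilities):
        feedback[label] = {"score": round(p, 3), "flagged": p >= THRESHOLD}
    return feedback

# Example scores as a hypothetical model might return them
result = format_feedback([0.91, 0.05, 0.72, 0.02, 0.64, 0.03])
for label, info in result.items():
    status = "FLAGGED" if info["flagged"] else "ok"
    print(f"{label:15s} {info['score']:.3f} {status}")
```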
In our experiments, the BERT model achieved higher accuracy and precision in distinguishing toxic from non-toxic comments, while the LSTM struggled to capture the more complex textual nuances, such as context-dependent or implicit toxicity.
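The accuracy and precision used in this comparison follow directly from the confusion counts on the binary toxic/non-toxic task; a self-contained sketch, where the prediction vectors are made-up illustrations and not the project's actual results:

```python
def accuracy_and_precision(y_true, y_pred):
    """Binary metrics: accuracy = correct / total, precision = TP / (TP + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, precision

# Hypothetical labels: 1 = toxic, 0 = non-toxic
y_true     = [1, 0, 1, 1, 0, 0, 1, 0]
bert_preds = [1, 0, 1, 1, 0, 0, 1, 1]  # one false positive
lstm_preds = [1, 0, 0, 1, 0, 1, 1, 1]  # one missed toxic comment, two false positives

print("BERT:", accuracy_and_precision(y_true, bert_preds))  # (0.875, 0.8)
print("LSTM:", accuracy_and_precision(y_true, lstm_preds))  # (0.625, 0.6)
```

Precision matters here because a moderation tool that over-flags benign comments (false positives) erodes user trust as quickly as one that misses toxic ones.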