This repository contains code for a spam detection model using BERT (Bidirectional Encoder Representations from Transformers) in TensorFlow. BERT is a powerful pre-trained model developed by Google that has achieved state-of-the-art performance on various natural language processing tasks. The model is trained on a dataset of spam and ham (non-spam) messages.
The dataset used to train the model is stored in the `spam.csv` file. It contains a collection of messages, each labeled as spam or ham.
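As a rough sketch of how the labeled dataset might be loaded with pandas, the snippet below assumes two columns, a label ("spam"/"ham") and the message text; the actual column names in `spam.csv` may differ, and the inline sample data here is purely illustrative:

```python
import io
import pandas as pd

# Illustrative stand-in for spam.csv; the real file's column names may differ.
SAMPLE_CSV = """label,message
ham,"Hey, are we still on for lunch?"
spam,"WINNER!! Claim your free prize now"
ham,"See you tomorrow"
"""

def load_dataset(csv_source):
    """Read the CSV and add a binary target column (1 = spam, 0 = ham)."""
    df = pd.read_csv(csv_source)
    df["spam"] = (df["label"] == "spam").astype(int)
    return df

df = load_dataset(io.StringIO(SAMPLE_CSV))
print(df["spam"].tolist())  # → [0, 1, 0]
```

The binary `spam` column gives the model a numeric target to train against, which is the usual setup for binary text classification.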
To run the code in this repository, you need the following dependencies installed:
- TensorFlow
- TensorFlow Hub
- TensorFlow Text
- pandas
- scikit-learn
- matplotlib
- seaborn
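Assuming a standard Python environment, the dependencies can be installed with pip using their usual PyPI package names (a sketch; pin versions as needed for your setup):

```shell
# Note: tensorflow-text releases are tied to specific TensorFlow versions,
# so install matching versions of the two packages.
pip install tensorflow tensorflow-hub tensorflow-text pandas scikit-learn matplotlib seaborn
```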