
bert-model

Here are 21 public repositories matching this topic...

Fine-tune BERT for sentiment analysis. I preprocessed the text (special tokens, padding, and attention masks) and built a Sentiment Classifier using the Transformers library by Hugging Face. I trained the model in a Kaggle notebook on a GPU; it achieves 95.14% accuracy on the validation dataset.

  • Updated May 25, 2020
  • Jupyter Notebook
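A minimal sketch of this kind of pipeline, assuming the bert-base-cased checkpoint and a three-class label set (both assumptions, not details taken from the repository):

```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

PRE_TRAINED_MODEL_NAME = "bert-base-cased"  # assumed checkpoint
tokenizer = BertTokenizer.from_pretrained(PRE_TRAINED_MODEL_NAME)

# Preprocessing: add special tokens ([CLS]/[SEP]), pad to a fixed length,
# and return an attention mask alongside the token ids.
encoding = tokenizer(
    "This movie was absolutely wonderful!",
    max_length=128,
    padding="max_length",
    truncation=True,
    return_tensors="pt",
)

class SentimentClassifier(nn.Module):
    """BERT encoder followed by a dropout + linear classification head."""
    def __init__(self, n_classes: int):
        super().__init__()
        self.bert = BertModel.from_pretrained(PRE_TRAINED_MODEL_NAME)
        self.drop = nn.Dropout(p=0.3)
        self.out = nn.Linear(self.bert.config.hidden_size, n_classes)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        pooled = outputs.pooler_output  # pooled [CLS] representation
        return self.out(self.drop(pooled))

model = SentimentClassifier(n_classes=3)  # number of sentiment classes is an assumption
with torch.no_grad():
    logits = model(encoding["input_ids"], encoding["attention_mask"])
print(logits.shape)  # torch.Size([1, 3])
```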
Fine-Tuning-of-BERT-Model

This project fine-tunes the BERT model on Kaggle's spam-ham dataset to classify messages. It explores transformers and large language models (LLMs) in generative AI, with potential for adaptation to other sentiment analysis tasks. The repository includes a single Jupyter Notebook containing the complete code for preprocessing, training, and prediction.

  • Updated Jul 1, 2024
  • Jupyter Notebook
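A rough sketch of such a fine-tuning loop, using BertForSequenceClassification with a two-example toy dataset standing in for the Kaggle spam-ham data; the checkpoint, hyperparameters, and data handling are assumptions, not the repository's exact code:

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader, TensorDataset
from transformers import BertForSequenceClassification, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Toy stand-in for the spam-ham data: (message, label) with 1 = spam.
messages = ["WIN a free prize now!!!", "Are we still meeting for lunch?"]
labels = torch.tensor([1, 0])

enc = tokenizer(messages, padding=True, truncation=True, max_length=64, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=2, shuffle=True)

optimizer = AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(1):  # a single epoch for illustration
    for input_ids, attention_mask, y in loader:
        outputs = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Prediction on a new message.
model.eval()
with torch.no_grad():
    pred = model(**tokenizer("Claim your reward today", return_tensors="pt")).logits.argmax(-1)
print("spam" if pred.item() == 1 else "ham")
```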

This project explores methods to classify song lyrics by genre across two notebooks. In Notebook 1, various models (Transformer, LSTM, Random Forest) are tested, achieving accuracies up to 93.5%. Notebook 2 investigates Graph Neural Networks (GNNs) with TF-IDF and BERT embeddings, reaching a best accuracy of 85.55% with a GCN on BERT embeddings.

  • Updated Jul 15, 2024
  • Jupyter Notebook
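A minimal sketch of a GCN over per-song BERT embeddings, using PyTorch Geometric; the graph construction, feature dimensions, and genre count below are illustrative assumptions rather than the notebook's actual setup:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class LyricsGCN(torch.nn.Module):
    """Two-layer GCN that classifies lyric nodes into genres."""
    def __init__(self, in_dim: int, hidden_dim: int, num_genres: int):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_genres)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)

# Node features: one 768-dim BERT embedding per song (random placeholders here).
num_songs, num_genres = 100, 5        # assumed sizes
x = torch.randn(num_songs, 768)
# Edges between songs (e.g. nearest neighbours in embedding space); random here.
edge_index = torch.randint(0, num_songs, (2, 400))

model = LyricsGCN(in_dim=768, hidden_dim=128, num_genres=num_genres)
logits = model(x, edge_index)         # shape: [num_songs, num_genres]
print(logits.shape)
```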
