A fine-tuned sentiment analysis model trained on the Stanford Sentiment Treebank, which consists of sentences from movie reviews and human annotations of their sentiment. The task is to predict the sentiment of a given sentence, using the two-way (positive/negative) class split with only sentence-level labels.

pavaris-pm/distillbert-sst2

Sentiment-analysis-model-GLUE-SST2

This model is fine-tuned on the Stanford Sentiment Treebank (SST-2) dataset, starting from the pre-trained DistilBERT base uncased finetuned SST-2 checkpoint, which was itself trained on SST-2. The fine-tuned version achieves 90% accuracy on the SST-2 validation set. The labels of the SST-2 test set on Hugging Face are hidden, so test-set accuracy cannot be computed. In informal trials on a variety of review texts, the model predicts customers' sentiment as POSITIVE or NEGATIVE effectively.
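As a minimal usage sketch, the model can be queried through a `transformers` sentiment-analysis pipeline. The checkpoint name below is the public pre-trained model linked further down; substituting the fine-tuned repository id (`pavaris-pm/distillbert-sst2`) is assumed to work the same way once it is available on the Hub.

```python
# Minimal inference sketch using the Hugging Face pipeline API.
# The checkpoint below is the public pre-trained DistilBERT SST-2 model;
# swap in the fine-tuned checkpoint if it is available locally or on the Hub.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

reviews = [
    "The staff were friendly and the food arrived quickly.",
    "Terrible experience, I will never order from here again.",
]
for review in reviews:
    result = classifier(review)[0]
    # Each result has a "label" (POSITIVE/NEGATIVE) and a confidence "score".
    print(f"{result['label']} ({result['score']:.3f}): {review}")
```

The pipeline handles tokenization, batching, and label mapping internally, so no manual preprocessing is needed for quick checks like the review examples mentioned above.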

Link to the pre-trained DistilBERT model on Hugging Face:

https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english
