
Pytorch-BERT-Classification

This is a simple PyTorch implementation of BERT (Pre-training of Deep Bidirectional Transformers for Language Understanding) for text classification, built on top of the awesome pytorch BERT library.
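As a quick orientation, here is a minimal sketch of loading a pretrained BERT encoder and turning one review into a fixed-size vector. The README is built on the older pytorch-BERT (pytorch-transformers) library; this sketch uses the current `transformers` API, so import paths and output types may differ slightly from the repository's code.

```python
# Minimal sketch (not the repository's exact code): load a pretrained BERT
# encoder and tokenizer, then encode one review into a fixed-size vector.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()

text = "A surprisingly touching movie with a great cast."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=256)

with torch.no_grad():
    outputs = bert(**inputs)

# outputs[0] is the last hidden state with shape (batch, seq_len, 768).
# The [CLS] vector is a common sentence representation for classification.
cls_vector = outputs[0][:, 0]
print(cls_vector.shape)  # torch.Size([1, 768])
```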

Dataset

  1. IMDB (Internet Movie Database): To test the model, I use a dataset of 50,000 movie reviews taken from IMDb. It is divided into 'train' and 'test' sets, each containing 25,000 movie reviews with labels (positive, negative). You can access the dataset through this link. (A small preprocessing sketch follows this list.)

  2. Naver Movie Review: A well-known dataset scraped from Naver movie reviews (Korean). link
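For the IMDB data above, a hypothetical preprocessing sketch is shown below: it converts the standard `aclImdb` download into two-column CSV files like the `source/train.csv` / `source/test.csv` paths used in the example command. The column names and exact format expected by `train.py` are an assumption here, not taken from the repository.

```python
# Hypothetical sketch: build a two-column CSV (text, label) from the raw IMDB
# download. The exact column names/order expected by train.py may differ.
import csv
from pathlib import Path

def imdb_dir_to_csv(imdb_split_dir: str, out_csv: str) -> None:
    rows = []
    for label_name, label in (("pos", 1), ("neg", 0)):
        for path in Path(imdb_split_dir, label_name).glob("*.txt"):
            rows.append((path.read_text(encoding="utf-8").strip(), label))
    with open(out_csv, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["text", "label"])
        writer.writerows(rows)

imdb_dir_to_csv("aclImdb/train", "source/train.csv")
imdb_dir_to_csv("aclImdb/test", "source/test.csv")
```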

How to use it?

Follow the example

1 Train Model

There are many options to check.

  1. train_path : A file used to train the model
  2. valid_path : A file used to validate the model
  3. max_length : Maximum number of tokens to analyze (the BERT model restricts this parameter to at most 512)
  4. save_path : A path to save the resulting BERT classifier model
  5. bert_name : The name of the pretrained BERT model. Default : bert-base-uncased ( More information about pytorch-BERT models can be found at this link )
  6. bert_finetuning : If you want to fine-tune the BERT model together with the classifier layer, set this option to "True"
  7. dropout_p : Dropout probability applied to the BERT output vector before it enters the classifier layer
  8. boost : If you don't need to fine-tune BERT, you can make the model faster by pre-converting tokens to BERT output vectors
  9. n_epochs : Number of epochs to train
  10. lr : Learning rate of the classifier layer
  11. lr_main : Learning rate of BERT for fine-tuning
  12. early_stop : An early-stopping condition. If you don't want to use this option, put -1
  13. batch_size : Batch size for training
  14. gradient_accumulation_steps : BERT is a very heavy model, so a large batch size is hard to handle on a light GPU. Gradient accumulation lets you train with a smaller batch size while getting almost the same effect as a large batch size (see the sketch after the example command below)
python train.py --train_path source/train.csv --valid_path source/test.csv --batch_size 16 --gradient_accumulation_steps 4 --boost True 
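Gradient accumulation (option 14) works roughly as sketched below. This is a minimal, self-contained illustration, not the repository's actual training loop; the stand-in linear model and random data are only there to make it runnable.

```python
# Minimal sketch of gradient accumulation: sum gradients over several small
# batches, then take one optimizer step, approximating a larger batch size.
import torch
import torch.nn as nn

model = nn.Linear(768, 2)            # stand-in for the classifier head
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
accumulation_steps = 4               # --gradient_accumulation_steps 4

optimizer.zero_grad()
for step in range(16):               # 16 small batches of size 16
    features = torch.randn(16, 768)  # stand-in for BERT output vectors
    labels = torch.randint(0, 2, (16,))
    loss = criterion(model(features), labels)
    # Scale so the accumulated gradient matches one batch of size 16 * 4.
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```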

Results

Results with hyperparameter settings

Dataset | BERT pretrained | BERT fine-tune | Max token length | Best epoch | Train loss | Valid loss | Valid accuracy
------- | --------------- | -------------- | ---------------- | ---------- | ---------- | ---------- | --------------
IMDB | bert-base-uncased | True | 256 | 1 | 0.0169 | 0.0129 | 0.9181
IMDB | bert-base-uncased | True | 512 | 1 | 0.0151 | 0.0112 | 0.9292
IMDB | bert-base-uncased | False | 256 | 10 | 0.0289 | 0.0276 | 0.8027
IMDB | bert-base-uncased | False | 512 | 10 | 0.0269 | 0.0259 | 0.8194
Naver | bert-base-multilingual-cased | True | 512 | 4 | 0.0135 | 0.0199 | 0.8743
Naver | bert-base-multilingual-uncased | True | 512 | 4 | 0.0126 | 0.0198 | 0.8743
Naver | kobert | True | 512 | 2 | 0.0145 | 0.0163 | 0.8961

Comment

The fine-tuning results are remarkable. However, just taking the BERT output (without fine-tuning) and putting it through a single linear layer is not enough to handle the data.
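For reference, the classifier head discussed above looks roughly like the sketch below: dropout on the BERT output vector, then a single linear layer. This is an illustration under assumed defaults (hidden size 768, two classes), not the repository's exact module.

```python
# Illustrative sketch (not the repository's exact module) of the classifier
# head: dropout on the BERT output vector, then one linear layer to logits.
import torch
import torch.nn as nn

class BertClassifierHead(nn.Module):
    def __init__(self, hidden_size: int = 768, n_classes: int = 2, dropout_p: float = 0.1):
        super().__init__()
        self.dropout = nn.Dropout(dropout_p)          # --dropout_p
        self.classifier = nn.Linear(hidden_size, n_classes)

    def forward(self, bert_vector: torch.Tensor) -> torch.Tensor:
        # bert_vector: (batch, hidden_size), e.g. the [CLS] representation
        return self.classifier(self.dropout(bert_vector))

head = BertClassifierHead()
logits = head(torch.randn(8, 768))
print(logits.shape)  # torch.Size([8, 2])
```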

Reference

My pytorch implementation is heavily inspired by other works. Please check the links below.

  1. https://github.com/huggingface/pytorch-transformers
  2. https://towardsdatascience.com/bert-classifier-just-another-pytorch-model-881b3cf05784
