Note: A notebook detailing the usage will be added soon.

Bi-LSTM-CRF in PyTorch

Entity extraction from text using a Bi-LSTM-CRF model.

Compared with the PyTorch BI-LSTM-CRF implementation, the following change is made:

  • In the original implementation, tag indices start from 0 and 0 is also used as the padding index. Tokenized tag sequences therefore use the same index for padding and for a real tag, which can corrupt training. This is fixed by reserving a separate, otherwise unused index for padding the tag sequences, as sketched below.
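
A plain-Python illustration of the idea (the tag set and helper below are hypothetical, not part of the package's API):

# hypothetical illustration: reserve a dedicated index for padding instead of reusing tag index 0
tags = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]
tag2idx = {tag: i for i, tag in enumerate(tags)}   # real tags occupy indices 0..len(tags)-1
PAD_IDX = len(tags)                                # padding gets its own, otherwise unused index

def encode_tags(tag_seq, max_len):
    idx = [tag2idx[t] for t in tag_seq]
    return idx + [PAD_IDX] * (max_len - len(idx))  # padding no longer collides with tag "O" (index 0)

print(encode_tags(["B-PER", "I-PER", "O"], 5))     # [1, 2, 0, 5, 5]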

Installation

  • dependencies
  • install
    $ pip install bi-lstm-crf

Training

corpus

Prepare the training corpus in a directory; its path is passed as corpus_dir in the command below.

training

$ python -m bi_lstm_crf corpus_dir --model_dir "model_xxx"

training curve

import pandas as pd
import matplotlib.pyplot as plt

# the training losses are saved in the model_dir
df = pd.read_csv(".../model_dir/loss.csv")
df[["train_loss", "val_loss"]].ffill().plot(grid=True)
plt.show()
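
prediction

If this fork keeps the upstream bi-lstm-crf application API, a trained model could be loaded for tagging roughly as below; the WordsTagger class, its import path, its input format, and its return values are assumptions here rather than something this README documents.

from bi_lstm_crf.app import WordsTagger   # hypothetical import, assuming the upstream package layout

model = WordsTagger(model_dir="model_xxx")            # the --model_dir used for training
tags, sequences = model(["John lives in New York"])   # assumed input: a list of sentences
print(tags)                                           # predicted tag sequence per sentence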

The CRF module can easily be embedded into other models:

import torch.nn as nn

from bi_lstm_crf import CRF

# a BERT-CRF model for sequence tagging (sketch; BERT stands in for any encoder)
class BertCrf(nn.Module):
    def __init__(self, ...):
        ...
        self.bert = BERT(...)
        self.crf = CRF(in_features, num_tags)

    def loss(self, xs, tags):
        features, = self.bert(xs)
        masks = xs.gt(0)          # non-zero token ids mark real (non-padding) positions
        loss = self.crf.loss(features, tags, masks)
        return loss

    def forward(self, xs):
        features, = self.bert(xs)
        masks = xs.gt(0)
        scores, tag_seq = self.crf(features, masks)   # best-scoring tag sequence for each input
        return scores, tag_seq
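
For a quick smoke test, the same CRF calls can be exercised with random tensors; the tensor shapes below are assumptions inferred from the snippet above, not documented guarantees:

import torch

from bi_lstm_crf import CRF

# illustrative sketch with assumed shapes: features (batch, seq_len, in_features), tags/masks (batch, seq_len)
batch_size, seq_len, in_features, num_tags = 2, 5, 8, 4
crf = CRF(in_features, num_tags)

features = torch.randn(batch_size, seq_len, in_features)    # emissions from any encoder (BERT, BiLSTM, ...)
tags = torch.randint(0, num_tags, (batch_size, seq_len))    # gold tag indices
masks = torch.ones(batch_size, seq_len, dtype=torch.bool)   # all positions treated as real tokens here

nll = crf.loss(features, tags, masks)     # training objective, as in BertCrf.loss above
scores, tag_seq = crf(features, masks)    # decoded tag sequences, as in BertCrf.forward above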

References

  1. Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF Models for Sequence Tagging. arXiv:1508.01991.
  2. PyTorch tutorial: "Advanced: Making Dynamic Decisions and the Bi-LSTM CRF".
