Welcome to that-nlp-library

Aim to be a convenient NLP library with the help of HuggingFace

1. Installation

pip install that_nlp_library

It is advised that you install torch manually (with a CUDA version compatible with your GPU, if you have one). Typically it’s

pip3 install torch

Visit the PyTorch page for more information
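
For example, to install a CUDA 12.1 build (check the PyTorch page for the command matching your OS and CUDA version):

pip3 install torch --index-url https://download.pytorch.org/whl/cu121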

2. High-Level Overview

2.1. Supervised Learning

For supervised learning, the main pipeline contains 2 parts:

TextDataController: For High-Speed and Customizable Text Processing

The controller offers a set of processing steps, each of which you can skip if you want to: row filtering, label transformation, metadata concatenation, content transformation, train/validation split (with stratification), upsampling, and text augmentation, with tokenization at the end.

Here is an example of the Text Controller for a classification task (predicting Division Name), without any text preprocessing. The code will also tokenize your text field.

from that_nlp_library.text_main import TextDataController # module path assumed from the library's layout
from transformers import RobertaTokenizer

tdc = TextDataController.from_csv('sample_data/Womens_Clothing_Reviews.csv',
                                  main_text='Review Text',
                                  label_names='Division Name',
                                  sup_types='classification',
                                 )
tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tdc.process_and_tokenize(tokenizer,max_length=100,shuffle_trn=True)

And here is an example where all processing steps are applied:

# define a custom augmentation function
from functools import partial
from underthesea import text_normalize
import nlpaug.augmenter.char as nac

def nlp_aug(x,aug=None):
    # nlpaug's augment() always returns a list, even for a single string input
    results = aug.augment(x)
    if not isinstance(x,list): return results[0]
    return results
aug = nac.KeyboardAug(aug_char_max=3,aug_char_p=0.1,aug_word_p=0.07)
nearby_aug_func = partial(nlp_aug,aug=aug)

# initialize the TextDataController
tdc = TextDataController.from_csv('sample_data/Womens_Clothing_Reviews.csv',
                                  main_text='Review Text',
                                  
                                  # metadatas
                                  metadatas='Title',
                                  
                                  # label
                                  label_names='Division Name',
                                  sup_types='classification',
                                  label_tfm_dict={'Division Name': lambda x: x if x!='Initmates' else 'Intimates'},
                                  
                                  # row filter
                                  filter_dict={'Review Text': lambda x: x is not None,
                                               'Division Name': lambda x: x is not None,
                                              },
                                              
                                  # text transformation
                                  content_transformation=[text_normalize,str.lower],
                                  
                                  # validation split
                                  val_ratio=0.2,
                                  stratify_cols=['Division Name'],
                                  
                                  # upsampling
                                  upsampling_list=[('Division Name',lambda x: x=='Intimates')],
                                  
                                  # text augmentation
                                  content_augmentations=nearby_aug_func
                                 )

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
tdc.process_and_tokenize(tokenizer,max_length=100,shuffle_trn=True)
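
As a quick sanity check, you can call the augmentation function directly (output varies, since the keyboard augmenter injects random typos):

print(nearby_aug_func('This dress fits perfectly'))
# e.g. 'This drwss fits perfectly' (random keyboard-typo noise)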

For an in-depth tutorial on the Text Controller for supervised learning (TextDataController), please visit here

This library also supports a streamed version of the Text Controller (TextDataControllerStreaming), allowing you to work with data without loading it entirely into memory. All processing steps from the non-streamed version are still available, except for the train/validation split (meaning you have to define your validation set beforehand) and upsampling.

For more details on streaming, visit how to create a streamed dataset and how to train a model with a streamed dataset

If you are curious about the time and space efficiency of the streamed versus the non-streamed version, visit the benchmark here
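
For a feel of the streamed API before reading the tutorials, here is a minimal sketch; it assumes TextDataControllerStreaming mirrors the from_csv interface shown above, which is an assumption rather than the documented signature.

# assumption: the streaming controller mirrors TextDataController.from_csv
tdc_stream = TextDataControllerStreaming.from_csv('sample_data/Womens_Clothing_Reviews.csv',
                                                  main_text='Review Text',
                                                  label_names='Division Name',
                                                  sup_types='classification',
                                                 )
tdc_stream.process_and_tokenize(tokenizer,max_length=100)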

ModelController: For Customizable Model Training/Inference/Interpretation

Here is an example of using ModelController to train a simple RoBERTa classification model, with the data and the preprocessing steps stored in our previous TextDataController object

from that_nlp_library.model_main import ModelController # module path assumed from the library's layout
from transformers import RobertaForSequenceClassification
from sklearn.metrics import f1_score, accuracy_score

# Load the model from HuggingFace, with the number of classes defined in our TextDataController object
num_labels = len(tdc.label_lists[0])
model = RobertaForSequenceClassification.from_pretrained('roberta-base',num_labels=num_labels)

# Create the `ModelController` object
controller = ModelController(model,data_store=tdc,seed=42)

# You can define multiple metrics for model evaluation
metric_funcs = [partial(f1_score,average='macro'),accuracy_score]

# Train the model for 3 epochs, and save all checkpoints to the 'my_saved_weights' directory
controller.fit(epochs = 3,
               learning_rate = 1e-4,
               metric_funcs = metric_funcs,
               batch_size = 32,
               save_checkpoint=True,
               o_dir='my_saved_weights',
              )
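
The metric functions are plain scikit-learn callables; partial simply pre-fills keyword arguments such as average='macro'. For instance:

from functools import partial
from sklearn.metrics import f1_score, accuracy_score

y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 1, 2, 1, 0]
macro_f1 = partial(f1_score, average='macro')
print(macro_f1(y_true, y_pred))        # ~0.822
print(accuracy_score(y_true, y_pred))  # ~0.833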

The library can perform the following:

  • Classification

  • Regression

  • Multilabel classification

  • Multiheads, where each head can be either classification or regression

    • “Multihead” means your model predicts multiple outputs at once. For example, given a sentence (e.g. a review on an e-commerce site), you might have to predict the category the review is about, its sentiment, and its rating.

    • The above example is a 3-head problem: classification (for category), classification (for sentiment), and regression (for rating from 1 to 5)

  • For 2-head classification where there’s a hierarchical relationship between the first and second output (e.g. the first output is the level-1 clothing category, and the second is the level-2 clothing subcategory), you can utilize two approaches built for this use case: training with conditional probability, or with deep hierarchical classification. A minimal multihead sketch follows this list.
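
Here is a sketch of a 2-head setup. It assumes that label_names and sup_types accept lists (as their plural names suggest) and uses Department Name, another column in the sample dataset; see the multihead tutorials for the exact API.

# assumption: list-valued label_names/sup_types configure one head per label
tdc_2head = TextDataController.from_csv('sample_data/Womens_Clothing_Reviews.csv',
                                        main_text='Review Text',
                                        label_names=['Division Name','Department Name'],
                                        sup_types=['classification','classification'],
                                       )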

Decoupling of Text Controller and Model Controller

In this library, you can use TextDataController on its own to handle all the text processing and get back the final processed HuggingFace DatasetDict. Conversely, if you already have your own processed DatasetDict, you can skip TextDataController and use only ModelController to train your model. There’s a quick tutorial on this decoupling here

2.2 Language Modeling

For language modeling, the main pipeline also contains 2 parts:

TextDataLMController: Text Data Controller for Language Modeling

Similar to TextDataController, TextDataLMController provides the same list of processing steps (except for label processing, upsampling, and text augmentation). The controller also lets you tokenize either line-by-line or by token concatenation.
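
As a sketch (assuming TextDataLMController mirrors the from_csv pattern above, minus the label arguments; the tutorial linked below has the real signature):

# assumption: mirrors TextDataController.from_csv, without label arguments
tdc_lm = TextDataLMController.from_csv('sample_data/Womens_Clothing_Reviews.csv',
                                       main_text='Review Text')
tdc_lm.process_and_tokenize(tokenizer,max_length=100)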

Visit the tutorial here

There’s also a streamed version (TextDataLMControllerStreaming). Here is a tutorial on how to train a language model with a streamed dataset

ModelLMController: Language Model Controller

The library can train a masked language model (BERT, RoBERTa, …) or a causal language model (GPT, …), either from scratch or from an existing pretrained checkpoint.
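
For instance, here is a hedged sketch of fine-tuning a masked language model, assuming ModelLMController mirrors ModelController’s constructor and fit (see the language-modeling tutorials for the exact API):

# assumption: ModelLMController(model, data_store=..., seed=...) and .fit(...)
from transformers import RobertaForMaskedLM

lm_model = RobertaForMaskedLM.from_pretrained('roberta-base')
lm_controller = ModelLMController(lm_model,data_store=tdc_lm,seed=42)
lm_controller.fit(epochs=3,learning_rate=1e-4,batch_size=32)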

Hidden States Extraction

The library also allows you to extract the hidden states of your choice, for further analysis or for downstream tasks that require a vector representation of your text input.
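
To illustrate what hidden-state extraction means, here it is with plain transformers (this is not the library’s own API):

import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained('roberta-base')
model = RobertaModel.from_pretrained('roberta-base')
inputs = tokenizer('A lovely summer dress', return_tensors='pt')
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)
# hidden_states is a tuple: embedding-layer output plus one tensor per layer
cls_vector = out.hidden_states[-1][:, 0]  # <s> token vector from the last layer
print(cls_vector.shape)  # torch.Size([1, 768])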

3. Documentation

Visit https://anhquan0412.github.io/that-nlp-library/
