feature: end-to-end NER pipeline #664
Merged

JulesBelveze merged 17 commits into release/1.2.0 on Aug 1, 2023
Conversation
chakravarthik27 approved these changes on Jul 24, 2023

ArshaanNazir approved these changes on Jul 24, 2023
Contributor (Author):

> I am stuck trying to update the
Description
This PR provides an end-to-end pipeline for the following workflow: the user trains a model and tests the behaviors that matter using langtest. Based on the outcome of those tests, langtest augments the original training set with samples on which the model failed. The model is then retrained on this augmented dataset and compared to the original model on the generated set of tests.

For now the pipeline supports the transformers library and the NER task. Datasets can be passed in conll or csv format.

Usage
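The train → test → augment → retrain loop described above can be sketched as follows. Every helper here is a hypothetical stand-in (this is not the langtest API); the sketch only makes the control flow concrete:

```python
# Sketch of the pipeline's control flow. All helpers are hypothetical
# stand-ins for the real training and langtest testing steps.

def train_model(samples):
    """Stand-in for fine-tuning a transformers NER model on `samples`."""
    return {"training_set": list(samples)}

def run_behavioral_tests(model):
    """Stand-in for langtest generating behavioral tests and collecting
    the samples the model failed on. Toy rule: uppercase tokens fail."""
    return [s for s in model["training_set"] if s.isupper()]

def augment(samples, failures):
    """Augment the original training set with the failing samples."""
    return list(samples) + failures

train_set = ["paris", "NYC", "london"]
original = train_model(train_set)
failures = run_behavioral_tests(original)              # ["NYC"]
retrained = train_model(augment(train_set, failures))
# Finally, `retrained` is compared against `original` on the same tests.
print(retrained["training_set"])  # ['paris', 'NYC', 'london', 'NYC']
```

In the real pipeline, the "failures" are the test samples on which the original model misbehaved, so the augmented dataset directly targets its weaknesses.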
To use the end-to-end pipeline, you can run the following one-liner with your own parameters:
```
python langtest/pipelines/transformers_pipelines.py run \
  --model-name=MODEL_NAME \
  --train-data=TRAIN_FILE \
  --eval-data=EVAL_FILE \
  --training-args=ARGS_DICT \
  --feature-col=NAME_OF_FEATURE_COL \
  --target-col=NAME_OF_TARGET_COL
```

For example:

```
python langtest/pipelines/transformers_pipelines.py run \
  --model-name="bert-base-uncased" \
  --train-data=train.csv \
  --eval-data=test.csv \
  --training-args='{"per_device_train_batch_size": 4}' \
  --feature-col="tokens" \
  --target-col="ner_tags"
```

Checklist:
- pydantic for typing when/where necessary.
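As an illustration of the pydantic point, the pipeline's CLI inputs could be typed roughly as below. This is a hypothetical sketch, not langtest's actual models: the field names simply mirror the CLI flags, and the JSON handling of `--training-args` is an assumption based on the example above:

```python
import json
from pydantic import BaseModel

# Hypothetical config model mirroring the CLI flags; illustrative only,
# not the actual langtest pydantic models.
class PipelineConfig(BaseModel):
    model_name: str
    train_data: str
    eval_data: str
    training_args: dict = {}
    feature_col: str = "tokens"
    target_col: str = "ner_tags"

# --training-args arrives as a JSON string on the command line,
# so it would presumably be deserialized before validation.
cfg = PipelineConfig(
    model_name="bert-base-uncased",
    train_data="train.csv",
    eval_data="test.csv",
    training_args=json.loads('{"per_device_train_batch_size": 4}'),
)
print(cfg.training_args["per_device_train_batch_size"])  # 4
```

Typing the inputs this way catches missing or malformed flags (e.g. a non-JSON `--training-args` value) before any training starts.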