Multilayer ner #1289
Merged

Conversation
AngledLuffa force-pushed the multilayer_ner branch 10 times, most recently from 58e8761 to f7c756b on October 2, 2023 at 16:02
AngledLuffa force-pushed the multilayer_ner branch 11 times, most recently from f4b6590 to cc7f10d on October 3, 2023 at 03:54
AngledLuffa force-pushed the multilayer_ner branch from cc7f10d to d4e9f13 on October 3, 2023 at 04:16
This will allow representing multiple layers of tags in the same model's vocab. Currently only one layer is supported, though. Existing models, including models created by users, may have a TagVocab, so the model loading function converts them. The model could potentially get multiple layers of tags if the data returns multiple layers. For now we handle only the top layer (using indexing instead of squeeze, so that it works by ignoring later layers). Ultimately we will need to iterate…
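For illustration, a minimal sketch of the idea, with made-up names standing in for stanza's actual classes: the loading path wraps an old single-layer vocab, and the data path takes the top layer by indexing rather than squeeze:

```python
import torch

# Hypothetical sketch; TagVocab, MultiLayerTagVocab, and upgrade_vocab are
# illustrative stand-ins, not the actual stanza classes.
class TagVocab:
    pass

class MultiLayerTagVocab:
    def __init__(self, layers):
        self.layers = list(layers)   # one tag vocab per layer of tags

def upgrade_vocab(vocab):
    """Wrap an old single-layer TagVocab so older saved models keep loading."""
    if isinstance(vocab, MultiLayerTagVocab):
        return vocab
    return MultiLayerTagVocab([vocab])

# The data may carry several layers of tags: (batch, seq_len, num_layers).
tags = torch.zeros(2, 5, 3, dtype=torch.long)
top_layer = tags[:, :, 0]   # indexing simply ignores the later layers;
                            # squeeze(-1) would only work while num_layers == 1
```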
…o update several utility methods to make it work
Includes testing of a two column version. If tuples are passed in, tuples are returned. If process_tags receives a single column of tags as non-tuples, it returns a single column of tags instead of returning tuples. The primary use case is in the scripts which score flair or spacy on the WorldWide dataset. Includes error checking for the conversion from string to tuple of string.
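A hedged sketch of that contract; the real process_tags in the scoring scripts may look different:

```python
def process_tags(tags):
    """Illustrative sketch of the behavior described above.

    tags: a list of either plain strings (one column) or tuples of
    strings (multiple columns).  Tuples come back as tuples; a single
    flat column comes back flat rather than wrapped in 1-tuples.
    """
    if all(isinstance(t, str) for t in tags):
        return list(tags)                      # single column: keep it flat
    converted = []
    for t in tags:
        if isinstance(t, str):
            t = (t,)                           # string -> tuple of string
        if not all(isinstance(x, str) for x in t):
            raise ValueError("Expected str or tuple of str, got %r" % (t,))
        converted.append(tuple(t))
    return converted
```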
…n the NER data.py. Later we will have the model use EMPTY to signify that this particular tag should be masked out, rather than having it learn to predict EMPTY.
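A minimal sketch of that convention, assuming blank layers are simply replaced with an EMPTY placeholder (the constant name matches the commit; everything else is illustrative):

```python
EMPTY = "EMPTY"  # placeholder constant; the real one lives in stanza's NER data.py

def fill_blank_tags(multi_ner):
    """Replace blank layers of a word's multi_ner tags with EMPTY.

    e.g. ("B-ORG", "") -> ("B-ORG", "EMPTY"); later the model masks
    EMPTY positions out instead of learning to predict them.
    """
    return tuple(tag if tag else EMPTY for tag in multi_ner)
```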
multiple tags, as the scorer doesn't support multiples and the model itself is only one layer. However, this is an important intermediate step. Temporarily (?) use the output for just the first tag
…hose datasets don't align with the new training data
…ocab and multiple layers of tags from the dataset
single layer. This is the big money change: the model can now train with two output heads after adding these new layers. Old models would be incompatible with this format, but the loading code updates the tensors to the new format. Iterating over the lists in predict, unmapping all the tags, and then discarding tags after the first column allows it to successfully do something with a multi-entry data file. Use EMPTY tags to mask out words where we don't want the NER model to learn anything about the tags. Includes a basic test that training with two types of tags works, and a check that two tag_clfs are both changing when backpropping. Verify that the masking of empty tags means those tags aren't being trained; we do this with a unittest that turns off all of the tags in one tagset and then finetunes the model on that tagset.
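A rough sketch of the masking idea, with plain cross-entropy standing in for the CRF loss and EMPTY assumed to map to tag id 0; this is not the actual stanza training code:

```python
import torch
import torch.nn as nn

EMPTY_ID = 0   # assumption: EMPTY maps to id 0 in each tag vocab

def multilayer_loss(logits_per_head, gold_tags):
    """Sketch: sum a masked loss over each output head.

    logits_per_head: list of (batch, seq_len, num_tags_i) tensors, one
    per tag_clf head.  gold_tags: (batch, seq_len, num_heads).  Words
    tagged EMPTY in a given layer contribute nothing to that head's
    loss, so the model never learns to predict EMPTY.
    """
    loss_fn = nn.CrossEntropyLoss(reduction="none")
    total = 0.0
    for head, logits in enumerate(logits_per_head):
        gold = gold_tags[:, :, head]
        loss = loss_fn(logits.flatten(0, 1), gold.flatten())
        mask = (gold.flatten() != EMPTY_ID).float()
        total = total + (loss * mask).sum() / mask.sum().clamp(min=1)
    return total
```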
Add a test that the --connect_output_layers feature doesn't crash and actually connects the output layers
…agset to an NER model. Also, add ner_predict_tagset as an option to the Pipeline. This will allow the Pipeline to choose a different tagset for a multi-headed NER model.
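The shape of the resulting call might look like this; whether the option takes an index or a tagset name isn't shown here, so treat the value as an assumption:

```python
import stanza

# ner_predict_tagset selects which head of a multi-headed NER model is
# used for predictions; the value here (a head index) is an assumption.
nlp = stanza.Pipeline("en", processors="tokenize,ner", ner_predict_tagset=1)
```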
…ly OOV - EMPTY in particular is treated by the CompositeVocab as 'leave this blank'. The better fix would be to remove those states from the output layer entirely
Add the ability to label text with multiple types of NER tags using one classifier. The idea is that a model can be trained to do both at the same time, and the information from both datasets should work together to make the overall model better. The learning from the second dataset can help the model generalize, even if it isn't the same tagset as the first dataset.
This will let us do something like cross-train the same model on different datasets, such as OntoNotes and CoNLL at the same time, or OntoNotes and the 8-class WorldWide dataset.
In the training data for a mixed dataset, each word now has an entry for "multi_ner" which can support more than one NER tag. Tags which aren't present for a sentence can be blank. In the case of the OntoNotes & WorldWide mixed dataset, for example, text from the WorldWide dataset has its 8-class tag and a blank tag, and text from the OntoNotes dataset has the original 18-class tag and a downscaled version of the 8-class tag.
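For illustration, one token from each source might look like this (the field names beyond multi_ner, the tag values, and the layer ordering are all assumptions):

```python
# One token from each dataset in a mixed OntoNotes + WorldWide corpus.
# Assumed column order: (18-class OntoNotes layer, 8-class layer).
ontonotes_token = {"text": "Nairobi", "multi_ner": ("S-GPE", "S-LOC")}  # original tag + downscaled tag
worldwide_token = {"text": "Nairobi", "multi_ner": ("", "S-LOC")}       # blank 18-class layer + 8-class tag
```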
There are two options for implementing this: one in which there is the original LSTM encoder, followed by a unique Linear for each tag class and a corresponding CRFLoss, and one in which the output of one of the Linears goes back into the input of the next output layer.
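A sketch of both wirings under stated assumptions: plain Linear heads stand in for the Linear + CRFLoss pairs, and connect_output_layers mirrors the PR's --connect_output_layers flag:

```python
import torch
import torch.nn as nn

class MultiHeadNER(nn.Module):
    """Illustrative sketch of the two output-head wirings described above."""
    def __init__(self, hidden_dim, tagset_sizes, connect_output_layers=False):
        super().__init__()
        self.connect = connect_output_layers
        self.tag_clfs = nn.ModuleList()
        in_dim = hidden_dim
        for num_tags in tagset_sizes:
            self.tag_clfs.append(nn.Linear(in_dim, num_tags))
            if self.connect:
                # option 2: the next head also sees the previous head's logits
                in_dim = hidden_dim + num_tags

    def forward(self, lstm_out):
        logits = []
        inputs = lstm_out
        for clf in self.tag_clfs:
            out = clf(inputs)
            logits.append(out)
            if self.connect:
                inputs = torch.cat([lstm_out, out], dim=-1)
        return logits   # one (batch, seq_len, num_tags_i) tensor per head
```

With connect_output_layers=False the heads are fully independent given the shared LSTM encoding; with True, each later head also conditions on the previous head's output.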
Old models keep working: the original tensors are converted to the new format when a model is loaded, and old datasets with one NER tagset are converted when loaded at training time. Therefore, there is nothing to do for existing models or datasets.
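A hedged sketch of what that loading-time conversion could look like; the checkpoint key names here are made up:

```python
def upgrade_state_dict(state_dict):
    """Illustrative: move old single-head weights to a per-head layout.

    Key names are hypothetical; the real stanza checkpoint keys differ.
    """
    upgraded = dict(state_dict)
    for suffix in ("weight", "bias"):
        old_key = "tag_clf.%s" % suffix
        if old_key in upgraded:
            upgraded["tag_clfs.0.%s" % suffix] = upgraded.pop(old_key)
    return upgraded
```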
Results of running the OntoNotes model, with charlm but not transformer, on the OntoNotes and WorldWide test sets:
Here, "simplify" means the 18 class OntoNotes model is converted to 8 classes, then that data is combined with the WorldWide data as the training data for the second output layer