Update 2024-05-15-legner_lener_base_pt.md (#1204)
dcecchini committed Jul 12, 2024
1 parent 0e6becc commit 1f6da49
Showing 2 changed files with 4 additions and 4 deletions.
docs/_posts/gadde5300/2024-05-15-legner_lener_base_pt.md (2 additions, 2 deletions)
@@ -64,7 +64,7 @@ tokenizer = nlp.Tokenizer()\
.setInputCols("sentence")\
.setOutputCol("token")

-tokenClassifier = legal.BertForTokenClassification.load("legner_lener_base","pt", "legal/models")\
+tokenClassifier = legal.BertForTokenClassification.pretrained("legner_lener_base","pt", "legal/models")\
.setInputCols("token", "sentence")\
.setOutputCol("label")\
.setCaseSensitive(True)
@@ -226,4 +226,4 @@ result = pipeline.fit(example).transform(example)

## References

-Original texts available in https://paperswithcode.com/sota?task=Token+Classification&dataset=lener_br and in-house data augmentation with weak labelling
+Original texts available in https://paperswithcode.com/sota?task=Token+Classification&dataset=lener_br and in-house data augmentation with weak labelling
docs/_posts/gadde5300/2024-05-15-legner_lener_large_pt.md (2 additions, 2 deletions)
@@ -64,7 +64,7 @@ tokenizer = nlp.Tokenizer()\
.setInputCols("sentence")\
.setOutputCol("token")

-tokenClassifier = legal.BertForTokenClassification.load("legner_lener_large","pt", "legal/models")\
+tokenClassifier = legal.BertForTokenClassification.pretrained("legner_lener_large","pt", "legal/models")\
.setInputCols("token", "sentence")\
.setOutputCol("label")\
.setCaseSensitive(True)
@@ -226,4 +226,4 @@ result = pipeline.fit(example).transform(example)

## References

-Original texts available in https://paperswithcode.com/sota?task=Token+Classification&dataset=lener_br and in-house data augmentation with weak labelling
+Original texts available in https://paperswithcode.com/sota?task=Token+Classification&dataset=lener_br and in-house data augmentation with weak labelling
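The fix is the same in both files: the docs called `load()`, the Spark ML persistence reader that takes a single saved-model path, with the three arguments that belong to `pretrained(name, lang, remote_loc)`, the method that downloads a model from the Models Hub. The following toy stand-in (a plain-Python mock, not the real johnsnowlabs library) sketches the two signatures to show why the old call fails and the corrected one succeeds:

```python
# Mock of the two class methods involved in this fix. The real
# BertForTokenClassification lives in the licensed johnsnowlabs legal
# package; only the signatures here mirror the documented pattern.
class BertForTokenClassification:
    @classmethod
    def load(cls, path):
        # Spark ML-style reader: exactly one argument, a saved-model path.
        return cls()

    @classmethod
    def pretrained(cls, name, lang="en", remote_loc=None):
        # Models Hub download: model name, language, optional remote folder.
        return cls()

# The pre-fix call passed three arguments to load(), which rejects them:
try:
    BertForTokenClassification.load("legner_lener_base", "pt", "legal/models")
except TypeError as e:
    print("old call fails:", e)

# The corrected call matches pretrained()'s signature:
model = BertForTokenClassification.pretrained("legner_lener_base", "pt", "legal/models")
print("pretrained call succeeds:", isinstance(model, BertForTokenClassification))
```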
