# Adding Google SentencePiece as a Tokenizer (#1106)
Summary: This diff adds SentencePiece as a pip requirement and a tokenizer shell for PyText.

## Motivation and Context

We need SentencePiece to support modern cross-lingual models.

## How Has This Been Tested

A unit test has been added.

## Types of changes

- [ ] Docs change / refactoring / dependency upgrade
- [ ] Bug fix (non-breaking change which fixes an issue)
- [x] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)

## Checklist

- [x] My code follows the code style of this project.
- [x] My change requires a change to the documentation.
- [x] I have updated the documentation accordingly.
- [x] I have read the **CONTRIBUTING** document.
- [x] I have completed my CLA (see **CONTRIBUTING**).
- [x] I have added tests to cover my changes.
- [x] All new and existing tests passed.

Pull Request resolved: #1106

Test Plan: Imported from GitHub, without a `Test Plan:` line. Tests are a non-issue because the TARGET file was missing a dependency, which cannot be filled in OSS. Look at the diff stacked on top of this one for tests.

Reviewed By: hudeven

Differential Revision: D18309626

Pulled By: snisarg

fbshipit-source-id: a0d16417023e237eb29a9355c22e203e380efbb5
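For context, the sketch below shows roughly what a SentencePiece-backed tokenizer could look like. It is an illustration only, not the code from this diff: it uses the public `sentencepiece` Python API (`SentencePieceProcessor`, `Load`, `EncodeAsPieces`), while the `Token` type and class shape are simplified stand-ins for PyText's actual component machinery.

```python
# Illustrative sketch only -- not the code from this commit. Assumes the
# public sentencepiece Python API; Token is a simplified stand-in for
# PyText's token type.
from typing import List, NamedTuple

import sentencepiece as spm


class Token(NamedTuple):
    value: str
    start: int  # character offsets; SentencePiece pieces do not carry
    end: int    # offsets, so -1 placeholders are used below


class SentencePieceTokenizerSketch:
    """Wraps a trained SentencePiece model as a tokenizer."""

    def __init__(self, sp_model_path: str):
        self.processor = spm.SentencePieceProcessor()
        self.processor.Load(sp_model_path)

    def tokenize(self, input_str: str) -> List[Token]:
        # EncodeAsPieces yields subword pieces such as "▁T", "est", "ing"
        pieces = self.processor.EncodeAsPieces(input_str)
        return [Token(piece, -1, -1) for piece in pieces]
```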
1 parent 5882620 · commit 322fc47
Showing 6 changed files with 77 additions and 0 deletions.
```diff
@@ -8,6 +8,7 @@ numpy
 onnx
 pytorch-pretrained-bert
 requests
+sentencepiece
 torchtext
 tensorboard==1.14
 pandas
```
```diff
@@ -12,3 +12,4 @@ scipy
 torchtext
 tensorboard==1.14
 torch
+sentencepiece
```
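The two diffs above add `sentencepiece` to what appear to be the project's pip requirements files. A quick sanity check that the dependency resolved after installing them (a generic check, not part of this commit):

```python
# Verify the sentencepiece package is importable after
# `pip install -r requirements.txt`.
import sentencepiece as spm

print(spm.__version__)  # exact version depends on what pip resolved
```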
```diff
@@ -0,0 +1,2 @@
+#!/usr/bin/env python3
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
```
Binary file not shown.
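The binary is presumably the small SentencePiece model checked in as a test fixture (`tests/models/sentencepiece.model` in the test below). A fixture like it could be produced with the standard SentencePiece training API; the corpus path and vocabulary size here are placeholder values, not taken from this commit:

```python
# Hypothetical recipe for producing a small test model; corpus.txt and
# vocab_size=100 are made-up choices, not values from this commit.
import sentencepiece as spm

spm.SentencePieceTrainer.Train(
    "--input=corpus.txt --model_prefix=sentencepiece --vocab_size=100"
)
# Writes sentencepiece.model and sentencepiece.vocab to the working directory.
```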
```diff
@@ -0,0 +1,32 @@
+#!/usr/bin/env python3
+# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
+
+import unittest
+
+from pytext.data.tensorizers import SentencePieceTokenizer
+
+
+class SentencePieceTokenizerTest(unittest.TestCase):
+    def test_tokenize(self):
+        sentence = "Testing out sentencepiece"
+        expected = [
+            "▁T",
+            "est",
+            "ing",
+            "▁out",
+            "▁sen",
+            "t",
+            "ence",
+            "p",
+            "i",
+            "e",
+            "ce",
+        ]
+        sp_tokenizer = SentencePieceTokenizer.from_config(
+            SentencePieceTokenizer.Config(
+                sp_model_path="tests/models/sentencepiece.model"
+            )
+        )
+        tokens = sp_tokenizer.tokenize(sentence)
+        tokens = [token.value for token in tokens]
+        self.assertEqual(tokens, expected)
```
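Outside the test harness, the same tokenizer can be exercised directly. This usage mirrors the unit test above; the model path is the fixture the test loads:

```python
# Usage mirroring the unit test above (model path taken from the test).
from pytext.data.tensorizers import SentencePieceTokenizer

tokenizer = SentencePieceTokenizer.from_config(
    SentencePieceTokenizer.Config(sp_model_path="tests/models/sentencepiece.model")
)
print([token.value for token in tokenizer.tokenize("Testing out sentencepiece")])
# Expected: ['▁T', 'est', 'ing', '▁out', '▁sen', 't', 'ence', 'p', 'i', 'e', 'ce']
```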