Small docfile fixes (#6328)
sgugger committed Aug 10, 2020
1 parent 1429b92 commit 6028ed9
Showing 6 changed files with 90 additions and 136 deletions.
6 changes: 3 additions & 3 deletions docs/source/benchmarks.rst
@@ -40,12 +40,12 @@ There are many more parameters that can be configured via the benchmark argument
``src/transformers/benchmark/benchmark_args_utils.py``, ``src/transformers/benchmark/benchmark_args.py`` (for PyTorch) and ``src/transformers/benchmark/benchmark_args_tf.py`` (for TensorFlow).
Alternatively, running the following shell commands from root will print out a descriptive list of all configurable parameters for PyTorch and TensorFlow, respectively.

.. code-block::
.. code-block:: bash
>>> ## PYTORCH CODE
## PYTORCH CODE
python examples/benchmarking/run_benchmark.py --help
>>> ## TENSORFLOW CODE
## TENSORFLOW CODE
python examples/benchmarking/run_benchmark_tf.py --help
159 changes: 62 additions & 97 deletions docs/source/preprocessing.rst
@@ -20,7 +20,7 @@ work properly.
To automatically download the vocab used during pretraining or fine-tuning a given model, you can use the
:func:`~transformers.AutoTokenizer.from_pretrained` method:

::
.. code-block::
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
@@ -31,33 +31,24 @@ Base use
A :class:`~transformers.PreTrainedTokenizer` has many methods, but the only one you need to remember for preprocessing
is its ``__call__``: you just need to feed your sentence to your tokenizer object.

::

encoded_input = tokenizer("Hello, I'm a single sentence!")
print(encoded_input)

This will return a dictionary string to list of ints like this one:

::
.. code-block::
>>> encoded_input = tokenizer("Hello, I'm a single sentence!")
>>> print(encoded_input)
{'input_ids': [101, 138, 18696, 155, 1942, 3190, 1144, 1572, 13745, 1104, 159, 9664, 2107, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
This returns a dictionary mapping strings to lists of ints.
The `input_ids <glossary.html#input-ids>`__ are the indices corresponding to each token in our sentence. We will see
below what the `attention_mask <glossary.html#attention-mask>`__ is used for and in
:ref:`the next section <sentence-pairs>` the goal of `token_type_ids <glossary.html#token-type-ids>`__.
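
As a quick check, you can map those ids back to token strings with :obj:`convert_ids_to_tokens` (a minimal sketch, reusing the ``tokenizer`` and ``encoded_input`` from above):

.. code-block::

    >>> # each id in `input_ids` corresponds to one (sub)token string
    >>> print(tokenizer.convert_ids_to_tokens(encoded_input["input_ids"]))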

The tokenizer can decode a list of token ids into a proper sentence:

::

tokenizer.decode(encoded_input["input_ids"])

which should return

::
.. code-block::
>>> tokenizer.decode(encoded_input["input_ids"])
"[CLS] Hello, I'm a single sentence! [SEP]"
As you can see, the tokenizer automatically added some special tokens that the model expects. Not all models need special
@@ -69,18 +60,13 @@ those special tokens yourself) by passing ``add_special_tokens=False``.
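
For instance, a minimal sketch (reusing the ``tokenizer`` from above) of turning that behaviour off; the decoded string should then come back without the ``[CLS]`` and ``[SEP]`` markers:

.. code-block::

    >>> # with add_special_tokens=False, no special token ids are added
    >>> encoded_input = tokenizer("Hello, I'm a single sentence!", add_special_tokens=False)
    >>> tokenizer.decode(encoded_input["input_ids"])
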
If you have several sentences you want to process, you can do this efficiently by sending them as a list to the
tokenizer:

::

batch_sentences = ["Hello I'm a single sentence",
"And another sentence",
"And the very very last one"]
encoded_inputs = tokenizer(batch_sentences)
print(encoded_inputs)

We get back a dictionary once again, this time with values being list of list of ints:

::
.. code-block::
>>> batch_sentences = ["Hello I'm a single sentence",
... "And another sentence",
... "And the very very last one"]
>>> encoded_inputs = tokenizer(batch_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102],
[101, 1262, 1330, 5650, 102],
[101, 1262, 1103, 1304, 1304, 1314, 1141, 102]],
@@ -91,6 +77,8 @@ We get back a dictionary once again, this time with values being list of list of
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1]]}
We get back a dictionary once again, this time with values being lists of lists of ints.

If the purpose of sending several sentences at a time to the tokenizer is to build a batch to feed the model, you will
probably want:

@@ -100,19 +88,11 @@ probably want:

You can do all of this by using the following options when feeding your list of sentences to the tokenizer:

::

## PYTORCH CODE
batch = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
print(batch)
## TENSORFLOW CODE
batch = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
print(batch)

which should now return a dictionary string to tensor like this:

::
.. code-block::
>>> ## PYTORCH CODE
>>> batch = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="pt")
>>> print(batch)
{'input_ids': tensor([[ 101, 8667, 146, 112, 182, 170, 1423, 5650, 102],
[ 101, 1262, 1330, 5650, 102, 0, 0, 0, 0],
[ 101, 1262, 1103, 1304, 1304, 1314, 1141, 102, 0]]),
@@ -122,9 +102,22 @@ which should now return a dictionary string to tensor like this:
'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 0]])}
>>> ## TENSORFLOW CODE
>>> batch = tokenizer(batch_sentences, padding=True, truncation=True, return_tensors="tf")
>>> print(batch)
{'input_ids': tf.Tensor([[ 101, 8667, 146, 112, 182, 170, 1423, 5650, 102],
[ 101, 1262, 1330, 5650, 102, 0, 0, 0, 0],
[ 101, 1262, 1103, 1304, 1304, 1314, 1141, 102, 0]]),
'token_type_ids': tf.Tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0, 0]]),
'attention_mask': tf.Tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 0, 0, 0, 0],
[1, 1, 1, 1, 1, 1, 1, 1, 0]])}
We can now see what the `attention_mask <glossary.html#attention-mask>`__ is all about: it points out which tokens the
model should pay attention to and which ones it should not (because they represent padding in this case).
It returns a dictionary mapping strings to tensors. We can now see what the `attention_mask <glossary.html#attention-mask>`__ is
all about: it points out which tokens the model should pay attention to and which ones it should not (because they
represent padding in this case).


Note that if your model does not have a maximum length associated with it, the command above will throw a warning. You
@@ -137,26 +130,16 @@ Preprocessing pairs of sentences

Sometimes you need to feed pairs of sentences to your model. For instance, if you want to classify if two sentences in a
pair are similar, or for question-answering models, which take a context and a question. For BERT models, the input is
then represented like this:

::

[CLS] Sequence A [SEP] Sequence B [SEP]
then represented like this: :obj:`[CLS] Sequence A [SEP] Sequence B [SEP]`

You can encode a pair of sentences in the format expected by your model by supplying the two sentences as two arguments

(not a list since a list of two sentences will be interpreted as a batch of two single sentences, as we saw before).


::

encoded_input = tokenizer("How old are you?", "I'm 6 years old")
print(encoded_input)

This will once again return a dict string to list of ints:

::
.. code-block::
>>> encoded_input = tokenizer("How old are you?", "I'm 6 years old")
>>> print(encoded_input)
{'input_ids': [101, 1731, 1385, 1132, 1128, 136, 102, 146, 112, 182, 127, 1201, 1385, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}
@@ -169,34 +152,24 @@ using ``return_input_ids`` or ``return_token_type_ids``.

If we decode the token ids we obtained, we will see that the special tokens have been properly added.

::

tokenizer.decode(encoded_input["input_ids"])

will return:

::
.. code-block::
>>> tokenizer.decode(encoded_input["input_ids"])
"[CLS] How old are you? [SEP] I'm 6 years old [SEP]"
If you have a list of pairs of sequences you want to process, you should feed them as two lists to your tokenizer: the
list of first sentences and the list of second sentences:

::

batch_sentences = ["Hello I'm a single sentence",
"And another sentence",
"And the very very last one"]
batch_of_second_sentences = ["I'm a sentence that goes with the first sentence",
"And I should be encoded with the second sentence",
"And I go with the very last one"]
encoded_inputs = tokenizer(batch_sentences, batch_of_second_sentences)
print(encoded_inputs)

will return a dict with the values being list of lists of ints:

::
.. code-block::
>>> batch_sentences = ["Hello I'm a single sentence",
... "And another sentence",
... "And the very very last one"]
>>> batch_of_second_sentences = ["I'm a sentence that goes with the first sentence",
... "And I should be encoded with the second sentence",
... "And I go with the very last one"]
>>> encoded_inputs = tokenizer(batch_sentences, batch_of_second_sentences)
>>> print(encoded_inputs)
{'input_ids': [[101, 8667, 146, 112, 182, 170, 1423, 5650, 102, 146, 112, 182, 170, 5650, 1115, 2947, 1114, 1103, 1148, 5650, 102],
[101, 1262, 1330, 5650, 102, 1262, 146, 1431, 1129, 12544, 1114, 1103, 1248, 5650, 102],
[101, 1262, 1103, 1304, 1304, 1314, 1141, 102, 1262, 146, 1301, 1114, 1103, 1304, 1314, 1141, 102]],
@@ -207,25 +180,22 @@ will return a dict with the values being list of lists of ints:
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
To double-check what is fed to the model, we can decode each list in `input_ids` one by one:

::
As we can see, it returns a dictionary with the values being lists of lists of ints.

for ids in encoded_inputs["input_ids"]:
print(tokenizer.decode(ids))

which will return:
To double-check what is fed to the model, we can decode each list in `input_ids` one by one:

::
.. code-block::
>>> for ids in encoded_inputs["input_ids"]:
...     print(tokenizer.decode(ids))
[CLS] Hello I'm a single sentence [SEP] I'm a sentence that goes with the first sentence [SEP]
[CLS] And another sentence [SEP] And I should be encoded with the second sentence [SEP]
[CLS] And the very very last one [SEP] And I go with the very last one [SEP]
Once again, you can automatically pad your inputs to the maximum sentence length in the batch, truncate to the maximum
length the model can accept and return tensors directly with the following:

::
.. code-block::
## PYTORCH CODE
batch = tokenizer(batch_sentences, batch_of_second_sentences, padding=True, truncation=True, return_tensors="pt")
@@ -316,17 +286,12 @@ predictions in `named entity recognition (NER) <https://en.wikipedia.org/wiki/Na
`part-of-speech tagging (POS tagging) <https://en.wikipedia.org/wiki/Part-of-speech_tagging>`__.

If you want to use pre-tokenized inputs, just set :obj:`is_pretokenized=True` when passing your inputs to the
tokenizer. For instance:

::

encoded_input = tokenizer(["Hello", "I'm", "a", "single", "sentence"], is_pretokenized=True)
print(encoded_input)

will return:
tokenizer. For instance, we have:

::
.. code-block::
>>> encoded_input = tokenizer(["Hello", "I'm", "a", "single", "sentence"], is_pretokenized=True)
>>> print(encoded_input)
{'input_ids': [101, 8667, 146, 112, 182, 170, 1423, 5650, 102],
'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0],
'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1]}
@@ -337,7 +302,7 @@ Note that the tokenizer still adds the ids of special tokens (if applicable) unl
This works exactly as before for batch of sentences or batch of pairs of sentences. You can encode a batch of sentences
like this:

::
.. code-block::
batch_sentences = [["Hello", "I'm", "a", "single", "sentence"],
["And", "another", "sentence"],
@@ -346,7 +311,7 @@ like this:
or a batch of sentence pairs like this:

::
.. code-block::
batch_of_second_sentences = [["I'm", "a", "sentence", "that", "goes", "with", "the", "first", "sentence"],
["And", "I", "should", "be", "encoded", "with", "the", "second", "sentence"],
@@ -355,7 +320,7 @@ or a batch of pair sentences like this:
And you can add padding, truncation as well as directly return tensors like before:

::
.. code-block::
## PYTORCH CODE
batch = tokenizer(batch_sentences,
12 changes: 6 additions & 6 deletions docs/source/quicktour.rst
@@ -128,7 +128,7 @@ Under the hood: pretrained models
Let's now see what happens beneath the hood when using those pipelines. As we saw, the model and tokenizer are created
using the :obj:`from_pretrained` method:

::
.. code-block::
>>> ## PYTORCH CODE
>>> from transformers import AutoTokenizer, AutoModelForSequenceClassification
@@ -146,7 +146,7 @@ Using the tokenizer

We mentioned the tokenizer is responsible for the preprocessing of your texts. First, it will split a given text into
words (or parts of words, punctuation symbols, etc.) usually called `tokens`. There are multiple rules that can govern
that process (you can learn more about them in the :doc:`tokenizer_summary <tokenizer_summary>`, which is why we need
that process (you can learn more about them in the :doc:`tokenizer summary <tokenizer_summary>`), which is why we need
to instantiate the tokenizer using the name of the model, to make sure we use the same rules as when the model was
pretrained.
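
To make this concrete, here is a minimal sketch (assuming the ``tokenizer`` loaded above; the example sentence is only illustrative) of looking at the tokens it produces:

.. code-block::

    >>> # split the text into the (sub)word tokens the model was pretrained on
    >>> tokens = tokenizer.tokenize("We are very happy to show you this library.")
    >>> print(tokens)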

@@ -295,7 +295,7 @@ precision, etc.). See the :doc:`training tutorial <training>` for more details.

Once your model is fine-tuned, you can save it with its tokenizer in the following way:

::
.. code-block::
tokenizer.save_pretrained(save_directory)
model.save_pretrained(save_directory)
@@ -305,22 +305,22 @@ directory name instead of the model name. One cool feature of 🤗 Transformers
PyTorch and TensorFlow: any model saved as before can be loaded back either in PyTorch or TensorFlow. If you are
loading a saved PyTorch model in a TensorFlow model, use :func:`~transformers.TFAutoModel.from_pretrained` like this:

::
.. code-block::
tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = TFAutoModel.from_pretrained(save_directory, from_pt=True)
and if you are loading a saved TensorFlow model in a PyTorch model, you should use the following code:

::
.. code-block::
tokenizer = AutoTokenizer.from_pretrained(save_directory)
model = AutoModel.from_pretrained(save_directory, from_tf=True)
Lastly, you can also ask the model to return all hidden states and all attention weights if you need them:


::
.. code-block::
>>> ## PYTORCH CODE
>>> pt_outputs = pt_model(**pt_batch, output_hidden_states=True, output_attentions=True)
10 changes: 2 additions & 8 deletions docs/source/task_summary.rst
@@ -477,7 +477,7 @@ This outputs a (hopefully) coherent next token following the original sequence,

.. code-block::
print(resulting_string)
>>> print(resulting_string)
Hugging Face is based in DUMBO, New York City, and has
In the next section, we show how this functionality is leveraged in :func:`~transformers.PreTrainedModel.generate` to generate multiple tokens up to a user-defined length.
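
As a quick preview, here is a minimal sketch of such a call (the input text and sampling arguments below are illustrative, not the exact setup used in the next section; it assumes a causal language model ``model`` and its ``tokenizer`` are already loaded):

.. code-block::

    >>> input_ids = tokenizer("Hugging Face is based in", return_tensors="pt").input_ids
    >>> # generate up to max_length tokens, sampling from the top-k candidates at each step
    >>> output_ids = model.generate(input_ids, max_length=20, do_sample=True, top_k=50)
    >>> print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
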
@@ -604,8 +604,7 @@ expected results:

.. code-block::
print(nlp(sequence))
>>> print(nlp(sequence))
[
{'word': 'Hu', 'score': 0.9995632767677307, 'entity': 'I-ORG'},
{'word': '##gging', 'score': 0.9915938973426819, 'entity': 'I-ORG'},
@@ -803,11 +802,6 @@ translation results nevertheless.
Because the translation pipeline depends on the ``PreTrainedModel.generate()`` method, we can override the default arguments
of ``PreTrainedModel.generate()`` directly in the pipeline as is shown for ``max_length`` above.
This outputs the following translation into German:

::

Hugging Face ist ein Technologieunternehmen mit Sitz in New York und Paris.
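
For reference, a minimal sketch of overriding a ``generate()`` argument (here ``max_length``) directly through the pipeline call; the task name and input sentence are illustrative:

.. code-block::

    >>> from transformers import pipeline
    >>> translator = pipeline("translation_en_to_de")
    >>> # any generate() argument, such as max_length, can be passed straight to the pipeline call
    >>> print(translator("Hugging Face is a technology company based in New York and Paris.", max_length=40))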

Here is an example doing translation using a model and a tokenizer. The process is the following:
