diff --git a/examples/nlp/ipynb/neural_machine_translation_with_transformer.ipynb b/examples/nlp/ipynb/neural_machine_translation_with_transformer.ipynb
index dc561b7169..30f696f4c7 100644
--- a/examples/nlp/ipynb/neural_machine_translation_with_transformer.ipynb
+++ b/examples/nlp/ipynb/neural_machine_translation_with_transformer.ipynb
@@ -257,7 +257,7 @@
 "As such, the training dataset will yield a tuple `(inputs, targets)`, where:\n",
 "\n",
 "- `inputs` is a dictionary with the keys `encoder_inputs` and `decoder_inputs`.\n",
-"`encoder_inputs` is the vectorized source sentence and `encoder_inputs` is the target sentence \"so far\",\n",
+"`encoder_inputs` is the vectorized source sentence and `decoder_inputs` is the target sentence \"so far\",\n",
 "that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence.\n",
 "- `target` is the target sentence offset by one step:\n",
 "it provides the next words in the target sentence -- what the model will try to predict."
@@ -660,4 +660,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 0
-}
\ No newline at end of file
+}
diff --git a/examples/nlp/md/neural_machine_translation_with_transformer.md b/examples/nlp/md/neural_machine_translation_with_transformer.md
index 3ddd67c402..5326f26db1 100644
--- a/examples/nlp/md/neural_machine_translation_with_transformer.md
+++ b/examples/nlp/md/neural_machine_translation_with_transformer.md
@@ -186,7 +186,7 @@ using the source sentence and the target words 0 to N.
 As such, the training dataset will yield a tuple `(inputs, targets)`, where:
 
 - `inputs` is a dictionary with the keys `encoder_inputs` and `decoder_inputs`.
-`encoder_inputs` is the vectorized source sentence and `encoder_inputs` is the target sentence "so far",
+`encoder_inputs` is the vectorized source sentence and `decoder_inputs` is the target sentence "so far",
 that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence.
 - `target` is the target sentence offset by one step:
 it provides the next words in the target sentence -- what the model will try to predict.
diff --git a/examples/nlp/neural_machine_translation_with_transformer.py b/examples/nlp/neural_machine_translation_with_transformer.py
index 05ff450c3d..744f6b5c3c 100644
--- a/examples/nlp/neural_machine_translation_with_transformer.py
+++ b/examples/nlp/neural_machine_translation_with_transformer.py
@@ -154,7 +154,7 @@ def custom_standardization(input_string):
 As such, the training dataset will yield a tuple `(inputs, targets)`, where:
 
 - `inputs` is a dictionary with the keys `encoder_inputs` and `decoder_inputs`.
-`encoder_inputs` is the vectorized source sentence and `encoder_inputs` is the target sentence "so far",
+`encoder_inputs` is the vectorized source sentence and `decoder_inputs` is the target sentence "so far",
 that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence.
 - `target` is the target sentence offset by one step:
 it provides the next words in the target sentence -- what the model will try to predict.
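The corrected docs describe a standard teacher-forcing layout: `encoder_inputs` is the full source sentence, `decoder_inputs` is the target shifted to exclude the last token, and `targets` is the target shifted to exclude the first. A minimal sketch of that structure, using made-up token-ID lists in place of the example's real vectorized tensors:

```python
# Hypothetical token IDs standing in for vectorized sentences
# (2 = [start], 3 = [end] here; the real example uses TextVectorization output).
source = [2, 7, 9, 3]
target = [2, 5, 8, 6, 3]

inputs = {
    "encoder_inputs": source,       # the full vectorized source sentence
    "decoder_inputs": target[:-1],  # the target "so far": words 0 to N
}
targets = target[1:]                # offset by one step: the words to predict
```

At each position i, the model sees `decoder_inputs[:i + 1]` (plus the source) and is trained to predict `targets[i]`, so the two sequences have equal length.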