From eecf10d2786a0abee1b6caaf4e3224e7f47072a8 Mon Sep 17 00:00:00 2001 From: synandi <98147397+synandi@users.noreply.github.com> Date: Thu, 13 Oct 2022 19:50:02 +0530 Subject: [PATCH 1/3] Fixed typos in mixed_precision.ipynb Fixed typos at multiple lines --- site/en/guide/mixed_precision.ipynb | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/site/en/guide/mixed_precision.ipynb b/site/en/guide/mixed_precision.ipynb index 05d29122211..447fa04dcce 100644 --- a/site/en/guide/mixed_precision.ipynb +++ b/site/en/guide/mixed_precision.ipynb @@ -411,7 +411,7 @@ "id": "0Sm8FJHegVRN" }, "source": [ - "This example cast the input data from int8 to float32. You don't cast to float16 since the division by 255 is on the CPU, which runs float16 operations slower than float32 operations. In this case, the performance difference in negligible, but in general you should run input processing math in float32 if it runs on the CPU. The first layer of the model will cast the inputs to float16, as each layer casts floating-point inputs to its compute dtype.\n", + "This example casts the input data from int8 to float32. You don't cast to float16 since the division by 255 is on the CPU, which runs float16 operations slower than float32 operations. In this case, the performance difference is negligible, but in general you should run input processing math in float32 if it runs on the CPU. The first layer of the model will cast the inputs to float16, as each layer casts floating-point inputs to its compute dtype.\n", "\n", "The initial weights of the model are retrieved. This will allow training from scratch again by loading the weights." ] @@ -465,7 +465,7 @@ " \n", "If you are running this guide in Colab, you can compare the performance of mixed precision with float32. To do so, change the policy from `mixed_float16` to `float32` in the \"Setting the dtype policy\" section, then rerun all the cells up to this point. 
On GPUs with compute capability 7.X, you should see the time per step significantly increase, indicating mixed precision sped up the model. Make sure to change the policy back to `mixed_float16` and rerun the cells before continuing with the guide.\n", "\n", - "On GPUs with compute capability of at least 8.0 (Ampere GPUs and above), you likely will see no performance improvement in the toy model in this guide when using mixed precision compared to float32. This is due to the use of [TensorFloat-32](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_tensor_float_32_execution), which automatically uses lower precision math in certain float32 ops such as `tf.linalg.matmul`. TensorFloat-32 gives some of the performance advantages of mixed precision when using float32. However, in real-world models, you will still typically see significantly performance improvements from mixed precision due to memory bandwidth savings and ops which TensorFloat-32 does not support.\n", + "On GPUs with compute capability of at least 8.0 (Ampere GPUs and above), you likely will see no performance improvement in the toy model in this guide when using mixed precision compared to float32. This is due to the use of [TensorFloat-32](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_tensor_float_32_execution), which automatically uses lower precision math in certain float32 ops such as `tf.linalg.matmul`. TensorFloat-32 gives some of the performance advantages of mixed precision when using float32. However, in real-world models, you will still typically see significant performance improvements from mixed precision due to memory bandwidth savings and ops which TensorFloat-32 does not support.\n", "\n", "If running mixed precision on a TPU, you will not see as much of a performance gain compared to running mixed precision on GPUs, especially pre-Ampere GPUs. 
This is because TPUs do certain ops in bfloat16 under the hood even with the default dtype policy of float32. This is similar to how Ampere GPUs use TensorFloat-32 by default. Compared to Ampere GPUs, TPUs typically see less performance gains with mixed precision on real-world models.\n", "\n", @@ -612,7 +612,7 @@ "id": "FVy5gnBqTE9z" }, "source": [ - "If you want, it is possible choose an explicit loss scale or otherwise customize the loss scaling behavior, but it is highly recommended to keep the default loss scaling behavior, as it has been found to work well on all known models. See the `tf.keras.mixed_precision.LossScaleOptimizer` documention if you want to customize the loss scaling behavior." + "If you want, it is possible to choose an explicit loss scale or otherwise customize the loss scaling behavior, but it is highly recommended to keep the default loss scaling behavior, as it has been found to work well on all known models. See the `tf.keras.mixed_precision.LossScaleOptimizer` documentation if you want to customize the loss scaling behavior." ] }, { From 0e0d6f261be201d9ed9558181883fdf3587edaec Mon Sep 17 00:00:00 2001 From: 8bitmp3 <19637339+8bitmp3@users.noreply.github.com> Date: Tue, 18 Oct 2022 21:17:28 +0100 Subject: [PATCH 2/3] Update mixed_precision.ipynb --- site/en/guide/mixed_precision.ipynb | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/site/en/guide/mixed_precision.ipynb b/site/en/guide/mixed_precision.ipynb index 447fa04dcce..c240131fe14 100644 --- a/site/en/guide/mixed_precision.ipynb +++ b/site/en/guide/mixed_precision.ipynb @@ -465,7 +465,7 @@ " \n", "If you are running this guide in Colab, you can compare the performance of mixed precision with float32. To do so, change the policy from `mixed_float16` to `float32` in the \"Setting the dtype policy\" section, then rerun all the cells up to this point.
On GPUs with compute capability 7.X, you should see the time per step significantly increase, indicating mixed precision sped up the model. Make sure to change the policy back to `mixed_float16` and rerun the cells before continuing with the guide.\n", "\n", - "On GPUs with compute capability of at least 8.0 (Ampere GPUs and above), you likely will see no performance improvement in the toy model in this guide when using mixed precision compared to float32. This is due to the use of [TensorFloat-32](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_tensor_float_32_execution), which automatically uses lower precision math in certain float32 ops such as `tf.linalg.matmul`. TensorFloat-32 gives some of the performance advantages of mixed precision when using float32. However, in real-world models, you will still typically see significant performance improvements from mixed precision due to memory bandwidth savings and ops which TensorFloat-32 does not support.\n", + "On GPUs with compute capability of at least 8.0 (Ampere GPUs and above), you likely will see no performance improvement in the toy model in this guide when using mixed precision compared to float32. This is due to the use of [TensorFloat-32](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_tensor_float_32_execution), which automatically uses lower precision math in certain float32 ops such as `tf.linalg.matmul`. TensorFloat-32 gives some of the performance advantages of mixed precision when using float32. However, in real-world models, you will still typically experience significant performance improvements from mixed precision due to memory bandwidth savings and ops which TensorFloat-32 does not support.\n", "\n", "If running mixed precision on a TPU, you will not see as much of a performance gain compared to running mixed precision on GPUs, especially pre-Ampere GPUs. 
This is because TPUs do certain ops in bfloat16 under the hood even with the default dtype policy of float32. This is similar to how Ampere GPUs use TensorFloat-32 by default. Compared to Ampere GPUs, TPUs typically see less performance gains with mixed precision on real-world models.\n", "\n", From 2e3d17efd99c125591404ab6896638df9cfbd73a Mon Sep 17 00:00:00 2001 From: tfdocsbot Date: Tue, 18 Oct 2022 20:17:59 +0000 Subject: [PATCH 3/3] nbfmt --- site/en/tutorials/text/image_captioning.ipynb | 16 +++------------- 1 file changed, 3 insertions(+), 13 deletions(-) diff --git a/site/en/tutorials/text/image_captioning.ipynb b/site/en/tutorials/text/image_captioning.ipynb index b37cf1e7646..5091634b271 100644 --- a/site/en/tutorials/text/image_captioning.ipynb +++ b/site/en/tutorials/text/image_captioning.ipynb @@ -486,9 +486,7 @@ "source": [ "### Image feature extractor\n", "\n", - "You will use an image model (pretrained on imagenet) to extract the features from each image. The model was trained as an image classifier, but setting `include_top=False` returns the model without the final classification layer, so you can use the last layer of feature-maps: \n", - "\n", - "\n" + "You will use an image model (pretrained on imagenet) to extract the features from each image. The model was trained as an image classifier, but setting `include_top=False` returns the model without the final classification layer, so you can use the last layer of feature-maps: \n" ] }, { @@ -1053,8 +1051,6 @@ "id": "qiRXWwIKNybB" }, "source": [ - "\n", - "\n", "The model will be implemented in three main parts: \n", "\n", "1. 
Input - The token embedding and positional encoding (`SeqEmbedding`).\n", @@ -1164,8 +1160,7 @@ " attn = self.mha(query=x, value=x,\n", " use_causal_mask=True)\n", " x = self.add([x, attn])\n", - " return self.layernorm(x)\n", - "\n" + " return self.layernorm(x)\n" ] }, { @@ -1305,8 +1300,6 @@ "id": "6WQD87efena5" }, "source": [ - "\n", - "\n", "But there are a few other features you can add to make this work a little better:\n", "\n", "1. **Handle bad tokens**: The model will be generating text. It should\n", @@ -1484,8 +1477,7 @@ "1. Flatten the extracted image features, so they can be input to the decoder layers.\n", "2. Look up the token embeddings.\n", "3. Run the stack of `DecoderLayer`s, on the image features and text embeddings.\n", - "4. Run the output layer to predict the next token at each position.\n", - "\n" + "4. Run the output layer to predict the next token at each position.\n" ] }, { @@ -2144,8 +2136,6 @@ "colab": { "collapsed_sections": [], "name": "image_captioning.ipynb", - "private_outputs": true, - "provenance": [], "toc_visible": true }, "kernelspec": {
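The loss-scaling paragraph touched up in PATCH 1/3 describes dynamic loss scaling: the loss is multiplied by a scale factor so float16 gradients do not underflow, the gradients are unscaled before being applied, and the scale adapts over time. The behavior can be sketched in plain Python. This is a toy model, not the real `tf.keras.mixed_precision.LossScaleOptimizer` API; the defaults (initial scale `2**15`, doubling after 2000 consecutive finite steps, halving on overflow) follow that class's documented behavior, but the class below and its names are purely illustrative:

```python
import math

class DynamicLossScale:
    """Toy model of dynamic loss scaling (illustrative, not the Keras API)."""

    def __init__(self, initial_scale=2.0**15, growth_interval=2000):
        self.scale = initial_scale          # multiply the loss by this before backprop
        self.growth_interval = growth_interval
        self._good_steps = 0                # consecutive steps with finite gradients

    def update(self, grads):
        """Adjust the scale after a step; return whether the step should be applied.

        On overflow (any non-finite gradient) halve the scale and skip the step;
        after `growth_interval` consecutive finite steps, double the scale.
        """
        if any(not math.isfinite(g) for g in grads):
            self.scale /= 2.0
            self._good_steps = 0
            return False                    # step skipped, weights unchanged
        self._good_steps += 1
        if self._good_steps >= self.growth_interval:
            self.scale *= 2.0
            self._good_steps = 0
        return True                         # apply grads (divided by self.scale)

# Small defaults so the growth/shrink behavior is visible in a few steps.
scaler = DynamicLossScale(initial_scale=4.0, growth_interval=2)
assert scaler.update([0.1, 0.2])           # finite step 1 of 2
assert scaler.update([0.3])                # finite step 2: scale doubles
print(scaler.scale)                        # 8.0
assert not scaler.update([float("inf")])   # overflow: scale halves, step skipped
print(scaler.scale)                        # 4.0
```

This mirrors why the guide recommends keeping the default behavior: the scale automatically seeks the largest value that does not overflow, with overflowed steps simply skipped rather than corrupting the weights.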