
Releases: tensorflow/tensorflow

TensorFlow 2.6.4

16 May 17:44
33ed2b1

Release 2.6.4

This release introduces several vulnerability fixes:

TensorFlow 2.8.1

16 May 17:44
0516d4d

Release 2.8.1

This release introduces several vulnerability fixes:

TensorFlow 2.7.2

16 May 17:44
dd7b8a3

Release 2.7.2

This release introduces several vulnerability fixes:

TensorFlow 2.9.0-rc2

04 May 21:00
84326b3
Pre-release

Release 2.9.0

Breaking Changes

  • Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
  • Build, Compilation and Packaging
    • TensorFlow is now compiled with _GLIBCXX_USE_CXX11_ABI=1. Downstream projects that encounter std::__cxx11 or [abi:cxx11] linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
    • TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
    • Discussion of these changes can be found on SIG Build's TensorFlow Community Forum thread.
  • The tf.keras.mixed_precision.experimental API has been removed. The non-experimental symbols under tf.keras.mixed_precision have been available since TensorFlow 2.4 and should be used instead (a migration sketch follows this section).
    • The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes:
      • Remove the word "experimental" from tf.keras.mixed_precision symbols. E.g., replace tf.keras.mixed_precision.experimental.global_policy with tf.keras.mixed_precision.global_policy.
      • Replace tf.keras.mixed_precision.experimental.set_policy with tf.keras.mixed_precision.set_global_policy. The experimental symbol set_policy was renamed to set_global_policy in the non-experimental API.
      • Replace LossScaleOptimizer(opt, "dynamic") with LossScaleOptimizer(opt). If you passed anything other than "dynamic" as the second argument, see the first case in the next list.
    • In the following rare cases, you need to make more changes when switching to the non-experimental API:
      • If you passed anything other than "dynamic" to the loss_scale argument (the second argument) of LossScaleOptimizer:
      • If you passed a value to the loss_scale argument (the second argument) of Policy:
        • The experimental version of Policy optionally took in a tf.compat.v1.mixed_precision.LossScale in the constructor, which defaulted to a dynamic loss scale for the "mixed_float16" policy and no loss scale for other policies. In Model.compile, if the model's policy had a loss scale, the optimizer would be wrapped with a LossScaleOptimizer. With the non-experimental Policy, there is no loss scale associated with the Policy, and Model.compile wraps the optimizer with a LossScaleOptimizer if and only if the policy is a "mixed_float16" policy. If you previously passed a LossScale to the experimental Policy, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a LossScaleOptimizer before passing it to Model.compile.
      • If you use the very rarely-used function tf.keras.mixed_precision.experimental.get_layer_policy:
        • Replace tf.keras.mixed_precision.experimental.get_layer_policy(layer) with layer.dtype_policy.
  • tf.mixed_precision.experimental.LossScale and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 with the now-removed tf.keras.mixed_precision.experimental API. They remain available under tf.compat.v1.mixed_precision.
  • The experimental_relax_shapes heuristic for tf.function has been deprecated and replaced with reduce_retracing, which encompasses broader heuristics to reduce the number of retraces (see below).
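
A minimal before/after sketch of the mixed-precision migration described above. The policy and optimizer calls are the public Keras symbols named in these notes; the Dense layer and SGD optimizer are just illustrations:

```python
import tensorflow as tf
from tensorflow.keras import mixed_precision

# Before (experimental API, removed in 2.9):
#   mixed_precision.experimental.set_policy("mixed_float16")
#   opt = mixed_precision.experimental.LossScaleOptimizer(opt, "dynamic")

# After (non-experimental API, available since TF 2.4):
mixed_precision.set_global_policy("mixed_float16")
opt = mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())

# get_layer_policy(layer) is replaced by the dtype_policy attribute:
layer = tf.keras.layers.Dense(8)
print(layer.dtype_policy)  # -> <Policy "mixed_float16">
```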

Major Features and Improvements

  • tf.keras:

    • Added tf.keras.applications.resnet_rs models. This includes the ResNetRS50, ResNetRS101, ResNetRS152, ResNetRS200, ResNetRS270, ResNetRS350 and ResNetRS420 model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies.
    • Added tf.keras.optimizers.experimental.Optimizer. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on tf.keras.optimizers.experimental.Optimizer. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols tf.keras.optimizers.Optimizer/Adam/etc will point to the new optimizers, and the previous generation of optimizers will be moved to tf.keras.optimizers.legacy.Optimizer/Adam/etc.
    • Added L2 unit normalization layer tf.keras.layers.UnitNormalization.
    • Added tf.keras.regularizers.OrthogonalRegularizer, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
    • Added tf.keras.layers.RandomBrightness layer for image preprocessing.
    • Added APIs for switching between interactive logging and absl logging. By default, Keras writes logs to stdout, which is not ideal in a non-interactive environment where stdout is unavailable and only logs can be inspected. Use tf.keras.utils.disable_interactive_logging() to route logs to absl logging, tf.keras.utils.enable_interactive_logging() to switch back to stdout, and tf.keras.utils.is_interactive_logging_enabled() to check whether interactive logging is enabled (see the first sketch after this list).
    • Changed default value for the verbose argument of Model.evaluate() and Model.predict() to "auto", which defaults to verbose=1 for most cases and defaults to verbose=2 when used with ParameterServerStrategy or with interactive logging disabled.
    • The jit_compile argument of Model.compile() now also applies to Model.evaluate() and Model.predict(). Setting jit_compile=True in compile() compiles the model's training, evaluation, and inference steps to XLA. Note that jit_compile=True may not work for all models (see the second sketch after this list).
    • Added DTensor-related Keras APIs under the tf.keras.dtensor namespace. These APIs are still classified as experimental; you are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
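
A minimal sketch of the logging switch described above, using the three utility functions named in these notes:

```python
import tensorflow as tf

# Route Keras output (e.g., fit() progress) to absl logging instead of
# stdout -- useful for non-interactive jobs where only logs are collected.
tf.keras.utils.disable_interactive_logging()
assert not tf.keras.utils.is_interactive_logging_enabled()

# Switch back to stdout for interactive sessions such as notebooks.
tf.keras.utils.enable_interactive_logging()
```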
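
And a sketch of jit_compile now covering evaluation and inference; the tiny model and random data are hypothetical:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="sgd", loss="mse", jit_compile=True)  # XLA on

x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)  # training step compiled with XLA
model.evaluate(x, y, verbose=0)       # evaluation step now also uses XLA
model.predict(x, verbose=0)           # as does inference
```
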
  • tf.lite:

    • Added TFLite builtin op support for the following TF ops:
      • tf.math.argmin/tf.math.argmax for input data type tf.bool on CPU.
      • tf.nn.gelu op for output data type tf.float32 and quantization on CPU.
    • Added nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
    • Added support for unsigned 16-bit integer tensor types in the cast op.
    • Experimental support for lowering list_ops.tensor_list_set_item with DynamicUpdateSlice.
    • Enabled a new MLIR-based dynamic range quantization backend by default.
      • The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
      • Set experimental_new_dynamic_range_quantizer in tf.lite.TFLiteConverter to False to disable this change (a converter sketch follows this list).
    • Native TF Lite variables are now enabled during conversion by default on all v2 TfLiteConverter entry points. experimental_enable_resource_variables on tf.lite.TFLiteConverter is now True by default and will be removed in the future.
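
A minimal converter sketch using the two flags named above; the SavedModel path is hypothetical:

```python
import tensorflow as tf

# Hypothetical path to an existing SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("/tmp/my_saved_model")

# Post-training dynamic range quantization (new MLIR backend by default).
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Opt out of the new MLIR-based dynamic range quantizer if needed.
converter.experimental_new_dynamic_range_quantizer = False

# Resource (native TF Lite) variables are now converted by default.
converter.experimental_enable_resource_variables = True

tflite_model = converter.convert()
```
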
  • tf.function:

    • Custom classes used as arguments for tf.function can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through tf.types.experimental.SupportsTracingProtocol.
    • TypeSpec classes (as associated with ExtensionTypes) also implement the Tracing Protocol, which can be overridden if necessary.
    • The newly introduced reduce_retracing option also uses the Tracing Protocol to proactively generate generalized traces, similar to experimental_relax_shapes (which has now been deprecated); a sketch follows this list.
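
A minimal sketch of the new reduce_retracing option; the toy function is illustrative:

```python
import tensorflow as tf

@tf.function(reduce_retracing=True)
def double(x):
    return x * 2

# Each new input shape would normally trigger a retrace. With
# reduce_retracing=True, TensorFlow generalizes the traced signature
# (e.g., to a tensor with an unknown leading dimension) instead.
double(tf.constant([1]))
double(tf.constant([1, 2]))
double(tf.constant([1, 2, 3]))  # served by a generalized trace
```
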
  • Unified eager and tf.function execution:

    • Eager mode can now execute each op as a tf.function, allowing for more consistent feature support in future releases.
    • It is available for immediate use.
      • See the TF_RUN_EAGER_OP_AS_FUNCTION environment variable in the eager context (a sketch follows this list).
      • Eager performance should be similar with this feature enabled.
        • A roughly 5 µs per-op overhead may be observed when running many small functions.
        • There is a known issue with GPU performance.
      • The behavior of tf.function itself is unaffected.
    • Note: This feature will be enabled by default in an upcoming version of TensorFlow.
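
A minimal opt-in sketch using the environment variable named above; setting it before TensorFlow is imported is an assumption about when the eager context reads it:

```python
import os

# Assumed: the flag must be set before the eager context is initialized,
# so export it before importing TensorFlow.
os.environ["TF_RUN_EAGER_OP_AS_FUNCTION"] = "1"

import tensorflow as tf

print(tf.add(1, 2))  # each eager op now executes as a tf.function
```
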
  • `tf....


TensorFlow 2.9.0-rc1

21 Apr 20:35
ca9b0df
Pre-release

Release 2.9.0

The release notes are identical to those of the 2.9.0-rc2 entry above.

TensorFlow 2.9.0-rc0

12 Apr 17:32
8727d03
Pre-release

Release 2.9.0

The release notes are identical to those of the 2.9.0-rc2 entry above.

TensorFlow 2.8.0

02 Feb 16:54
3f878cf

Release 2.8.0

Major Features and Improvements

  • tf.lite:

    • Added TFLite builtin op support for the following TF ops:
      • tf.raw_ops.Bucketize op on CPU.
      • tf.where op for data types tf.int32/tf.uint32/tf.int8/tf.uint8/tf.int64.
      • tf.random.normal op for output data type tf.float32 on CPU.
      • tf.random.uniform op for output data type tf.float32 on CPU.
      • tf.random.categorical op for output data type tf.int64 on CPU.
  • tensorflow.experimental.tensorrt:

    • conversion_params is now deprecated inside TrtGraphConverterV2 in favor of direct arguments: max_workspace_size_bytes, precision_mode, minimum_segment_size, maximum_cached_engines, use_calibration and allow_build_at_runtime.
    • Added a new parameter called save_gpu_specific_engines to the .save() function inside TrtGraphConverterV2. When False, the .save() function won't save any TRT engines that have been built. When True (default), the original behavior is preserved.
    • TrtGraphConverterV2 provides a new API called .summary() which outputs a summary of the inference graph converted by TF-TRT. Specifically, it shows each TRTEngineOp together with the shapes and dtypes of its inputs and outputs. A detailed version of the summary is available which additionally prints all the TensorFlow ops included in each TRTEngineOp (a conversion sketch follows this list).
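
A minimal TF-TRT sketch using the direct arguments, the new save_gpu_specific_engines parameter, and the .summary() API named above; the SavedModel paths are hypothetical:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="/tmp/my_saved_model",  # hypothetical path
    precision_mode="FP16",            # direct args replace conversion_params
    max_workspace_size_bytes=1 << 30,
    minimum_segment_size=3,
)
converter.convert()
converter.summary()  # each TRTEngineOp with input/output shapes and dtypes

# Skip persisting the built TRT engines (new parameter; default is True).
converter.save("/tmp/my_trt_model", save_gpu_specific_engines=False)
```
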
  • tf.tpu.experimental.embedding:

    • tf.tpu.experimental.embedding.FeatureConfig now takes an additional argument output_shape which can specify the shape of the output activation for the feature.
    • tf.tpu.experimental.embedding.TPUEmbedding now has the same behavior as tf.tpu.experimental.embedding.serving_embedding_lookup, which can take dense and sparse tensors of arbitrary rank. For ragged tensors, although the input tensor remains rank 2, the activations can now be rank 2 or above by specifying the output shape in the feature config or via the build method.
  • Add tf.config.experimental.enable_op_determinism, which makes TensorFlow ops run deterministically at the cost of performance. This replaces the TF_DETERMINISTIC_OPS environment variable, which is now deprecated. The "Bug Fixes and Other Changes" section lists more determinism-related changes (a usage sketch follows).
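
A minimal sketch of enabling op determinism; the seed value is arbitrary:

```python
import tensorflow as tf

# With a fixed seed and op determinism enabled, repeated runs on the same
# hardware produce identical results, at some performance cost.
tf.random.set_seed(42)
tf.config.experimental.enable_op_determinism()

print(tf.random.uniform([3]))  # identical across runs with this seed
```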

  • (Since TF 2.7) Add PluggableDevice support to TensorFlow Profiler.

Bug Fixes and Other Changes

  • tf.data:

    • The parallel_batch optimization is now enabled by default (unless disabled by the user); it parallelizes the copying of batch elements (an opt-out sketch follows this list).
    • Added the ability for TensorSliceDataset to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
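
A minimal opt-out sketch, assuming the tf.data optimization-options API exposes parallel_batch; the dataset itself is a toy:

```python
import tensorflow as tf

# parallel_batch is now on by default; disable it explicitly if needed.
options = tf.data.Options()
options.experimental_optimization.parallel_batch = False

ds = tf.data.Dataset.range(1000).batch(32).with_options(options)
```
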
  • tf.lite:

    • Added GPU delegate serialization support to the Java API. This reduces initialization time by up to 90% when OpenCL is available.
    • Deprecated Interpreter::SetNumThreads, in favor of InterpreterBuilder::SetNumThreads.
  • tf.keras:

    • Adds tf.compat.v1.keras.utils.get_or_create_layer to aid migration to TF2 by enabling tracking of nested keras models created in TF1-style, when used with the tf.compat.v1.keras.utils.track_tf1_style_variables decorator.
    • Added a tf.keras.layers.experimental.preprocessing.HashedCrossing layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
    • Removed keras.layers.experimental.preprocessing.CategoryCrossing. Users should migrate to the HashedCrossing layer or use tf.sparse.cross/tf.ragged.cross directly.
    • Added additional standardize and split modes to TextVectorization (see the first sketch after this list):
      • standardize="lower" will lowercase inputs.
      • standardize="string_punctuation" will remove all punctuation.
      • split="character" will split on every Unicode character.
    • Added an output_mode argument to the Discretization and Hashing layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support output_mode.
    • All preprocessing layer output will follow the compute dtype of a tf.keras.mixed_precision.Policy, unless constructed with output_mode="int" in which case output will be tf.int64. The output type of any preprocessing layer can be controlled individually by passing a dtype argument to the layer.
    • Keras initializers and all Keras RNG code now use tf.random.Generator.
    • Added 3 new APIs to enable/disable/check the usage of tf.random.Generator in the Keras backend, which will be the new backend for all RNG in Keras. We plan to switch on the new code path by default in TF 2.8, and the behavior change will likely cause some breakage on the user side (e.g., if a test checks against golden numbers). These 3 APIs allow users to disable the new behavior and switch back to the legacy one if they prefer. In the future (e.g., TF 2.10), we expect to remove the legacy code path (stateful random ops) entirely, at which point these 3 APIs will be removed as well.
    • tf.keras.callbacks.experimental.BackupAndRestore is now available as tf.keras.callbacks.BackupAndRestore. The experimental endpoint is deprecated and will be removed in a future release.
    • tf.keras.experimental.SidecarEvaluator is now available as tf.keras.utils.SidecarEvaluator. The experimental endpoint is deprecated and will be removed in a future release.
    • Metrics update and collection logic in the default Model.train_step() is now customizable by overriding Model.compute_metrics().
    • Loss computation logic in the default Model.train_step() is now customizable by overriding Model.compute_loss() (see the second sketch after this list).
    • jit_compile added to Model.compile() on an opt-in basis to compile the model's training step with XLA. Note that jit_compile=True may not necessarily work for all models.
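
A minimal sketch of the new TextVectorization modes named above; the toy corpus is hypothetical:

```python
import tensorflow as tf

layer = tf.keras.layers.TextVectorization(
    standardize="lower",  # new mode: lowercase inputs
    split="character",    # new mode: split on every Unicode character
)
layer.adapt(["Hello", "World"])
print(layer(["Hi"]))  # one integer id per character
```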
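
And a sketch of overriding Model.compute_loss(); the signature follows the TF 2.8 API, while the L2 penalty is an illustrative choice, not something prescribed by these notes:

```python
import tensorflow as tf

class PenalizedModel(tf.keras.Model):
    def compute_loss(self, x=None, y=None, y_pred=None, sample_weight=None):
        # Start from the compiled loss, then add a small L2 penalty on all
        # trainable weights.
        loss = super().compute_loss(x, y, y_pred, sample_weight)
        loss += 1e-4 * tf.add_n(
            [tf.nn.l2_loss(w) for w in self.trainable_weights])
        return loss
```
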
  • Deterministic Op Functionality:

    • Fixed a regression, introduced in v2.5, in the deterministic selection of cuDNN convolution algorithms. Note that nondeterministic out-of-memory events while selecting algorithms could still lead to nondeterminism, although this is very unlikely. This additional, unlikely source will be eliminated in a later version.
    • Add deterministic GPU implementations of:
      • tf.function(jit_compile=True)'s that use Scatter.
      • (since v2.7) Stateful ops used in tf.data.Dataset
      • (since v2.7) tf.convert_to_tensor when fed with (sparse) tf.IndexedSlices (because it uses tf.math.unsorted_segment_sum)
      • (since v2.7) tf.gather backprop (because tf.convert_to_tensor reduces tf.gather's (sparse) tf.IndexedSlices gradients into its dense params input)
      • (since v2.7) tf.math.segment_mean
      • (since v2.7) tf.math.segment_prod
      • (since v2.7) tf.math.segment_sum
      • (since v2.7) tf.math.unsorted_segment_mean
      • (since v2.7) tf.math.unsorted_segment_prod
      • (since v2.7) tf.math.unsorted_segment_sum
      • (since v2.7) tf.math.unsorted_segment_sqrt_n
      • (since v2.7) tf.nn.ctc_loss (resolved, possibly in prior release, and confirmed with tests)
      • (since v2.7) tf.nn.sparse_softmax_cross_entropy_with_logits
    • (since v2.7) Run tf.scatter_nd and other related scatter functions, such as tf.tensor_scatter_nd_update, on CPU (with significant performance penalty).
    • Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected (i.e. after tf.config.experimental.enable_op_determinism has been called), an attempt to use the specified paths through the following ops on a GPU will cause tf.errors.UnimplementedError (with an understandable message), unless otherwise specified, to be thrown.
      • FakeQuantWithMinMaxVarsGradient and FakeQuantWithMinMaxVarsPerChannelGradient
      • (since v2.7) tf.compat.v1.get_seed if the global random seed has not yet been set (via tf.random.set_seed). Throws RuntimeError from Python or InvalidArgument from C++
      • (since v2.7) tf.compat.v1.nn.fused_batch_norm backprop to offset when is_training=False
      • (since v2.7) tf.image.adjust_contrast forward
      • (since v2.7) tf.image.resize with method=ResizeMethod.NEAREST backprop
      • (since v2.7) tf.linalg.svd
      • (since v2.7) tf.math.bincount
      • (since v2.7) tf.nn.depthwise_conv2d backprop to filter when not using cuDNN convolution
      • (since v2.7) tf.nn.dilation2d gradient
      • (since v2.7) tf.nn.max_pool_with_argmax gradient
      • (since v2.7) tf.raw_ops.DebugNumericSummary and tf.raw_ops.DebugNumericSummaryV2
      • (since v2.7) tf.timestamp. Throws FailedPrecondition
      • (since v2.7) tf.Variable.scatter_add (and other scatter methods, both on ref and resource variables)
      • (since v2.7) The random-number-generating ops in the tf.random module when the global random seed has not yet been set (via tf.random.set_seed). Throws RuntimeError from Python or InvalidArgument fro...

TensorFlow 2.7.1

02 Feb 16:54
2a0f59e

Release 2.7.1

This release introduces several vulnerability fixes:

  • Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
  • Fixes a heap OOB read in shape inference for ReverseSequence (CVE-2022-21728)
  • Fixes a heap OOB access in Dequantize (CVE-2022-21726)
  • Fixes an integer overflow in shape inference for Dequantize (CVE-2022-21727)
  • Fixes a heap OOB access in FractionalAvgPoolGrad (CVE-2022-21730)
  • Fixes an overflow and divide by zero in UnravelIndex (CVE-2022-21729)
  • Fixes a type confusion in shape inference for ConcatV2 (CVE-2022-21731)
  • Fixes an OOM in ThreadPoolHandle (CVE-2022-21732)
  • Fixes an OOM due to integer overflow in StringNGrams (CVE-2022-21733)
  • Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
  • Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
  • Fixes an integer overflow in AddManySparseToTensorsMap (CVE-2022-23568)
  • Fixes a number of CHECK-failures in MapStage (CVE-2022-21734)
  • Fixes a division by zero in FractionalMaxPool (CVE-2022-21735)
  • Fixes a number of CHECK-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
  • Fixes an undefined behavior in SparseTensorSliceDataset (CVE-2022-21736)
  • Fixes an assertion-failure-based denial of service via faulty bin count operations (CVE-2022-21737)
  • Fixes a reference binding to null pointer in QuantizedMaxPool (CVE-2022-21739)
  • Fixes an integer overflow leading to crash in SparseCountSparseOutput (CVE-2022-21738)
  • Fixes a heap overflow in SparseCountSparseOutput (CVE-2022-21740)
  • Fixes an FPE in BiasAndClamp in TFLite (CVE-2022-23557)
  • Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
  • Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
  • Fixes an integer overflow in TFLite (CVE-2022-23559)
  • Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
  • Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
  • Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
  • Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
  • Fixes a vulnerability where missing validation causes tf.sparse.split to crash when axis is a tuple (CVE-2021-41206)
  • Fixes a CHECK-fail when decoding resource handles from proto (CVE-2022-23564)
  • Fixes a CHECK-fail with repeated AttrDef (CVE-2022-23565)
  • Fixes a heap OOB write in Grappler (CVE-2022-23566)
  • Fixes a CHECK-fail when decoding invalid tensors from proto (CVE-2022-23571)
  • Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
  • Fixes a crash when type cannot be specialized (CVE-2022-23572)
  • Fixes a heap OOB read/write in SpecializeType (CVE-2022-23574)
  • Fixes an uninitialized variable access in AssignOp (CVE-2022-23573)
  • Fixes an integer overflow in OpLevelCostEstimator::CalculateTensorSize (CVE-2022-23575)
  • Fixes an integer overflow in OpLevelCostEstimator::CalculateOutputSize (CVE-2022-23576)
  • Fixes a null dereference in GetInitOp (CVE-2022-23577)
  • Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
  • Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
  • Fixes multiple CHECK-failures during Grappler's IsSimplifiableReshape (CVE-2022-23581)
  • Fixes multiple CHECK-failures during Grappler's SafeToRemoveIdentity (CVE-2022-23579)
  • Fixes multiple CHECK-failures in TensorByteSize (CVE-2022-23582)
  • Fixes multiple CHECK-failures in binary ops due to type confusion (CVE-2022-23583)
  • Fixes a use after free in DecodePng kernel (CVE-2022-23584)
  • Fixes a memory leak in decoding PNG images (CVE-2022-23585)
  • Fixes multiple CHECK-fails in function.cc (CVE-2022-23586)
  • Fixes multiple CHECK-fails due to attempting to build a reference tensor (CVE-2022-23588)
  • Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
  • Fixes a null pointer dereference in Grappler's IsConstant (CVE-2022-23589)
  • Fixes a CHECK failure in constant folding (CVE-2021-41197)
  • Fixes a stack overflow due to self-recursive function in GraphDef (CVE-2022-23591)
  • Fixes a crash due to erroneous StatusOr (CVE-2022-23590)
  • Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR) (CVE-2022-23594)
  • Fixes a null pointer dereference in BuildXlaCompilationCache (XLA) (CVE-2022-23595)
  • Updates icu to 69.1 to handle CVE-2020-10531

TensorFlow 2.6.3

02 Feb 16:54
92a6bb0

Release 2.6.3

This release introduces several vulnerability fixes:

  • Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
  • Fixes a heap OOB read in shape inference for ReverseSequence (CVE-2022-21728)
  • Fixes a heap OOB access in Dequantize (CVE-2022-21726)
  • Fixes an integer overflow in shape inference for Dequantize (CVE-2022-21727)
  • Fixes a heap OOB access in FractionalAvgPoolGrad (CVE-2022-21730)
  • Fixes an overflow and divide by zero in UnravelIndex (CVE-2022-21729)
  • Fixes a type confusion in shape inference for ConcatV2 (CVE-2022-21731)
  • Fixes an OOM in ThreadPoolHandle (CVE-2022-21732)
  • Fixes an OOM due to integer overflow in StringNGrams (CVE-2022-21733)
  • Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
  • Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
  • Fixes an integer overflow in AddManySparseToTensorsMap (CVE-2022-23568)
  • Fixes a number of CHECK-failures in MapStage (CVE-2022-21734)
  • Fixes a division by zero in FractionalMaxPool (CVE-2022-21735)
  • Fixes a number of CHECK-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
  • Fixes an undefined behavior in SparseTensorSliceDataset (CVE-2022-21736)
  • Fixes an assertion-failure-based denial of service via faulty bin count operations (CVE-2022-21737)
  • Fixes a reference binding to null pointer in QuantizedMaxPool (CVE-2022-21739)
  • Fixes an integer overflow leading to crash in SparseCountSparseOutput (CVE-2022-21738)
  • Fixes a heap overflow in SparseCountSparseOutput (CVE-2022-21740)
  • Fixes an FPE in BiasAndClamp in TFLite (CVE-2022-23557)
  • Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
  • Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
  • Fixes an integer overflow in TFLite (CVE-2022-23559)
  • Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
  • Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
  • Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
  • Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
  • Fixes a vulnerability where missing validation causes tf.sparse.split to crash when axis is a tuple (CVE-2021-41206)
  • Fixes a CHECK-fail when decoding resource handles from proto (CVE-2022-23564)
  • Fixes a CHECK-fail with repeated AttrDef (CVE-2022-23565)
  • Fixes a heap OOB write in Grappler (CVE-2022-23566)
  • Fixes a CHECK-fail when decoding invalid tensors from proto (CVE-2022-23571)
  • Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
  • Fixes a crash when type cannot be specialized (CVE-2022-23572)
  • Fixes a heap OOB read/write in SpecializeType (CVE-2022-23574)
  • Fixes an uninitialized variable access in AssignOp (CVE-2022-23573)
  • Fixes an integer overflow in OpLevelCostEstimator::CalculateTensorSize (CVE-2022-23575)
  • Fixes an integer overflow in OpLevelCostEstimator::CalculateOutputSize (CVE-2022-23576)
  • Fixes a null dereference in GetInitOp (CVE-2022-23577)
  • Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
  • Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
  • Fixes multiple CHECK-failures during Grappler's IsSimplifiableReshape (CVE-2022-23581)
  • Fixes multiple CHECK-failures during Grappler's SafeToRemoveIdentity (CVE-2022-23579)
  • Fixes multiple CHECK-failures in TensorByteSize (CVE-2022-23582)
  • Fixes multiple CHECK-failures in binary ops due to type confusion (CVE-2022-23583)
  • Fixes a use after free in DecodePng kernel (CVE-2022-23584)
  • Fixes a memory leak in decoding PNG images (CVE-2022-23585)
  • Fixes multiple CHECK-fails in function.cc (CVE-2022-23586)
  • Fixes multiple CHECK-fails due to attempting to build a reference tensor (CVE-2022-23588)
  • Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
  • Fixes a null pointer dereference in Grappler's IsConstant (CVE-2022-23589)
  • Fixes a CHECK failure in constant folding (CVE-2021-41197)
  • Fixes a stack overflow due to self-recursive function in GraphDef (CVE-2022-23591)
  • Fixes a null pointer dereference in BuildXlaCompilationCache (XLA) (CVE-2022-23595)
  • Updates icu to 69.1 to handle CVE-2020-10531

TensorFlow 2.5.3

02 Feb 16:54
959e9b2

Release 2.5.3

Note: This is the last release in the 2.5 series.

This release introduces several vulnerability fixes:

  • Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
  • Fixes a heap OOB read in shape inference for ReverseSequence (CVE-2022-21728)
  • Fixes a heap OOB access in Dequantize (CVE-2022-21726)
  • Fixes an integer overflow in shape inference for Dequantize (CVE-2022-21727)
  • Fixes a heap OOB access in FractionalAvgPoolGrad (CVE-2022-21730)
  • Fixes an overflow and divide by zero in UnravelIndex (CVE-2022-21729)
  • Fixes a type confusion in shape inference for ConcatV2 (CVE-2022-21731)
  • Fixes an OOM in ThreadPoolHandle (CVE-2022-21732)
  • Fixes an OOM due to integer overflow in StringNGrams (CVE-2022-21733)
  • Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
  • Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
  • Fixes an integer overflow in AddManySparseToTensorsMap (CVE-2022-23568)
  • Fixes a number of CHECK-failures in MapStage (CVE-2022-21734)
  • Fixes a division by zero in FractionalMaxPool (CVE-2022-21735)
  • Fixes a number of CHECK-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
  • Fixes an undefined behavior in SparseTensorSliceDataset (CVE-2022-21736)
  • Fixes an assertion-failure-based denial of service via faulty bin count operations (CVE-2022-21737)
  • Fixes a reference binding to null pointer in QuantizedMaxPool (CVE-2022-21739)
  • Fixes an integer overflow leading to crash in SparseCountSparseOutput (CVE-2022-21738)
  • Fixes a heap overflow in SparseCountSparseOutput (CVE-2022-21740)
  • Fixes an FPE in BiasAndClamp in TFLite (CVE-2022-23557)
  • Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
  • Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
  • Fixes an integer overflow in TFLite (CVE-2022-23559)
  • Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
  • Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
  • Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
  • Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
  • Fixes a vulnerability where missing validation causes tf.sparse.split to crash when axis is a tuple (CVE-2021-41206)
  • Fixes a CHECK-fail when decoding resource handles from proto (CVE-2022-23564)
  • Fixes a CHECK-fail with repeated AttrDef (CVE-2022-23565)
  • Fixes a heap OOB write in Grappler (CVE-2022-23566)
  • Fixes a CHECK-fail when decoding invalid tensors from proto (CVE-2022-23571)
  • Fixes an uninitialized variable access in AssignOp (CVE-2022-23573)
  • Fixes an integer overflow in OpLevelCostEstimator::CalculateTensorSize (CVE-2022-23575)
  • Fixes an integer overflow in OpLevelCostEstimator::CalculateOutputSize (CVE-2022-23576)
  • Fixes a null dereference in GetInitOp (CVE-2022-23577)
  • Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
  • Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
  • Fixes multiple CHECK-failures during Grappler's IsSimplifiableReshape (CVE-2022-23581)
  • Fixes multiple CHECK-failures during Grappler's SafeToRemoveIdentity (CVE-2022-23579)
  • Fixes multiple CHECK-failures in TensorByteSize (CVE-2022-23582)
  • Fixes multiple CHECK-failures in binary ops due to type confusion (CVE-2022-23583)
  • Fixes a use after free in DecodePng kernel (CVE-2022-23584)
  • Fixes a memory leak in decoding PNG images (CVE-2022-23585)
  • Fixes multiple CHECK-fails in function.cc (CVE-2022-23586)
  • Fixes multiple CHECK-fails due to attempting to build a reference tensor (CVE-2022-23588)
  • Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
  • Fixes a null pointer dereference in Grappler's IsConstant (CVE-2022-23589)
  • Fixes a CHECK failure in constant folding (CVE-2021-41197)
  • Fixes a stack overflow due to self-recursive function in GraphDef (CVE-2022-23591)
  • Updates icu to 69.1 to handle CVE-2020-10531