Releases: tensorflow/tensorflow
TensorFlow 2.6.4
Release 2.6.4
This release introduces several vulnerability fixes:

- Fixes a code injection in `saved_model_cli` (CVE-2022-29216)
- Fixes a missing validation which causes `TensorSummaryV2` to crash (CVE-2022-29193)
- Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` (CVE-2022-29192)
- Fixes a missing validation which causes denial of service via `DeleteSessionTensor` (CVE-2022-29194)
- Fixes a missing validation which causes denial of service via `GetSessionTensor` (CVE-2022-29191)
- Fixes a missing validation which causes denial of service via `StagePeek` (CVE-2022-29195)
- Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` (CVE-2022-29197)
- Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` (CVE-2022-29199)
- Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` (CVE-2022-29198)
- Fixes a missing validation which causes denial of service via `LSTMBlockCell` (CVE-2022-29200)
- Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` (CVE-2022-29196)
- Fixes a `CHECK` failure in depthwise ops via overflows (CVE-2021-41197)
- Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (CVE-2022-29207)
- Fixes a segfault due to missing support for quantized types (CVE-2022-29205)
- Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` (CVE-2022-29206)
- Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` (CVE-2022-29201)
- Fixes an integer overflow in `SpaceToBatchND` (CVE-2022-29203)
- Fixes a segfault and OOB write due to incomplete validation in `EditDistance` (CVE-2022-29208)
- Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` (CVE-2022-29204)
- Fixes a denial of service in `tf.ragged.constant` due to lack of validation (CVE-2022-29202)
- Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values (CVE-2022-29211)
- Fixes a core dump when loading TFLite models with quantization (CVE-2022-29212)
- Fixes crashes stemming from incomplete validation in signal ops (CVE-2022-29213)
- Fixes a type confusion leading to `CHECK`-failure based denial of service (CVE-2022-29209)
- Updates `curl` to `7.83.1` to handle CVE-2022-22576, CVE-2022-27774, CVE-2022-27775, CVE-2022-27776, CVE-2022-27778, CVE-2022-27779, CVE-2022-27780, CVE-2022-27781, CVE-2022-27782 and CVE-2022-30115
- Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to a security issue
TensorFlow 2.8.1
Release 2.8.1
This release introduces several vulnerability fixes:

- Fixes a code injection in `saved_model_cli` (CVE-2022-29216)
- Fixes a missing validation which causes `TensorSummaryV2` to crash (CVE-2022-29193)
- Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` (CVE-2022-29192)
- Fixes a missing validation which causes denial of service via `DeleteSessionTensor` (CVE-2022-29194)
- Fixes a missing validation which causes denial of service via `GetSessionTensor` (CVE-2022-29191)
- Fixes a missing validation which causes denial of service via `StagePeek` (CVE-2022-29195)
- Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` (CVE-2022-29197)
- Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` (CVE-2022-29199)
- Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` (CVE-2022-29198)
- Fixes a missing validation which causes denial of service via `LSTMBlockCell` (CVE-2022-29200)
- Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` (CVE-2022-29196)
- Fixes a `CHECK` failure in depthwise ops via overflows (CVE-2021-41197)
- Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (CVE-2022-29207)
- Fixes a segfault due to missing support for quantized types (CVE-2022-29205)
- Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` (CVE-2022-29206)
- Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` (CVE-2022-29201)
- Fixes an integer overflow in `SpaceToBatchND` (CVE-2022-29203)
- Fixes a segfault and OOB write due to incomplete validation in `EditDistance` (CVE-2022-29208)
- Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` (CVE-2022-29204)
- Fixes a denial of service in `tf.ragged.constant` due to lack of validation (CVE-2022-29202)
- Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values (CVE-2022-29211)
- Fixes a core dump when loading TFLite models with quantization (CVE-2022-29212)
- Fixes crashes stemming from incomplete validation in signal ops (CVE-2022-29213)
- Fixes a type confusion leading to `CHECK`-failure based denial of service (CVE-2022-29209)
- Fixes a heap buffer overflow due to incorrect hash function (CVE-2022-29210)
- Updates `curl` to `7.83.1` to handle CVE-2022-22576, CVE-2022-27774, CVE-2022-27775, CVE-2022-27776, CVE-2022-27778, CVE-2022-27779, CVE-2022-27780, CVE-2022-27781, CVE-2022-27782 and CVE-2022-30115
- Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to a security issue
TensorFlow 2.7.2
Release 2.7.2
This release introduces several vulnerability fixes:

- Fixes a code injection in `saved_model_cli` (CVE-2022-29216)
- Fixes a missing validation which causes `TensorSummaryV2` to crash (CVE-2022-29193)
- Fixes a missing validation which crashes `QuantizeAndDequantizeV4Grad` (CVE-2022-29192)
- Fixes a missing validation which causes denial of service via `DeleteSessionTensor` (CVE-2022-29194)
- Fixes a missing validation which causes denial of service via `GetSessionTensor` (CVE-2022-29191)
- Fixes a missing validation which causes denial of service via `StagePeek` (CVE-2022-29195)
- Fixes a missing validation which causes denial of service via `UnsortedSegmentJoin` (CVE-2022-29197)
- Fixes a missing validation which causes denial of service via `LoadAndRemapMatrix` (CVE-2022-29199)
- Fixes a missing validation which causes denial of service via `SparseTensorToCSRSparseMatrix` (CVE-2022-29198)
- Fixes a missing validation which causes denial of service via `LSTMBlockCell` (CVE-2022-29200)
- Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` (CVE-2022-29196)
- Fixes a `CHECK` failure in depthwise ops via overflows (CVE-2021-41197)
- Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (CVE-2022-29207)
- Fixes a segfault due to missing support for quantized types (CVE-2022-29205)
- Fixes a missing validation which results in undefined behavior in `SparseTensorDenseAdd` (CVE-2022-29206)
- Fixes a missing validation which results in undefined behavior in `QuantizedConv2D` (CVE-2022-29201)
- Fixes an integer overflow in `SpaceToBatchND` (CVE-2022-29203)
- Fixes a segfault and OOB write due to incomplete validation in `EditDistance` (CVE-2022-29208)
- Fixes a missing validation which causes denial of service via `Conv3DBackpropFilterV2` (CVE-2022-29204)
- Fixes a denial of service in `tf.ragged.constant` due to lack of validation (CVE-2022-29202)
- Fixes a segfault when `tf.histogram_fixed_width` is called with NaN values (CVE-2022-29211)
- Fixes a core dump when loading TFLite models with quantization (CVE-2022-29212)
- Fixes crashes stemming from incomplete validation in signal ops (CVE-2022-29213)
- Fixes a type confusion leading to `CHECK`-failure based denial of service (CVE-2022-29209)
- Updates `curl` to `7.83.1` to handle CVE-2022-22576, CVE-2022-27774, CVE-2022-27775, CVE-2022-27776, CVE-2022-27778, CVE-2022-27779, CVE-2022-27780, CVE-2022-27781, CVE-2022-27782 and CVE-2022-30115
- Updates `zlib` to `1.2.12` after `1.2.11` was pulled due to a security issue
TensorFlow 2.9.0-rc2
Release 2.9.0
Breaking Changes
- Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
- Build, Compilation and Packaging
  - TensorFlow is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
  - TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
  - Discussion for these changes can be found on SIG Build's TensorFlow Community Forum thread.
- The `tf.keras.mixed_precision.experimental` API has been removed. The non-experimental symbols under `tf.keras.mixed_precision` have been available since TensorFlow 2.4 and should be used instead.
  - The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes (a migration sketch follows this list):
    - Remove the word "experimental" from `tf.keras.mixed_precision` symbols. E.g., replace `tf.keras.mixed_precision.experimental.global_policy` with `tf.keras.mixed_precision.global_policy`.
    - Replace `tf.keras.mixed_precision.experimental.set_policy` with `tf.keras.mixed_precision.set_global_policy`. The experimental symbol `set_policy` was renamed to `set_global_policy` in the non-experimental API.
    - Replace `LossScaleOptimizer(opt, "dynamic")` with `LossScaleOptimizer(opt)`. If you pass anything other than `"dynamic"` to the second argument, see (1) of the next section.
  - In the following rare cases, you need to make more changes when switching to the non-experimental API:
    - If you passed anything other than `"dynamic"` to the `loss_scale` argument (the second argument) of `LossScaleOptimizer`: the `LossScaleOptimizer` constructor takes in different arguments. See the TF 2.7 documentation of `tf.keras.mixed_precision.experimental.LossScaleOptimizer` for details on the differences, which has examples on how to convert to the non-experimental `LossScaleOptimizer`.
    - If you passed a value to the `loss_scale` argument (the second argument) of `Policy`: the experimental version of `Policy` optionally took in a `tf.compat.v1.mixed_precision.LossScale` in the constructor, which defaulted to a dynamic loss scale for the `"mixed_float16"` policy and no loss scale for other policies. In `Model.compile`, if the model's policy had a loss scale, the optimizer would be wrapped with a `LossScaleOptimizer`. With the non-experimental `Policy`, there is no loss scale associated with the `Policy`, and `Model.compile` wraps the optimizer with a `LossScaleOptimizer` if and only if the policy is a `"mixed_float16"` policy. If you previously passed a `LossScale` to the experimental `Policy`, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a `LossScaleOptimizer` before passing it to `Model.compile`.
    - If you use the very rarely-used function `tf.keras.mixed_precision.experimental.get_layer_policy`: replace `tf.keras.mixed_precision.experimental.get_layer_policy(layer)` with `layer.dtype_policy`.
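For illustration, a minimal migration sketch covering the three common renames. The "before" forms are shown only as comments, since they no longer exist in 2.9:

```python
import tensorflow as tf

# TF <= 2.8 (removed in 2.9):
#   tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
#   opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(opt, "dynamic")

# TF 2.9: drop "experimental"; set_policy becomes set_global_policy,
# and dynamic loss scaling is the default, so the second argument goes away.
tf.keras.mixed_precision.set_global_policy("mixed_float16")

opt = tf.keras.optimizers.SGD(learning_rate=0.01)
opt = tf.keras.mixed_precision.LossScaleOptimizer(opt)

print(tf.keras.mixed_precision.global_policy())  # <Policy "mixed_float16">
```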
- `tf.mixed_precision.experimental.LossScale` and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed `tf.keras.mixed_precision.experimental` API. The symbols are still available under `tf.compat.v1.mixed_precision`.
- The `experimental_relax_shapes` heuristic for `tf.function` has been deprecated and replaced with `reduce_retracing`, which encompasses broader heuristics to reduce the number of retraces (see below).
Major Features and Improvements
- `tf.keras`:
  - Added `tf.keras.applications.resnet_rs` models. This includes the `ResNetRS50`, `ResNetRS101`, `ResNetRS152`, `ResNetRS200`, `ResNetRS270`, `ResNetRS350` and `ResNetRS420` model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies.
  - Added `tf.keras.optimizers.experimental.Optimizer`. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on `tf.keras.optimizers.experimental.Optimizer`. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols `tf.keras.optimizers.Optimizer`/`Adam`/etc will point to the new optimizers, and the previous generation of optimizers will be moved to `tf.keras.optimizers.legacy.Optimizer`/`Adam`/etc.
  - Added L2 unit normalization layer `tf.keras.layers.UnitNormalization`.
  - Added `tf.keras.regularizers.OrthogonalRegularizer`, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
  - Added `tf.keras.layers.RandomBrightness` layer for image preprocessing.
  - Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout, but can only view the logs. You can use `tf.keras.utils.disable_interactive_logging()` to write the logs to ABSL logging. You can also use `tf.keras.utils.enable_interactive_logging()` to change it back to stdout, or `tf.keras.utils.is_interactive_logging_enabled()` to check if interactive logging is enabled.
  - Changed the default value for the `verbose` argument of `Model.evaluate()` and `Model.predict()` to `"auto"`, which defaults to `verbose=1` for most cases and defaults to `verbose=2` when used with `ParameterServerStrategy` or with interactive logging disabled.
  - Argument `jit_compile` in `Model.compile()` now applies to `Model.evaluate()` and `Model.predict()`. Setting `jit_compile=True` in `compile()` compiles the model's training, evaluation, and inference steps to XLA. Note that `jit_compile=True` may not necessarily work for all models. (See the sketch after this list.)
  - Added DTensor-related Keras APIs under the `tf.keras.dtensor` namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
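A minimal sketch exercising a few of these additions (`UnitNormalization`, `jit_compile` applying beyond training, and `verbose="auto"`) on toy data:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.UnitNormalization(),  # new L2 unit-normalization layer
    tf.keras.layers.Dense(1),
])

# jit_compile now applies to evaluate()/predict() as well as training;
# it may not work for every model.
model.compile(optimizer="adam", loss="mse", jit_compile=True)

x = np.random.rand(64, 8).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=2)

# verbose defaults to "auto": usually 1, but 2 under ParameterServerStrategy
# or when interactive logging is disabled.
model.evaluate(x, y, verbose="auto")
```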
- `tf.lite`:
  - Added TFLite builtin op support for the following TF ops:
    - `tf.math.argmin`/`tf.math.argmax` for input data type `tf.bool` on CPU.
    - `tf.nn.gelu` op for output data type `tf.float32` and quantization on CPU.
  - Add nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
  - Add support for unsigned 16-bit integer tensor types in cast op.
  - Experimental support for lowering `list_ops.tensor_list_set_item` with `DynamicUpdateSlice`.
  - Enabled a new MLIR-based dynamic range quantization backend by default.
    - The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
    - Set `experimental_new_dynamic_range_quantizer` in `tf.lite.TFLiteConverter` to `False` to disable this change (see the converter sketch after this list).
  - Native TF Lite variables are now enabled during conversion by default on all v2 `TFLiteConverter` entry points. `experimental_enable_resource_variables` on `tf.lite.TFLiteConverter` is now `True` by default and will be removed in the future.
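A minimal conversion sketch showing the two converter attributes named above at their new defaults. The attribute names are taken from these notes, so verify them against your TF version's `TFLiteConverter` documentation:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(4,))])
converter = tf.lite.TFLiteConverter.from_keras_model(model)

# Post-training int8 dynamic range quantization now goes through the
# new MLIR-based backend by default.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

# Attribute names as given in these notes (shown at their new defaults);
# set the first to False to fall back to the old quantizer.
converter.experimental_new_dynamic_range_quantizer = True
converter.experimental_enable_resource_variables = True

tflite_model = converter.convert()
```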
- `tf.function`:
  - Custom classes used as arguments for `tf.function` can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through `tf.types.experimental.SupportsTracingProtocol`.
  - `TypeSpec` classes (as associated with `ExtensionType`s) also implement the Tracing Protocol, which can be overridden if necessary.
  - The newly introduced `reduce_retracing` option also uses the Tracing Protocol to proactively generate generalized traces, similar to `experimental_relax_shapes` (which has now been deprecated). A short sketch follows this list.
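A small sketch of the new option; with `reduce_retracing=True`, `tf.function` may generalize a trace (for example over input shapes) rather than retracing for each new shape:

```python
import tensorflow as tf

# Replaces the deprecated experimental_relax_shapes flag.
@tf.function(reduce_retracing=True)
def square(x):
    return x * x

square(tf.constant([1.0, 2.0]))       # traces once
square(tf.constant([1.0, 2.0, 3.0]))  # may reuse a generalized trace
```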
- Unified eager and `tf.function` execution:
  - Eager mode can now execute each op as a `tf.function`, allowing for more consistent feature support in future releases.
  - It is available for immediate use. See the `TF_RUN_EAGER_OP_AS_FUNCTION` environment variable in eager context (sketch below).
  - Eager performance should be similar with this feature enabled.
    - A roughly 5us per-op overhead may be observed when running many small functions.
    - Note a known issue with GPU performance.
  - The behavior of `tf.function` itself is unaffected.
  - Note: This feature will be enabled by default in an upcoming version of TensorFlow.
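A sketch of opting in. The exact accepted value for the variable is an assumption here, and it must be set before TensorFlow initializes its eager context:

```python
import os

# Opt in before importing TensorFlow (the flag is experimental and is
# expected to become the default in a later release).
os.environ["TF_RUN_EAGER_OP_AS_FUNCTION"] = "true"

import tensorflow as tf

x = tf.constant([1.0, 2.0])
print(tf.reduce_sum(x))  # each eager op now executes as a tf.function
```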
- `tf....
TensorFlow 2.9.0-rc1
Release 2.9.0
The release notes for this candidate are identical to those listed under TensorFlow 2.9.0-rc2 above.
TensorFlow 2.9.0-rc0
Release 2.9.0
The release notes for this candidate are identical to those listed under TensorFlow 2.9.0-rc2 above.
TensorFlow 2.8.0
Release 2.8.0
Major Features and Improvements
- `tf.lite`:
  - Added TFLite builtin op support for the following TF ops:
    - `tf.raw_ops.Bucketize` op on CPU.
    - `tf.where` op for data types `tf.int32`/`tf.uint32`/`tf.int8`/`tf.uint8`/`tf.int64`.
    - `tf.random.normal` op for output data type `tf.float32` on CPU.
    - `tf.random.uniform` op for output data type `tf.float32` on CPU.
    - `tf.random.categorical` op for output data type `tf.int64` on CPU.
- `tensorflow.experimental.tensorrt`:
  - `conversion_params` is now deprecated inside `TrtGraphConverterV2` in favor of direct arguments: `max_workspace_size_bytes`, `precision_mode`, `minimum_segment_size`, `maximum_cached_engines`, `use_calibration` and `allow_build_at_runtime`.
  - Added a new parameter called `save_gpu_specific_engines` to the `.save()` function inside `TrtGraphConverterV2`. When `False`, the `.save()` function won't save any TRT engines that have been built. When `True` (default), the original behavior is preserved.
  - `TrtGraphConverterV2` provides a new API called `.summary()` which outputs a summary of the inference converted by TF-TRT. It namely shows each `TRTEngineOp` with their input(s)' and output(s)' shape and dtype. A detailed version of the summary is available which prints additionally all the TensorFlow OPs included in each of the `TRTEngineOp`s. (A usage sketch follows this list.)
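A hedged usage sketch built from the arguments named above; the import path and the placeholder paths are assumptions, and a TensorRT-enabled GPU build of TensorFlow is required:

```python
from tensorflow.python.compiler.tensorrt import trt_convert as trt

# conversion_params is deprecated; the parameters are now passed directly.
converter = trt.TrtGraphConverterV2(
    input_saved_model_dir="/tmp/saved_model",  # hypothetical path
    precision_mode="FP16",
    max_workspace_size_bytes=1 << 30,
    minimum_segment_size=3,
)
converter.convert()
converter.summary()  # per-TRTEngineOp input/output shapes and dtypes
converter.save("/tmp/trt_saved_model", save_gpu_specific_engines=False)
```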
- `tf.tpu.experimental.embedding`:
  - `tf.tpu.experimental.embedding.FeatureConfig` now takes an additional argument `output_shape` which can specify the shape of the output activation for the feature.
  - `tf.tpu.experimental.embedding.TPUEmbedding` now has the same behavior as `tf.tpu.experimental.embedding.serving_embedding_lookup`, which can take an arbitrary rank of dense and sparse tensor. For ragged tensors, though the input tensor remains rank 2, the activations now can be rank 2 or above by specifying the output shape in the feature config or via the build method.
- Add `tf.config.experimental.enable_op_determinism`, which makes TensorFlow ops run deterministically at the cost of performance. Replaces the `TF_DETERMINISTIC_OPS` environmental variable, which is now deprecated. The "Bug Fixes and Other Changes" section lists more determinism-related changes. (A minimal sketch follows this list.)
- (Since TF 2.7) Add PluggableDevice support to TensorFlow Profiler.
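A minimal sketch of enabling op determinism. Seeding first matters because, as listed in the section below, many random ops raise errors under determinism when no global seed has been set:

```python
import tensorflow as tf

# Seed the Python, NumPy and TF RNGs in one call.
tf.keras.utils.set_random_seed(42)

# Replaces the deprecated TF_DETERMINISTIC_OPS environment variable;
# ops now run deterministically, at some performance cost.
tf.config.experimental.enable_op_determinism()
```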
Bug Fixes and Other Changes
- `tf.data`:
  - The optimization `parallel_batch` now becomes default if not disabled by users, which will parallelize copying of batch elements.
  - Added the ability for `TensorSliceDataset` to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
- `tf.lite`:
  - Adds GPU Delegation support for serialization to Java API. This boosts initialization time up to 90% when OpenCL is available.
  - Deprecated `Interpreter::SetNumThreads`, in favor of `InterpreterBuilder::SetNumThreads`.
- `tf.keras`:
  - Adds `tf.compat.v1.keras.utils.get_or_create_layer` to aid migration to TF2 by enabling tracking of nested Keras models created in TF1 style, when used with the `tf.compat.v1.keras.utils.track_tf1_style_variables` decorator.
  - Added a `tf.keras.layers.experimental.preprocessing.HashedCrossing` layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model.
  - Removed `keras.layers.experimental.preprocessing.CategoryCrossing`. Users should migrate to the `HashedCrossing` layer or use `tf.sparse.cross`/`tf.ragged.cross` directly.
  - Added additional `standardize` and `split` modes to `TextVectorization` (see the sketch after this list):
    - `standardize="lower"` will lowercase inputs.
    - `standardize="string_punctuation"` will remove all punctuation.
    - `split="character"` will split on every unicode character.
  - Added an `output_mode` argument to the `Discretization` and `Hashing` layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now support `output_mode`.
  - All preprocessing layer output will follow the compute dtype of a `tf.keras.mixed_precision.Policy`, unless constructed with `output_mode="int"`, in which case output will be `tf.int64`. The output type of any preprocessing layer can be controlled individually by passing a `dtype` argument to the layer.
  - `tf.random.Generator` for Keras initializers and all RNG code:
    - Added 3 new APIs to enable/disable/check the usage of `tf.random.Generator` in the Keras backend, which will be the new backend for all the RNG in Keras. We plan to switch on the new code path by default in TF 2.8, and the behavior change is likely to cause some breakage on the user side (e.g. if a test is checking against a golden number). These 3 APIs will allow users to disable the new behavior and switch back to the legacy behavior if they prefer. In the future (e.g. TF 2.10), we expect to totally remove the legacy code path (stateful random ops), and these 3 APIs will be removed as well.
  - `tf.keras.callbacks.experimental.BackupAndRestore` is now available as `tf.keras.callbacks.BackupAndRestore`. The experimental endpoint is deprecated and will be removed in a future release.
  - `tf.keras.experimental.SidecarEvaluator` is now available as `tf.keras.utils.SidecarEvaluator`. The experimental endpoint is deprecated and will be removed in a future release.
  - Metrics update and collection logic in the default `Model.train_step()` is now customizable via overriding `Model.compute_metrics()`.
  - Losses computation logic in the default `Model.train_step()` is now customizable via overriding `Model.compute_loss()`.
  - `jit_compile` added to `Model.compile()` on an opt-in basis to compile the model's training step with XLA. Note that `jit_compile=True` may not necessarily work for all models.
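A short sketch of the new `TextVectorization` modes and the `HashedCrossing` layer, using toy inputs (`num_bins` chosen arbitrarily):

```python
import tensorflow as tf

# New TextVectorization modes: lowercase-only standardization and
# per-character splitting.
vectorizer = tf.keras.layers.TextVectorization(
    standardize="lower",
    split="character",
    output_mode="int",
)
vectorizer.adapt(["Hello TF"])
print(vectorizer(["Hi!"]))

# HashedCrossing replaces the removed CategoryCrossing layer.
crossing = tf.keras.layers.experimental.preprocessing.HashedCrossing(num_bins=16)
print(crossing((tf.constant([1, 2]), tf.constant(["a", "b"]))))
```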
- Deterministic Op Functionality:
  - Fix regression in deterministic selection of deterministic cuDNN convolution algorithms, a regression that was introduced in v2.5. Note that nondeterministic out-of-memory events while selecting algorithms could still lead to nondeterminism, although this is very unlikely. This additional, unlikely source will be eliminated in a later version.
  - Add deterministic GPU implementations of:
    - `tf.function(jit_compile=True)`'s that use `Scatter`.
    - (since v2.7) Stateful ops used in `tf.data.Dataset`
    - (since v2.7) `tf.convert_to_tensor` when fed with (sparse) `tf.IndexedSlices` (because it uses `tf.math.unsorted_segment_sum`)
    - (since v2.7) `tf.gather` backprop (because `tf.convert_to_tensor` reduces `tf.gather`'s (sparse) `tf.IndexedSlices` gradients into its dense `params` input)
    - (since v2.7) `tf.math.segment_mean`
    - (since v2.7) `tf.math.segment_prod`
    - (since v2.7) `tf.math.segment_sum`
    - (since v2.7) `tf.math.unsorted_segment_mean`
    - (since v2.7) `tf.math.unsorted_segment_prod`
    - (since v2.7) `tf.math.unsorted_segment_sum`
    - (since v2.7) `tf.math.unsorted_segment_sqrt`
    - (since v2.7) `tf.nn.ctc_loss` (resolved, possibly in prior release, and confirmed with tests)
    - (since v2.7) `tf.nn.sparse_softmax_crossentropy_with_logits`
    - (since v2.7) Run `tf.scatter_nd` and other related scatter functions, such as `tf.tensor_scatter_nd_update`, on CPU (with significant performance penalty).
  - Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected (i.e. after `tf.config.experimental.enable_op_determinism` has been called), an attempt to use the specified paths through the following ops on a GPU will cause `tf.errors.UnimplementedError` (with an understandable message), unless otherwise specified, to be thrown.
    - `FakeQuantWithMinMaxVarsGradient` and `FakeQuantWithMinMaxVarsPerChannelGradient`
    - (since v2.7) `tf.compat.v1.get_seed` if the global random seed has not yet been set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or `InvalidArgument` from C++
    - (since v2.7) `tf.compat.v1.nn.fused_batch_norm` backprop to `offset` when `is_training=False`
    - (since v2.7) `tf.image.adjust_contrast` forward
    - (since v2.7) `tf.image.resize` with `method=ResizeMethod.NEAREST` backprop
    - (since v2.7) `tf.linalg.svd`
    - (since v2.7) `tf.math.bincount`
    - (since v2.7) `tf.nn.depthwise_conv2d` backprop to `filter` when not using cuDNN convolution
    - (since v2.7) `tf.nn.dilation2d` gradient
    - (since v2.7) `tf.nn.max_pool_with_argmax` gradient
    - (since v2.7) `tf.raw_ops.DebugNumericSummary` and `tf.raw_ops.DebugNumericSummaryV2`
    - (since v2.7) `tf.timestamp`. Throws `FailedPrecondition`
    - (since v2.7) `tf.Variable.scatter_add` (and other scatter methods, both on ref and resource variables)
    - (since v2.7) The random-number-generating ops in the `tf.random` module when the global random seed has not yet been set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or `InvalidArgument` fro...
TensorFlow 2.7.1
Release 2.7.1
This release introduces several vulnerability fixes:

- Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
- Fixes a heap OOB read in shape inference for `ReverseSequence` (CVE-2022-21728)
- Fixes a heap OOB access in `Dequantize` (CVE-2022-21726)
- Fixes an integer overflow in shape inference for `Dequantize` (CVE-2022-21727)
- Fixes a heap OOB access in `FractionalAvgPoolGrad` (CVE-2022-21730)
- Fixes an overflow and divide by zero in `UnravelIndex` (CVE-2022-21729)
- Fixes a type confusion in shape inference for `ConcatV2` (CVE-2022-21731)
- Fixes an OOM in `ThreadPoolHandle` (CVE-2022-21732)
- Fixes an OOM due to integer overflow in `StringNGrams` (CVE-2022-21733)
- Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
- Fixes integer overflows in `AddManySparseToTensorsMap` (CVE-2022-23568)
- Fixes a number of `CHECK`-failures in `MapStage` (CVE-2022-21734)
- Fixes a division by zero in `FractionalMaxPool` (CVE-2022-21735)
- Fixes a number of `CHECK`-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
- Fixes an undefined behavior in `SparseTensorSliceDataset` (CVE-2022-21736)
- Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
- Fixes a reference binding to null pointer in `QuantizedMaxPool` (CVE-2022-21739)
- Fixes an integer overflow leading to crash in `SparseCountSparseOutput` (CVE-2022-21738)
- Fixes a heap overflow in `SparseCountSparseOutput` (CVE-2022-21740)
- Fixes an FPE in `BiasAndClamp` in TFLite (CVE-2022-23557)
- Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
- Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
- Fixes an integer overflow in TFLite (CVE-2022-23559)
- Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
- Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
- Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
- Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
- Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple (CVE-2021-41206)
- Fixes a `CHECK`-fail when decoding resource handles from proto (CVE-2022-23564)
- Fixes a `CHECK`-fail with repeated `AttrDef` (CVE-2022-23565)
- Fixes a heap OOB write in Grappler (CVE-2022-23566)
- Fixes a `CHECK`-fail when decoding invalid tensors from proto (CVE-2022-23571)
- Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
- Fixes a crash when type cannot be specialized (CVE-2022-23572)
- Fixes a heap OOB read/write in `SpecializeType` (CVE-2022-23574)
- Fixes an uninitialized variable access in `AssignOp` (CVE-2022-23573)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` (CVE-2022-23575)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` (CVE-2022-23576)
- Fixes a null dereference in `GetInitOp` (CVE-2022-23577)
- Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
- Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
- Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` (CVE-2022-23581)
- Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` (CVE-2022-23579)
- Fixes multiple `CHECK`-failures in `TensorByteSize` (CVE-2022-23582)
- Fixes multiple `CHECK`-failures in binary ops due to type confusion (CVE-2022-23583)
- Fixes a use after free in `DecodePng` kernel (CVE-2022-23584)
- Fixes a memory leak in decoding PNG images (CVE-2022-23585)
- Fixes multiple `CHECK`-fails in `function.cc` (CVE-2022-23586)
- Fixes multiple `CHECK`-fails due to attempting to build a reference tensor (CVE-2022-23588)
- Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
- Fixes a null pointer dereference in Grappler's `IsConstant` (CVE-2022-23589)
- Fixes a `CHECK` failure in constant folding (CVE-2021-41197)
- Fixes a stack overflow due to self-recursive function in `GraphDef` (CVE-2022-23591)
- Fixes a crash due to erroneous `StatusOr` (CVE-2022-23590)
- Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR) (CVE-2022-23594)
- Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA) (CVE-2022-23595)
- Updates `icu` to `69.1` to handle CVE-2020-10531
TensorFlow 2.6.3
Release 2.6.3
This release introduces several vulnerability fixes:

- Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
- Fixes a heap OOB read in shape inference for `ReverseSequence` (CVE-2022-21728)
- Fixes a heap OOB access in `Dequantize` (CVE-2022-21726)
- Fixes an integer overflow in shape inference for `Dequantize` (CVE-2022-21727)
- Fixes a heap OOB access in `FractionalAvgPoolGrad` (CVE-2022-21730)
- Fixes an overflow and divide by zero in `UnravelIndex` (CVE-2022-21729)
- Fixes a type confusion in shape inference for `ConcatV2` (CVE-2022-21731)
- Fixes an OOM in `ThreadPoolHandle` (CVE-2022-21732)
- Fixes an OOM due to integer overflow in `StringNGrams` (CVE-2022-21733)
- Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
- Fixes integer overflows in `AddManySparseToTensorsMap` (CVE-2022-23568)
- Fixes a number of `CHECK`-failures in `MapStage` (CVE-2022-21734)
- Fixes a division by zero in `FractionalMaxPool` (CVE-2022-21735)
- Fixes a number of `CHECK`-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
- Fixes an undefined behavior in `SparseTensorSliceDataset` (CVE-2022-21736)
- Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
- Fixes a reference binding to null pointer in `QuantizedMaxPool` (CVE-2022-21739)
- Fixes an integer overflow leading to crash in `SparseCountSparseOutput` (CVE-2022-21738)
- Fixes a heap overflow in `SparseCountSparseOutput` (CVE-2022-21740)
- Fixes an FPE in `BiasAndClamp` in TFLite (CVE-2022-23557)
- Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
- Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
- Fixes an integer overflow in TFLite (CVE-2022-23559)
- Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
- Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
- Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
- Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
- Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple (CVE-2021-41206)
- Fixes a `CHECK`-fail when decoding resource handles from proto (CVE-2022-23564)
- Fixes a `CHECK`-fail with repeated `AttrDef` (CVE-2022-23565)
- Fixes a heap OOB write in Grappler (CVE-2022-23566)
- Fixes a `CHECK`-fail when decoding invalid tensors from proto (CVE-2022-23571)
- Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
- Fixes a crash when type cannot be specialized (CVE-2022-23572)
- Fixes a heap OOB read/write in `SpecializeType` (CVE-2022-23574)
- Fixes an uninitialized variable access in `AssignOp` (CVE-2022-23573)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` (CVE-2022-23575)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` (CVE-2022-23576)
- Fixes a null dereference in `GetInitOp` (CVE-2022-23577)
- Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
- Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
- Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` (CVE-2022-23581)
- Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` (CVE-2022-23579)
- Fixes multiple `CHECK`-failures in `TensorByteSize` (CVE-2022-23582)
- Fixes multiple `CHECK`-failures in binary ops due to type confusion (CVE-2022-23583)
- Fixes a use after free in `DecodePng` kernel (CVE-2022-23584)
- Fixes a memory leak in decoding PNG images (CVE-2022-23585)
- Fixes multiple `CHECK`-fails in `function.cc` (CVE-2022-23586)
- Fixes multiple `CHECK`-fails due to attempting to build a reference tensor (CVE-2022-23588)
- Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
- Fixes a null pointer dereference in Grappler's `IsConstant` (CVE-2022-23589)
- Fixes a `CHECK` failure in constant folding (CVE-2021-41197)
- Fixes a stack overflow due to self-recursive function in `GraphDef` (CVE-2022-23591)
- Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA) (CVE-2022-23595)
- Updates `icu` to `69.1` to handle CVE-2020-10531
TensorFlow 2.5.3
Release 2.5.3
Note: This is the last release in the 2.5 series.
This release introduces several vulnerability fixes:

- Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
- Fixes a heap OOB read in shape inference for `ReverseSequence` (CVE-2022-21728)
- Fixes a heap OOB access in `Dequantize` (CVE-2022-21726)
- Fixes an integer overflow in shape inference for `Dequantize` (CVE-2022-21727)
- Fixes a heap OOB access in `FractionalAvgPoolGrad` (CVE-2022-21730)
- Fixes an overflow and divide by zero in `UnravelIndex` (CVE-2022-21729)
- Fixes a type confusion in shape inference for `ConcatV2` (CVE-2022-21731)
- Fixes an OOM in `ThreadPoolHandle` (CVE-2022-21732)
- Fixes an OOM due to integer overflow in `StringNGrams` (CVE-2022-21733)
- Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
- Fixes integer overflows in `AddManySparseToTensorsMap` (CVE-2022-23568)
- Fixes a number of `CHECK`-failures in `MapStage` (CVE-2022-21734)
- Fixes a division by zero in `FractionalMaxPool` (CVE-2022-21735)
- Fixes a number of `CHECK`-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
- Fixes an undefined behavior in `SparseTensorSliceDataset` (CVE-2022-21736)
- Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
- Fixes a reference binding to null pointer in `QuantizedMaxPool` (CVE-2022-21739)
- Fixes an integer overflow leading to crash in `SparseCountSparseOutput` (CVE-2022-21738)
- Fixes a heap overflow in `SparseCountSparseOutput` (CVE-2022-21740)
- Fixes an FPE in `BiasAndClamp` in TFLite (CVE-2022-23557)
- Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
- Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
- Fixes an integer overflow in TFLite (CVE-2022-23559)
- Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
- Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
- Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
- Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
- Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple (CVE-2021-41206)
- Fixes a `CHECK`-fail when decoding resource handles from proto (CVE-2022-23564)
- Fixes a `CHECK`-fail with repeated `AttrDef` (CVE-2022-23565)
- Fixes a heap OOB write in Grappler (CVE-2022-23566)
- Fixes a `CHECK`-fail when decoding invalid tensors from proto (CVE-2022-23571)
- Fixes an uninitialized variable access in `AssignOp` (CVE-2022-23573)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` (CVE-2022-23575)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` (CVE-2022-23576)
- Fixes a null dereference in `GetInitOp` (CVE-2022-23577)
- Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
- Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
- Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` (CVE-2022-23581)
- Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` (CVE-2022-23579)
- Fixes multiple `CHECK`-failures in `TensorByteSize` (CVE-2022-23582)
- Fixes multiple `CHECK`-failures in binary ops due to type confusion (CVE-2022-23583)
- Fixes a use after free in `DecodePng` kernel (CVE-2022-23584)
- Fixes a memory leak in decoding PNG images (CVE-2022-23585)
- Fixes multiple `CHECK`-fails in `function.cc` (CVE-2022-23586)
- Fixes multiple `CHECK`-fails due to attempting to build a reference tensor (CVE-2022-23588)
- Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
- Fixes a null pointer dereference in Grappler's `IsConstant` (CVE-2022-23589)
- Fixes a `CHECK` failure in constant folding (CVE-2021-41197)
- Fixes a stack overflow due to self-recursive function in `GraphDef` (CVE-2022-23591)
- Updates `icu` to `69.1` to handle CVE-2020-10531