Releases: tensorflow/tensorflow
TensorFlow 2.11.0
Release 2.11.0
Breaking Changes
- The `tf.keras.optimizers.Optimizer` base class now points to the new Keras optimizer, while the old optimizers have been moved to the `tf.keras.optimizers.legacy` namespace. If your workflow fails due to this change, you may be facing one of the following issues:
  - Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to `tf.keras.optimizers.legacy.XXX` (e.g. `tf.keras.optimizers.legacy.Adam`).
  - TF1 compatibility. The new optimizer, `tf.keras.optimizers.Optimizer`, no longer supports TF1, so please use the legacy optimizer `tf.keras.optimizers.legacy.XXX`. We highly recommend migrating your workflow to TF2 for stable support and new features.
  - Old optimizer API not found. The new optimizer, `tf.keras.optimizers.Optimizer`, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
  - Learning rate schedule access. When using a `tf.keras.optimizers.schedules.LearningRateSchedule`, the new optimizer's `learning_rate` property returns the current learning rate value instead of a `LearningRateSchedule` object as before. If you need to access the `LearningRateSchedule` object, please use `optimizer._learning_rate`.
  - Custom optimizers based on the old optimizer. Please change your optimizer to subclass `tf.keras.optimizers.legacy.XXX`. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
  - Errors such as `Cannot recognize variable...`. The new optimizer requires all optimizer variables to be created at the first `apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop.
  - Timeout or performance loss. We don't anticipate this will happen, but if you see such issues, please use the legacy optimizer and file an issue in the Keras GitHub repo.

  The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, `tf.keras.optimizers.Adafactor`) will only be implemented based on the new `tf.keras.optimizers.Optimizer` base class.
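The two most common workarounds above can be sketched as follows. This is a minimal illustration, not code from the release; the tiny model is hypothetical:

```python
import tensorflow as tf

# A tiny stand-in model; the layer sizes are purely illustrative.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Workaround 1: keep old checkpoints or TF1 workflows running by
# switching to the legacy optimizer.
legacy_opt = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)

# Workaround 2: stay on the new optimizer, but create all optimizer
# variables up front when updating different parts of the model in
# multiple stages (avoids "Cannot recognize variable..." errors).
new_opt = tf.keras.optimizers.Adam(learning_rate=1e-3)
new_opt.build(model.trainable_variables)
```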
- `tensorflow/python/keras` code is a legacy copy of Keras since the TensorFlow v2.7 release, and will be deleted in the v2.12 release. Please remove any imports of `tensorflow.python.keras` and use the public API with `from tensorflow import keras` or `import tensorflow as tf; tf.keras`.
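The migration amounts to swapping the private import path for the public one, for example:

```python
# Before (tensorflow.python.keras is removed in TF 2.12):
# from tensorflow.python.keras.layers import Dense

# After: use the public API instead.
from tensorflow import keras

layer = keras.layers.Dense(10)

# Equivalently, via the tf namespace:
import tensorflow as tf

same_layer = tf.keras.layers.Dense(10)
```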
Major Features and Improvements
- `tf.lite`:
  - New operations supported: `tf.math.unsorted_segment_sum`, `tf.atan2` and `tf.sign`.
  - Updates to existing operations: `tfl.mul` now supports complex32 inputs.
- `tf.experimental.StructuredTensor`:
  - Introduced `tf.experimental.StructuredTensor`, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
- `tf.keras`:
  - Added a new `get_metrics_result()` method to `tf.keras.models.Model`. It returns the current metric values of the model as a dict.
  - Added a new group normalization layer, `tf.keras.layers.GroupNormalization`.
  - Added weight decay support for all Keras optimizers via the `weight_decay` argument.
  - Added the Adafactor optimizer, `tf.keras.optimizers.Adafactor`.
  - Added `warmstart_embedding_matrix` to `tf.keras.utils`. This utility can be used to warmstart an embedding matrix, so you can reuse previously learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
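Several of these additions can be combined in one short sketch. Layer sizes and hyperparameters here are illustrative only:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(16,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.GroupNormalization(groups=4),  # new group normalization layer
    tf.keras.layers.Dense(1),
])

# Adafactor is newly available, and weight_decay is now a constructor
# argument on Keras optimizers.
opt = tf.keras.optimizers.Adafactor(learning_rate=1e-3, weight_decay=1e-4)
model.compile(optimizer=opt, loss="mse", metrics=["mae"])

x = tf.random.uniform((8, 16))
y = tf.random.uniform((8, 1))
model.fit(x, y, epochs=1, verbose=0)

# get_metrics_result() returns the current metric values as a dict.
metrics = model.get_metrics_result()
```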
- `tf.Variable`:
  - Added `CompositeTensor` as a base class to `ResourceVariable`. This allows `tf.Variable`s to be nested in `tf.experimental.ExtensionType`s.
  - Added a new constructor argument `experimental_enable_variable_lifting` to `tf.Variable`, defaulting to `True`. When it is set to `False`, the variable won't be lifted out of `tf.function`; thus it can be used as a `tf.function`-local variable: during each execution of the `tf.function`, the variable is created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, `experimental_enable_variable_lifting=False` only works on non-XLA devices (for example, under `@tf.function(jit_compile=False)`).
- TF SavedModel:
  - Added `fingerprint.pb` to the SavedModel directory. The `fingerprint.pb` file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
- TF pip:
  - Windows CPU builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for `tensorflow` or `tensorflow-cpu` will install Intel's `tensorflow-intel` package. These packages are provided on an as-is basis. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. To use TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.
Bug Fixes and Other Changes
- `tf.image`:
  - Added an optional parameter `return_index_map` to `tf.image.ssim`, which causes the returned value to be the local SSIM map instead of the global mean.
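For example, a quick sketch comparing the two return modes on random images:

```python
import tensorflow as tf

img_a = tf.random.uniform((1, 64, 64, 3))
img_b = tf.random.uniform((1, 64, 64, 3))

# Default behavior: one global mean SSIM value per image.
mean_ssim = tf.image.ssim(img_a, img_b, max_val=1.0)

# New behavior: the local SSIM map instead of the global mean.
ssim_map = tf.image.ssim(img_a, img_b, max_val=1.0, return_index_map=True)
```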
- TF Core:
  - `tf.custom_gradient` can now be applied to functions that accept "composite" tensors, such as `tf.RaggedTensor`, as inputs.
  - Fixed device placement issues related to datasets with ragged tensors of strings (i.e. variant-encoded data with types not supported on GPU).
  - `experimental_follow_type_hints` for `tf.function` has been deprecated. Please use `input_signature` or `reduce_retracing` to minimize retracing.
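The two suggested replacements for the deprecated flag look like this in practice (function names are illustrative):

```python
import tensorflow as tf

# Option 1: pin an input signature so shape variations don't retrace.
@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def scale(x):
    return 2.0 * x

# Option 2: let TF relax traced shapes automatically.
@tf.function(reduce_retracing=True)
def offset(x):
    return x + 1.0

doubled = scale(tf.constant([1.0, 2.0]))   # [2.0, 4.0]
shifted = offset(tf.constant([1.0]))       # [2.0]
```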
- `tf.SparseTensor`:
  - Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape`.
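A small sketch of how this can be used to pin a shape inside a `tf.function` (the function name is hypothetical):

```python
import tensorflow as tf

@tf.function(input_signature=[tf.SparseTensorSpec(shape=[None, None], dtype=tf.float32)])
def densify(sp):
    # Inside the function the dense shape is unknown ([None, None]);
    # set_shape pins the static shape, mirroring tf.Tensor.set_shape.
    sp.set_shape([3, 4])
    return tf.sparse.to_dense(sp)

sp = tf.sparse.SparseTensor(indices=[[0, 0], [2, 3]],
                            values=[1.0, 2.0],
                            dense_shape=[3, 4])
dense = densify(sp)  # shape (3, 4)
```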
Security
- TF is currently using giflib 5.2.1, which has CVE-2022-28506. TF is not affected by the CVE as it does not use `DumpScreen2RGB` at all.
- Fixes an OOB seg fault in `DynamicStitch` due to missing validation (CVE-2022-41883)
- Fixes an overflow in `tf.keras.losses.poisson` (CVE-2022-41887)
- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` f...
TensorFlow 2.10.1
Release 2.10.1
This release introduces several vulnerability fixes:
- Fixes an OOB seg fault in `DynamicStitch` due to missing validation (CVE-2022-41883)
- Fixes an overflow in `tf.keras.losses.poisson` (CVE-2022-41887)
- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes a segfault in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an invalid char-to-bool conversion when printing a tensor (CVE-2022-41911)
- Fixes a heap overflow in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)
TensorFlow 2.9.3
Release 2.9.3
This release introduces several vulnerability fixes:
- Fixes an overflow in `tf.keras.losses.poisson` (CVE-2022-41887)
- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes a segfault in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an invalid char-to-bool conversion when printing a tensor (CVE-2022-41911)
- Fixes a heap overflow in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)
TensorFlow 2.8.4
Release 2.8.4
This release introduces several vulnerability fixes:
- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes a segfault in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an invalid char-to-bool conversion when printing a tensor (CVE-2022-41911)
- Fixes a heap overflow in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)
TensorFlow 2.11.0-rc2
Release 2.11.0
Breaking Changes
- `tf.keras.optimizers.Optimizer` now points to the new Keras optimizer, and the old optimizers have been moved to the `tf.keras.optimizers.legacy` namespace. If your workflow fails due to this change, you may be facing one of the following issues:
  - Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to `tf.keras.optimizers.legacy.XXX` (e.g. `tf.keras.optimizers.legacy.Adam`).
  - TF1 compatibility. The new optimizer, `tf.keras.optimizers.Optimizer`, no longer supports TF1, so please use the legacy optimizer `tf.keras.optimizers.legacy.XXX`. We highly recommend migrating your workflow to TF2 for stable support and new features.
  - Old optimizer API not found. The new optimizer, `tf.keras.optimizers.Optimizer`, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
  - Learning rate schedule access. When using a `LearningRateSchedule`, the new optimizer's `learning_rate` property returns the current learning rate value instead of a `LearningRateSchedule` object as before. If you need to access the `LearningRateSchedule` object, please use `optimizer._learning_rate`.
  - Custom optimizers based on the old optimizer. Please change your optimizer to subclass `tf.keras.optimizers.legacy.XXX`. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
  - Errors such as `Cannot recognize variable...`. The new optimizer requires all optimizer variables to be created at the first `apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop.
  - Timeout or performance loss. We don't anticipate this will happen, but if you see such issues, please use the legacy optimizer and file an issue in the Keras GitHub repo.

  The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, `tf.keras.optimizers.Adafactor`) will only be implemented based on `tf.keras.optimizers.Optimizer`, the new base class.
- `tensorflow/python/keras` code is a legacy copy of Keras since the 2.7 release, and will be deleted in the 2.12 release. Please remove any imports of `tensorflow.python.keras` and use the public API with `from tensorflow import keras` or `import tensorflow as tf; tf.keras`.
Major Features and Improvements
- `tf.lite`:
  - New operations supported: `tf.unsortedsegmentmin`, `tf.atan2` and `tf.sign`.
  - Updates to existing operations: `tfl.mul` now supports complex32 inputs.
- `tf.experimental.StructuredTensor`:
  - Introduced `tf.experimental.StructuredTensor`, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
- `tf.keras`:
  - Added a new `get_metrics_result()` method to `tf.keras.models.Model`. It returns the current metric values of the model as a dict.
  - Added a new group normalization layer, `tf.keras.layers.GroupNormalization`.
  - Added weight decay support for all Keras optimizers.
  - Added the Adafactor optimizer, `tf.keras.optimizers.Adafactor`.
  - Added `warmstart_embedding_matrix` to `tf.keras.utils`. This utility can be used to warmstart an embedding matrix, so you can reuse previously learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
- `tf.Variable`:
  - Added `CompositeTensor` as a base class to `ResourceVariable`. This allows `tf.Variable`s to be nested in `tf.experimental.ExtensionType`s.
  - Added a new constructor argument `experimental_enable_variable_lifting` to `tf.Variable`, defaulting to `True`. When it is set to `False`, the variable won't be lifted out of `tf.function`; thus it can be used as a `tf.function`-local variable: during each execution of the `tf.function`, the variable is created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, `experimental_enable_variable_lifting=False` only works on non-XLA devices (for example, under `@tf.function(jit_compile=False)`).
- TF SavedModel:
  - Added `fingerprint.pb` to the SavedModel directory. The `fingerprint.pb` file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
- TF pip:
  - Windows CPU builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for `tensorflow` or `tensorflow-cpu` will install Intel's `tensorflow-intel` package. These packages are provided as-is. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. To use TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.
Bug Fixes and Other Changes
- `tf.image`:
  - Added an optional parameter `return_index_map` to `tf.image.ssim`, which causes the returned value to be the local SSIM map instead of the global mean.
- TF Core:
  - `tf.custom_gradient` can now be applied to functions that accept "composite" tensors, such as `tf.RaggedTensor`, as inputs.
  - Fixed device placement issues related to datasets with ragged tensors of strings (i.e. variant-encoded data with types not supported on GPU).
  - `experimental_follow_type_hints` for `tf.function` has been deprecated. Please use `input_signature` or `reduce_retracing` to minimize retracing.
- `tf.SparseTensor`:
  - Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape`.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika
TensorFlow 2.11.0-rc1
Release 2.11.0
Breaking Changes
- `tf.keras.optimizers.Optimizer` now points to the new Keras optimizer, and the old optimizers have been moved to the `tf.keras.optimizers.legacy` namespace. If your workflow fails due to this change, you may be facing one of the following issues:
  - Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to `tf.keras.optimizers.legacy.XXX` (e.g. `tf.keras.optimizers.legacy.Adam`).
  - TF1 compatibility. The new optimizer, `tf.keras.optimizers.Optimizer`, no longer supports TF1, so please use the legacy optimizer `tf.keras.optimizers.legacy.XXX`. We highly recommend migrating your workflow to TF2 for stable support and new features.
  - Old optimizer API not found. The new optimizer, `tf.keras.optimizers.Optimizer`, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
  - Learning rate schedule access. When using a `LearningRateSchedule`, the new optimizer's `learning_rate` property returns the current learning rate value instead of a `LearningRateSchedule` object as before. If you need to access the `LearningRateSchedule` object, please use `optimizer._learning_rate`.
  - Custom optimizers based on the old optimizer. Please change your optimizer to subclass `tf.keras.optimizers.legacy.XXX`. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
  - Errors such as `Cannot recognize variable...`. The new optimizer requires all optimizer variables to be created at the first `apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop.
  - Timeout or performance loss. We don't anticipate this will happen, but if you see such issues, please use the legacy optimizer and file an issue in the Keras GitHub repo.

  The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, `tf.keras.optimizers.Adafactor`) will only be implemented based on `tf.keras.optimizers.Optimizer`, the new base class.
Major Features and Improvements
- `tf.lite`:
  - New operations supported: `tf.unsortedsegmentmin`, `tf.atan2` and `tf.sign`.
  - Updates to existing operations: `tfl.mul` now supports complex32 inputs.
- `tf.experimental.StructuredTensor`:
  - Introduced `tf.experimental.StructuredTensor`, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
- `tf.keras`:
  - Added a new `get_metrics_result()` method to `tf.keras.models.Model`. It returns the current metric values of the model as a dict.
  - Added a new group normalization layer, `tf.keras.layers.GroupNormalization`.
  - Added weight decay support for all Keras optimizers.
  - Added the Adafactor optimizer, `tf.keras.optimizers.Adafactor`.
  - Added `warmstart_embedding_matrix` to `tf.keras.utils`. This utility can be used to warmstart an embedding matrix, so you can reuse previously learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
- `tf.Variable`:
  - Added `CompositeTensor` as a base class to `ResourceVariable`. This allows `tf.Variable`s to be nested in `tf.experimental.ExtensionType`s.
  - Added a new constructor argument `experimental_enable_variable_lifting` to `tf.Variable`, defaulting to `True`. When it is set to `False`, the variable won't be lifted out of `tf.function`; thus it can be used as a `tf.function`-local variable: during each execution of the `tf.function`, the variable is created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, `experimental_enable_variable_lifting=False` only works on non-XLA devices (for example, under `@tf.function(jit_compile=False)`).
- TF SavedModel:
  - Added `fingerprint.pb` to the SavedModel directory. The `fingerprint.pb` file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
- TF pip:
  - Windows CPU builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the Windows-native pip packages for `tensorflow` or `tensorflow-cpu` will install Intel's `tensorflow-intel` package. These packages are provided as-is. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. To use TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.
Bug Fixes and Other Changes
- `tf.image`:
  - Added an optional parameter `return_index_map` to `tf.image.ssim`, which causes the returned value to be the local SSIM map instead of the global mean.
- TF Core:
  - `tf.custom_gradient` can now be applied to functions that accept "composite" tensors, such as `tf.RaggedTensor`, as inputs.
  - Fixed device placement issues related to datasets with ragged tensors of strings (i.e. variant-encoded data with types not supported on GPU).
  - `experimental_follow_type_hints` for `tf.function` has been deprecated. Please use `input_signature` or `reduce_retracing` to minimize retracing.
- `tf.SparseTensor`:
  - Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape`.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika
TensorFlow 2.11.0-rc0
Release 2.11.0
Breaking Changes
- The `tf.keras.optimizers.Optimizer` base class now points to the new Keras optimizer, while the old optimizers have been moved to the `tf.keras.optimizers.legacy` namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:
  - Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to `tf.keras.optimizers.legacy.XXX` (e.g. `tf.keras.optimizers.legacy.Adam`).
  - TF1 compatibility. The new optimizer, `tf.keras.optimizers.Optimizer`, does not support TF1 any more, so please use the legacy optimizer `tf.keras.optimizers.legacy.XXX`. We highly recommend migrating your workflow to TF2 for stable support and new features.
  - Old optimizer API not found. The new optimizer, `tf.keras.optimizers.Optimizer`, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
  - Learning rate schedule access. When using a `tf.keras.optimizers.schedules.LearningRateSchedule`, the new optimizer's `learning_rate` property returns the current learning rate value instead of a `LearningRateSchedule` object as before. If you need to access the `LearningRateSchedule` object, please use `optimizer._learning_rate`.
  - Custom optimizers based on the old optimizer. Please make your optimizer subclass `tf.keras.optimizers.legacy.XXX`. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
  - Errors such as `Cannot recognize variable...`. The new optimizer requires all optimizer variables to be created at the first `apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop.
  - Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer and file an issue in the Keras GitHub repo.

  The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, `tf.keras.optimizers.Adafactor`) will only be implemented based on `tf.keras.optimizers.Optimizer`, the new base class.
Major Features and Improvements
- `tf.lite`:
  - New operations supported: `tf.unsortedsegmentmin`, `tf.atan2` and `tf.sign`.
  - Updates to existing operations: `tfl.mul` now supports complex32 inputs.
- `tf.experimental.StructuredTensor`:
  - Introduced `tf.experimental.StructuredTensor`, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
- `tf.keras`:
  - Added a new `get_metrics_result()` method to `tf.keras.models.Model`. It returns the current metrics values of the model as a dict.
  - Added a new group normalization layer, `tf.keras.layers.GroupNormalization`.
  - Added weight decay support for all Keras optimizers.
  - Added the Adafactor optimizer, `tf.keras.optimizers.Adafactor`.
  - Added `warmstart_embedding_matrix` to `tf.keras.utils`. This utility can be used to warmstart an embedding matrix, so you can reuse previously learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
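The idea behind `warmstart_embedding_matrix` can be pictured without TensorFlow: rows for words present in the old vocabulary are copied over, and rows for unseen words get a fresh random initialization. The helper below is an illustrative sketch of that remapping, not the Keras API.

```python
import random

def warmstart_rows(old_vocab, old_matrix, new_vocab, dim, rng=None):
    """Build a new embedding matrix, reusing learned rows for known words
    and randomly initializing rows for previously unseen words."""
    rng = rng or random.Random(0)
    old_index = {word: i for i, word in enumerate(old_vocab)}
    new_matrix = []
    for word in new_vocab:
        if word in old_index:
            # Reuse the previously learned embedding row.
            new_matrix.append(old_matrix[old_index[word]])
        else:
            # Unseen word: random initialization.
            new_matrix.append([rng.uniform(-0.05, 0.05) for _ in range(dim)])
    return new_matrix

old_vocab = ["the", "cat"]
old_matrix = [[1.0, 1.0], [2.0, 2.0]]
new_vocab = ["cat", "dog", "the"]   # "dog" is new
m = warmstart_rows(old_vocab, old_matrix, new_vocab, dim=2)
```

Note how the row order follows the *new* vocabulary, so an `Embedding` layer built on the new vocabulary can consume the matrix directly.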
- `tf.Variable`:
  - Added `CompositeTensor` as a base class to `ResourceVariable`. This allows `tf.Variable`s to be nested in `tf.experimental.ExtensionType`s.
  - Added a new constructor argument, `experimental_enable_variable_lifting`, to `tf.Variable`, defaulting to `True`. When it is `False`, the variable won't be lifted out of `tf.function`, so it can be used as a `tf.function`-local variable: during each execution of the `tf.function`, the variable is created and then disposed of, similar to a local (that is, stack-allocated) variable in C/C++. Currently, `experimental_enable_variable_lifting=False` only works on non-XLA devices (for example, under `@tf.function(jit_compile=False)`).
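The effect of variable lifting can be pictured with ordinary Python scoping: a lifted variable lives outside the function and persists across calls, while a non-lifted one is recreated on every call, like a C/C++ local. A toy illustration of the two lifetimes (not the TF implementation):

```python
# Lifted (the default, experimental_enable_variable_lifting=True):
# the variable lives outside the function and keeps its value.
lifted_counter = {"v": 0}

def lifted_step():
    lifted_counter["v"] += 1
    return lifted_counter["v"]

# Non-lifted (experimental_enable_variable_lifting=False):
# the variable is created and disposed of on each execution.
def local_step():
    v = 0   # fresh variable every call
    v += 1
    return v

lifted_results = [lifted_step(), lifted_step()]  # state accumulates
local_results = [local_step(), local_step()]     # state resets each call
```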
- TF SavedModel:
  - Added `fingerprint.pb` to the SavedModel directory. The `fingerprint.pb` file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
- TF pip:
  - Windows CPU builds for x86/x64 processors are now built, maintained, tested and released by a third party: Intel. Installing the windows-native pip packages for `tensorflow` or `tensorflow-cpu` will install Intel's `tensorflow-intel` package. These packages are provided as-is. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package. There may be delays if the third party fails to release the pip package. To use TensorFlow GPU on Windows, you will need to install TensorFlow in WSL2.
Bug Fixes and Other Changes
- `tf.image`:
  - Added an optional parameter, `return_index_map`, to `tf.image.ssim`, which causes the returned value to be the local SSIM map instead of the global mean.
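The relationship between the local map and the global mean can be sketched with a much simpler per-window statistic (a sliding mean standing in for the local SSIM computation); the flag then just decides whether to reduce the map to a scalar. This is a loose, framework-free analogy, not the SSIM algorithm itself.

```python
def local_means(values, window):
    """Local map: one statistic per sliding window position."""
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

def score(values, window, return_index_map=False):
    local = local_means(values, window)
    if return_index_map:
        return local                  # the per-position map
    return sum(local) / len(local)    # reduced to a single global mean

vals = [1.0, 3.0, 5.0, 7.0]
index_map = score(vals, window=2, return_index_map=True)  # [2.0, 4.0, 6.0]
global_mean = score(vals, window=2)                       # 4.0
```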
- TF Core:
  - `tf.custom_gradient` can now be applied to functions that accept "composite" tensors, such as `tf.RaggedTensor`, as inputs.
  - Fixed device placement issues related to datasets with ragged tensors of strings (i.e. variant-encoded data with types not supported on GPU).
  - `experimental_follow_type_hints` for `tf.function` has been deprecated. Please use `input_signature` or `reduce_retracing` to minimize retracing.
- `tf.SparseTensor`:
  - Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape`.
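Following the `tf.Tensor.set_shape` semantics, the call refines static shape information and must be compatible with what is already known. A plain-Python sketch of that shape-merging rule, with `None` marking an unknown dimension (the helper name is hypothetical):

```python
def merge_shape(known, new):
    """Merge a known static shape with a newly asserted one.
    None means 'unknown dimension'; conflicting known sizes are an error."""
    if len(known) != len(new):
        raise ValueError(f"Rank mismatch: {known} vs {new}")
    merged = []
    for k, n in zip(known, new):
        if k is not None and n is not None and k != n:
            raise ValueError(f"Incompatible dimensions: {k} vs {n}")
        merged.append(k if n is None else n)
    return merged

refined = merge_shape([None, 3], [4, None])   # -> [4, 3]
try:
    merge_shape([None, 3], [4, 5])            # 3 vs 5 conflicts
except ValueError as e:
    shape_error = str(e)
```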
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika
TensorFlow 2.10.0
Release 2.10.0
Breaking Changes
- Causal attention in `keras.layers.Attention` and `keras.layers.AdditiveAttention` is now specified in the `call()` method via the `use_causal_mask` argument (rather than in the constructor), for consistency with other layers.
- Some files in `tensorflow/python/training` have been moved to `tensorflow/python/tracking` and `tensorflow/python/checkpoint`. Please update your imports accordingly; the old files will be removed in Release 2.11.
- `tf.keras.optimizers.experimental.Optimizer` will graduate in Release 2.11, which means `tf.keras.optimizers.Optimizer` will be an alias of `tf.keras.optimizers.experimental.Optimizer`. The current `tf.keras.optimizers.Optimizer` will continue to be supported as `tf.keras.optimizers.legacy.Optimizer`, e.g., `tf.keras.optimizers.legacy.Adam`. Most users won't be affected by this change, but please check the API doc to see whether any API used in your workflow has changed or been deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to `tf.keras.optimizers.legacy.Optimizer`.
- RNG behavior change for `tf.keras.initializers`. Keras initializers will now use stateless random ops to generate random numbers.
  - Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (`seed=None`), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
  - An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.
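The stateless-initializer behavior can be mimicked with a generator that is re-seeded from the same (seed, shape) pair on every call, so repeated calls yield identical values. A minimal sketch of that semantics; the class name is illustrative, not the Keras implementation:

```python
import random

class StatelessInitializer:
    """Same (seed, shape) -> same values on every call."""

    def __init__(self, seed=None):
        # Unseeded instances get a seed assigned at creation time,
        # so each instance is internally deterministic too.
        self.seed = seed if seed is not None else random.randrange(2**31)

    def __call__(self, shape):
        # A fresh RNG per call, keyed on (seed, shape): no hidden state
        # carries over between calls, hence "stateless".
        rng = random.Random(f"{self.seed}-{shape}")
        return [rng.uniform(-1.0, 1.0) for _ in range(shape)]

init = StatelessInitializer(seed=42)
a = init(3)
b = init(3)                               # identical to a: calls are stateless
other = StatelessInitializer(seed=7)(3)   # different seed -> different values
```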
Deprecations
- The C++ `tensorflow::Code` and `tensorflow::Status` will become aliases of, respectively, `absl::StatusCode` and `absl::Status` in some future release.
  - Use `tensorflow::OkStatus()` instead of `tensorflow::Status::OK()`.
  - Stop constructing `Status` objects from `tensorflow::error::Code`.
  - One MUST NOT access `tensorflow::errors::Code` fields. Accessing `tensorflow::error::Code` fields is fine.
    - Use the constructors such as `tensorflow::errors::InvalidArgument` to create a status with an error code without accessing it.
    - Use the free functions such as `tensorflow::errors::IsInvalidArgument` if needed.
    - As a last resort, use e.g. `static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT)` or `static_cast<int>(code)` for comparisons.
- `tensorflow::StatusOr` will in the future also become an alias of `absl::StatusOr`, so use `StatusOr::value` instead of `StatusOr::ConsumeValueOrDie`.
Major Features and Improvements
- `tf.lite`:
  - New operations supported:
    - tflite `SelectV2` now supports 5D.
    - `tf.einsum` is supported with multiple unknown shapes.
    - `tf.unsortedsegmentprod` op is supported.
    - `tf.unsortedsegmentmax` op is supported.
    - `tf.unsortedsegmentsum` op is supported.
  - Updates to existing operations:
    - `tfl.scatter_nd` now supports `I1` for the update arg.
  - Upgraded Flatbuffers from v1.12.0 to v2.0.5.
- `tf.keras`:
  - The `EinsumDense` layer is moved from experimental to core. Its import path is moved from `tf.keras.layers.experimental.EinsumDense` to `tf.keras.layers.EinsumDense`.
  - Added the `tf.keras.utils.audio_dataset_from_directory` utility to easily generate audio classification datasets from directories of `.wav` files.
  - Added `subset="both"` support in `tf.keras.utils.image_dataset_from_directory`, `tf.keras.utils.text_dataset_from_directory`, and `audio_dataset_from_directory`, to be used with the `validation_split` argument, for returning both dataset splits at once, as a tuple.
  - Added the `tf.keras.utils.split_dataset` utility to split a `Dataset` object or a list/tuple of arrays into two `Dataset` objects (e.g. train/test).
  - Added step granularity to the `BackupAndRestore` callback for handling distributed training failures and restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
  - Added `tf.keras.dtensor.experimental.optimizers.AdamW`. This optimizer is similar to the existing `keras.optimizers.experimental.AdamW`, and works in the DTensor training use case.
  - Improved masking support for `tf.keras.layers.MultiHeadAttention`.
    - Implicit masks for `query`, `key` and `value` inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any `attention_mask` passed in directly when calling the layer. This can be used with `tf.keras.layers.Embedding` with `mask_zero=True` to automatically infer a correct padding mask.
    - Added a `use_causal_mask` call-time argument to the layer. Passing `use_causal_mask=True` will compute a causal attention mask, and optionally combine it with any `attention_mask` passed in directly when calling the layer.
  - Added an `ignore_class` argument in the loss `SparseCategoricalCrossentropy` and the metrics `IoU` and `MeanIoU`, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
  - Added `tf.keras.models.experimental.SharpnessAwareMinimization`. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
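The causal mask that `use_causal_mask=True` computes is the standard lower-triangular pattern: query position i may only attend to key positions j <= i. A framework-free construction, combined with a padding mask via logical AND, which is how an additional `attention_mask` narrows the causal one:

```python
def causal_mask(length):
    """mask[i][j] is True iff query position i may attend to key position j."""
    return [[j <= i for j in range(length)] for i in range(length)]

def combine(mask_a, mask_b):
    """Masks combine elementwise with logical AND."""
    return [[a and b for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(mask_a, mask_b)]

cm = causal_mask(3)
# Padding mask: the last key position is padding for every query.
pad = [[True, True, False] for _ in range(3)]
combined = combine(cm, pad)
```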
- `tf.data`:
  - Added support for cross-trainer data caching in the tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers for more details.
  - Added `dataset_id` to `tf.data.experimental.service.register_dataset`. If provided, the `tf.data` service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call `register_dataset` with the same `dataset_id`.
  - Added a new field, `inject_prefetch`, to `tf.data.experimental.OptimizationOptions`. If it is set to `True`, `tf.data` will automatically add a `prefetch` transformation to datasets that end in synchronous transformations. This enables data generation to be overlapped with data consumption. This may cause a small increase in memory usage due to buffering. To enable this behavior, set `inject_prefetch=True` in `tf.data.experimental.OptimizationOptions`.
  - Added a new value, `STAGE_BASED`, to `tf.data.Options.autotune.autotune_algorithm`. If the autotune algorithm is set to `STAGE_BASED`, it runs a new algorithm that can achieve the same performance with lower CPU/memory usage.
  - Added `tf.data.experimental.from_list`, a new API for creating `Dataset`s from lists of elements.
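The overlap that an injected `prefetch` buys can be sketched with the standard library: a background thread produces elements into a small bounded queue while the consumer processes the previous one. This is an illustrative producer/consumer sketch, not the tf.data implementation:

```python
import queue
import threading

def prefetch(generator, buffer_size=1):
    """Run `generator` on a background thread, buffering up to
    `buffer_size` elements so production overlaps consumption."""
    q = queue.Queue(maxsize=buffer_size)
    DONE = object()  # sentinel marking the end of the stream

    def producer():
        for item in generator:
            q.put(item)   # blocks when the buffer is full
        q.put(DONE)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is DONE:
            return
        yield item

prefetched = list(prefetch(iter(range(5))))
```

The bounded buffer is also why the release note mentions a small memory increase: at most `buffer_size` extra elements are held at any time.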
- `tf.distribute`:
  - Added `tf.distribute.experimental.PreemptionCheckpointHandler` to handle worker preemption/maintenance and cluster-wise consistent error reporting for `tf.distribute.MultiWorkerMirroredStrategy`. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
- `tf.math`:
  - Added `tf.math.approx_max_k` and `tf.math.approx_min_k`, which are optimized alternatives to `tf.math.top_k` on TPU. The performance difference ranges from 8 to 100 times, depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
- `tf.train`:
  - Added `tf.train.TrackableView`, which allows users to inspect a TensorFlow Trackable object (e.g. `tf.Module`, Keras layers and models).
- `tf.vectorized_map`:
  - Added an optional parameter: `warn`. This parameter controls whether or not warnings will be printed when operations in the provided `fn` fall back to a while loop.
- XLA:
  - MWMS (`MultiWorkerMirroredStrategy`) is now compilable with XLA.
  - Compute Library for the Arm® Architecture (ACL) is supported for the aarch64 CPU XLA runtime.
- CPU performance optimizations:
  - x86 CPUs: the oneDNN bfloat16 auto-mixed-precision grappler graph optimization pass has been renamed from `auto_mixed_precision_mkl` to `auto_mixed_precision_onednn_bfloat16`. See example usage here.
  - aarch64 CPUs: experimental performance optimizations from Compute Library for the Arm® Architecture (ACL) are available through oneDNN in the default Linux aarch64 package (`pip install tensorflow`).
    - The optimizations are disabled by default.
    - Set the environment variable `TF_ENABLE_ONEDNN_OPTS=1` to enable the optimizations. Setting the variable to 0 or uns...
TensorFlow 2.9.2
Release 2.9.2
This release introduces several vulnerability fixes:
- Fixes a `CHECK` failure in `tf.reshape` caused by overflows (CVE-2022-35934)
- Fixes a `CHECK` failure in `SobolSample` caused by missing validation (CVE-2022-35935)
- Fixes an OOB read in the `Gather_nd` op in TF Lite (CVE-2022-35937)
- Fixes a `CHECK` failure in `TensorListReserve` caused by missing validation (CVE-2022-35960)
- Fixes an OOB write in the `Scatter_nd` op in TF Lite (CVE-2022-35939)
- Fixes an integer overflow in `RaggedRangeOp` (CVE-2022-35940)
- Fixes a `CHECK` failure in `AvgPoolOp` (CVE-2022-35941)
- Fixes a `CHECK` failure in `UnbatchGradOp` (CVE-2022-35952)
- Fixes a segfault in the TFLite converter on per-channel quantized transposed convolutions (CVE-2022-36027)
- Fixes a `CHECK` failure in `AvgPool3DGrad` (CVE-2022-35959)
- Fixes a `CHECK` failure in `FractionalAvgPoolGrad` (CVE-2022-35963)
- Fixes a segfault in `BlockLSTMGradV2` (CVE-2022-35964)
- Fixes a segfault in `LowerBound` and `UpperBound` (CVE-2022-35965)
- Fixes a segfault in `QuantizedAvgPool` (CVE-2022-35966)
- Fixes a segfault in `QuantizedAdd` (CVE-2022-35967)
- Fixes a `CHECK` failure in `AvgPoolGrad` (CVE-2022-35968)
- Fixes a `CHECK` failure in `Conv2DBackpropInput` (CVE-2022-35969)
- Fixes a segfault in `QuantizedInstanceNorm` (CVE-2022-35970)
- Fixes a `CHECK` failure in `FakeQuantWithMinMaxVars` (CVE-2022-35971)
- Fixes a segfault in `Requantize` (CVE-2022-36017)
- Fixes a segfault in `QuantizedBiasAdd` (CVE-2022-35972)
- Fixes a `CHECK` failure in `FakeQuantWithMinMaxVarsPerChannel` (CVE-2022-36019)
- Fixes a segfault in `QuantizedMatMul` (CVE-2022-35973)
- Fixes a segfault in `QuantizeDownAndShrinkRange` (CVE-2022-35974)
- Fixes segfaults in `QuantizedRelu` and `QuantizedRelu6` (CVE-2022-35979)
- Fixes a `CHECK` failure in `FractionalMaxPoolGrad` (CVE-2022-35981)
- Fixes a `CHECK` failure in `RaggedTensorToVariant` (CVE-2022-36018)
- Fixes a `CHECK` failure in `QuantizeAndDequantizeV3` (CVE-2022-36026)
- Fixes a segfault in `SparseBincount` (CVE-2022-35982)
- Fixes a `CHECK` failure in `Save` and `SaveSlices` (CVE-2022-35983)
- Fixes a `CHECK` failure in `ParameterizedTruncatedNormal` (CVE-2022-35984)
- Fixes a `CHECK` failure in `LRNGrad` (CVE-2022-35985)
- Fixes a segfault in `RaggedBincount` (CVE-2022-35986)
- Fixes a `CHECK` failure in `DenseBincount` (CVE-2022-35987)
- Fixes a `CHECK` failure in `tf.linalg.matrix_rank` (CVE-2022-35988)
- Fixes a `CHECK` failure in `MaxPool` (CVE-2022-35989)
- Fixes a `CHECK` failure in `Conv2DBackpropInput` (CVE-2022-35999)
- Fixes a `CHECK` failure in `EmptyTensorList` (CVE-2022-35998)
- Fixes a `CHECK` failure in `tf.sparse.cross` (CVE-2022-35997)
- Fixes a floating point exception in `Conv2D` (CVE-2022-35996)
- Fixes a `CHECK` failure in `AudioSummaryV2` (CVE-2022-35995)
- Fixes a `CHECK` failure in `CollectiveGather` (CVE-2022-35994)
- Fixes a `CHECK` failure in `SetSize` (CVE-2022-35993)
- Fixes a `CHECK` failure in `TensorListFromTensor` (CVE-2022-35992)
- Fixes a `CHECK` failure in `TensorListScatter` and `TensorListScatterV2` (CVE-2022-35991)
- Fixes a `CHECK` failure in `FakeQuantWithMinMaxVarsPerChannelGradient` (CVE-2022-35990)
- Fixes a `CHECK` failure in `FakeQuantWithMinMaxVarsGradient` (CVE-2022-36005)
- Fixes a `CHECK` failure in `tf.random.gamma` (CVE-2022-36004)
- Fixes a `CHECK` failure in `RandomPoissonV2` (CVE-2022-36003)
- Fixes a `CHECK` failure in `Unbatch` (CVE-2022-36002)
- Fixes a `CHECK` failure in `DrawBoundingBoxes` (CVE-2022-36001)
- Fixes a `CHECK` failure in `Eig` (CVE-2022-36000)
- Fixes a null dereference on MLIR on empty function attributes (CVE-2022-36011)
- Fixes an assertion failure on MLIR empty edge names (CVE-2022-36012)
- Fixes a null-dereference in `mlir::tfg::GraphDefImporter::ConvertNodeDef` (CVE-2022-36013)
- Fixes a null-dereference in `mlir::tfg::TFOp::nameAttr` (CVE-2022-36014)
- Fixes an integer overflow in math ops (CVE-2022-36015)
- Fixes a `CHECK` failure in `tensorflow::full_type::SubstituteFromAttrs` (CVE-2022-36016)
- Fixes an OOB read in the `Gather_nd` op in TF Lite Micro (CVE-2022-35938)
TensorFlow 2.8.3
Release 2.8.3
This release introduces several vulnerability fixes:
- Fixes a `CHECK` failure in `tf.reshape` caused by overflows (CVE-2022-35934)
- Fixes a `CHECK` failure in `SobolSample` caused by missing validation (CVE-2022-35935)
- Fixes an OOB read in the `Gather_nd` op in TF Lite (CVE-2022-35937)
- Fixes a `CHECK` failure in `TensorListReserve` caused by missing validation (CVE-2022-35960)
- Fixes an OOB write in the `Scatter_nd` op in TF Lite (CVE-2022-35939)
- Fixes an integer overflow in `RaggedRangeOp` (CVE-2022-35940)
- Fixes a `CHECK` failure in `AvgPoolOp` (CVE-2022-35941)
- Fixes a `CHECK` failure in `UnbatchGradOp` (CVE-2022-35952)
- Fixes a segfault in the TFLite converter on per-channel quantized transposed convolutions (CVE-2022-36027)
- Fixes a `CHECK` failure in `AvgPool3DGrad` (CVE-2022-35959)
- Fixes a `CHECK` failure in `FractionalAvgPoolGrad` (CVE-2022-35963)
- Fixes a segfault in `BlockLSTMGradV2` (CVE-2022-35964)
- Fixes a segfault in `LowerBound` and `UpperBound` (CVE-2022-35965)
- Fixes a segfault in `QuantizedAvgPool` (CVE-2022-35966)
- Fixes a segfault in `QuantizedAdd` (CVE-2022-35967)
- Fixes a `CHECK` failure in `AvgPoolGrad` (CVE-2022-35968)
- Fixes a `CHECK` failure in `Conv2DBackpropInput` (CVE-2022-35969)
- Fixes a segfault in `QuantizedInstanceNorm` (CVE-2022-35970)
- Fixes a `CHECK` failure in `FakeQuantWithMinMaxVars` (CVE-2022-35971)
- Fixes a segfault in `Requantize` (CVE-2022-36017)
- Fixes a segfault in `QuantizedBiasAdd` (CVE-2022-35972)
- Fixes a `CHECK` failure in `FakeQuantWithMinMaxVarsPerChannel` (CVE-2022-36019)
- Fixes a segfault in `QuantizedMatMul` (CVE-2022-35973)
- Fixes a segfault in `QuantizeDownAndShrinkRange` (CVE-2022-35974)
- Fixes segfaults in `QuantizedRelu` and `QuantizedRelu6` (CVE-2022-35979)
- Fixes a `CHECK` failure in `FractionalMaxPoolGrad` (CVE-2022-35981)
- Fixes a `CHECK` failure in `RaggedTensorToVariant` (CVE-2022-36018)
- Fixes a `CHECK` failure in `QuantizeAndDequantizeV3` (CVE-2022-36026)
- Fixes a segfault in `SparseBincount` (CVE-2022-35982)
- Fixes a `CHECK` failure in `Save` and `SaveSlices` (CVE-2022-35983)
- Fixes a `CHECK` failure in `ParameterizedTruncatedNormal` (CVE-2022-35984)
- Fixes a `CHECK` failure in `LRNGrad` (CVE-2022-35985)
- Fixes a segfault in `RaggedBincount` (CVE-2022-35986)
- Fixes a `CHECK` failure in `DenseBincount` (CVE-2022-35987)
- Fixes a `CHECK` failure in `tf.linalg.matrix_rank` (CVE-2022-35988)
- Fixes a `CHECK` failure in `MaxPool` (CVE-2022-35989)
- Fixes a `CHECK` failure in `Conv2DBackpropInput` (CVE-2022-35999)
- Fixes a `CHECK` failure in `EmptyTensorList` (CVE-2022-35998)
- Fixes a `CHECK` failure in `tf.sparse.cross` (CVE-2022-35997)
- Fixes a floating point exception in `Conv2D` (CVE-2022-35996)
- Fixes a `CHECK` failure in `AudioSummaryV2` (CVE-2022-35995)
- Fixes a `CHECK` failure in `CollectiveGather` (CVE-2022-35994)
- Fixes a `CHECK` failure in `SetSize` (CVE-2022-35993)
- Fixes a `CHECK` failure in `TensorListFromTensor` (CVE-2022-35992)
- Fixes a `CHECK` failure in `TensorListScatter` and `TensorListScatterV2` (CVE-2022-35991)
- Fixes a `CHECK` failure in `FakeQuantWithMinMaxVarsPerChannelGradient` (CVE-2022-35990)
- Fixes a `CHECK` failure in `FakeQuantWithMinMaxVarsGradient` (CVE-2022-36005)
- Fixes a `CHECK` failure in `tf.random.gamma` (CVE-2022-36004)
- Fixes a `CHECK` failure in `RandomPoissonV2` (CVE-2022-36003)
- Fixes a `CHECK` failure in `Unbatch` (CVE-2022-36002)
- Fixes a `CHECK` failure in `DrawBoundingBoxes` (CVE-2022-36001)
- Fixes a `CHECK` failure in `Eig` (CVE-2022-36000)
- Fixes a null dereference on MLIR on empty function attributes (CVE-2022-36011)
- Fixes an assertion failure on MLIR empty edge names (CVE-2022-36012)
- Fixes a null-dereference in `mlir::tfg::GraphDefImporter::ConvertNodeDef` (CVE-2022-36013)
- Fixes a null-dereference in `mlir::tfg::TFOp::nameAttr` (CVE-2022-36014)
- Fixes an integer overflow in math ops (CVE-2022-36015)
- Fixes a `CHECK` failure in `tensorflow::full_type::SubstituteFromAttrs` (CVE-2022-36016)
- Fixes an OOB read in the `Gather_nd` op in TF Lite Micro (CVE-2022-35938)