# Release 2.13.0

## Breaking Changes

- <THIS SECTION SHOULD CONTAIN API, ABI AND BEHAVIORAL BREAKING CHANGES>

## Known Caveats

- <CAVEATS REGARDING THE RELEASE (BUT NOT BREAKING CHANGES).>
- <ADDING/BUMPING DEPENDENCIES SHOULD GO HERE>
- <KNOWN LACK OF SUPPORT ON SOME PLATFORM, SHOULD GO HERE>

## Major Features and Improvements
- `tf.lite`:
    - Add 16-bit and 64-bit float type support for built-in op `cast`.
- `tf.keras`:
    - Added Keras metrics `tf.keras.metrics.FBetaScore` and `tf.keras.metrics.F1Score`.
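As a reminder of what these metrics compute, here is a minimal plain-Python sketch of the F-beta formula. This is only the scalar formula; the actual Keras metrics additionally handle batched tensors, thresholds, and averaging modes:

```python
def fbeta_score(precision: float, recall: float, beta: float = 1.0) -> float:
    """F-beta = (1 + beta^2) * P * R / (beta^2 * P + R).

    beta > 1 weights recall more heavily; beta = 1 gives the F1 score.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0  # avoid division by zero when both are zero
    b2 = beta * beta
    return (1.0 + b2) * precision * recall / (b2 * precision + recall)

# With beta=1 this reduces to the harmonic mean of precision and recall.
assert abs(fbeta_score(0.5, 0.5) - 0.5) < 1e-9
assert abs(fbeta_score(0.25, 1.0, beta=2.0) - 0.625) < 1e-9
```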
## Bug Fixes and Other Changes

- <SIMILAR TO ABOVE SECTION, BUT FOR OTHER IMPORTANT CHANGES / BUG FIXES>
- <IF A CHANGE CLOSES A GITHUB ISSUE, IT SHOULD BE DOCUMENTED HERE>
## Thanks to our Contributors

This release contains contributions from many people at Google, as well as:
# Release 2.12.0

## Breaking Changes

- <THIS SECTION SHOULD CONTAIN API, ABI AND BEHAVIORAL BREAKING CHANGES>
- Build, Compilation and Packaging:
    - Removal of redundant packages: the `tensorflow-gpu` and `tf-nightly-gpu` packages have been effectively removed and replaced with packages that direct users to switch to `tensorflow` or `tf-nightly` respectively. The naming difference was the only difference between the two sets of packages ever since TensorFlow 2.1, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
- `tf.function`:
    - `tf.function` now uses the Python `inspect` library directly to parse the signature of the Python function it is decorated on. This can break certain cases that were previously ignored where the signature is malformed, e.g.:
        - Using `functools.wraps` on a function with a different signature
        - Using `functools.partial` with an invalid `tf.function` input
    - `tf.function` now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized, similarly to existing SavedModel signature behavior.
    - Parameterless `tf.function`s are assumed to have an empty `input_signature` instead of an undefined one, even if the `input_signature` is unspecified.
    - `tf.types.experimental.TraceType` now requires an additional `placeholder_value` method to be defined.
    - `tf.function` now traces with placeholder values generated by TraceType instead of the value itself.
- `tf.config.experimental.enable_mlir_graph_optimization`:
    - Experimental API removed.
- `tf.config.experimental.disable_mlir_graph_optimization`:
    - Experimental API removed.
- `tf.keras`:
    - Moved all saving-related utilities to a new namespace, `keras.saving`, i.e. `keras.saving.load_model`, `keras.saving.save_model`, `keras.saving.custom_object_scope`, `keras.saving.get_custom_objects`, `keras.saving.register_keras_serializable`, `keras.saving.get_registered_name` and `keras.saving.get_registered_object`. The previous API locations (in `keras.utils` and `keras.models`) will stay available indefinitely, but we recommend that you update your code to point to the new API locations.
    - Improvements and fixes in Keras loss masking:
        - Whether you represent a ragged tensor as a `tf.RaggedTensor` or use Keras masking, the returned loss values should be identical. In previous versions Keras may have silently ignored the mask.
        - If you use masked losses with Keras, the loss values may be different in TensorFlow 2.12 compared to previous versions.
        - In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
- `tf.SavedModel`:
    - Introduced new class `tf.saved_model.experimental.Fingerprint` that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.
    - Introduced API `tf.saved_model.experimental.read_fingerprint(export_dir)` for reading the fingerprint of a SavedModel.
## Known Caveats

- <CAVEATS REGARDING THE RELEASE (BUT NOT BREAKING CHANGES).>
- <ADDING/BUMPING DEPENDENCIES SHOULD GO HERE>
- <KNOWN LACK OF SUPPORT ON SOME PLATFORM, SHOULD GO HERE>

## Major Features and Improvements
- `tf.lite`:
    - Add 16-bit float type support for built-in op `fill`.
    - Transpose now supports 6D tensors.
    - Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
- `tf.keras`:
    - The new Keras model saving format (`.keras`) is available. You can start using it via `model.save(f"{fname}.keras", save_format="keras_v3")`. In the future it will become the default for all files with the `.keras` extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that as a result, Python `lambdas` are disallowed at loading time. If you want to use `lambdas`, you can pass `safe_mode=False` to the loading method (only do this if you trust the source of the model).
    - Added a `model.export(filepath)` API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).
    - Added `keras.export.ExportArchive` class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on `tf.function` tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving.
    - Added utility `tf.keras.utils.FeatureSpace`, a one-stop shop for structured data preprocessing and encoding.
    - Added `tf.SparseTensor` input support to the `tf.keras.layers.Embedding` layer. The layer now accepts a new boolean argument `sparse`. If `sparse` is set to `True`, the layer returns a `SparseTensor` instead of a dense `Tensor`. Defaults to `False`.
    - Added `jit_compile` as a settable property to `tf.keras.Model`.
    - Added `synchronized` optional parameter to `layers.BatchNormalization`.
    - Added deprecation warning to `layers.experimental.SyncBatchNormalization` and suggested to use `layers.BatchNormalization` with `synchronized=True` instead.
    - Updated `tf.keras.layers.BatchNormalization` to support masking of the inputs (`mask` argument) when computing the mean and variance.
    - Added `tf.keras.layers.Identity`, a placeholder pass-through layer.
    - Added `show_trainable` option to `tf.keras.utils.model_to_dot` to display layer trainable status in model plots.
    - Added ability to save a `tf.keras.utils.FeatureSpace` object, via `feature_space.save("myfeaturespace.keras")`, and reload it via `feature_space = tf.keras.models.load_model("myfeaturespace.keras")`.
    - Added utility `tf.keras.utils.to_ordinal` to convert a class vector to an ordinal regression/classification matrix.
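Ordinal encoding represents class `k` as a vector of cumulative "is the label greater than threshold `i`" indicators. A plain-Python sketch of the idea (this illustrates the encoding, not the exact Keras signature or dtype handling):

```python
def to_ordinal(labels, num_classes):
    """Encode class k (out of num_classes) as a length num_classes-1 vector
    whose entry i answers 'is the label > i?'.
    Example for num_classes=4: class 2 -> [1, 1, 0]."""
    return [[1 if label > i else 0 for i in range(num_classes - 1)]
            for label in labels]

assert to_ordinal([0, 2, 3], num_classes=4) == [
    [0, 0, 0],  # class 0: exceeds no threshold
    [1, 1, 0],  # class 2: exceeds thresholds 0 and 1
    [1, 1, 1],  # class 3: exceeds all thresholds
]
```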
- `tf.experimental.dtensor`:
    - Coordination service now works with `dtensor.initialize_accelerator_system`, and is enabled by default.
    - Added `tf.experimental.dtensor.is_dtensor` to check if a tensor is a DTensor instance.
- `tf.data`:
    - Added support for an alternative checkpointing protocol, which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the `experimental_symbolic_checkpoint` option of `tf.data.Options()`.
    - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.random()` operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or not (the default behavior). If `seed` is set and `rerandomize_each_iteration=True`, the `random()` operation will produce a different (deterministic) sequence of numbers every epoch.
    - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.sample_from_datasets()` operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If `seed` is set and `rerandomize_each_iteration=True`, the `sample_from_datasets()` operation will use a different (deterministic) sequence of numbers every epoch.
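The "different but deterministic sequence per epoch" behavior can be sketched in plain Python by mixing the base seed with the epoch index before seeding the generator (a hypothetical stand-in, not tf.data's actual seed derivation):

```python
import random

def epoch_stream(seed: int, epoch: int, n: int = 5):
    """Derive a per-epoch seed from (seed, epoch): the same pair always
    yields the same sequence, while different epochs yield different ones --
    the effect of seed=... with rerandomize_each_iteration=True."""
    rng = random.Random(seed * 1_000_003 + epoch)  # arbitrary mixing scheme
    return [rng.randrange(100) for _ in range(n)]

# Deterministic within an epoch, re-randomized across epochs.
assert epoch_stream(42, epoch=0) == epoch_stream(42, epoch=0)
assert epoch_stream(42, epoch=0) != epoch_stream(42, epoch=1)
```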
- `tf.test`:
    - Added `tf.test.experimental.sync_devices`, which is useful for accurately measuring performance in benchmarks.
- `tf.experimental.dtensor`:
    - Added experimental support for ReduceScatter fusion on GPU (NCCL).
## Bug Fixes and Other Changes

- <SIMILAR TO ABOVE SECTION, BUT FOR OTHER IMPORTANT CHANGES / BUG FIXES>
- <IF A CHANGE CLOSES A GITHUB ISSUE, IT SHOULD BE DOCUMENTED HERE>
- `tf.random`:
    - Added non-experimental aliases for `tf.random.split` and `tf.random.fold_in`; the experimental endpoints are still available, so no code changes are necessary.
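The purpose of `fold_in` is to deterministically derive a new seed from an existing seed plus an integer tag (e.g. a step number), so different stages of a program get independent random streams. A stdlib sketch of the semantics (using SHA-256 for mixing; TF's actual derivation differs):

```python
import hashlib

def fold_in(seed: int, data: int) -> int:
    """Derive a new seed from (seed, data). Same inputs -> same output,
    different tags -> effectively independent seeds."""
    digest = hashlib.sha256(f"{seed}:{data}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

# Folding a per-step tag into a base seed gives each step its own stream.
assert fold_in(7, 0) == fold_in(7, 0)   # deterministic
assert fold_in(7, 0) != fold_in(7, 1)   # distinct per tag
```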
- `tf.experimental.ExtensionType`:
    - Added function `experimental.extension_type.as_dict()`, which converts an instance of `tf.experimental.ExtensionType` to a `dict` representation.
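`ExtensionType` instances behave much like immutable dataclasses, and `as_dict()` produces a field-name-to-value mapping. A stdlib analogy conveys the behavior (this is not the TF implementation; `MaskedTensor` here is a made-up example type):

```python
from dataclasses import dataclass, asdict

# Stand-in for an ExtensionType: frozen, field-based, convertible to a dict.
@dataclass(frozen=True)
class MaskedTensor:
    values: list
    mask: list

mt = MaskedTensor(values=[1, 2, 3], mask=[True, False, True])
assert asdict(mt) == {"values": [1, 2, 3], "mask": [True, False, True]}
```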
- `stream_executor`:
    - The top-level `stream_executor` directory has been deleted; users should use equivalent headers and targets under `compiler/xla/stream_executor`.
- `tf.nn`:
    - Added `tf.nn.experimental.general_dropout`, which is similar to `tf.random.experimental.stateless_dropout` but accepts a custom sampler function.
- `tf.types.experimental.GenericFunction`:
    - The `experimental_get_compiler_ir` method supports `tf.TensorSpec` compilation arguments.
- `tf.config.experimental.mlir_bridge_rollout`:
    - Removed enums `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED` and `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED`, which are no longer used by the tf2xla bridge.
## Thanks to our Contributors

This release contains contributions from many people at Google, as well as:
# Release 2.11.0

## Breaking Changes
- `tf.keras.optimizers.Optimizer` now points to the new Keras optimizer, and old optimizers have moved to the `tf.keras.optimizers.legacy` namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:
    - Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to `tf.keras.optimizers.legacy.XXX` (e.g. `tf.keras.optimizers.legacy.Adam`).
    - TF1 compatibility. The new optimizer does not support TF1 any more, so please use the legacy optimizer `tf.keras.optimizer.legacy.XXX`. We highly recommend migrating your workflow to TF2 for stable support and new features.
    - API not found. The new optimizer has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
    - Learning rate schedule access. When using a `LearningRateSchedule`, the new optimizer's `learning_rate` property returns the current learning rate value instead of a `LearningRateSchedule` object as before. If you need to access the `LearningRateSchedule` object, please use `optimizer._learning_rate`.
    - You implemented a custom optimizer based on the old optimizer. Please set your optimizer to subclass `tf.keras.optimizer.legacy.XXX`. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
    - Errors such as `Cannot recognize variable...`. The new optimizer requires all optimizer variables to be created at the first `apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop.
    - Performance regression on `ParameterServerStrategy`. This could be significant if you have many PS servers. We are aware of this issue and are working on fixes; for now we suggest using the legacy optimizers when using `ParameterServerStrategy`.
    - Timeout or performance loss. We don't anticipate this to happen, but if you see such issues, please use the legacy optimizer and file an issue in the Keras GitHub repo.

  The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (e.g., `Adafactor`) will only be implemented based on `tf.keras.optimizers.Optimizer`, the new base class.
## Major Features and Improvements
- `tf.lite`:
    - New operations supported:
        - `tf.unsortedsegmentmin` op is supported.
        - `tf.atan2` op is supported.
        - `tf.sign` op is supported.
    - Updates to existing operations:
        - `tfl.mul` now supports complex32 inputs.
- `tf.experimental.StructuredTensor`:
    - Introduced `tf.experimental.StructuredTensor`, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
- `tf.keras`:
    - Added method `get_metrics_result()` to `tf.keras.models.Model`.
        - Returns the current metrics values of the model as a dict.
    - Added group normalization layer `tf.keras.layers.GroupNormalization`.
    - Added weight decay support for all Keras optimizers.
    - Added Adafactor optimizer `tf.keras.optimizers.Adafactor`.
    - Added `warmstart_embedding_matrix` to `tf.keras.utils`. This utility can be used to warmstart an embedding matrix, so you reuse previously-learned word embeddings when working with a new set of words which may include previously unseen words (the embedding vectors for unseen words will be randomly initialized).
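The warm-start idea is simple: rows for words already in the old vocabulary are copied over, and rows for unseen words are randomly initialized. A hypothetical plain-Python helper sketches it (this is not the actual `tf.keras.utils.warmstart_embedding_matrix` signature):

```python
import random

def warmstart(base_vocab, base_matrix, new_vocab, dim):
    """Build an embedding matrix for new_vocab, reusing rows from
    base_matrix where the word already existed in base_vocab and drawing
    small random vectors for unseen words. Illustrative helper only."""
    index = {word: i for i, word in enumerate(base_vocab)}
    rng = random.Random(0)  # fixed seed for reproducibility of the sketch
    return [
        base_matrix[index[word]] if word in index
        else [rng.uniform(-0.05, 0.05) for _ in range(dim)]
        for word in new_vocab
    ]

base_vocab = ["cat", "dog"]
base_matrix = [[1.0, 2.0], [3.0, 4.0]]
new_matrix = warmstart(base_vocab, base_matrix, ["dog", "fish"], dim=2)
assert new_matrix[0] == [3.0, 4.0]   # "dog" reuses its learned row
assert len(new_matrix[1]) == 2       # "fish" gets a fresh random row
```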
- `tf.Variable`:
    - Added `CompositeTensor` as a base class to `ResourceVariable`. This allows `tf.Variable`s to be nested in `tf.experimental.ExtensionType`s.
    - Added a new constructor argument `experimental_enable_variable_lifting` to `tf.Variable`, defaulting to `True`. When it's `False`, the variable won't be lifted out of `tf.function`, so it can be used as a `tf.function`-local variable: during each execution of the `tf.function`, the variable will be created and then disposed, similar to a local (i.e. stack-allocated) variable in C/C++. Currently `experimental_enable_variable_lifting=False` only works on non-XLA devices (e.g. under `@tf.function(jit_compile=False)`).
- TF SavedModel:
    - Added `fingerprint.pb` to the SavedModel directory. The `fingerprint.pb` file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
- `tf.data`:
    - Graduated experimental APIs:
        - `tf.data.Dataset.ragged_batch`, which batches elements of `tf.data.Dataset`s into `tf.RaggedTensor`s.
        - `tf.data.Dataset.sparse_batch`, which batches elements of `tf.data.Dataset`s into `tf.sparse.SparseTensor`s.
## Bug Fixes and Other Changes
- `tf.image`:
    - Added an optional parameter `return_index_map` to `tf.image.ssim`, which causes the returned value to be the local SSIM map instead of the global mean.
- TF Core:
    - `tf.custom_gradient` can now be applied to functions that accept "composite" tensors, such as `tf.RaggedTensor`, as inputs.
    - Fixed device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
    - `experimental_follow_type_hints` for `tf.function` has been deprecated. Please use `input_signature` or `reduce_retracing` to minimize retracing.
- `tf.SparseTensor`:
    - Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape`.
## Security

- TF is currently using giflib 5.2.1, which has CVE-2022-28506. TF is not affected by the CVE as it does not use `DumpScreen2RGB` at all.
- Fixes an OOB seg fault in `DynamicStitch` due to missing validation (CVE-2022-41883)
- Fixes an overflow in `tf.keras.losses.poisson` (CVE-2022-41887)
- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes a segfault in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an invalid char-to-bool conversion in printing a tensor (CVE-2022-41911)
- Fixes a heap overflow in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)
## Thanks to our Contributors

This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika
# Release 2.10.1

This release introduces several vulnerability fixes:

- Fixes an OOB seg fault in `DynamicStitch` due to missing validation (CVE-2022-41883)
- Fixes an overflow in `tf.keras.losses.poisson` (CVE-2022-41887)
- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes a segfault in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an invalid char-to-bool conversion in printing a tensor (CVE-2022-41911)
- Fixes a heap overflow in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)
# Release 2.9.3

This release introduces several vulnerability fixes:

- Fixes an overflow in `tf.keras.losses.poisson` (CVE-2022-41887)
- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes a segfault in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an invalid char-to-bool conversion in printing a tensor (CVE-2022-41911)
- Fixes a heap overflow in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)
# Release 2.8.4

This release introduces several vulnerability fixes:

- Fixes a heap OOB failure in `ThreadUnsafeUnigramCandidateSampler` caused by missing validation (CVE-2022-41880)
- Fixes a segfault in `ndarray_tensor_bridge` (CVE-2022-41884)
- Fixes an overflow in `FusedResizeAndPadConv2D` (CVE-2022-41885)
- Fixes an overflow in `ImageProjectiveTransformV2` (CVE-2022-41886)
- Fixes an FPE in `tf.image.generate_bounding_box_proposals` on GPU (CVE-2022-41888)
- Fixes a segfault in `pywrap_tfe_src` caused by invalid attributes (CVE-2022-41889)
- Fixes a `CHECK` fail in `BCast` (CVE-2022-41890)
- Fixes a segfault in `TensorListConcat` (CVE-2022-41891)
- Fixes a `CHECK_EQ` fail in `TensorListResize` (CVE-2022-41893)
- Fixes an overflow in `CONV_3D_TRANSPOSE` on TFLite (CVE-2022-41894)
- Fixes a heap OOB in `MirrorPadGrad` (CVE-2022-41895)
- Fixes a crash in `Mfcc` (CVE-2022-41896)
- Fixes a heap OOB in `FractionalMaxPoolGrad` (CVE-2022-41897)
- Fixes a `CHECK` fail in `SparseFillEmptyRowsGrad` (CVE-2022-41898)
- Fixes a `CHECK` fail in `SdcaOptimizer` (CVE-2022-41899)
- Fixes a heap OOB in `FractionalAvgPool` and `FractionalMaxPool` (CVE-2022-41900)
- Fixes a `CHECK_EQ` in `SparseMatrixNNZ` (CVE-2022-41901)
- Fixes an OOB write in grappler (CVE-2022-41902)
- Fixes an overflow in `ResizeNearestNeighborGrad` (CVE-2022-41907)
- Fixes a `CHECK` fail in `PyFunc` (CVE-2022-41908)
- Fixes a segfault in `CompositeTensorVariantToComponents` (CVE-2022-41909)
- Fixes an invalid char-to-bool conversion in printing a tensor (CVE-2022-41911)
- Fixes a heap overflow in `QuantizeAndDequantizeV2` (CVE-2022-41910)
- Fixes a `CHECK` failure in `SobolSample` via missing validation (CVE-2022-35935)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` in eager mode (CVE-2022-35935)
# Release 2.10.0

## Breaking Changes
- Causal attention in `keras.layers.Attention` and `keras.layers.AdditiveAttention` is now specified in the `call()` method via the `use_causal_mask` argument (rather than in the constructor), for consistency with other layers.
- Some files in `tensorflow/python/training` have been moved to `tensorflow/python/tracking` and `tensorflow/python/checkpoint`. Please update your imports accordingly; the old files will be removed in Release 2.11.
- `tf.keras.optimizers.experimental.Optimizer` will graduate in Release 2.11, which means `tf.keras.optimizers.Optimizer` will be an alias of `tf.keras.optimizers.experimental.Optimizer`. The current `tf.keras.optimizers.Optimizer` will continue to be supported as `tf.keras.optimizers.legacy.Optimizer`, e.g., `tf.keras.optimizers.legacy.Adam`. Most users won't be affected by this change, but please check the API doc if any API used in your workflow has changed or been deprecated, and make adaptations. If you decide to keep using the old optimizer, please explicitly change your optimizer to `tf.keras.optimizers.legacy.Optimizer`.
- RNG behavior change for `tf.keras.initializers`. Keras initializers will now use stateless random ops to generate random numbers.
    - Both seeded and unseeded initializers will always generate the same values every time they are called (for a given variable shape). For unseeded initializers (`seed=None`), a random seed will be created and assigned at initializer creation (different initializer instances get different seeds).
    - An unseeded initializer will raise a warning if it is reused (called) multiple times. This is because it would produce the same values each time, which may not be intended.
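The new contract can be illustrated with a small plain-Python stand-in class (not the Keras implementation): the seed is fixed once at construction, so repeated calls with the same shape return identical values, while separately constructed unseeded instances differ.

```python
import random

class StatelessInitializer:
    """Mimics the new Keras initializer behavior: the seed is fixed at
    construction, and every call re-seeds from it, so repeated calls
    yield identical values."""

    def __init__(self, seed=None):
        # An unseeded initializer draws its seed once, at creation time.
        self.seed = seed if seed is not None else random.randrange(2**31)

    def __call__(self, n):
        rng = random.Random(self.seed)  # stateless: fresh RNG per call
        return [rng.random() for _ in range(n)]

init = StatelessInitializer()
assert init(4) == init(4)  # repeated calls: identical values
# Distinct seeded instances produce distinct streams.
assert StatelessInitializer(1)(4) != StatelessInitializer(2)(4)
```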
- API changes under `tf.experimental.dtensor`:
    - New APIs for initialization of CPU/GPU/TPU in dtensor: `dtensor.initialize_accelerator_system` and `dtensor.shutdown_accelerator_system`.
    - The following existing APIs will be removed: `dtensor.initialize_multi_client`, `dtensor.initialize_tpu_system`, and `dtensor.shutdown_tpu_system`.
## Deprecations

- The C++ `tensorflow::Code` and `tensorflow::Status` will become aliases of respectively `absl::StatusCode` and `absl::Status` in some future release.
    - Use `tensorflow::OkStatus()` instead of `tensorflow::Status::OK()`.
    - Stop constructing `Status` objects from `tensorflow::error::Code`.
    - One MUST NOT access `tensorflow::errors::Code` fields. Accessing `tensorflow::error::Code` fields is fine.
        - Use the constructors such as `tensorflow::errors::InvalidArgument` to create a status using an error code without accessing it.
        - Use the free functions such as `tensorflow::errors::IsInvalidArgument` if needed.
        - As a last resort, use e.g. `static_cast<tensorflow::errors::Code>(error::Code::INVALID_ARGUMENT)` or `static_cast<int>(code)` for comparisons.
- `tensorflow::StatusOr` will also become in the future an alias to `absl::StatusOr`, so use `StatusOr::value` instead of `StatusOr::ConsumeValueOrDie`.
## Major Features and Improvements
- `tf.lite`:
    - New operations supported:
        - tflite SelectV2 now supports 5D.
        - `tf.einsum` is supported with multiple unknown shapes.
        - `tf.unsortedsegmentprod` op is supported.
        - `tf.unsortedsegmentmax` op is supported.
        - `tf.unsortedsegmentsum` op is supported.
    - Updates to existing operations:
        - `tfl.scatter_nd` now supports I1 for the `update` arg.
    - Upgraded Flatbuffers from v1.12.0 to v2.0.5.
- `tf.keras`:
    - The `EinsumDense` layer is moved from experimental to core. Its import path is moved from `tf.keras.layers.experimental.EinsumDense` to `tf.keras.layers.EinsumDense`.
    - Added `tf.keras.utils.audio_dataset_from_directory` utility to easily generate audio classification datasets from directories of `.wav` files.
    - Added `subset="both"` support in `tf.keras.utils.image_dataset_from_directory`, `tf.keras.utils.text_dataset_from_directory`, and `audio_dataset_from_directory`, to be used with the `validation_split` argument, for returning both dataset splits at once, as a tuple.
    - Added `tf.keras.utils.split_dataset` utility to split a `Dataset` object or a list/tuple of arrays into two `Dataset` objects (e.g. train/test).
    - Added step granularity to the `BackupAndRestore` callback for handling distributed training failures and restarts. The training state can now be restored at the exact epoch and step at which it was previously saved before failing.
    - Added `tf.keras.dtensor.experimental.optimizers.AdamW`. This optimizer is similar to the existing `keras.optimizers.experimental.AdamW`, and works in the DTensor training use case.
    - Improved masking support for `tf.keras.layers.MultiHeadAttention`.
        - Implicit masks for `query`, `key` and `value` inputs will automatically be used to compute a correct attention mask for the layer. These padding masks will be combined with any `attention_mask` passed in directly when calling the layer. This can be used with `tf.keras.layers.Embedding` with `mask_zero=True` to automatically infer a correct padding mask.
        - Added a `use_causal_mask` call-time argument to the layer. Passing `use_causal_mask=True` will compute a causal attention mask, and optionally combine it with any `attention_mask` passed in directly when calling the layer.
    - Added an `ignore_class` argument to the loss `SparseCategoricalCrossentropy` and the metrics `IoU` and `MeanIoU`, to specify a class index to be ignored during loss/metric computation (e.g. a background/void class).
    - Added `tf.keras.models.experimental.SharpnessAwareMinimization`. This class implements the sharpness-aware minimization technique, which boosts model performance on various tasks, e.g., ResNet on image classification.
- `tf.data`:
  - Added support for cross-trainer data caching in tf.data service. This saves computation resources when concurrent training jobs train from the same dataset. See the tf.data service documentation (https://www.tensorflow.org/api_docs/python/tf/data/experimental/service#sharing_tfdata_service_with_concurrent_trainers) for more details.
  - Added `dataset_id` to `tf.data.experimental.service.register_dataset`. If provided, `tf.data` service will use the provided ID for the dataset. If the dataset ID already exists, no new dataset will be registered. This is useful if multiple training jobs need to use the same dataset for training. In this case, users should call `register_dataset` with the same `dataset_id`.
  - Added a new field, `inject_prefetch`, to `tf.data.experimental.OptimizationOptions`. If it is set to `True`, `tf.data` will automatically add a `prefetch` transformation to datasets that end in synchronous transformations. This enables data generation to be overlapped with data consumption, at the cost of a small increase in memory usage due to buffering.
  - Added a new value to `tf.data.Options.autotune.autotune_algorithm`: `STAGE_BASED`. If the autotune algorithm is set to `STAGE_BASED`, it runs a new algorithm that can achieve the same performance with lower CPU/memory usage.
  - Added `tf.data.experimental.from_list`, a new API for creating `Dataset`s from lists of elements.
  - Graduated experimental APIs:
    - `tf.data.Dataset.counter`, which creates `Dataset`s of indefinite sequences of numbers.
    - `tf.data.Dataset.ignore_errors`, which drops erroneous elements from `Dataset`s.
  - Added `tf.data.Dataset.rebatch`, a new API for rebatching the elements of a dataset.
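A small sketch of the new dataset APIs above (assuming TF 2.10+; `inject_prefetch` is shown purely as configuration):

```python
import tensorflow as tf

# Sketch of the new tf.data APIs (assumes TF >= 2.10).
ds = tf.data.experimental.from_list([1, 2, 3, 4, 5, 6])

# rebatch() re-slices existing batches into a new batch size.
rebatched = ds.batch(3).rebatch(2)
print([b.numpy().tolist() for b in rebatched])  # [[1, 2], [3, 4], [5, 6]]

# inject_prefetch: let tf.data append a prefetch transformation to
# pipelines that end in synchronous transformations.
opts = tf.data.Options()
opts.experimental_optimization.inject_prefetch = True
ds = ds.with_options(opts)
```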
- `tf.distribute`:
  - Added `tf.distribute.experimental.PreemptionCheckpointHandler` to handle worker preemption/maintenance and cluster-wise consistent error reporting for `tf.distribute.MultiWorkerMirroredStrategy`. Specifically, for the type of interruption with advance notice, it automatically saves a checkpoint, exits the program without raising an unrecoverable error, and restores the progress when training restarts.
- `tf.math`:
  - Added `tf.math.approx_max_k` and `tf.math.approx_min_k`, optimized alternatives to `tf.math.top_k` on TPU. The performance difference ranges from 8 to 100 times depending on the size of k. When running on CPU and GPU, a non-optimized XLA kernel is used.
- `tf.train`:
  - Added `tf.train.TrackableView`, which allows users to inspect TensorFlow Trackable objects (e.g. `tf.Module`, Keras layers and models).
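A minimal sketch of inspecting a Trackable with the new view (assuming TF 2.10+; the `Small` module below is a hypothetical example):

```python
import tensorflow as tf

# Sketch (assumes TF >= 2.10): TrackableView exposes the checkpointing
# graph of a Trackable object such as a tf.Module.
class Small(tf.Module):
    def __init__(self):
        self.v = tf.Variable(1.0)

m = Small()
view = tf.train.TrackableView(m)
# children() returns a dict mapping attribute names to child Trackables.
print(sorted(view.children(m).keys()))
```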
- `tf.vectorized_map`:
  - Added an optional parameter: `warn`. This parameter controls whether or not warnings will be printed when operations in the provided `fn` fall back to a while loop.
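A quick sketch of the new `warn` argument (assuming TF 2.10+):

```python
import tensorflow as tf

# Sketch (assumes TF >= 2.10): warn=False silences the warning emitted
# when an op inside `fn` has no vectorized (pfor) conversion and the
# computation falls back to a while loop.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
out = tf.vectorized_map(lambda row: row * 2.0, x, warn=False)
print(out.numpy().tolist())  # [[2.0, 4.0], [6.0, 8.0]]
```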
- XLA:
  - `tf.distribute.MultiWorkerMirroredStrategy` is now compilable with XLA.
  - Compute Library for the Arm® Architecture (ACL) is supported for the aarch64 CPU XLA runtime.
- CPU performance optimizations:
  - x86 CPUs: the `oneDNN` bfloat16 auto-mixed precision grappler graph optimization pass has been renamed from `auto_mixed_precision_mkl` to `auto_mixed_precision_onednn_bfloat16`. See example usage here.
  - aarch64 CPUs: Experimental performance optimizations from Compute Library for the Arm® Architecture (ACL) are available through oneDNN in the default Linux aarch64 package (`pip install tensorflow`).
    - The optimizations are disabled by default.
    - Set the environment variable `TF_ENABLE_ONEDNN_OPTS=1` to enable the optimizations. Setting the variable to 0 or unsetting it will disable the optimizations.
    - These optimizations can yield slightly different numerical results than when they are off, due to floating-point round-off errors from different computation approaches and orders.
    - To verify that the optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.
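The environment variable above must be set before TensorFlow is imported; a minimal sketch:

```python
import os

# Sketch: the flag is read once, when TensorFlow is imported, so set it
# before the import (or export it in the shell before launching Python).
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"   # "0" disables the optimizations
import tensorflow as tf  # look for "oneDNN custom operations are on" in the log
```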
Bug Fixes and Other Changes
- New argument `experimental_device_ordinal` in `LogicalDeviceConfiguration` to control the order of logical devices (GPU only).
- `tf.keras`:
  - Changed the TensorBoard tag names produced by the `tf.keras.callbacks.TensorBoard` callback, so that summaries logged automatically for model weights now include either a `/histogram` or `/image` suffix in their tag names, in order to prevent tag name collisions across summary types.
- When running on GPU (with cuDNN version 7.6.3 or later), `tf.nn.depthwise_conv2d` backprop to `filter` (and therefore also `tf.keras.layers.DepthwiseConv2D`) now operates deterministically (and `tf.errors.UnimplementedError` is no longer thrown) when op-determinism has been enabled via `tf.config.experimental.enable_op_determinism`. This closes issue 47174.
- `tf.random`:
  - Added `tf.random.experimental.stateless_shuffle`, a stateless version of `tf.random.shuffle`.
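A short sketch of the stateless shuffle (assuming TF 2.10+): because it is a pure function of its seed, the same seed always yields the same permutation.

```python
import tensorflow as tf

# Sketch (assumes TF >= 2.10): stateless_shuffle is deterministic given
# a fixed [2]-element seed, unlike the stateful tf.random.shuffle.
x = tf.range(5)
a = tf.random.experimental.stateless_shuffle(x, seed=[7, 42])
b = tf.random.experimental.stateless_shuffle(x, seed=[7, 42])
print(a.numpy().tolist() == b.numpy().tolist())  # True
```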
Security
- Fixes a `CHECK` failure in `tf.reshape` caused by overflows (CVE-2022-35934)
- Fixes a `CHECK` failure in `SobolSample` caused by missing validation (CVE-2022-35935)
- Fixes an OOB read in the `Gather_nd` op in TF Lite (CVE-2022-35937)
- Fixes a `CHECK` failure in `TensorListReserve` caused by missing validation (CVE-2022-35960)
- Fixes an OOB write in the `Scatter_nd` op in TF Lite (CVE-2022-35939)
- Fixes an integer overflow in `RaggedRangeOp` (CVE-2022-35940)
- Fixes a `CHECK` failure in `AvgPoolOp` (CVE-2022-35941)
- Fixes `CHECK` failures in `UnbatchGradOp` (CVE-2022-35952)
- Fixes a segfault in the TFLite converter on per-channel quantized transposed convolutions (CVE-2022-36027)
- Fixes `CHECK` failures in `AvgPool3DGrad` (CVE-2022-35959)
- Fixes `CHECK` failures in `FractionalAvgPoolGrad` (CVE-2022-35963)
- Fixes a segfault in `BlockLSTMGradV2` (CVE-2022-35964)
- Fixes a segfault in `LowerBound` and `UpperBound` (CVE-2022-35965)
- Fixes a segfault in `QuantizedAvgPool` (CVE-2022-35966)
- Fixes a segfault in `QuantizedAdd` (CVE-2022-35967)
- Fixes a `CHECK` fail in `AvgPoolGrad` (CVE-2022-35968)
- Fixes a `CHECK` fail in `Conv2DBackpropInput` (CVE-2022-35969)
- Fixes a segfault in `QuantizedInstanceNorm` (CVE-2022-35970)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVars` (CVE-2022-35971)
- Fixes a segfault in `Requantize` (CVE-2022-36017)
- Fixes a segfault in `QuantizedBiasAdd` (CVE-2022-35972)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannel` (CVE-2022-36019)
- Fixes a segfault in `QuantizedMatMul` (CVE-2022-35973)
- Fixes a segfault in `QuantizeDownAndShrinkRange` (CVE-2022-35974)
- Fixes segfaults in `QuantizedRelu` and `QuantizedRelu6` (CVE-2022-35979)
- Fixes a `CHECK` fail in `FractionalMaxPoolGrad` (CVE-2022-35981)
- Fixes a `CHECK` fail in `RaggedTensorToVariant` (CVE-2022-36018)
- Fixes a `CHECK` fail in `QuantizeAndDequantizeV3` (CVE-2022-36026)
- Fixes a segfault in `SparseBincount` (CVE-2022-35982)
- Fixes a `CHECK` fail in `Save` and `SaveSlices` (CVE-2022-35983)
- Fixes a `CHECK` fail in `ParameterizedTruncatedNormal` (CVE-2022-35984)
- Fixes a `CHECK` fail in `LRNGrad` (CVE-2022-35985)
- Fixes a segfault in `RaggedBincount` (CVE-2022-35986)
- Fixes a `CHECK` fail in `DenseBincount` (CVE-2022-35987)
- Fixes a `CHECK` fail in `tf.linalg.matrix_rank` (CVE-2022-35988)
- Fixes a `CHECK` fail in `MaxPool` (CVE-2022-35989)
- Fixes a `CHECK` fail in `Conv2DBackpropInput` (CVE-2022-35999)
- Fixes a `CHECK` fail in `EmptyTensorList` (CVE-2022-35998)
- Fixes a `CHECK` fail in `tf.sparse.cross` (CVE-2022-35997)
- Fixes a floating point exception in `Conv2D` (CVE-2022-35996)
- Fixes a `CHECK` fail in `AudioSummaryV2` (CVE-2022-35995)
- Fixes a `CHECK` fail in `CollectiveGather` (CVE-2022-35994)
- Fixes a `CHECK` fail in `SetSize` (CVE-2022-35993)
- Fixes a `CHECK` fail in `TensorListFromTensor` (CVE-2022-35992)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` (CVE-2022-35991)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannelGradient` (CVE-2022-35990)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsGradient` (CVE-2022-36005)
- Fixes a `CHECK` fail in `tf.random.gamma` (CVE-2022-36004)
- Fixes a `CHECK` fail in `RandomPoissonV2` (CVE-2022-36003)
- Fixes a `CHECK` fail in `Unbatch` (CVE-2022-36002)
- Fixes a `CHECK` fail in `DrawBoundingBoxes` (CVE-2022-36001)
- Fixes a `CHECK` fail in `Eig` (CVE-2022-36000)
- Fixes a null dereference on MLIR on empty function attributes (CVE-2022-36011)
- Fixes an assertion failure on MLIR empty edge names (CVE-2022-36012)
- Fixes a null dereference in `mlir::tfg::GraphDefImporter::ConvertNodeDef` (CVE-2022-36013)
- Fixes a null dereference in `mlir::tfg::TFOp::nameAttr` (CVE-2022-36014)
- Fixes an integer overflow in math ops (CVE-2022-36015)
- Fixes a `CHECK` fail in `tensorflow::full_type::SubstituteFromAttrs` (CVE-2022-36016)
- Fixes an OOB read in the `Gather_nd` op in TF Lite Micro (CVE-2022-35938)
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Abolfazl Shahbazi, Adam Lanicek, Amin Benarieb, andreii, Andrew Fitzgibbon, Andrew Goodbody, angerson, Ashiq Imran, Aurélien Geron, Banikumar Maiti (Intel Aipg), Ben Barsdell, Ben Mares, bhack, Bhavani Subramanian, Bill Schnurr, Byungsoo Oh, Chandra Sr Potula, Chengji Yao, Chris Carpita, Christopher Bate, chunduriv, Cliff Woolley, Cliffs Dover, Cloud Han, Code-Review-Doctor, DEKHTIARJonathan, Deven Desai, Djacon, Duncan Riach, fedotoff, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, guozhong.zhuang, Hui Peng, James Gerity, Jason Furmanek, Jonathan Dekhtiar, Jueon Park, Kaixi Hou, Kanvi Khanna, Keith Smiley, Koan-Sin Tan, Kulin Seth, kushanam, Learning-To-Play, Li-Wen Chang, lipracer, liuyuanqiang, Louis Sugy, Lucas David, Lukas Geiger, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, Meenakshi Venkataraman, Michal Szutenberg, Michele Di Giorgio, Mickaël Salamin, Nathan John Sircombe, Nathan Luehr, Neil Girdhar, Nils Reichardt, Nishidha Panpaliya, Nobuo Tsukamoto, Om Thakkar, Patrice Vignola, Philipp Hack, Pooya Jannaty, Prianka Liz Kariat, pshiko, Rajeshwar Reddy T, rdl4199, Rohit Santhanam, Rsanthanam-Amd, Sachin Muradi, Saoirse Stewart, Serge Panev, Shu Wang, Srinivasan Narayanamoorthy, Stella Stamenova, Stephan Hartmann, Sunita Nadampalli, synandi, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Trevor Morris, Xiaoming (Jason) Cui, Yimei Sun, Yong Tang, Yuanqiang Liu, Yulv-Git, Zhoulong Jiang, ZihengJiang
Release 2.9.2
This release introduces several vulnerability fixes:
- Fixes a `CHECK` failure in `tf.reshape` caused by overflows (CVE-2022-35934)
- Fixes a `CHECK` failure in `SobolSample` caused by missing validation (CVE-2022-35935)
- Fixes an OOB read in the `Gather_nd` op in TF Lite (CVE-2022-35937)
- Fixes a `CHECK` failure in `TensorListReserve` caused by missing validation (CVE-2022-35960)
- Fixes an OOB write in the `Scatter_nd` op in TF Lite (CVE-2022-35939)
- Fixes an integer overflow in `RaggedRangeOp` (CVE-2022-35940)
- Fixes a `CHECK` failure in `AvgPoolOp` (CVE-2022-35941)
- Fixes `CHECK` failures in `UnbatchGradOp` (CVE-2022-35952)
- Fixes a segfault in the TFLite converter on per-channel quantized transposed convolutions (CVE-2022-36027)
- Fixes `CHECK` failures in `AvgPool3DGrad` (CVE-2022-35959)
- Fixes `CHECK` failures in `FractionalAvgPoolGrad` (CVE-2022-35963)
- Fixes a segfault in `BlockLSTMGradV2` (CVE-2022-35964)
- Fixes a segfault in `LowerBound` and `UpperBound` (CVE-2022-35965)
- Fixes a segfault in `QuantizedAvgPool` (CVE-2022-35966)
- Fixes a segfault in `QuantizedAdd` (CVE-2022-35967)
- Fixes a `CHECK` fail in `AvgPoolGrad` (CVE-2022-35968)
- Fixes a `CHECK` fail in `Conv2DBackpropInput` (CVE-2022-35969)
- Fixes a segfault in `QuantizedInstanceNorm` (CVE-2022-35970)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVars` (CVE-2022-35971)
- Fixes a segfault in `Requantize` (CVE-2022-36017)
- Fixes a segfault in `QuantizedBiasAdd` (CVE-2022-35972)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannel` (CVE-2022-36019)
- Fixes a segfault in `QuantizedMatMul` (CVE-2022-35973)
- Fixes a segfault in `QuantizeDownAndShrinkRange` (CVE-2022-35974)
- Fixes segfaults in `QuantizedRelu` and `QuantizedRelu6` (CVE-2022-35979)
- Fixes a `CHECK` fail in `FractionalMaxPoolGrad` (CVE-2022-35981)
- Fixes a `CHECK` fail in `RaggedTensorToVariant` (CVE-2022-36018)
- Fixes a `CHECK` fail in `QuantizeAndDequantizeV3` (CVE-2022-36026)
- Fixes a segfault in `SparseBincount` (CVE-2022-35982)
- Fixes a `CHECK` fail in `Save` and `SaveSlices` (CVE-2022-35983)
- Fixes a `CHECK` fail in `ParameterizedTruncatedNormal` (CVE-2022-35984)
- Fixes a `CHECK` fail in `LRNGrad` (CVE-2022-35985)
- Fixes a segfault in `RaggedBincount` (CVE-2022-35986)
- Fixes a `CHECK` fail in `DenseBincount` (CVE-2022-35987)
- Fixes a `CHECK` fail in `tf.linalg.matrix_rank` (CVE-2022-35988)
- Fixes a `CHECK` fail in `MaxPool` (CVE-2022-35989)
- Fixes a `CHECK` fail in `Conv2DBackpropInput` (CVE-2022-35999)
- Fixes a `CHECK` fail in `EmptyTensorList` (CVE-2022-35998)
- Fixes a `CHECK` fail in `tf.sparse.cross` (CVE-2022-35997)
- Fixes a floating point exception in `Conv2D` (CVE-2022-35996)
- Fixes a `CHECK` fail in `AudioSummaryV2` (CVE-2022-35995)
- Fixes a `CHECK` fail in `CollectiveGather` (CVE-2022-35994)
- Fixes a `CHECK` fail in `SetSize` (CVE-2022-35993)
- Fixes a `CHECK` fail in `TensorListFromTensor` (CVE-2022-35992)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` (CVE-2022-35991)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannelGradient` (CVE-2022-35990)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsGradient` (CVE-2022-36005)
- Fixes a `CHECK` fail in `tf.random.gamma` (CVE-2022-36004)
- Fixes a `CHECK` fail in `RandomPoissonV2` (CVE-2022-36003)
- Fixes a `CHECK` fail in `Unbatch` (CVE-2022-36002)
- Fixes a `CHECK` fail in `DrawBoundingBoxes` (CVE-2022-36001)
- Fixes a `CHECK` fail in `Eig` (CVE-2022-36000)
- Fixes a null dereference on MLIR on empty function attributes (CVE-2022-36011)
- Fixes an assertion failure on MLIR empty edge names (CVE-2022-36012)
- Fixes a null dereference in `mlir::tfg::GraphDefImporter::ConvertNodeDef` (CVE-2022-36013)
- Fixes a null dereference in `mlir::tfg::TFOp::nameAttr` (CVE-2022-36014)
- Fixes an integer overflow in math ops (CVE-2022-36015)
- Fixes a `CHECK` fail in `tensorflow::full_type::SubstituteFromAttrs` (CVE-2022-36016)
- Fixes an OOB read in the `Gather_nd` op in TF Lite Micro (CVE-2022-35938)
Release 2.8.3
This release introduces several vulnerability fixes:
- Fixes a `CHECK` failure in `tf.reshape` caused by overflows (CVE-2022-35934)
- Fixes a `CHECK` failure in `SobolSample` caused by missing validation (CVE-2022-35935)
- Fixes an OOB read in the `Gather_nd` op in TF Lite (CVE-2022-35937)
- Fixes a `CHECK` failure in `TensorListReserve` caused by missing validation (CVE-2022-35960)
- Fixes an OOB write in the `Scatter_nd` op in TF Lite (CVE-2022-35939)
- Fixes an integer overflow in `RaggedRangeOp` (CVE-2022-35940)
- Fixes a `CHECK` failure in `AvgPoolOp` (CVE-2022-35941)
- Fixes `CHECK` failures in `UnbatchGradOp` (CVE-2022-35952)
- Fixes a segfault in the TFLite converter on per-channel quantized transposed convolutions (CVE-2022-36027)
- Fixes `CHECK` failures in `AvgPool3DGrad` (CVE-2022-35959)
- Fixes `CHECK` failures in `FractionalAvgPoolGrad` (CVE-2022-35963)
- Fixes a segfault in `BlockLSTMGradV2` (CVE-2022-35964)
- Fixes a segfault in `LowerBound` and `UpperBound` (CVE-2022-35965)
- Fixes a segfault in `QuantizedAvgPool` (CVE-2022-35966)
- Fixes a segfault in `QuantizedAdd` (CVE-2022-35967)
- Fixes a `CHECK` fail in `AvgPoolGrad` (CVE-2022-35968)
- Fixes a `CHECK` fail in `Conv2DBackpropInput` (CVE-2022-35969)
- Fixes a segfault in `QuantizedInstanceNorm` (CVE-2022-35970)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVars` (CVE-2022-35971)
- Fixes a segfault in `Requantize` (CVE-2022-36017)
- Fixes a segfault in `QuantizedBiasAdd` (CVE-2022-35972)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannel` (CVE-2022-36019)
- Fixes a segfault in `QuantizedMatMul` (CVE-2022-35973)
- Fixes a segfault in `QuantizeDownAndShrinkRange` (CVE-2022-35974)
- Fixes segfaults in `QuantizedRelu` and `QuantizedRelu6` (CVE-2022-35979)
- Fixes a `CHECK` fail in `FractionalMaxPoolGrad` (CVE-2022-35981)
- Fixes a `CHECK` fail in `RaggedTensorToVariant` (CVE-2022-36018)
- Fixes a `CHECK` fail in `QuantizeAndDequantizeV3` (CVE-2022-36026)
- Fixes a segfault in `SparseBincount` (CVE-2022-35982)
- Fixes a `CHECK` fail in `Save` and `SaveSlices` (CVE-2022-35983)
- Fixes a `CHECK` fail in `ParameterizedTruncatedNormal` (CVE-2022-35984)
- Fixes a `CHECK` fail in `LRNGrad` (CVE-2022-35985)
- Fixes a segfault in `RaggedBincount` (CVE-2022-35986)
- Fixes a `CHECK` fail in `DenseBincount` (CVE-2022-35987)
- Fixes a `CHECK` fail in `tf.linalg.matrix_rank` (CVE-2022-35988)
- Fixes a `CHECK` fail in `MaxPool` (CVE-2022-35989)
- Fixes a `CHECK` fail in `Conv2DBackpropInput` (CVE-2022-35999)
- Fixes a `CHECK` fail in `EmptyTensorList` (CVE-2022-35998)
- Fixes a `CHECK` fail in `tf.sparse.cross` (CVE-2022-35997)
- Fixes a floating point exception in `Conv2D` (CVE-2022-35996)
- Fixes a `CHECK` fail in `AudioSummaryV2` (CVE-2022-35995)
- Fixes a `CHECK` fail in `CollectiveGather` (CVE-2022-35994)
- Fixes a `CHECK` fail in `SetSize` (CVE-2022-35993)
- Fixes a `CHECK` fail in `TensorListFromTensor` (CVE-2022-35992)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` (CVE-2022-35991)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannelGradient` (CVE-2022-35990)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsGradient` (CVE-2022-36005)
- Fixes a `CHECK` fail in `tf.random.gamma` (CVE-2022-36004)
- Fixes a `CHECK` fail in `RandomPoissonV2` (CVE-2022-36003)
- Fixes a `CHECK` fail in `Unbatch` (CVE-2022-36002)
- Fixes a `CHECK` fail in `DrawBoundingBoxes` (CVE-2022-36001)
- Fixes a `CHECK` fail in `Eig` (CVE-2022-36000)
- Fixes a null dereference on MLIR on empty function attributes (CVE-2022-36011)
- Fixes an assertion failure on MLIR empty edge names (CVE-2022-36012)
- Fixes a null dereference in `mlir::tfg::GraphDefImporter::ConvertNodeDef` (CVE-2022-36013)
- Fixes a null dereference in `mlir::tfg::TFOp::nameAttr` (CVE-2022-36014)
- Fixes an integer overflow in math ops (CVE-2022-36015)
- Fixes a `CHECK` fail in `tensorflow::full_type::SubstituteFromAttrs` (CVE-2022-36016)
- Fixes an OOB read in the `Gather_nd` op in TF Lite Micro (CVE-2022-35938)
Release 2.7.4
Note: This is the last release in the 2.7.x series
This release introduces several vulnerability fixes:
- Fixes a `CHECK` failure in `tf.reshape` caused by overflows (CVE-2022-35934)
- Fixes a `CHECK` failure in `SobolSample` caused by missing validation (CVE-2022-35935)
- Fixes an OOB read in the `Gather_nd` op in TF Lite (CVE-2022-35937)
- Fixes a `CHECK` failure in `TensorListReserve` caused by missing validation (CVE-2022-35960)
- Fixes an OOB write in the `Scatter_nd` op in TF Lite (CVE-2022-35939)
- Fixes an integer overflow in `RaggedRangeOp` (CVE-2022-35940)
- Fixes a `CHECK` failure in `AvgPoolOp` (CVE-2022-35941)
- Fixes `CHECK` failures in `UnbatchGradOp` (CVE-2022-35952)
- Fixes a segfault in the TFLite converter on per-channel quantized transposed convolutions (CVE-2022-36027)
- Fixes `CHECK` failures in `AvgPool3DGrad` (CVE-2022-35959)
- Fixes `CHECK` failures in `FractionalAvgPoolGrad` (CVE-2022-35963)
- Fixes a segfault in `BlockLSTMGradV2` (CVE-2022-35964)
- Fixes a segfault in `LowerBound` and `UpperBound` (CVE-2022-35965)
- Fixes a segfault in `QuantizedAvgPool` (CVE-2022-35966)
- Fixes a segfault in `QuantizedAdd` (CVE-2022-35967)
- Fixes a `CHECK` fail in `AvgPoolGrad` (CVE-2022-35968)
- Fixes a `CHECK` fail in `Conv2DBackpropInput` (CVE-2022-35969)
- Fixes a segfault in `QuantizedInstanceNorm` (CVE-2022-35970)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVars` (CVE-2022-35971)
- Fixes a segfault in `Requantize` (CVE-2022-36017)
- Fixes a segfault in `QuantizedBiasAdd` (CVE-2022-35972)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannel` (CVE-2022-36019)
- Fixes a segfault in `QuantizedMatMul` (CVE-2022-35973)
- Fixes a segfault in `QuantizeDownAndShrinkRange` (CVE-2022-35974)
- Fixes segfaults in `QuantizedRelu` and `QuantizedRelu6` (CVE-2022-35979)
- Fixes a `CHECK` fail in `FractionalMaxPoolGrad` (CVE-2022-35981)
- Fixes a `CHECK` fail in `RaggedTensorToVariant` (CVE-2022-36018)
- Fixes a `CHECK` fail in `QuantizeAndDequantizeV3` (CVE-2022-36026)
- Fixes a segfault in `SparseBincount` (CVE-2022-35982)
- Fixes a `CHECK` fail in `Save` and `SaveSlices` (CVE-2022-35983)
- Fixes a `CHECK` fail in `ParameterizedTruncatedNormal` (CVE-2022-35984)
- Fixes a `CHECK` fail in `LRNGrad` (CVE-2022-35985)
- Fixes a segfault in `RaggedBincount` (CVE-2022-35986)
- Fixes a `CHECK` fail in `DenseBincount` (CVE-2022-35987)
- Fixes a `CHECK` fail in `tf.linalg.matrix_rank` (CVE-2022-35988)
- Fixes a `CHECK` fail in `MaxPool` (CVE-2022-35989)
- Fixes a `CHECK` fail in `Conv2DBackpropInput` (CVE-2022-35999)
- Fixes a `CHECK` fail in `EmptyTensorList` (CVE-2022-35998)
- Fixes a `CHECK` fail in `tf.sparse.cross` (CVE-2022-35997)
- Fixes a floating point exception in `Conv2D` (CVE-2022-35996)
- Fixes a `CHECK` fail in `AudioSummaryV2` (CVE-2022-35995)
- Fixes a `CHECK` fail in `CollectiveGather` (CVE-2022-35994)
- Fixes a `CHECK` fail in `SetSize` (CVE-2022-35993)
- Fixes a `CHECK` fail in `TensorListFromTensor` (CVE-2022-35992)
- Fixes a `CHECK` fail in `TensorListScatter` and `TensorListScatterV2` (CVE-2022-35991)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsPerChannelGradient` (CVE-2022-35990)
- Fixes a `CHECK` fail in `FakeQuantWithMinMaxVarsGradient` (CVE-2022-36005)
- Fixes a `CHECK` fail in `tf.random.gamma` (CVE-2022-36004)
- Fixes a `CHECK` fail in `RandomPoissonV2` (CVE-2022-36003)
- Fixes a `CHECK` fail in `Unbatch` (CVE-2022-36002)
- Fixes a `CHECK` fail in `DrawBoundingBoxes` (CVE-2022-36001)
- Fixes a `CHECK` fail in `Eig` (CVE-2022-36000)
- Fixes a null dereference on MLIR on empty function attributes (CVE-2022-36011)
- Fixes an assertion failure on MLIR empty edge names (CVE-2022-36012)
- Fixes a null dereference in `mlir::tfg::GraphDefImporter::ConvertNodeDef` (CVE-2022-36013)
- Fixes a null dereference in `mlir::tfg::TFOp::nameAttr` (CVE-2022-36014)
- Fixes an integer overflow in math ops (CVE-2022-36015)
- Fixes a `CHECK` fail in `tensorflow::full_type::SubstituteFromAttrs` (CVE-2022-36016)
- Fixes an OOB read in the `Gather_nd` op in TF Lite Micro (CVE-2022-35938)
Release 2.9.1
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See #53234, protocolbuffers/protobuf#9954 and #56077.
Release 2.8.2
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See #53234, protocolbuffers/protobuf#9954 and #56077.
Release 2.7.3
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See #53234, protocolbuffers/protobuf#9954 and #56077.
Release 2.6.5
Add an upper bound for `protobuf` in `setup.py` since `protobuf` after version 3.20 is currently incompatible with TensorFlow. See #53234, protocolbuffers/protobuf#9954 and #56077.
Release 2.9.0
Breaking Changes
- Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
- Build, Compilation and Packaging
  - TensorFlow is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
  - TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
  - Discussion of these changes can be found on SIG Build's TensorFlow Community Forum thread.
- The `tf.keras.mixed_precision.experimental` API has been removed. The non-experimental symbols under `tf.keras.mixed_precision` have been available since TensorFlow 2.4 and should be used instead.
  - The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes:
    - Remove the word "experimental" from `tf.keras.mixed_precision` symbols. E.g., replace `tf.keras.mixed_precision.experimental.global_policy` with `tf.keras.mixed_precision.global_policy`.
    - Replace `tf.keras.mixed_precision.experimental.set_policy` with `tf.keras.mixed_precision.set_global_policy`. The experimental symbol `set_policy` was renamed to `set_global_policy` in the non-experimental API.
    - Replace `LossScaleOptimizer(opt, "dynamic")` with `LossScaleOptimizer(opt)`. If you pass anything other than `"dynamic"` to the second argument, see (1) of the next section.
  - In the following rare cases, you need to make more changes when switching to the non-experimental API:
    - If you passed anything other than `"dynamic"` to the `loss_scale` argument (the second argument) of `LossScaleOptimizer`:
      - The `LossScaleOptimizer` constructor takes in different arguments. See the TF 2.7 documentation of `tf.keras.mixed_precision.experimental.LossScaleOptimizer` for details on the differences, which has examples on how to convert to the non-experimental `LossScaleOptimizer`.
    - If you passed a value to the `loss_scale` argument (the second argument) of `Policy`:
      - The experimental version of `Policy` optionally took a `tf.compat.v1.mixed_precision.LossScale` in the constructor, which defaulted to a dynamic loss scale for the `"mixed_float16"` policy and no loss scale for other policies. In `Model.compile`, if the model's policy had a loss scale, the optimizer would be wrapped with a `LossScaleOptimizer`. With the non-experimental `Policy`, there is no loss scale associated with the `Policy`, and `Model.compile` wraps the optimizer with a `LossScaleOptimizer` if and only if the policy is a `"mixed_float16"` policy. If you previously passed a `LossScale` to the experimental `Policy`, consider just removing it, as the default loss-scaling behavior is usually what you want. If you really want to customize the loss-scaling behavior, you can wrap your optimizer with a `LossScaleOptimizer` before passing it to `Model.compile`.
    - If you use the very rarely-used function `tf.keras.mixed_precision.experimental.get_layer_policy`:
      - Replace `tf.keras.mixed_precision.experimental.get_layer_policy(layer)` with `layer.dtype_policy`.
- `tf.mixed_precision.experimental.LossScale` and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed `tf.keras.mixed_precision.experimental` API. The symbols are still available under `tf.compat.v1.mixed_precision`.
- The `experimental_relax_shapes` heuristic for `tf.function` has been deprecated and replaced with `reduce_retracing`, which encompasses broader heuristics to reduce the number of retraces (see below).
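The common migration cases above can be sketched as follows (assuming TF 2.4+; the `SGD` optimizer is just a placeholder):

```python
import tensorflow as tf

# Migration sketch: experimental mixed-precision calls map onto the
# non-experimental API.
# Before: tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
tf.keras.mixed_precision.set_global_policy("mixed_float16")
print(tf.keras.mixed_precision.global_policy().name)  # mixed_float16

# Before: LossScaleOptimizer(opt, "dynamic") -- the second argument is gone.
opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())

tf.keras.mixed_precision.set_global_policy("float32")  # restore the default
```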
Major Features and Improvements
- `tf.keras`:
  - Added `tf.keras.applications.resnet_rs` models. This includes the `ResNetRS50`, `ResNetRS101`, `ResNetRS152`, `ResNetRS200`, `ResNetRS270`, `ResNetRS350` and `ResNetRS420` model architectures. The ResNetRS models are based on the architecture described in Revisiting ResNets: Improved Training and Scaling Strategies.
  - Added `tf.keras.optimizers.experimental.Optimizer`. The reworked optimizer gives more control over different phases of optimizer calls, and is easier to customize. We provide Adam, SGD, Adadelta, AdaGrad and RMSprop optimizers based on `tf.keras.optimizers.experimental.Optimizer`. Generally the new optimizers work in the same way as the old ones, but support new constructor arguments. In the future, the symbols `tf.keras.optimizers.Optimizer`/`Adam`/etc. will point to the new optimizers, and the previous generation of optimizers will be moved to `tf.keras.optimizers.legacy.Optimizer`/`Adam`/etc.
  - Added the L2 unit normalization layer `tf.keras.layers.UnitNormalization`.
  - Added `tf.keras.regularizers.OrthogonalRegularizer`, a new regularizer that encourages orthogonality between the rows (or columns) of a weight matrix.
  - Added the `tf.keras.layers.RandomBrightness` layer for image preprocessing.
  - Added APIs for switching between interactive logging and absl logging. By default, Keras always writes the logs to stdout. However, this is not optimal in a non-interactive environment, where you don't have access to stdout but can only view the logs. You can use `tf.keras.utils.disable_interactive_logging()` to write the logs to ABSL logging. You can also use `tf.keras.utils.enable_interactive_logging()` to change it back to stdout, or `tf.keras.utils.is_interactive_logging_enabled()` to check if interactive logging is enabled.
  - Changed the default value for the `verbose` argument of `Model.evaluate()` and `Model.predict()` to `"auto"`, which defaults to `verbose=1` for most cases and defaults to `verbose=2` when used with `ParameterServerStrategy` or with interactive logging disabled.
  - The argument `jit_compile` in `Model.compile()` now applies to `Model.evaluate()` and `Model.predict()`. Setting `jit_compile=True` in `compile()` compiles the model's training, evaluation, and inference steps to XLA. Note that `jit_compile=True` may not necessarily work for all models.
  - Added DTensor-related Keras APIs under the `tf.keras.dtensor` namespace. The APIs are still classified as experimental. You are welcome to try them out. Please check the tutorial and guide on https://www.tensorflow.org/ for more details about DTensor.
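As a quick sketch of the new unit normalization layer (assuming TF 2.9+), each input row is rescaled to unit L2 norm:

```python
import tensorflow as tf

# Sketch (assumes TF >= 2.9): a [3, 4] vector has L2 norm 5, so
# UnitNormalization maps it to [0.6, 0.8].
layer = tf.keras.layers.UnitNormalization()
out = layer(tf.constant([[3.0, 4.0]]))
print(out.numpy().tolist())  # [[0.6, 0.8]]
```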
- `tf.lite`:
  - Added TFLite builtin op support for the following TF ops:
    - `tf.math.argmin`/`tf.math.argmax` for input data type `tf.bool` on CPU.
    - `tf.nn.gelu` op for output data type `tf.float32` and quantization on CPU.
  - Added nominal support for unsigned 16-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
  - Added support for unsigned 16-bit integer tensor types in the cast op.
  - Experimental support for lowering `list_ops.tensor_list_set_item` with `DynamicUpdateSlice`.
  - Enabled a new MLIR-based dynamic range quantization backend by default.
    - The new backend is used for post-training int8 dynamic range quantization and post-training float16 quantization.
    - Set `experimental_new_dynamic_range_quantizer` in `tf.lite.TFLiteConverter` to `False` to disable this change.
  - Native TF Lite variables are now enabled during conversion by default on all v2 `TfLiteConverter` entry points. `experimental_enable_resource_variables` on `tf.lite.TFLiteConverter` is now `True` by default and will be removed in the future.
- `tf.function`:
  - Custom classes used as arguments for `tf.function` can now specify rules regarding when retracing needs to occur by implementing the Tracing Protocol available through `tf.types.experimental.SupportsTracingProtocol`.
  - `TypeSpec` classes (as associated with `ExtensionTypes`) also implement the Tracing Protocol, which can be overridden if necessary.
  - The newly introduced `reduce_retracing` option also uses the Tracing Protocol to proactively generate generalized traces, similar to `experimental_relax_shapes` (which has now been deprecated).
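The `reduce_retracing` option can be sketched like this (assuming TF 2.9+):

```python
import tensorflow as tf

# Sketch (assumes TF >= 2.9): reduce_retracing lets tf.function relax
# input shapes instead of tracing once per distinct input shape.
@tf.function(reduce_retracing=True)
def double(x):
    return x * 2

double(tf.constant([1.0]))
double(tf.constant([1.0, 2.0, 3.0]))  # may reuse a shape-relaxed trace
```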
- Unified eager and `tf.function` execution:
  - Eager mode can now execute each op as a `tf.function`, allowing for more consistent feature support in future releases.
  - It is available for immediate use.
    - See the `TF_RUN_EAGER_OP_AS_FUNCTION` environment variable in eager context.
    - Eager performance should be similar with this feature enabled.
      - A roughly 5us per-op overhead may be observed when running many small functions.
      - Note a known issue with GPU performance.
    - The behavior of `tf.function` itself is unaffected.
  - Note: This feature will be enabled by default in an upcoming version of TensorFlow.
- `tf.experimental.dtensor`: Added DTensor, an extension to TensorFlow for large-scale modeling with minimal changes to user code. You are welcome to try it out, though be aware that the DTensor API is experimental and subject to backward-incompatible changes. DTensor and Keras integration is published under `tf.keras.dtensor` in this release (refer to the `tf.keras` entry). The tutorial and guide for DTensor will be published on https://www.tensorflow.org/. Please stay tuned.
oneDNN CPU performance optimizations are available in Linux x86, Windows x86, and Linux aarch64 packages.
- Linux x86 packages:
- oneDNN optimizations are enabled by default on CPUs with neural-network-focused hardware features such as AVX512_VNNI, AVX512_BF16, AMX, etc. (Intel Cascade Lake and newer CPUs.)
- For older CPUs, oneDNN optimizations are disabled by default.
- Windows x86 package: oneDNN optimizations are disabled by default.
- Linux aach64 (
--config=mkl_aarch64
) package:- Experimental oneDNN optimizations are disabled by default.
- If you experience issues with oneDNN optimizations on, we recommend turning them off.
- To explicitly enable or disable oneDNN optimizations, set the
environment variable
TF_ENABLE_ONEDNN_OPTS
to1
(enable) or0
(disable) before running TensorFlow. (The variable is checked duringimport tensorflow
.) To fall back to default settings, unset the environment variable. - These optimizations can yield slightly different numerical results from when they are off due to floating-point round-off errors from different computation approaches and orders.
- To verify that the optimizations are on, look for a message with "oneDNN custom operations are on" in the log. If the exact phrase is not there, it means they are off.
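The toggle described above can also be set from Python, as long as it happens before `import tensorflow`, since the variable is read once at import time. A minimal usage sketch (the effect itself requires a TensorFlow build with oneDNN support):

```python
import os

# Must be set BEFORE `import tensorflow`; the variable is checked
# once at import time. "1" enables oneDNN optimizations, "0" disables.
os.environ["TF_ENABLE_ONEDNN_OPTS"] = "0"

# import tensorflow as tf  # would now run with oneDNN optimizations off
print(os.environ["TF_ENABLE_ONEDNN_OPTS"])
```

To restore the default behavior, delete the variable (`del os.environ["TF_ENABLE_ONEDNN_OPTS"]`) before importing TensorFlow.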
Bug Fixes and Other Changes
-
tf.data
:- Fixed bug in
tf.data.experimental.parse_example_dataset
whentf.io.RaggedFeatures
would specifyvalue_key
but nopartitions
. Before the fix, settingvalue_key
but nopartitions
would result in the feature key being replaced by the value key, e.g.{'value_key': <RaggedTensor>}
instead of{'key': <RaggedTensor>}
. Now the correct feature key will be used. This aligns the behavior oftf.data.experimental.parse_example_dataset
to match the behavior oftf.io.parse_example
. - Added a new field,
filter_parallelization
, totf.data.experimental.OptimizationOptions
. If it is set toTrue
, tf.data will runFilter
transformation with multiple threads. Its default value isFalse
if not specified.
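Conceptually, a parallelized `Filter` evaluates the predicate on several elements concurrently while preserving input order. The following stdlib sketch illustrates the idea only; it is not tf.data's implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_filter(predicate, items, workers=4):
    # Evaluate the predicate on all elements concurrently, then keep
    # the elements whose result is True, preserving input order.
    items = list(items)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        keep = list(pool.map(predicate, items))
    return [x for x, k in zip(items, keep) if k]

print(parallel_filter(lambda x: x % 2 == 0, range(10)))  # [0, 2, 4, 6, 8]
```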
-
tf.keras
:- Fixed bug in optimizers that prevented them from properly checkpointing slot variables when they are
ShardedVariable
s (used for training withtf.distribute.experimental.ParameterServerStrategy
).
-
tf.random
:- Added
tf.random.experimental.index_shuffle
, for shuffling a sequence without materializing the sequence in memory.
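The idea behind such an index shuffle can be illustrated with a toy bijection on `range(n)`: each shuffled position is computed on demand, so the full sequence is never materialized. This affine map is only an illustration, not TensorFlow's algorithm:

```python
from math import gcd

def affine_index_shuffle(index, n, a=7, b=3):
    # (a * i + b) % n is a bijection on range(n) whenever gcd(a, n) == 1,
    # so any single shuffled position is computable in O(1) memory.
    assert gcd(a, n) == 1, "a must be coprime with n"
    return (a * index + b) % n

n = 10
shuffled = [affine_index_shuffle(i, n) for i in range(n)]
assert sorted(shuffled) == list(range(n))  # a true permutation
```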
-
tf.RaggedTensor
:- Introduced
tf.experimental.RowPartition
, which encodes how one dimension in a RaggedTensor relates to another, into the public API. - Introduced
tf.experimental.DynamicRaggedShape
, which represents the shape of a RaggedTensor.
Security
- Fixes a code injection in
saved_model_cli
(CVE-2022-29216) - Fixes a missing validation which causes
TensorSummaryV2
to crash (CVE-2022-29193) - Fixes a missing validation which crashes
QuantizeAndDequantizeV4Grad
(CVE-2022-29192) - Fixes a missing validation which causes denial of service via
DeleteSessionTensor
(CVE-2022-29194) - Fixes a missing validation which causes denial of service via
GetSessionTensor
(CVE-2022-29191) - Fixes a missing validation which causes denial of service via
StagePeek
(CVE-2022-29195) - Fixes a missing validation which causes denial of service via
UnsortedSegmentJoin
(CVE-2022-29197) - Fixes a missing validation which causes denial of service via
LoadAndRemapMatrix
(CVE-2022-29199) - Fixes a missing validation which causes denial of service via
SparseTensorToCSRSparseMatrix
(CVE-2022-29198) - Fixes a missing validation which causes denial of service via
LSTMBlockCell
(CVE-2022-29200) - Fixes a missing validation which causes denial of service via
Conv3DBackpropFilterV2
(CVE-2022-29196) - Fixes a
CHECK
failure in depthwise ops via overflows (CVE-2021-41197) - Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (CVE-2022-29207)
- Fixes a segfault due to missing support for quantized types (CVE-2022-29205)
- Fixes a missing validation which results in undefined behavior in
SparseTensorDenseAdd
(CVE-2022-29206) - Fixes a missing validation which results in undefined behavior in
QuantizedConv2D
(CVE-2022-29201) - Fixes an integer overflow in
SpaceToBatchND
(CVE-2022-29203) - Fixes a segfault and OOB write due to incomplete validation in
EditDistance
(CVE-2022-29208) - Fixes a missing validation which causes denial of service via
Conv3DBackpropFilterV2
(CVE-2022-29204) - Fixes a denial of service in
tf.ragged.constant
due to lack of validation (CVE-2022-29202) - Fixes a segfault when
tf.histogram_fixed_width
is called with NaN values (CVE-2022-29211) - Fixes a core dump when loading TFLite models with quantization (CVE-2022-29212)
- Fixes crashes stemming from incomplete validation in signal ops (CVE-2022-29213)
- Fixes a type confusion leading to
CHECK
-failure based denial of service (CVE-2022-29209) - Fixes a heap buffer overflow due to incorrect hash function (CVE-2022-29210)
- Updates
curl
to7.83.1
to handle CVE-2022-22576, CVE-2022-27774, CVE-2022-27775, CVE-2022-27776, CVE-2022-27778, CVE-2022-27779, CVE-2022-27780, CVE-2022-27781, CVE-2022-27782 and CVE-2022-30115 - Updates
zlib
to1.2.12
after1.2.11
was pulled due to a security issue
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09
Release 2.8.1
This release introduces several vulnerability fixes:
- Fixes a code injection in
saved_model_cli
(CVE-2022-29216) - Fixes a missing validation which causes
TensorSummaryV2
to crash (CVE-2022-29193) - Fixes a missing validation which crashes
QuantizeAndDequantizeV4Grad
(CVE-2022-29192) - Fixes a missing validation which causes denial of service via
DeleteSessionTensor
(CVE-2022-29194) - Fixes a missing validation which causes denial of service via
GetSessionTensor
(CVE-2022-29191) - Fixes a missing validation which causes denial of service via
StagePeek
(CVE-2022-29195) - Fixes a missing validation which causes denial of service via
UnsortedSegmentJoin
(CVE-2022-29197) - Fixes a missing validation which causes denial of service via
LoadAndRemapMatrix
(CVE-2022-29199) - Fixes a missing validation which causes denial of service via
SparseTensorToCSRSparseMatrix
(CVE-2022-29198) - Fixes a missing validation which causes denial of service via
LSTMBlockCell
(CVE-2022-29200) - Fixes a missing validation which causes denial of service via
Conv3DBackpropFilterV2
(CVE-2022-29196) - Fixes a
CHECK
failure in depthwise ops via overflows (CVE-2021-41197) - Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (CVE-2022-29207)
- Fixes a segfault due to missing support for quantized types (CVE-2022-29205)
- Fixes a missing validation which results in undefined behavior in
SparseTensorDenseAdd
(CVE-2022-29206) - Fixes a missing validation which results in undefined behavior in
QuantizedConv2D
(CVE-2022-29201) - Fixes an integer overflow in
SpaceToBatchND
(CVE-2022-29203) - Fixes a segfault and OOB write due to incomplete validation in
EditDistance
(CVE-2022-29208) - Fixes a missing validation which causes denial of service via
Conv3DBackpropFilterV2
(CVE-2022-29204) - Fixes a denial of service in
tf.ragged.constant
due to lack of validation (CVE-2022-29202) - Fixes a segfault when
tf.histogram_fixed_width
is called with NaN values (CVE-2022-29211) - Fixes a core dump when loading TFLite models with quantization (CVE-2022-29212)
- Fixes crashes stemming from incomplete validation in signal ops (CVE-2022-29213)
- Fixes a type confusion leading to
CHECK
-failure based denial of service (CVE-2022-29209) - Fixes a heap buffer overflow due to incorrect hash function (CVE-2022-29210)
- Updates
curl
to7.83.1
to handle CVE-2022-22576, CVE-2022-27774, CVE-2022-27775, CVE-2022-27776, CVE-2022-27778, CVE-2022-27779, CVE-2022-27780, CVE-2022-27781, CVE-2022-27782 and CVE-2022-30115 - Updates
zlib
to1.2.12
after1.2.11
was pulled due to a security issue
Release 2.7.2
This release introduces several vulnerability fixes:
- Fixes a code injection in
saved_model_cli
(CVE-2022-29216) - Fixes a missing validation which causes
TensorSummaryV2
to crash (CVE-2022-29193) - Fixes a missing validation which crashes
QuantizeAndDequantizeV4Grad
(CVE-2022-29192) - Fixes a missing validation which causes denial of service via
DeleteSessionTensor
(CVE-2022-29194) - Fixes a missing validation which causes denial of service via
GetSessionTensor
(CVE-2022-29191) - Fixes a missing validation which causes denial of service via
StagePeek
(CVE-2022-29195) - Fixes a missing validation which causes denial of service via
UnsortedSegmentJoin
(CVE-2022-29197) - Fixes a missing validation which causes denial of service via
LoadAndRemapMatrix
(CVE-2022-29199) - Fixes a missing validation which causes denial of service via
SparseTensorToCSRSparseMatrix
(CVE-2022-29198) - Fixes a missing validation which causes denial of service via
LSTMBlockCell
(CVE-2022-29200) - Fixes a missing validation which causes denial of service via
Conv3DBackpropFilterV2
(CVE-2022-29196) - Fixes a
CHECK
failure in depthwise ops via overflows (CVE-2021-41197) - Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (CVE-2022-29207)
- Fixes a segfault due to missing support for quantized types (CVE-2022-29205)
- Fixes a missing validation which results in undefined behavior in
SparseTensorDenseAdd
(CVE-2022-29206) - Fixes a missing validation which results in undefined behavior in
QuantizedConv2D
(CVE-2022-29201) - Fixes an integer overflow in
SpaceToBatchND
(CVE-2022-29203) - Fixes a segfault and OOB write due to incomplete validation in
EditDistance
(CVE-2022-29208) - Fixes a missing validation which causes denial of service via
Conv3DBackpropFilterV2
(CVE-2022-29204) - Fixes a denial of service in
tf.ragged.constant
due to lack of validation (CVE-2022-29202) - Fixes a segfault when
tf.histogram_fixed_width
is called with NaN values (CVE-2022-29211) - Fixes a core dump when loading TFLite models with quantization (CVE-2022-29212)
- Fixes crashes stemming from incomplete validation in signal ops (CVE-2022-29213)
- Fixes a type confusion leading to
CHECK
-failure based denial of service (CVE-2022-29209) - Updates
curl
to7.83.1
to handle CVE-2022-22576, CVE-2022-27774, CVE-2022-27775, CVE-2022-27776, CVE-2022-27778, CVE-2022-27779, CVE-2022-27780, CVE-2022-27781, CVE-2022-27782 and CVE-2022-30115 - Updates
zlib
to1.2.12
after1.2.11
was pulled due to a security issue
Release 2.6.4
This release introduces several vulnerability fixes:
- Fixes a code injection in
saved_model_cli
(CVE-2022-29216) - Fixes a missing validation which causes
TensorSummaryV2
to crash (CVE-2022-29193) - Fixes a missing validation which crashes
QuantizeAndDequantizeV4Grad
(CVE-2022-29192) - Fixes a missing validation which causes denial of service via
DeleteSessionTensor
(CVE-2022-29194) - Fixes a missing validation which causes denial of service via
GetSessionTensor
(CVE-2022-29191) - Fixes a missing validation which causes denial of service via
StagePeek
(CVE-2022-29195) - Fixes a missing validation which causes denial of service via
UnsortedSegmentJoin
(CVE-2022-29197) - Fixes a missing validation which causes denial of service via
LoadAndRemapMatrix
(CVE-2022-29199) - Fixes a missing validation which causes denial of service via
SparseTensorToCSRSparseMatrix
(CVE-2022-29198) - Fixes a missing validation which causes denial of service via
LSTMBlockCell
(CVE-2022-29200) - Fixes a missing validation which causes denial of service via
Conv3DBackpropFilterV2
(CVE-2022-29196) - Fixes a
CHECK
failure in depthwise ops via overflows (CVE-2021-41197) - Fixes issues arising from undefined behavior stemming from users supplying invalid resource handles (CVE-2022-29207)
- Fixes a segfault due to missing support for quantized types (CVE-2022-29205)
- Fixes a missing validation which results in undefined behavior in
SparseTensorDenseAdd
(CVE-2022-29206) - Fixes a missing validation which results in undefined behavior in
QuantizedConv2D
(CVE-2022-29201) - Fixes an integer overflow in
SpaceToBatchND
(CVE-2022-29203) - Fixes a segfault and OOB write due to incomplete validation in
EditDistance
(CVE-2022-29208) - Fixes a missing validation which causes denial of service via
Conv3DBackpropFilterV2
(CVE-2022-29204) - Fixes a denial of service in
tf.ragged.constant
due to lack of validation (CVE-2022-29202) - Fixes a segfault when
tf.histogram_fixed_width
is called with NaN values (CVE-2022-29211) - Fixes a core dump when loading TFLite models with quantization (CVE-2022-29212)
- Fixes crashes stemming from incomplete validation in signal ops (CVE-2022-29213)
- Fixes a type confusion leading to
CHECK
-failure based denial of service (CVE-2022-29209) - Updates
curl
to7.83.1
to handle CVE-2022-22576, CVE-2022-27774, CVE-2022-27775, CVE-2022-27776, CVE-2022-27778, CVE-2022-27779, CVE-2022-27780, CVE-2022-27781, CVE-2022-27782 and CVE-2022-30115 - Updates
zlib
to1.2.12
after1.2.11
was pulled due to a security issue
Release 2.8.0
Major Features and Improvements
-
tf.lite
:- Added TFLite builtin op support for the following TF ops:
tf.raw_ops.Bucketize
op on CPU.tf.where
op for data typestf.int32
/tf.uint32
/tf.int8
/tf.uint8
/tf.int64
.tf.random.normal
op for output data typetf.float32
on CPU.tf.random.uniform
op for output data typetf.float32
on CPU.tf.random.categorical
op for output data typetf.int64
on CPU.
-
tensorflow.experimental.tensorrt
:conversion_params
is now deprecated insideTrtGraphConverterV2
in favor of direct arguments:max_workspace_size_bytes
,precision_mode
,minimum_segment_size
,maximum_cached_engines
,use_calibration
andallow_build_at_runtime
.- Added a new parameter called
save_gpu_specific_engines
to the.save()
function insideTrtGraphConverterV2
. WhenFalse
, the.save()
function won't save any TRT engines that have been built. WhenTrue
(default), the original behavior is preserved. TrtGraphConverterV2
provides a new API called.summary()
which outputs a summary of the inference graph converted by TF-TRT. In particular, it shows each TRTEngineOp
with the shapes and dtypes of its inputs and outputs. A detailed version of the summary is also available, which additionally prints all the TensorFlow ops included in each of the TRTEngineOp
s.
-
tf.tpu.experimental.embedding
:tf.tpu.experimental.embedding.FeatureConfig
now takes an additional argumentoutput_shape
which can specify the shape of the output activation for the feature.tf.tpu.experimental.embedding.TPUEmbedding
now has the same behavior astf.tpu.experimental.embedding.serving_embedding_lookup
which can take dense and sparse tensors of arbitrary rank. For ragged tensors, though the input tensor remains rank 2, the activations can now be rank 2 or above by specifying the output shape in the feature config or via the build method.
-
Add
tf.config.experimental.enable_op_determinism
, which makes TensorFlow ops run deterministically at the cost of performance. Replaces theTF_DETERMINISTIC_OPS
environment variable, which is now deprecated. The "Bug Fixes and Other Changes" section lists more determinism-related changes. -
(Since TF 2.7) Add PluggableDevice support to TensorFlow Profiler.
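The migration from the deprecated variable to the new API can be sketched as follows; the TensorFlow call is shown commented out so the snippet stays dependency-free, and it assumes TF 2.8+:

```python
import os

# Old, deprecated toggle: had to be set before TensorFlow was imported.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

# New API (TF >= 2.8): call after import instead of setting the variable.
#   import tensorflow as tf
#   tf.config.experimental.enable_op_determinism()
print(os.environ["TF_DETERMINISTIC_OPS"])
```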
Bug Fixes and Other Changes
-
tf.data
:- Fixed a bug where setting
options.deterministic = False
would only modify one transformation to run non-deterministically, leaving other transformations deterministic. The option now applies uniformly across all transformations. - The
parallel_batch
optimization is now enabled by default unless disabled by users; it parallelizes the copying of batch elements. - Added the ability for
TensorSliceDataset
to identify and handle inputs that are files. This enables creating hermetic SavedModels when using datasets created from files.
-
tf.lite
:- Adds GPU delegation support for serialization to the Java API. This improves initialization time by up to 90% when OpenCL is available.
- Deprecated
Interpreter::SetNumThreads
, in favor ofInterpreterBuilder::SetNumThreads
.
-
tf.keras
:- Adds
tf.compat.v1.keras.utils.get_or_create_layer
to aid migration to TF2 by enabling tracking of nested keras models created in TF1-style, when used with thetf.compat.v1.keras.utils.track_tf1_style_variables
decorator. - Added a
tf.keras.layers.experimental.preprocessing.HashedCrossing
layer which applies the hashing trick to the concatenation of crossed scalar inputs. This provides a stateless way to try adding feature crosses of integer or string data to a model. - Removed
keras.layers.experimental.preprocessing.CategoryCrossing
. Users should migrate to theHashedCrossing
layer or usetf.sparse.cross
/tf.ragged.cross
directly. - Added additional
standardize
andsplit
modes toTextVectorization
:standardize="lower"
will lowercase inputs.standardize="string_punctuation"
will remove all punctuation.split="character"
will split on every unicode character.
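The three new modes behave roughly like the following stdlib string operations. This is an illustration only; the real `TextVectorization` layer operates on batched `tf.Tensor` inputs:

```python
import string

def standardize_lower(text):
    # standardize="lower": lowercase the input
    return text.lower()

def standardize_string_punctuation(text):
    # standardize="string_punctuation": strip all punctuation
    return text.translate(str.maketrans("", "", string.punctuation))

def split_character(text):
    # split="character": split on every character
    return list(text)

print(standardize_lower("Hello, TF!"))               # hello, tf!
print(standardize_string_punctuation("Hello, TF!"))  # Hello TF
print(split_character("TF"))                         # ['T', 'F']
```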
- Added an
output_mode
argument to theDiscretization
andHashing
layers with the same semantics as other preprocessing layers. All categorical preprocessing layers now supportoutput_mode
. - All preprocessing layer output will follow the compute dtype of a
tf.keras.mixed_precision.Policy
, unless constructed withoutput_mode="int"
in which case output will betf.int64
. The output type of any preprocessing layer can be controlled individually by passing adtype
argument to the layer. tf.random.Generator
for keras initializers and all RNG code.- Added 3 new APIs for enable/disable/check the usage of
tf.random.Generator
in the Keras backend, which will become the new backend for all RNG in Keras. We plan to switch the new code path on by default in TF 2.8; this behavior change will likely cause some breakage on the user side (e.g., if a test checks against golden values). These 3 APIs allow users to disable the new behavior and switch back to the legacy behavior if they prefer. In the future (e.g., TF 2.10), we expect to remove the legacy code path (stateful random ops) entirely, and these 3 APIs will be removed as well. tf.keras.callbacks.experimental.BackupAndRestore
is now available astf.keras.callbacks.BackupAndRestore
. The experimental endpoint is deprecated and will be removed in a future release.tf.keras.experimental.SidecarEvaluator
is now available astf.keras.utils.SidecarEvaluator
. The experimental endpoint is deprecated and will be removed in a future release.- Metrics update and collection logic in default
Model.train_step()
is now customizable via overridingModel.compute_metrics()
. - Losses computation logic in default
Model.train_step()
is now customizable via overridingModel.compute_loss()
. jit_compile
added toModel.compile()
on an opt-in basis to compile the model's training step with XLA. Note thatjit_compile=True
may not necessarily work for all models.
-
Deterministic Op Functionality:
- Fixed a regression, introduced in v2.5, in the deterministic selection of cuDNN convolution algorithms. Note that nondeterministic out-of-memory events while selecting algorithms could still lead to nondeterminism, although this is very unlikely. This additional, unlikely source will be eliminated in a later version.
- Add deterministic GPU implementations of:
tf.function(jit_compile=True)
's that useScatter
.- (since v2.7) Stateful ops used in
tf.data.Dataset
- (since v2.7)
tf.convert_to_tensor
when fed with (sparse)tf.IndexedSlices
(because it usestf.math.unsorted_segment_sum
) - (since v2.7)
tf.gather
backprop (becausetf.convert_to_tensor
reducestf.gather
's (sparse)tf.IndexedSlices
gradients into its denseparams
input) - (since v2.7)
tf.math.segment_mean
- (since v2.7)
tf.math.segment_prod
- (since v2.7)
tf.math.segment_sum
- (since v2.7)
tf.math.unsorted_segment_mean
- (since v2.7)
tf.math.unsorted_segment_prod
- (since v2.7)
tf.math.unsorted_segment_sum
- (since v2.7)
tf.math.unsorted_segment_sqrt
- (since v2.7)
tf.nn.ctc_loss
(resolved, possibly in a prior release, and confirmed with tests) - (since v2.7)
tf.nn.sparse_softmax_crossentropy_with_logits
- (since v2.7) Run
tf.scatter_nd
and other related scatter functions, such astf.tensor_scatter_nd_update
, on CPU (with significant performance penalty). - Add determinism-unimplemented exception-throwing to the following ops.
When op-determinism is expected (i.e. after
tf.config.experimental.enable_op_determinism
has been called), an attempt to use the specified paths through the following ops on a GPU will causetf.errors.UnimplementedError
(with an understandable message) to be thrown, unless otherwise specified.
andFakeQuantWithMinMaxVarsPerChannelGradient
- (since v2.7)
tf.compat.v1.get_seed
if the global random seed has not yet been set (viatf.random.set_seed
). ThrowsRuntimeError
from Python orInvalidArgument
from C++ - (since v2.7)
tf.compat.v1.nn.fused_batch_norm
backprop tooffset
whenis_training=False
- (since v2.7)
tf.image.adjust_contrast
forward - (since v2.7)
tf.image.resize
withmethod=ResizeMethod.NEAREST
backprop - (since v2.7)
tf.linalg.svd
- (since v2.7)
tf.math.bincount
- (since v2.7)
tf.nn.depthwise_conv2d
backprop tofilter
when not using cuDNN convolution - (since v2.7)
tf.nn.dilation2d
gradient - (since v2.7)
tf.nn.max_pool_with_argmax
gradient - (since v2.7)
tf.raw_ops.DebugNumericSummary
andtf.raw_ops.DebugNumericSummaryV2
- (since v2.7)
tf.timestamp
. ThrowsFailedPrecondition
- (since v2.7)
tf.Variable.scatter_add
(and other scatter methods, both on ref and resource variables) - (since v2.7) The random-number-generating ops in the
tf.random
module when the global random seed has not yet been set (viatf.random.set_seed
). ThrowsRuntimeError
from Python orInvalidArgument
from C++
-
TensorFlow-oneDNN no longer supports explicit use of oneDNN blocked tensor format, e.g., setting the environment variable
TF_ENABLE_MKL_NATIVE_FORMAT
will not have any effect. -
TensorFlow has been validated on Windows Subsystem for Linux 2 (aka WSL 2) for both GPUs and CPUs.
-
Due to security issues (see the section below), all boosted trees code has been deprecated. Users should switch to TensorFlow Decision Forests. TF's boosted trees code will be eliminated before the branch cut for TF 2.9 and will no longer be present from that release onward.
Security
- Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
- Fixes a heap OOB read in shape inference for
ReverseSequence
(CVE-2022-21728) - Fixes a heap OOB access in
Dequantize
(CVE-2022-21726) - Fixes an integer overflow in shape inference for
Dequantize
(CVE-2022-21727) - Fixes a heap OOB access in
FractionalAvgPoolGrad
(CVE-2022-21730) - Fixes an overflow and divide by zero in
UnravelIndex
(CVE-2022-21729) - Fixes a type confusion in shape inference for
ConcatV2
(CVE-2022-21731) - Fixes an OOM in
ThreadPoolHandle
(CVE-2022-21732) - Fixes an OOM due to integer overflow in
StringNGrams
(CVE-2022-21733) - Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567) - Fixes an integer overflow in
- Fixes an integer overflows in
AddManySparseToTensorsMap
(CVE-2022-23568) - Fixes a number of
CHECK
-failures inMapStage
(CVE-2022-21734) - Fixes a division by zero in
FractionalMaxPool
(CVE-2022-21735) - Fixes a number of
CHECK
-fails when building invalid/overflowing tensor shapes (CVE-2022-23569) - Fixes an undefined behavior in
SparseTensorSliceDataset
(CVE-2022-21736) - Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
- Fixes a reference binding to null pointer in
QuantizedMaxPool
(CVE-2022-21739) - Fixes an integer overflow leading to crash in
SparseCountSparseOutput
(CVE-2022-21738) - Fixes a heap overflow in
SparseCountSparseOutput
(CVE-2022-21740) - Fixes an FPE in
BiasAndClamp
in TFLite (CVE-2022-23557) - Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
- Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
- Fixes an integer overflow in TFLite (CVE-2022-23559)
- Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
- Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
- Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
- Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
- Fixes a vulnerability where missing validation causes
tf.sparse.split
to crash whenaxis
is a tuple (CVE-2021-41206) - Fixes a
CHECK
-fail when decoding resource handles from proto (CVE-2022-23564) - Fixes a
CHECK
-fail with repeatedAttrDef
(CVE-2022-23565) - Fixes a heap OOB write in Grappler (CVE-2022-23566)
- Fixes a
CHECK
-fail when decoding invalid tensors from proto (CVE-2022-23571) - Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
- Fixes a crash when type cannot be specialized (CVE-2022-23572)
- Fixes a heap OOB read/write in
SpecializeType
(CVE-2022-23574) - Fixes an uninitialized variable access in
AssignOp
(CVE-2022-23573) - Fixes an integer overflow in
OpLevelCostEstimator::CalculateTensorSize
(CVE-2022-23575) - Fixes an integer overflow in
OpLevelCostEstimator::CalculateOutputSize
(CVE-2022-23576) - Fixes a null dereference in
GetInitOp
(CVE-2022-23577) - Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
- Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
- Fixes multiple
CHECK
-failures during Grappler'sIsSimplifiableReshape
(CVE-2022-23581) - Fixes multiple
CHECK
-failures during Grappler'sSafeToRemoveIdentity
(CVE-2022-23579) - Fixes multiple
CHECK
-failures inTensorByteSize
(CVE-2022-23582) - Fixes multiple
CHECK
-failures in binary ops due to type confusion (CVE-2022-23583) - Fixes a use after free in
DecodePng
kernel (CVE-2022-23584) - Fixes a memory leak in decoding PNG images (CVE-2022-23585)
- Fixes multiple
CHECK
-fails infunction.cc
(CVE-2022-23586) - Fixes multiple
CHECK
-fails due to attempting to build a reference tensor (CVE-2022-23588) - Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
- Fixes a null pointer dereference in Grappler's
IsConstant
(CVE-2022-23589) - Fixes a
CHECK
failure in constant folding (CVE-2021-41197) - Fixes a stack overflow due to self-recursive function in
GraphDef
(CVE-2022-23591) - Fixes a heap OOB access in
RunForwardTypeInference
(CVE-2022-23592) - Fixes a crash due to erroneous
StatusOr
(CVE-2022-23590) - Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR) (CVE-2022-23594)
- Fixes a segfault in
simplifyBroadcast
(MLIR) (CVE-2022-23593) - Fixes a null pointer dereference in
BuildXlaCompilationCache
(XLA) (CVE-2022-23595) - Updates
icu
to69.1
to handle CVE-2020-10531
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Adam Lanicek, ag.ramesh, alesapin, Andrew Goodbody, annasuheyla, Ariel Elkin, Arnab Dutta, Ben Barsdell, bhack, cfRod, Chengji Yao, Christopher Bate, dan, Dan F-M, David Korczynski, DEKHTIARJonathan, dengzhiyuan, Deven Desai, Duncan Riach, Eli Osherovich, Ewout Ter Hoeven, ez2take, Faijul Amin, fo40225, Frederic Bastien, gadagashwini, Gauri1 Deshpande, Georgiy Manuilov, Guilherme De Lázari, Guozhong Zhuang, H1Gdev, homuler, Hongxu Jia, Jacky_Yin, jayfurmanek, jgehw, Jhalak Patel, Jinzhe Zeng, Johan Gunnarsson, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, Kevin Cheng, Koan-Sin Tan, Kruglov-Dmitry, Kun Lu, Lemo, Lequn Chen, long.chen, Louis Sugy, Mahmoud Abuzaina, Mao, Marius Brehler, Mark Harfouche, Martin Patz, Maxiwell S. Garcia, Meenakshi Venkataraman, Michael Melesse, Mrinal Tyagi, Måns Nilsson, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Oktay Ozturk, Patrice Vignola, Pawel-Polyai, Rama Ketineni, Ramesh Sampath, Reza Rahimi, Rob Suderman, Robert Kalmar, Rohit Santhanam, Sachin Muradi, Saduf2019, Samuel Marks, Shi,Guangyong, Sidong-Wei, Srinivasan Narayanamoorthy, Srishti Srivastava, Steven I Reeves, stevenireeves, Supernovae, Tamas Bela Feher, Tao Xu, Thibaut Goetghebuer-Planchon, Thomas Schmeyer, tilakrayal, Valery Mironov, Victor Guo, Vignesh Kothapalli, Vishnuvardhan Janapati, wamuir, Wang,Quintin, William Muir, William Raveane, Yash Goel, Yimei Sun, Yong Tang, Yuduo Wu
Release 2.7.1
This release introduces several vulnerability fixes:
- Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
- Fixes a heap OOB read in shape inference for
ReverseSequence
(CVE-2022-21728) - Fixes a heap OOB access in
Dequantize
(CVE-2022-21726) - Fixes an integer overflow in shape inference for
Dequantize
(CVE-2022-21727) - Fixes a heap OOB access in
FractionalAvgPoolGrad
(CVE-2022-21730) - Fixes an overflow and divide by zero in
UnravelIndex
(CVE-2022-21729) - Fixes a type confusion in shape inference for
ConcatV2
(CVE-2022-21731) - Fixes an OOM in
ThreadPoolHandle
(CVE-2022-21732) - Fixes an OOM due to integer overflow in
StringNGrams
(CVE-2022-21733) - Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567) - Fixes an integer overflow in
- Fixes an integer overflows in
AddManySparseToTensorsMap
(CVE-2022-23568) - Fixes a number of
CHECK
-failures inMapStage
(CVE-2022-21734) - Fixes a division by zero in
FractionalMaxPool
(CVE-2022-21735) - Fixes a number of
CHECK
-fails when building invalid/overflowing tensor shapes (CVE-2022-23569) - Fixes an undefined behavior in
`SparseTensorSliceDataset` (CVE-2022-21736)
- Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
- Fixes a reference binding to null pointer in `QuantizedMaxPool` (CVE-2022-21739)
- Fixes an integer overflow leading to crash in `SparseCountSparseOutput` (CVE-2022-21738)
- Fixes a heap overflow in `SparseCountSparseOutput` (CVE-2022-21740)
- Fixes an FPE in `BiasAndClamp` in TFLite (CVE-2022-23557)
- Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
- Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
- Fixes an integer overflow in TFLite (CVE-2022-23559)
- Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
- Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
- Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
- Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
- Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple (CVE-2021-41206)
- Fixes a `CHECK`-fail when decoding resource handles from proto (CVE-2022-23564)
- Fixes a `CHECK`-fail with repeated `AttrDef` (CVE-2022-23565)
- Fixes a heap OOB write in Grappler (CVE-2022-23566)
- Fixes a `CHECK`-fail when decoding invalid tensors from proto (CVE-2022-23571)
- Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
- Fixes a crash when type cannot be specialized (CVE-2022-23572)
- Fixes a heap OOB read/write in `SpecializeType` (CVE-2022-23574)
- Fixes an uninitialized variable access in `AssignOp` (CVE-2022-23573)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` (CVE-2022-23575)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` (CVE-2022-23576)
- Fixes a null dereference in `GetInitOp` (CVE-2022-23577)
- Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
- Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
- Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` (CVE-2022-23581)
- Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` (CVE-2022-23579)
- Fixes multiple `CHECK`-failures in `TensorByteSize` (CVE-2022-23582)
- Fixes multiple `CHECK`-failures in binary ops due to type confusion (CVE-2022-23583)
- Fixes a use after free in the `DecodePng` kernel (CVE-2022-23584)
- Fixes a memory leak in decoding PNG images (CVE-2022-23585)
- Fixes multiple `CHECK`-fails in `function.cc` (CVE-2022-23586)
- Fixes multiple `CHECK`-fails due to attempting to build a reference tensor (CVE-2022-23588)
- Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
- Fixes a null pointer dereference in Grappler's `IsConstant` (CVE-2022-23589)
- Fixes a `CHECK` failure in constant folding (CVE-2021-41197)
- Fixes a stack overflow due to self-recursive function in `GraphDef` (CVE-2022-23591)
- Fixes a crash due to erroneous `StatusOr` (CVE-2022-23590)
- Fixes multiple crashes and heap OOB accesses in TFG dialect (MLIR) (CVE-2022-23594)
- Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA) (CVE-2022-23595)
- Updates `icu` to `69.1` to handle CVE-2020-10531
Release 2.6.3
This release introduces several vulnerability fixes:
- Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
- Fixes a heap OOB read in shape inference for `ReverseSequence` (CVE-2022-21728)
- Fixes a heap OOB access in `Dequantize` (CVE-2022-21726)
- Fixes an integer overflow in shape inference for `Dequantize` (CVE-2022-21727)
- Fixes a heap OOB access in `FractionalAvgPoolGrad` (CVE-2022-21730)
- Fixes an overflow and divide by zero in `UnravelIndex` (CVE-2022-21729)
- Fixes a type confusion in shape inference for `ConcatV2` (CVE-2022-21731)
- Fixes an OOM in `ThreadPoolHandle` (CVE-2022-21732)
- Fixes an OOM due to integer overflow in `StringNGrams` (CVE-2022-21733)
- Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
- Fixes an integer overflow in `AddManySparseToTensorsMap` (CVE-2022-23568)
- Fixes a number of `CHECK`-failures in `MapStage` (CVE-2022-21734)
- Fixes a division by zero in `FractionalMaxPool` (CVE-2022-21735)
- Fixes a number of `CHECK`-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
- Fixes an undefined behavior in `SparseTensorSliceDataset` (CVE-2022-21736)
- Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
- Fixes a reference binding to null pointer in `QuantizedMaxPool` (CVE-2022-21739)
- Fixes an integer overflow leading to crash in `SparseCountSparseOutput` (CVE-2022-21738)
- Fixes a heap overflow in `SparseCountSparseOutput` (CVE-2022-21740)
- Fixes an FPE in `BiasAndClamp` in TFLite (CVE-2022-23557)
- Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
- Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
- Fixes an integer overflow in TFLite (CVE-2022-23559)
- Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
- Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
- Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
- Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
- Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple (CVE-2021-41206)
- Fixes a `CHECK`-fail when decoding resource handles from proto (CVE-2022-23564)
- Fixes a `CHECK`-fail with repeated `AttrDef` (CVE-2022-23565)
- Fixes a heap OOB write in Grappler (CVE-2022-23566)
- Fixes a `CHECK`-fail when decoding invalid tensors from proto (CVE-2022-23571)
- Fixes a null-dereference when specializing tensor type (CVE-2022-23570)
- Fixes a crash when type cannot be specialized (CVE-2022-23572)
- Fixes a heap OOB read/write in `SpecializeType` (CVE-2022-23574)
- Fixes an uninitialized variable access in `AssignOp` (CVE-2022-23573)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` (CVE-2022-23575)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` (CVE-2022-23576)
- Fixes a null dereference in `GetInitOp` (CVE-2022-23577)
- Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
- Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
- Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` (CVE-2022-23581)
- Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` (CVE-2022-23579)
- Fixes multiple `CHECK`-failures in `TensorByteSize` (CVE-2022-23582)
- Fixes multiple `CHECK`-failures in binary ops due to type confusion (CVE-2022-23583)
- Fixes a use after free in the `DecodePng` kernel (CVE-2022-23584)
- Fixes a memory leak in decoding PNG images (CVE-2022-23585)
- Fixes multiple `CHECK`-fails in `function.cc` (CVE-2022-23586)
- Fixes multiple `CHECK`-fails due to attempting to build a reference tensor (CVE-2022-23588)
- Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
- Fixes a null pointer dereference in Grappler's `IsConstant` (CVE-2022-23589)
- Fixes a `CHECK` failure in constant folding (CVE-2021-41197)
- Fixes a stack overflow due to self-recursive function in `GraphDef` (CVE-2022-23591)
- Fixes a null pointer dereference in `BuildXlaCompilationCache` (XLA) (CVE-2022-23595)
- Updates `icu` to `69.1` to handle CVE-2020-10531
Release 2.5.3
This release introduces several vulnerability fixes:
- Fixes a floating point division by 0 when executing convolution operators (CVE-2022-21725)
- Fixes a heap OOB read in shape inference for `ReverseSequence` (CVE-2022-21728)
- Fixes a heap OOB access in `Dequantize` (CVE-2022-21726)
- Fixes an integer overflow in shape inference for `Dequantize` (CVE-2022-21727)
- Fixes a heap OOB access in `FractionalAvgPoolGrad` (CVE-2022-21730)
- Fixes an overflow and divide by zero in `UnravelIndex` (CVE-2022-21729)
- Fixes a type confusion in shape inference for `ConcatV2` (CVE-2022-21731)
- Fixes an OOM in `ThreadPoolHandle` (CVE-2022-21732)
- Fixes an OOM due to integer overflow in `StringNGrams` (CVE-2022-21733)
- Fixes more issues caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes integer overflows in most sparse component-wise ops (CVE-2022-23567)
- Fixes an integer overflow in `AddManySparseToTensorsMap` (CVE-2022-23568)
- Fixes a number of `CHECK`-failures in `MapStage` (CVE-2022-21734)
- Fixes a division by zero in `FractionalMaxPool` (CVE-2022-21735)
- Fixes a number of `CHECK`-fails when building invalid/overflowing tensor shapes (CVE-2022-23569)
- Fixes an undefined behavior in `SparseTensorSliceDataset` (CVE-2022-21736)
- Fixes an assertion failure based denial of service via faulty bin count operations (CVE-2022-21737)
- Fixes a reference binding to null pointer in `QuantizedMaxPool` (CVE-2022-21739)
- Fixes an integer overflow leading to crash in `SparseCountSparseOutput` (CVE-2022-21738)
- Fixes a heap overflow in `SparseCountSparseOutput` (CVE-2022-21740)
- Fixes an FPE in `BiasAndClamp` in TFLite (CVE-2022-23557)
- Fixes an FPE in depthwise convolutions in TFLite (CVE-2022-21741)
- Fixes an integer overflow in TFLite array creation (CVE-2022-23558)
- Fixes an integer overflow in TFLite (CVE-2022-23559)
- Fixes a dangerous OOB write in TFLite (CVE-2022-23561)
- Fixes a vulnerability leading to read and write outside of bounds in TFLite (CVE-2022-23560)
- Fixes a set of vulnerabilities caused by using insecure temporary files (CVE-2022-23563)
- Fixes an integer overflow in Range resulting in undefined behavior and OOM (CVE-2022-23562)
- Fixes a vulnerability where missing validation causes `tf.sparse.split` to crash when `axis` is a tuple (CVE-2021-41206)
- Fixes a `CHECK`-fail when decoding resource handles from proto (CVE-2022-23564)
- Fixes a `CHECK`-fail with repeated `AttrDef` (CVE-2022-23565)
- Fixes a heap OOB write in Grappler (CVE-2022-23566)
- Fixes a `CHECK`-fail when decoding invalid tensors from proto (CVE-2022-23571)
- Fixes an uninitialized variable access in `AssignOp` (CVE-2022-23573)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateTensorSize` (CVE-2022-23575)
- Fixes an integer overflow in `OpLevelCostEstimator::CalculateOutputSize` (CVE-2022-23576)
- Fixes a null dereference in `GetInitOp` (CVE-2022-23577)
- Fixes a memory leak when a graph node is invalid (CVE-2022-23578)
- Fixes an abort caused by allocating a vector that is too large (CVE-2022-23580)
- Fixes multiple `CHECK`-failures during Grappler's `IsSimplifiableReshape` (CVE-2022-23581)
- Fixes multiple `CHECK`-failures during Grappler's `SafeToRemoveIdentity` (CVE-2022-23579)
- Fixes multiple `CHECK`-failures in `TensorByteSize` (CVE-2022-23582)
- Fixes multiple `CHECK`-failures in binary ops due to type confusion (CVE-2022-23583)
- Fixes a use after free in the `DecodePng` kernel (CVE-2022-23584)
- Fixes a memory leak in decoding PNG images (CVE-2022-23585)
- Fixes multiple `CHECK`-fails in `function.cc` (CVE-2022-23586)
- Fixes multiple `CHECK`-fails due to attempting to build a reference tensor (CVE-2022-23588)
- Fixes an integer overflow in Grappler cost estimation of crop and resize operation (CVE-2022-23587)
- Fixes a null pointer dereference in Grappler's `IsConstant` (CVE-2022-23589)
- Fixes a `CHECK` failure in constant folding (CVE-2021-41197)
- Fixes a stack overflow due to self-recursive function in `GraphDef` (CVE-2022-23591)
- Updates `icu` to `69.1` to handle CVE-2020-10531
Release 2.7.0
Breaking Changes
- `tf.keras`:
  - The methods `Model.fit()`, `Model.predict()`, and `Model.evaluate()` will no longer uprank input data of shape `(batch_size,)` to become `(batch_size, 1)`. This enables `Model` subclasses to process scalar data in their `train_step()`/`test_step()`/`predict_step()` methods.
    Note that this change may break certain subclassed models. You can revert to the previous behavior by adding upranking yourself in the `train_step()`/`test_step()`/`predict_step()` methods, e.g. `if x.shape.rank == 1: x = tf.expand_dims(x, axis=-1)`. Functional models as well as Sequential models built with an explicit input shape are not affected.
  - The methods `Model.to_yaml()` and `keras.models.model_from_yaml` have been replaced to raise a `RuntimeError` as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML, or, a better alternative, serialize to H5.
  - `LinearModel` and `WideDeepModel` are moved to the `tf.compat.v1.keras.models.` namespace (`tf.compat.v1.keras.models.LinearModel` and `tf.compat.v1.keras.models.WideDeepModel`), and their `experimental` endpoints (`tf.keras.experimental.models.LinearModel` and `tf.keras.experimental.models.WideDeepModel`) are being deprecated.
  - RNG behavior change for all `tf.keras.initializers` classes. Any class constructed with a fixed seed will no longer generate the same value when invoked multiple times; instead, it returns a different value each time, following a deterministic sequence. This change aligns the initializer behavior between v1 and v2.
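The upranking revert snippet mentioned above can be checked in isolation; a minimal sketch (the tensor values are illustrative, not from the release notes):

```python
import tensorflow as tf

# A rank-1 batch of scalars, shape (batch_size,).
x = tf.constant([1.0, 2.0, 3.0])

# Restore pre-2.7 behavior by upranking rank-1 inputs yourself inside
# train_step()/test_step()/predict_step().
if x.shape.rank == 1:
    x = tf.expand_dims(x, axis=-1)  # now shape (batch_size, 1)

print(x.shape.as_list())
```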
- `tf.lite`:
  - Rename fields of the `SignatureDef` table in the schema to maximize parity with the TF SavedModel's Signature concept.
  - Deprecate Makefile builds. Makefile users need to migrate their builds to CMake or Bazel. Please refer to the Build TensorFlow Lite with CMake and Build TensorFlow Lite for ARM boards guides for the migration.
  - Deprecate `tflite::OpResolver::GetDelegates`. The list returned by TfLite's `BuiltinOpResolver::GetDelegates` is now always empty. Instead, use the new method `tflite::OpResolver::GetDelegateCreators` to achieve lazy initialization of TfLite delegate instances.
- TF Core:
  - `tf.Graph.get_name_scope()` now always returns a string, as documented. Previously, when called within `name_scope("")` or `name_scope(None)` contexts, it returned `None`; now it returns the empty string.
  - `tensorflow/core/ir/` contains a new MLIR-based Graph dialect that is isomorphic to GraphDef and will be used to replace GraphDef-based (e.g., Grappler) optimizations.
  - Deprecated and removed the `attrs()` function in shape inference. All attributes should now be queried by name (rather than iterating over the returned range) to enable changing the underlying storage.
  - The following Python symbols were accidentally added in earlier versions of TensorFlow and are now removed. Each symbol has a replacement that should be used instead, but note that the replacement's argument names are different.
    - `tf.quantize_and_dequantize_v4` (accidentally introduced in TensorFlow 2.4): Use `tf.quantization.quantize_and_dequantize_v2` instead.
    - `tf.batch_mat_mul_v3` (accidentally introduced in TensorFlow 2.6): Use `tf.linalg.matmul` instead.
    - `tf.sparse_segment_sum_grad` (accidentally introduced in TensorFlow 2.6): Use `tf.raw_ops.SparseSegmentSumGrad` instead. Directly calling this op is typically not necessary, as it is automatically used when computing the gradient of `tf.sparse.segment_sum`.
  - Renaming of `tensorflow::int64` to `int64_t` in numerous places (the former is an alias for the latter), which could require regenerating selective op registration headers; otherwise execution would fail with an unregistered-kernels error.
- Modular File System Migration:
  - Support for S3 and HDFS file systems has been migrated to a modular file-system-based approach and is now available in https://github.com/tensorflow/io. The `tensorflow-io` Python package should be installed for S3 and HDFS support with TensorFlow.
Major Features and Improvements
- Improvements to the TensorFlow debugging experience:
  - Previously, TensorFlow error stack traces involved many internal frames, which could be challenging to read through, while not being actionable for end users. As of TF 2.7, TensorFlow filters internal frames in most errors that it raises, to keep stack traces short, readable, and focused on what's actionable for end users (their own code).
    This behavior can be disabled by calling `tf.debugging.disable_traceback_filtering()`, and can be re-enabled via `tf.debugging.enable_traceback_filtering()`. If you are debugging a TensorFlow-internal issue (e.g. to prepare a TensorFlow PR), make sure to disable traceback filtering. You can check whether this feature is currently enabled by calling `tf.debugging.is_traceback_filtering_enabled()`. Note that this feature is only available with Python 3.7 or higher.
  - Improve the informativeness of error messages raised by Keras `Layer.__call__()` by adding the full list of argument values passed to the layer in every exception.
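The traceback-filtering toggles described above can be exercised directly; a minimal sketch (TF 2.7 or later assumed):

```python
import tensorflow as tf

# Turn traceback filtering off while debugging a TensorFlow-internal issue...
tf.debugging.disable_traceback_filtering()
assert not tf.debugging.is_traceback_filtering_enabled()

# ...and back on for the short, user-focused stack traces.
tf.debugging.enable_traceback_filtering()
assert tf.debugging.is_traceback_filtering_enabled()
```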
- Introduce the `tf.compat.v1.keras.utils.track_tf1_style_variables` decorator, which enables using large classes of TF1-style `variable_scope`, `get_variable`, and `compat.v1.layer`-based components from within TF2 models running with TF2 behavior enabled.
tf.data
:-
tf.data service now supports auto-sharding. Users specify the sharding policy with
tf.data.experimental.service.ShardingPolicy
enum. It can be one ofOFF
(equivalent to today's"parallel_epochs"
mode),DYNAMIC
(equivalent to today's"distributed_epoch"
mode), or one of the static sharding policies:FILE
,DATA
,FILE_OR_DATA
, orHINT
(corresponding to values oftf.data.experimental.AutoShardPolicy
).Static sharding (auto-sharding) requires the number of tf.data service workers be fixed. Users need to specify the worker addresses in
tensorflow.data.experimental.DispatcherConfig
. -
tf.data.experimental.service.register_dataset
now accepts optionalcompression
argument.
-
- Keras:
  - `tf.keras.layers.Conv` now includes a public `convolution_op` method. This method can be used to simplify the implementation of Conv subclasses. There are two primary ways to use this new method. The first is to use the method directly in your own `call` method:

    ```python
    class StandardizedConv2D(tf.keras.layers.Conv2D):
      def call(self, inputs):
        mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True)
        return self.convolution_op(
            inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10))
    ```

    Alternatively, you can override `convolution_op`:

    ```python
    class StandardizedConv2D(tf.keras.Layer):
      def convolution_op(self, inputs, kernel):
        mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True)
        # Author code uses std + 1e-5
        return super().convolution_op(
            inputs, (kernel - mean) / tf.sqrt(var + 1e-10))
    ```
  - Added a `merge_state()` method to `tf.keras.metrics.Metric` for use in distributed computations.
  - Added `sparse` and `ragged` options to `tf.keras.layers.TextVectorization` to allow for `SparseTensor` and `RaggedTensor` outputs from the layer.
- distribute.experimental.rpc package:
  - The distribute.experimental.rpc package introduces APIs to create a gRPC-based server to register tf.function methods and a gRPC client to invoke remote registered methods. RPC APIs are intended for multi-client setups, i.e. server and clients are started in separate binaries independently.
  - Example usage to create a server:

    ```python
    server = tf.distribute.experimental.rpc.Server.create("grpc", "127.0.0.1:1234")

    @tf.function(input_signature=[
        tf.TensorSpec([], tf.int32),
        tf.TensorSpec([], tf.int32)
    ])
    def _remote_multiply(a, b):
      return tf.math.multiply(a, b)

    server.register("multiply", _remote_multiply)
    ```
  - Example usage to create a client:

    ```python
    client = tf.distribute.experimental.rpc.Client.create("grpc", address)
    a = tf.constant(2, dtype=tf.int32)
    b = tf.constant(3, dtype=tf.int32)
    result = client.multiply(a, b)
    ```
- `tf.lite`:
  - Add experimental API `experimental_from_jax` to support conversion from JAX models to TensorFlow Lite.
  - Support uint32 data type for the cast op.
  - Support int8 data type for the cast op.
  - Add experimental quantization debugger `tf.lite.QuantizationDebugger`.
  - Add the lite.experimental.authoring.compatible API:
    - A Python decorator that provides a way to check the TFLite compatibility of a `tf.function`. It returns a callable object which validates TFLite compatibility. If an incompatible operation is encountered during execution, an exception is raised with information about the incompatible ops.
  - Add the lite.experimental.Analyzer API:
    - An experimental tool to analyze TFLite flatbuffer models. This API can be used to investigate TFLite model structure and check compatibility with the GPU delegate.
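A minimal sketch of the authoring API described above (the function body is illustrative, not from the release notes):

```python
import tensorflow as tf

# Decorate a tf.function to validate its TFLite compatibility; incompatible
# ops raise an exception with details when the function is executed.
@tf.lite.experimental.authoring.compatible
@tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
def f(x):
    return tf.cos(x)

result = f(tf.constant([0.0]))
print(result.numpy())
```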
- Extension Types
  - Add an experimental API to define new Python classes that can be handled by TensorFlow APIs. To create an extension type, simply define a Python class with `tf.experimental.ExtensionType` as its base, and use type annotations to specify the type for each field. E.g.:

    ```python
    class MaskedTensor(tf.experimental.ExtensionType):
      values: tf.Tensor
      mask: tf.Tensor
    ```

    The `tf.ExtensionType` base class works similarly to `typing.NamedTuple` and `@dataclasses.dataclass` from the standard Python library.
  - Extension types are supported by Keras, tf.data, TF-hub, SavedModel, tf.function, control flow ops, py_function, and distribution strategy.
  - Add "dispatch decorators" that can be used to override the default behavior of TensorFlow ops (such as `tf.add` or `tf.concat`) when they are applied to ExtensionType values.
  - The `BatchableExtensionType` API can be used to define extension types that support APIs that make use of batching, such as `tf.data.Dataset` and `tf.map_fn`.
  - For more information, see the Extension types guide.
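Continuing the `MaskedTensor` example above, a field-typed extension type behaves like an immutable, dataclass-style record (a minimal sketch; the field values are illustrative):

```python
import tensorflow as tf

class MaskedTensor(tf.experimental.ExtensionType):
    values: tf.Tensor
    mask: tf.Tensor

# Fields are set via the auto-generated constructor, like a NamedTuple.
mt = MaskedTensor(values=tf.constant([1, 2, 3]),
                  mask=tf.constant([True, False, True]))
print(mt.values.numpy().tolist())
```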
Bug Fixes and Other Changes
- TF Core:
  - Random number generation (RNG) system:
    - Add argument `alg` to `tf.random.stateless_*` functions to explicitly select the RNG algorithm.
    - Add `tf.nn.experimental.stateless_dropout`, a stateless version of `tf.nn.dropout`.
    - `tf.random.Generator` can now be created inside the scope of `tf.distribute.experimental.ParameterServerStrategy` and `tf.distribute.experimental.CentralStorageStrategy`.
  - Add an experimental session config `tf.experimental.disable_functional_ops_lowering`, which disables the functional control flow op lowering optimization. This is useful when executing within a portable runtime where control flow op kernels may not be loaded due to selective registration.
  - Add a new experimental argument `experimental_is_anonymous` to `tf.lookup.StaticHashTable.__init__` to create the table in anonymous mode. In this mode, the table resource can only be accessed via resource handles (not resource names) and will be deleted automatically when all resource handles pointing to it are gone.
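The stateless dropout mentioned above is deterministic for a fixed seed, unlike the stateful `tf.nn.dropout`; a minimal sketch:

```python
import tensorflow as tf

x = tf.ones([8])

# Same seed -> same dropout mask on every call.
a = tf.nn.experimental.stateless_dropout(x, rate=0.5, seed=[1, 2])
b = tf.nn.experimental.stateless_dropout(x, rate=0.5, seed=[1, 2])

print((a.numpy() == b.numpy()).all())
```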
- `tf.data`:
  - Introduce the `tf.data.experimental.at` API, which provides random access for input pipelines consisting of transformations that support random access. The initial set of transformations that support random access includes: `tf.data.Dataset.from_tensor_slices`, `tf.data.Dataset.shuffle`, `tf.data.Dataset.batch`, `tf.data.Dataset.shard`, `tf.data.Dataset.map`, and `tf.data.Dataset.range`.
  - Promote the `tf.data.Options.experimental_deterministic` API to `tf.data.Options.deterministic` and deprecate the experimental endpoint.
  - Move autotuning options from `tf.data.Options.experimental_optimization.autotune*` to a newly created `tf.data.Options.autotune.*` and remove support for `tf.data.Options.experimental_optimization.autotune_buffers`.
  - Add support for user-defined names of tf.data core Python API, which can be used to disambiguate tf.data events in TF Profiler Trace Viewer.
  - Promote the `tf.data.experimental.sample_from_datasets` API to `tf.data.Dataset.sample_from_datasets` and deprecate the experimental endpoint.
  - Added `TF_GPU_ALLOCATOR=cuda_malloc_async`, which uses cudaMallocAsync from CUDA 11.2. This could become the default in the future.
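The random-access API above can be used like this over a small pipeline built from supported transformations (a minimal sketch; the dataset contents are illustrative):

```python
import tensorflow as tf

# range and map both support random access, so the pipeline does too.
ds = tf.data.Dataset.range(10).map(lambda x: x * 2)

# Fetch the element at index 3 without iterating the whole pipeline.
elem = tf.data.experimental.at(ds, 3)
print(int(elem))
```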
- TF SavedModel:
  - Custom gradients are now saved by default. See `tf.saved_model.SaveOptions` to disable this.
  - The saved_model_cli's `--input_examples` inputs are now restricted to Python literals to avoid code injection.
- XLA:
  - Add a new API that allows custom-call functions to signal errors. The old API will be deprecated in a future release. See https://www.tensorflow.org/xla/custom_call for details.
  - XLA:GPU reductions are deterministic by default (reductions within `jit_compile=True` are now deterministic).
  - XLA:GPU works with Horovod (OSS contribution by Trent Lo from NVidia).
  - XLA:CPU and XLA:GPU can compile tf.unique and tf.where when shapes are provably correct at compile time.
- `tf.saved_model.save`:
  - When saving a model, not specifying a namespace whitelist for custom ops with a namespace will now default to allowing rather than rejecting them all.
- Deterministic Op Functionality (enabled by setting the environment variable `TF_DETERMINISTIC_OPS` to `"true"` or `"1"`):
  - Add deterministic GPU implementations of:
    - `tf.math.segment_sum`
    - `tf.math.segment_prod`
    - `tf.math.segment_mean`
    - `tf.math.unsorted_segment_sum`
    - `tf.math.unsorted_segment_prod`
    - `tf.math.unsorted_segment_sqrt`
    - `tf.math.unsorted_segment_mean`
    - `tf.gather` backprop
    - `tf.convert_to_tensor` when fed with (sparse) `tf.IndexedSlices`
    - `tf.nn.sparse_softmax_crossentropy_with_logits`
    - `tf.nn.ctc_loss` (resolved, possibly in prior release, and confirmed with tests)
    - stateful ops used in `tf.data.Dataset`
  - Run the following ops on CPU (with significant performance penalty):
    - `tf.scatter_nd` and other related scatter functions, such as `tf.tensor_scatter_nd_update`
  - Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected (i.e. when the environment variable `TF_DETERMINISTIC_OPS` is set to `"true"` or `"1"`), an attempt to use the specified paths through the following ops on a GPU will cause `tf.errors.UnimplementedError` (with an understandable message), unless otherwise specified, to be thrown.
    - `tf.compat.v1.nn.fused_batch_norm` backprop to `offset` when `is_training=False`
    - `tf.image.adjust_contrast` forward
    - `tf.nn.depthwise_conv2d` backprop to `filter` when not using cuDNN convolution
    - `tf.image.resize` with `method=ResizeMethod.NEAREST` backprop
    - `tf.math.bincount` - TODO: confirm exception added
    - `tf.raw_ops.DebugNumericSummary` and `tf.raw_ops.DebugNumericSummaryV2`
    - `tf.Variable.scatter_add` (and other scatter methods, both on ref and resource variables)
    - `tf.linalg.svd`
    - `tf.nn.dilation2d` gradient
    - `tf.nn.max_pool_with_argmax` gradient
    - `tf.timestamp`. Throws `FailedPrecondition`
    - The random-number-generating ops in the `tf.random` module when the global random seed has not yet been set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or `InvalidArgument` from C++
    - `tf.compat.v1.get_seed` if the global random seed has not yet been set (via `tf.random.set_seed`). Throws `RuntimeError` from Python or `InvalidArgument` from C++
Security
- Fixes a code injection issue in `saved_model_cli` (CVE-2021-41228)
- Fixes a vulnerability due to use of uninitialized value in Tensorflow (CVE-2021-41225)
- Fixes a heap OOB in `FusedBatchNorm` kernels (CVE-2021-41223)
- Fixes an arbitrary memory read in `ImmutableConst` (CVE-2021-41227)
- Fixes a heap OOB in `SparseBinCount` (CVE-2021-41226)
- Fixes a heap OOB in `SparseFillEmptyRows` (CVE-2021-41224)
- Fixes a segfault due to negative splits in `SplitV` (CVE-2021-41222)
- Fixes segfaults and vulnerabilities caused by accesses to invalid memory during shape inference in `Cudnn*` ops (CVE-2021-41221)
- Fixes a null pointer exception when the `Exit` node is not preceded by an `Enter` op (CVE-2021-41217)
- Fixes an integer division by 0 in `tf.raw_ops.AllToAll` (CVE-2021-41218)
- Fixes a use after free and a memory leak in `CollectiveReduceV2` (CVE-2021-41220)
- Fixes an undefined behavior via `nullptr` reference binding in sparse matrix multiplication (CVE-2021-41219)
- Fixes a heap buffer overflow in `Transpose` (CVE-2021-41216)
- Prevents deadlocks arising from mutually recursive `tf.function` objects (CVE-2021-41213)
- Fixes a null pointer exception in `DeserializeSparse` (CVE-2021-41215)
- Fixes an undefined behavior arising from reference binding to `nullptr` in `tf.ragged.cross` (CVE-2021-41214)
- Fixes a heap OOB read in `tf.ragged.cross` (CVE-2021-41212)
- Fixes a heap OOB in shape inference for `QuantizeV2` (CVE-2021-41211)
- Fixes a heap OOB read in all `tf.raw_ops.QuantizeAndDequantizeV*` ops (CVE-2021-41205)
- Fixes an FPE in `ParallelConcat` (CVE-2021-41207)
- Fixes FPE issues in convolutions with zero size filters (CVE-2021-41209)
- Fixes a heap OOB read in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-41210)
- Fixes vulnerabilities caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes vulnerabilities caused by incomplete validation of shapes in multiple TF ops (CVE-2021-41206)
- Fixes a segfault produced while copying constant resource tensor (CVE-2021-41204)
- Fixes a vulnerability caused by uninitialized access in `EinsumHelper::ParseEquation` (CVE-2021-41201)
- Fixes several vulnerabilities and segfaults caused by missing validation during checkpoint loading (CVE-2021-41203)
- Fixes an overflow producing a crash in `tf.range` (CVE-2021-41202)
- Fixes an overflow producing a crash in `tf.image.resize` when size is large (CVE-2021-41199)
- Fixes an overflow producing a crash in `tf.tile` when tiling tensor is large (CVE-2021-41198)
- Fixes a vulnerability produced due to incomplete validation in `tf.summary.create_file_writer` (CVE-2021-41200)
- Fixes multiple crashes due to overflow and `CHECK`-fail in ops with large tensor shapes (CVE-2021-41197)
- Fixes a crash in `max_pool3d` when size argument is 0 or negative (CVE-2021-41196)
- Fixes a crash in `tf.math.segment_*` operations (CVE-2021-41195)
- Updates `curl` to `7.78.0` to handle CVE-2021-22922, CVE-2021-22923, CVE-2021-22924, CVE-2021-22925, and CVE-2021-22926.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Abhilash Majumder, abhilash1910, AdeshChoudhar, Adrian Garcia Badaracco, Adrian Ratiu, ag.ramesh, Aleksandr Nikolaev, Alexander Bosch, Alexander Grund, Annie Tallund, Anush Elangovan, Artem Sokolovskii, azazhu, Balint Cristian, Bas Aarts, Ben Barsdell, bhack, cfRod, Cheney-Wang, Cheng Ren, Christopher Bate, collin, Danila Bespalov, David Datascientist, Deven Desai, Duncan Riach, Ehsan Kia, Ellie, Fan Du, fo40225, Frederic Bastien, fsx950223, Gauri1 Deshpande, geetachavan1, Guillaume Klein, guozhong.zhuang, helen, Håkon Sandsmark, japm48, jgehw, Jinzhe Zeng, Jonathan Dekhtiar, Kai Zhu, Kaixi Hou, Kanvi Khanna, Koan-Sin Tan, Koki Ibukuro, Kulin Seth, KumaTea, Kun-Lu, Lemo, lipracer, liuyuanqiang, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, metarutaiga, Michal Szutenberg, nammbash, Neil Girdhar, Nishidha Panpaliya, Nyadla-Sys, Patrice Vignola, Peter Kasting, Philipp Hack, PINTO0309, Prateek Gupta, puneeshkhanna, Rahul Butani, Rajeshwar Reddy T, Reza Rahimi, RinozaJiffry, rmothukuru, Rohit Santhanam, Saduf2019, Samuel Marks, sclarkson, Sergii Khomenko, Sheng, Yang, Sidong-Wei, slowy07, Srinivasan Narayanamoorthy, Srishti Srivastava, stanley, Stella Alice Schlotter, Steven I Reeves, stevenireeves, svobora, Takayoshi Koizumi, Tamas Bela Feher, Thibaut Goetghebuer-Planchon, Trent Lo, Twice, Varghese, Jojimon, Vishnuvardhan Janapati, Wang Yanzhang, Wang,Quintin, William Muir, William Raveane, Yasir Modak, Yasuhiro Matsumoto, Yi Li, Yong Tang, zhaozheng09, Zhoulong Jiang, zzpmiracle
Release 2.6.2
Fixes an issue where `keras`, `tensorflow_estimator` and `tensorboard` were missing proper upper bounds and resulted in broken installs after the TF 2.7 release.
Release 2.6.1
This release introduces several vulnerability fixes:
- Fixes a code injection issue in `saved_model_cli` (CVE-2021-41228)
- Fixes a vulnerability due to use of uninitialized value in Tensorflow (CVE-2021-41225)
- Fixes a heap OOB in `FusedBatchNorm` kernels (CVE-2021-41223)
- Fixes an arbitrary memory read in `ImmutableConst` (CVE-2021-41227)
- Fixes a heap OOB in `SparseBinCount` (CVE-2021-41226)
- Fixes a heap OOB in `SparseFillEmptyRows` (CVE-2021-41224)
- Fixes a segfault due to negative splits in `SplitV` (CVE-2021-41222)
- Fixes segfaults and vulnerabilities caused by accesses to invalid memory during shape inference in `Cudnn*` ops (CVE-2021-41221)
- Fixes a null pointer exception when the `Exit` node is not preceded by an `Enter` op (CVE-2021-41217)
- Fixes an integer division by 0 in `tf.raw_ops.AllToAll` (CVE-2021-41218)
- Fixes a use after free and a memory leak in `CollectiveReduceV2` (CVE-2021-41220)
- Fixes an undefined behavior via `nullptr` reference binding in sparse matrix multiplication (CVE-2021-41219)
- Fixes a heap buffer overflow in `Transpose` (CVE-2021-41216)
- Prevents deadlocks arising from mutually recursive `tf.function` objects (CVE-2021-41213)
- Fixes a null pointer exception in `DeserializeSparse` (CVE-2021-41215)
- Fixes an undefined behavior arising from reference binding to `nullptr` in `tf.ragged.cross` (CVE-2021-41214)
- Fixes a heap OOB read in `tf.ragged.cross` (CVE-2021-41212)
- Fixes a heap OOB in shape inference for `QuantizeV2` (CVE-2021-41211)
- Fixes a heap OOB read in all `tf.raw_ops.QuantizeAndDequantizeV*` ops (CVE-2021-41205)
- Fixes an FPE in `ParallelConcat` (CVE-2021-41207)
- Fixes FPE issues in convolutions with zero size filters (CVE-2021-41209)
- Fixes a heap OOB read in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-41210)
- Fixes vulnerabilities caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes vulnerabilities caused by incomplete validation of shapes in multiple TF ops (CVE-2021-41206)
- Fixes a segfault produced while copying constant resource tensor (CVE-2021-41204)
- Fixes a vulnerability caused by uninitialized access in `EinsumHelper::ParseEquation` (CVE-2021-41201)
- Fixes several vulnerabilities and segfaults caused by missing validation during checkpoint loading (CVE-2021-41203)
- Fixes an overflow producing a crash in `tf.range` (CVE-2021-41202)
- Fixes an overflow producing a crash in `tf.image.resize` when size is large (CVE-2021-41199)
- Fixes an overflow producing a crash in `tf.tile` when tiling tensor is large (CVE-2021-41198)
- Fixes a vulnerability produced due to incomplete validation in `tf.summary.create_file_writer` (CVE-2021-41200)
- Fixes multiple crashes due to overflow and `CHECK`-fail in ops with large tensor shapes (CVE-2021-41197)
- Fixes a crash in `max_pool3d` when size argument is 0 or negative (CVE-2021-41196)
- Fixes a crash in `tf.math.segment_*` operations (CVE-2021-41195)
- Updates `curl` to `7.78.0` to handle CVE-2021-22922, CVE-2021-22923, CVE-2021-22924, CVE-2021-22925, and CVE-2021-22926.
Release 2.6.0
Breaking Changes
- `tf.train.experimental.enable_mixed_precision_graph_rewrite` is removed, as the API only works in graph mode and is not customizable. The function is still accessible under `tf.compat.v1.mixed_precision.enable_mixed_precision_graph_rewrite`, but it is recommended to use the Keras mixed precision API instead.
- `tf.lite`:
  - Remove `experimental.nn.dynamic_rnn`, `experimental.nn.TfLiteRNNCell` and `experimental.nn.TfLiteLSTMCell` since they are no longer supported. It is recommended to use the Keras LSTM layer instead.
- `tf.keras`:
  - Keras has been split into a separate PIP package (`keras`), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for `tf.keras` stay unchanged, but are now backed by the `keras` PIP package. The existing code in `tensorflow/python/keras` is a stale copy and will be removed in a future release (2.7). Please remove any imports of `tensorflow.python.keras` and replace them with the public `tf.keras` API instead.
  - The methods `Model.to_yaml()` and `keras.models.model_from_yaml` have been replaced to raise a `RuntimeError` as they can be abused to cause arbitrary code execution. It is recommended to use JSON serialization instead of YAML, or, a better alternative, serialize to H5.
Known Caveats
- TF Core:
  - A longstanding bug in `tf.while_loop`, which caused it to execute sequentially even when `parallel_iterations > 1`, has now been fixed. However, the increased parallelism may result in increased memory use. Users who experience unwanted regressions should reset their `while_loop`'s `parallel_iterations` value to 1, which is consistent with prior behavior.
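The caveat above can be sketched as a minimal loop (the loop body here is illustrative) that opts back into the old sequential behavior:

```python
import tensorflow as tf

# Sum 0..9. Setting parallel_iterations=1 restores the pre-2.6
# sequential execution if the new parallelism regresses memory use.
i0, acc0 = tf.constant(0), tf.constant(0)
i, acc = tf.while_loop(
    cond=lambda i, acc: i < 10,
    body=lambda i, acc: (i + 1, acc + i),
    loop_vars=(i0, acc0),
    parallel_iterations=1,  # opt out of parallel iteration execution
)
```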
Major Features and Improvements
- `tf.keras`:
  - Keras has been split into a separate PIP package (`keras`), and its code has been moved to the GitHub repository keras-team/keras. The API endpoints for `tf.keras` stay unchanged, but are now backed by the `keras` PIP package. All Keras-related PRs and issues should now be directed to the keras-team/keras GitHub repository.
  - `tf.keras.utils.experimental.DatasetCreator` now takes an optional `tf.distribute.InputOptions` for specific options when used with distribution.
  - `tf.keras.experimental.SidecarEvaluator` is now available for a program intended to be run on an evaluator task, which is commonly used to supplement a training cluster running with `tf.distribute.experimental.ParameterServerStrategy` (see https://www.tensorflow.org/tutorials/distribute/parameter_server_training). It can also be used with single-worker training or other strategies. See the docstring for more info.
  - Preprocessing layers moved from experimental to core.
    - Import paths moved from `tf.keras.layers.preprocessing.experimental` to `tf.keras.layers`.
  - Updates to the preprocessing layers API for consistency and clarity:
    - The `StringLookup` and `IntegerLookup` default for `mask_token` changed to `None`. This matches the default masking behavior of the `Hashing` and `Embedding` layers. To keep existing behavior, pass `mask_token=""` during layer creation.
    - Renamed the `"binary"` output mode to `"multi_hot"` for `CategoryEncoding`, `StringLookup`, `IntegerLookup`, and `TextVectorization`. Multi-hot encoding will no longer automatically uprank rank 1 inputs, so these layers can now multi-hot encode unbatched multi-dimensional samples.
    - Added a new output mode `"one_hot"` for `CategoryEncoding`, `StringLookup`, and `IntegerLookup`, which will encode each element in an input batch individually, and automatically append a new output dimension if necessary. Use this mode on rank 1 inputs for the old `"binary"` behavior of one-hot encoding a batch of scalars.
    - `Normalization` will no longer automatically uprank rank 1 inputs, allowing normalization of unbatched multi-dimensional samples.
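A small sketch of the renamed output modes, assuming the TF 2.6 `CategoryEncoding` API: `"one_hot"` encodes each element of a rank 1 batch individually, while `"multi_hot"` now treats a rank 1 input as a single sample:

```python
import tensorflow as tf

one_hot = tf.keras.layers.CategoryEncoding(num_tokens=4, output_mode="one_hot")
multi_hot = tf.keras.layers.CategoryEncoding(num_tokens=4, output_mode="multi_hot")

# Rank 1 input: one_hot emits one encoded row per element...
per_element = one_hot([0, 1, 3])      # shape (3, 4)
# ...while multi_hot treats it as one unbatched sample (no upranking).
single_sample = multi_hot([0, 1, 3])  # shape (4,)
```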
- `tf.lite`:
  - The recommended Android NDK version for building TensorFlow Lite has been changed from r18b to r19c.
  - Supports int64 for mul.
  - Supports native variable builtin ops - ReadVariable, AssignVariable.
  - Converter:
    - Experimental support for variables in TFLite. To enable through conversion, users need to set `experimental_enable_resource_variables` on `tf.lite.TFLiteConverter` to True. Note: mutable variables are only available using `from_saved_model` in this release; support for other methods is coming soon.
    - The old converter (TOCO) is being removed in the next release. It has been deprecated for several releases already.
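A minimal conversion sketch, assuming a SavedModel that reads and writes a `tf.Variable` (the `Accumulator` module here is hypothetical):

```python
import tempfile

import tensorflow as tf

class Accumulator(tf.Module):
    def __init__(self):
        self.total = tf.Variable(0.0)

    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def add(self, x):
        self.total.assign_add(x)  # mutable state inside the model
        return self.total

path = tempfile.mkdtemp()
tf.saved_model.save(Accumulator(), path)

converter = tf.lite.TFLiteConverter.from_saved_model(path)
converter.experimental_enable_resource_variables = True  # opt in
tflite_model = converter.convert()  # serialized flatbuffer bytes
```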
- `tf.saved_model`:
  - SavedModels can now save custom gradients. Use the option `tf.saved_model.SaveOptions(experimental_custom_gradients=True)` to enable this feature. The documentation in Advanced autodiff has been updated.
  - Object metadata has now been deprecated and is no longer saved to the SavedModel.
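A minimal sketch of saving a custom gradient; the gradient-clipping function here is illustrative, not part of the release:

```python
import tempfile

import tensorflow as tf

@tf.custom_gradient
def clip_grad(x):
    def grad(dy):
        return tf.clip_by_value(dy, -0.1, 0.1)  # custom backward pass
    return tf.identity(x), grad

module = tf.Module()
module.f = tf.function(
    lambda x: clip_grad(x),
    input_signature=[tf.TensorSpec([], tf.float32)])

path = tempfile.mkdtemp()
tf.saved_model.save(
    module, path,
    options=tf.saved_model.SaveOptions(experimental_custom_gradients=True))

# The restored function keeps the custom (clipped) gradient.
restored = tf.saved_model.load(path)
```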
- TF Core:
  - Added `tf.config.experimental.reset_memory_stats` to reset the tracked peak memory returned by `tf.config.experimental.get_memory_info`.
- `tf.data`:
  - Added a `target_workers` parameter to `data_service_ops.from_dataset_id` and `data_service_ops.distribute`. Users can specify `"AUTO"`, `"ANY"`, or `"LOCAL"` (case insensitive). If `"AUTO"`, the tf.data service runtime decides which workers to read from. If `"ANY"`, TF workers read from any tf.data service workers. If `"LOCAL"`, TF workers will only read from local in-process tf.data service workers. `"AUTO"` works well for most cases, while users can specify other targets. For example, `"LOCAL"` helps avoid RPCs and data copies if every TF worker is colocated with a tf.data service worker. Currently, `"AUTO"` reads from any tf.data service workers to preserve existing behavior. The default value is `"AUTO"`.
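An in-process sketch of the new parameter, following the standard tf.data service setup; `target_workers="LOCAL"` restricts reads to co-located workers:

```python
import tensorflow as tf

# Stand up a one-dispatcher, one-worker tf.data service in-process.
dispatcher = tf.data.experimental.service.DispatchServer()
worker = tf.data.experimental.service.WorkerServer(
    tf.data.experimental.service.WorkerConfig(
        dispatcher_address=dispatcher.target.split("://")[1]))

dataset = tf.data.Dataset.range(5)
dataset = dataset.apply(tf.data.experimental.service.distribute(
    processing_mode="parallel_epochs",
    service=dispatcher.target,
    target_workers="LOCAL"))  # only read from local in-process workers
elements = sorted(dataset.as_numpy_iterator())
```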
Bug Fixes and Other Changes
- TF Core:
  - Added `tf.lookup.experimental.MutableHashTable`, which provides a generic mutable hash table implementation.
    - Compared to `tf.lookup.experimental.DenseHashTable`, this offers lower overall memory usage and a cleaner API. It does not require specifying a `delete_key` and `empty_key` that cannot be inserted into the table.
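A short sketch of the new table:

```python
import tensorflow as tf

# Unlike DenseHashTable, no empty_key/deleted_key sentinels are needed.
table = tf.lookup.experimental.MutableHashTable(
    key_dtype=tf.string, value_dtype=tf.int64, default_value=-1)
table.insert(tf.constant(["a", "b"]),
             tf.constant([1, 2], dtype=tf.int64))
values = table.lookup(tf.constant(["a", "missing"]))  # missing key -> -1
```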
  - Added support for specifying the number of subdivisions in an all-reduce host collective. This parallelizes work on CPU and speeds up collective performance. Default behavior is unchanged.
  - Added an option `perturb_singular` to `tf.linalg.tridiagonal_solve` that allows solving linear systems with a numerically singular tridiagonal matrix, e.g. for use in inverse iteration.
  - Added `tf.linalg.eigh_tridiagonal`, which computes the eigenvalues of a Hermitian tridiagonal matrix.
  - `tf.constant` now places its output on the current default device.
  - SavedModel:
    - Added `tf.saved_model.experimental.TrackableResource`, which allows the creation of custom wrapper objects for resource tensors.
    - Added a SavedModel load option to allow restoring partial checkpoints into the SavedModel. See `tf.saved_model.LoadOptions` for details.
  - Added a new op `SparseSegmentSumGrad` to match the other sparse segment gradient ops and avoid an extra gather operation that was in the previous gradient implementation.
  - Added a new session config setting `internal_fragmentation_fraction`, which controls when the BFC Allocator needs to split an oversized chunk to satisfy an allocation request.
  - Added `tf.get_current_name_scope()`, which returns the current full name scope string that will be prepended to op names.
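A one-line illustration of the new name-scope helper:

```python
import tensorflow as tf

with tf.name_scope("outer"):
    with tf.name_scope("inner"):
        # Full scope string that would prefix op names created here.
        scope = tf.get_current_name_scope()  # "outer/inner"
```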
- `tf.data`:
  - Promoting the `tf.data.experimental.bucket_by_sequence_length` API to `tf.data.Dataset.bucket_by_sequence_length` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.get_single_element` API to `tf.data.Dataset.get_single_element` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.group_by_window` API to `tf.data.Dataset.group_by_window` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.RandomDataset` API to `tf.data.Dataset.random` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.scan` API to `tf.data.Dataset.scan` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.snapshot` API to `tf.data.Dataset.snapshot` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.take_while` API to `tf.data.Dataset.take_while` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.ThreadingOptions` API to `tf.data.ThreadingOptions` and deprecating the experimental endpoint.
  - Promoting the `tf.data.experimental.unique` API to `tf.data.Dataset.unique` and deprecating the experimental endpoint.
  - Added a `stop_on_empty_dataset` parameter to `sample_from_datasets` and `choose_from_datasets`. Setting `stop_on_empty_dataset=True` will stop sampling if it encounters an empty dataset. This preserves the sampling ratio throughout training. The prior behavior was to continue sampling, skipping over exhausted datasets, until all datasets were exhausted. By default, the original behavior (`stop_on_empty_dataset=False`) is preserved.
  - Removed previously deprecated tf.data statistics related APIs:
    - `tf.data.Options.experimental_stats`
    - `tf.data.experimental.StatsAggregator`
    - `tf.data.experimental.StatsOptions.*`
    - `tf.data.experimental.bytes_produced_stats`
    - `tf.data.experimental.latency_stats`
  - Removed the following experimental tf.data optimization APIs:
    - `tf.data.experimental.MapVectorizationOptions.*`
    - `tf.data.experimental.OptimizationOptions.filter_with_random_uniform_fusion`
    - `tf.data.experimental.OptimizationOptions.hoist_random_uniform`
    - `tf.data.experimental.OptimizationOptions.map_vectorization`
    - `tf.data.experimental.OptimizationOptions.reorder_data_discarding_ops`
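A sketch of the new sampling flag; the dataset contents are illustrative and the sampled sequence is random:

```python
import tensorflow as tf

short = tf.data.Dataset.from_tensor_slices([1])
long = tf.data.Dataset.from_tensor_slices([2, 2, 2, 2])

# Stop as soon as either input runs dry, keeping the 50/50 ratio honest
# instead of silently skipping the exhausted dataset.
mixed = tf.data.experimental.sample_from_datasets(
    [short, long], weights=[0.5, 0.5], stop_on_empty_dataset=True)
samples = list(mixed.as_numpy_iterator())
```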
- `tf.keras`:
  - Fix usage of `__getitem__` slicing in Keras Functional APIs when the inputs are `RaggedTensor` objects.
  - Add a `keepdims` argument to all `GlobalPooling` layers.
  - Add an `include_preprocessing` argument to the `MobileNetV3` architectures to control the inclusion of a `Rescaling` layer in the model.
  - Add an optional argument (`force`) to the `make_(train|test|predict)_function` methods to skip the cached function and generate a new one. This is useful to regenerate, in a single call, the compiled training function when any `.trainable` attribute of any of the model's layers has changed.
  - Models now have a `save_spec` property which contains the `TensorSpec` specs for calling the model. This spec is automatically saved when the model is called for the first time.
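For instance, the new `keepdims` argument on a global pooling layer:

```python
import tensorflow as tf

x = tf.ones((2, 3, 4))  # (batch, steps, features)
# keepdims=True retains the pooled temporal axis as a length-1 dim.
pooled = tf.keras.layers.GlobalAveragePooling1D(keepdims=True)(x)
# pooled has shape (2, 1, 4) instead of the default (2, 4)
```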
- `tf.linalg`:
  - Add `CompositeTensor` as a base class to `LinearOperator`.
- `tf.lite`:
  - Fix mean op reference quantization rounding issue.
  - Added a `framework_stable` BUILD target, which links in only the non-experimental TF Lite APIs.
  - Remove deprecated Java `Interpreter` methods:
    - `modifyGraphWithDelegate` - use `Interpreter.Options.addDelegate`
    - `setNumThreads` - use `Interpreter.Options.setNumThreads`
  - Add `Conv3DTranspose` as a builtin op.
- `tf.summary`:
  - Fix `tf.summary.should_record_summaries()` so it correctly reflects when summaries will be written, even when `tf.summary.record_if()` is not in effect, by returning a True tensor if a default writer is present.
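The fixed behavior, sketched:

```python
import tempfile

import tensorflow as tf

# No default writer: nothing would be recorded.
before = bool(tf.summary.should_record_summaries())

writer = tf.summary.create_file_writer(tempfile.mkdtemp())
with writer.as_default():
    # A default writer is present and record_if() is not in effect,
    # so this now correctly returns True.
    during = bool(tf.summary.should_record_summaries())
```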
- Grappler:
  - Disable the default Grappler optimization timeout to make the optimization pipeline deterministic. This may lead to increased model loading time, because time spent in graph optimizations is now unbounded (it was 20 minutes).
- Deterministic Op Functionality (enabled by setting `TF_DETERMINISTIC_OPS` to `"true"` or `"1"`):
  - Add a deterministic GPU implementation of `tf.nn.softmax_cross_entropy_with_logits`. See PR 49178.
  - Add a deterministic CPU implementation of `tf.image.crop_and_resize`. See PR 48905.
  - Add determinism-unimplemented exception-throwing to the following ops. When op-determinism is expected, an attempt to use the specified paths through the following ops on a GPU will cause `tf.errors.UnimplementedError` (with an understandable message) to be thrown.
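Enabling the flag is an environment setting that should happen before any TensorFlow ops run, e.g.:

```python
import os

# Request deterministic op implementations; ops whose deterministic
# paths are unimplemented will raise tf.errors.UnimplementedError on
# the affected GPU paths instead of silently being nondeterministic.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

import tensorflow as tf  # import only after setting the variable
```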
Security
- Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
- Fixes a floating point exception in `SparseDenseCwiseDiv` (CVE-2021-37636)
- Fixes a null pointer dereference in `CompressElement` (CVE-2021-37637)
- Fixes a null pointer dereference in `RaggedTensorToTensor` (CVE-2021-37638)
- Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
- Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
- Fixes a division by 0 in `ResourceScatterDiv` (CVE-2021-37642)
- Fixes a heap OOB in `RaggedGather` (CVE-2021-37641)
- Fixes a `std::abort` raised from `TensorListReserve` (CVE-2021-37644)
- Fixes a null pointer dereference in `MatrixDiagPartOp` (CVE-2021-37643)
- Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
- Fixes a bad allocation error in `StringNGrams` caused by integer conversion (CVE-2021-37646)
- Fixes a null pointer dereference in `SparseTensorSliceDataset` (CVE-2021-37647)
- Fixes an incorrect validation of `SaveV2` inputs (CVE-2021-37648)
- Fixes a null pointer dereference in `UncompressElement` (CVE-2021-37649)
- Fixes a segfault and a heap buffer overflow in `{Experimental,}DatasetToTFRecord` (CVE-2021-37650)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-37651)
- Fixes a use after free in boosted trees creation (CVE-2021-37652)
- Fixes a division by 0 in `ResourceGather` (CVE-2021-37653)
- Fixes a heap OOB and a `CHECK` fail in `ResourceGather` (CVE-2021-37654)
- Fixes a heap OOB in `ResourceScatterUpdate` (CVE-2021-37655)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToSparse` (CVE-2021-37656)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixDiagV*` ops (CVE-2021-37657)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixSetDiagV*` ops (CVE-2021-37658)
- Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
- Fixes a division by 0 in inplace operations (CVE-2021-37660)
- Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
- Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
- Fixes a heap OOB in boosted trees (CVE-2021-37664)
- Fixes vulnerabilities arising from incomplete validation in `QuantizeV2` (CVE-2021-37663)
- Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToVariant` (CVE-2021-37666)
- Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
- Fixes an FPE in `tf.raw_ops.UnravelIndex` (CVE-2021-37668)
- Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
- Fixes a heap OOB in `UpperBound` and `LowerBound` (CVE-2021-37670)
- Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
- Fixes a heap OOB in `SdcaOptimizerV2` (CVE-2021-37672)
- Fixes a `CHECK`-fail in `MapStage` (CVE-2021-37673)
- Fixes a vulnerability arising from incomplete validation in `MaxPoolGrad` (CVE-2021-37674)
- Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
- Fixes a division by 0 in most convolution operators (CVE-2021-37675)
- Fixes vulnerabilities arising from missing validation in shape inference for `Dequantize` (CVE-2021-37677)
- Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
- Fixes a heap OOB in nested `tf.map_fn` with `RaggedTensor`s (CVE-2021-37679)
- Fixes a division by zero in TFLite (CVE-2021-37680)
- Fixes an NPE in TFLite (CVE-2021-37681)
- Fixes a vulnerability arising from use of an uninitialized value in TFLite (CVE-2021-37682)
- Fixes an FPE in TFLite division operations (CVE-2021-37683)
- Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
- Fixes an infinite loop in TFLite (CVE-2021-37686)
- Fixes a heap OOB in TFLite (CVE-2021-37685)
- Fixes a heap OOB in TFLite's `Gather*` implementations (CVE-2021-37687)
- Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
- Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
- Fixes an FPE in LSH in TFLite (CVE-2021-37691)
- Fixes a segfault on strings tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
- Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
- Updates `curl` to `7.77.0` to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aadhitya A, Abhilash Mahendrakar, Abhishek Varma, Abin Shahab, Adam Hillier, Aditya Kane, AdityaKane2001, ag.ramesh, Amogh Joshi, Armen Poghosov, armkevincheng, Avrosh K, Ayan Moitra, azazhu, Banikumar Maiti, Bas Aarts, bhack, Bhanu Prakash Bandaru Venkata, Billy Cao, Bohumir Zamecnik, Bradley Reece, CyanXu, Daniel Situnayake, David Pal, Ddavis-2015, DEKHTIARJonathan, Deven Desai, Duncan Riach, Edward, Eli Osherovich, Eugene Kuznetsov, europeanplaice, evelynmitchell, Evgeniy Polyakov, Felix Vollmer, Florentin Hennecker, François Chollet, Frederic Bastien, Fredrik Knutsson, Gabriele Macchi, Gaurav Shukla, Gauri1 Deshpande, geetachavan1, Georgiy Manuilov, H, Hengwen Tong, Henri Woodcock, Hiran Sarkar, Ilya Arzhannikov, Janghoo Lee, jdematos, Jens Meder, Jerry Shih, jgehw, Jim Fisher, Jingbei Li, Jiri Podivin, Joachim Gehweiler, Johannes Lade, Jonas I. Liechti, Jonas Liechti, Jonas Ohlsson, Jonathan Dekhtiar, Julian Gross, Kaixi Hou, Kevin Cheng, Koan-Sin Tan, Kulin Seth, linzewen, Liubov Batanina, luisleee, Lukas Geiger, Mahmoud Abuzaina, mathgaming, Matt Conley, Max H. Gerlach, mdfaijul, Mh Kwon, Michael Martis, Michal Szutenberg, Måns Nilsson, nammbash, Neil Girdhar, Nicholas Vadivelu, Nick Kreeger, Nirjas Jakilim, okyanusoz, Patrice Vignola, Patrik Laurell, Pedro Marques, Philipp Hack, Phillip Cloud, Piergiacomo De Marchi, Prashant Kumar, puneeshkhanna, pvarouktsis, QQ喵, Rajeshwar Reddy T, Rama Ketineni, Reza Rahimi, Robert Kalmar, rsun, Ryan Kuester, Saduf2019, Sean Morgan, Sean Moriarity, Shaochen Shi, Sheng, Yang, Shu Wang, Shuai Zhang, Soojeong, Stanley-Nod, Steven I Reeves, stevenireeves, Suraj Sudhir, Sven Mayer, Tamas Bela Feher, tashuang.zk, tcervi, Teng Lu, Thales Elero Cervi, Thibaut Goetghebuer-Planchon, Thomas Walther, Till Brychcy, Trent Lo, Uday Bondhugula, vishakha.agrawal, Vishnuvardhan Janapati, wamuir, Wenwen Ouyang, wenwu, Williard Joshua Jose, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yasir Modak, Yi Li, Yong Tang, zilinzhu, 박상준, 이장
Release 2.5.2
This release introduces several vulnerability fixes:
- Fixes a code injection issue in `saved_model_cli` (CVE-2021-41228)
- Fixes a vulnerability due to use of an uninitialized value in TensorFlow (CVE-2021-41225)
- Fixes a heap OOB in `FusedBatchNorm` kernels (CVE-2021-41223)
- Fixes an arbitrary memory read in `ImmutableConst` (CVE-2021-41227)
- Fixes a heap OOB in `SparseBinCount` (CVE-2021-41226)
- Fixes a heap OOB in `SparseFillEmptyRows` (CVE-2021-41224)
- Fixes a segfault due to negative splits in `SplitV` (CVE-2021-41222)
- Fixes segfaults and vulnerabilities caused by accesses to invalid memory during shape inference in `Cudnn*` ops (CVE-2021-41221)
- Fixes a null pointer exception when the `Exit` node is not preceded by an `Enter` op (CVE-2021-41217)
- Fixes an integer division by 0 in `tf.raw_ops.AllToAll` (CVE-2021-41218)
- Fixes an undefined behavior via `nullptr` reference binding in sparse matrix multiplication (CVE-2021-41219)
- Fixes a heap buffer overflow in `Transpose` (CVE-2021-41216)
- Prevents deadlocks arising from mutually recursive `tf.function` objects (CVE-2021-41213)
- Fixes a null pointer exception in `DeserializeSparse` (CVE-2021-41215)
- Fixes an undefined behavior arising from reference binding to `nullptr` in `tf.ragged.cross` (CVE-2021-41214)
- Fixes a heap OOB read in `tf.ragged.cross` (CVE-2021-41212)
- Fixes a heap OOB read in all `tf.raw_ops.QuantizeAndDequantizeV*` ops (CVE-2021-41205)
- Fixes an FPE in `ParallelConcat` (CVE-2021-41207)
- Fixes FPE issues in convolutions with zero size filters (CVE-2021-41209)
- Fixes a heap OOB read in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-41210)
- Fixes vulnerabilities caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes vulnerabilities caused by incomplete validation of shapes in multiple TF ops (CVE-2021-41206)
- Fixes a segfault produced while copying a constant resource tensor (CVE-2021-41204)
- Fixes a vulnerability caused by uninitialized access in `EinsumHelper::ParseEquation` (CVE-2021-41201)
- Fixes several vulnerabilities and segfaults caused by missing validation during checkpoint loading (CVE-2021-41203)
- Fixes an overflow producing a crash in `tf.range` (CVE-2021-41202)
- Fixes an overflow producing a crash in `tf.image.resize` when size is large (CVE-2021-41199)
- Fixes an overflow producing a crash in `tf.tile` when the tiling tensor is large (CVE-2021-41198)
- Fixes a vulnerability produced due to incomplete validation in `tf.summary.create_file_writer` (CVE-2021-41200)
- Fixes multiple crashes due to overflow and `CHECK`-fail in ops with large tensor shapes (CVE-2021-41197)
- Fixes a crash in `max_pool3d` when the size argument is 0 or negative (CVE-2021-41196)
- Fixes a crash in `tf.math.segment_*` operations (CVE-2021-41195)
- Updates `curl` to `7.78.0` to handle CVE-2021-22922, CVE-2021-22923, CVE-2021-22924, CVE-2021-22925, and CVE-2021-22926.
Release 2.5.1
This release introduces several vulnerability fixes:
- Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
- Fixes a floating point exception in `SparseDenseCwiseDiv` (CVE-2021-37636)
- Fixes a null pointer dereference in `CompressElement` (CVE-2021-37637)
- Fixes a null pointer dereference in `RaggedTensorToTensor` (CVE-2021-37638)
- Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
- Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
- Fixes a division by 0 in `ResourceScatterDiv` (CVE-2021-37642)
- Fixes a heap OOB in `RaggedGather` (CVE-2021-37641)
- Fixes a `std::abort` raised from `TensorListReserve` (CVE-2021-37644)
- Fixes a null pointer dereference in `MatrixDiagPartOp` (CVE-2021-37643)
- Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
- Fixes a bad allocation error in `StringNGrams` caused by integer conversion (CVE-2021-37646)
- Fixes a null pointer dereference in `SparseTensorSliceDataset` (CVE-2021-37647)
- Fixes an incorrect validation of `SaveV2` inputs (CVE-2021-37648)
- Fixes a null pointer dereference in `UncompressElement` (CVE-2021-37649)
- Fixes a segfault and a heap buffer overflow in `{Experimental,}DatasetToTFRecord` (CVE-2021-37650)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-37651)
- Fixes a use after free in boosted trees creation (CVE-2021-37652)
- Fixes a division by 0 in `ResourceGather` (CVE-2021-37653)
- Fixes a heap OOB and a `CHECK` fail in `ResourceGather` (CVE-2021-37654)
- Fixes a heap OOB in `ResourceScatterUpdate` (CVE-2021-37655)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToSparse` (CVE-2021-37656)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixDiagV*` ops (CVE-2021-37657)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixSetDiagV*` ops (CVE-2021-37658)
- Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
- Fixes a division by 0 in inplace operations (CVE-2021-37660)
- Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
- Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
- Fixes a heap OOB in boosted trees (CVE-2021-37664)
- Fixes vulnerabilities arising from incomplete validation in `QuantizeV2` (CVE-2021-37663)
- Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToVariant` (CVE-2021-37666)
- Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
- Fixes an FPE in `tf.raw_ops.UnravelIndex` (CVE-2021-37668)
- Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
- Fixes a heap OOB in `UpperBound` and `LowerBound` (CVE-2021-37670)
- Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
- Fixes a heap OOB in `SdcaOptimizerV2` (CVE-2021-37672)
- Fixes a `CHECK`-fail in `MapStage` (CVE-2021-37673)
- Fixes a vulnerability arising from incomplete validation in `MaxPoolGrad` (CVE-2021-37674)
- Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
- Fixes a division by 0 in most convolution operators (CVE-2021-37675)
- Fixes vulnerabilities arising from missing validation in shape inference for `Dequantize` (CVE-2021-37677)
- Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
- Fixes a heap OOB in nested `tf.map_fn` with `RaggedTensor`s (CVE-2021-37679)
- Fixes a division by zero in TFLite (CVE-2021-37680)
- Fixes an NPE in TFLite (CVE-2021-37681)
- Fixes a vulnerability arising from use of an uninitialized value in TFLite (CVE-2021-37682)
- Fixes an FPE in TFLite division operations (CVE-2021-37683)
- Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
- Fixes an infinite loop in TFLite (CVE-2021-37686)
- Fixes a heap OOB in TFLite (CVE-2021-37685)
- Fixes a heap OOB in TFLite's `Gather*` implementations (CVE-2021-37687)
- Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
- Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
- Fixes an FPE in LSH in TFLite (CVE-2021-37691)
- Fixes a segfault on strings tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
- Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
- Updates `curl` to `7.77.0` to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.
Release 2.4.4
This release introduces several vulnerability fixes:
- Fixes a code injection issue in `saved_model_cli` (CVE-2021-41228)
- Fixes a vulnerability due to use of an uninitialized value in TensorFlow (CVE-2021-41225)
- Fixes a heap OOB in `FusedBatchNorm` kernels (CVE-2021-41223)
- Fixes an arbitrary memory read in `ImmutableConst` (CVE-2021-41227)
- Fixes a heap OOB in `SparseBinCount` (CVE-2021-41226)
- Fixes a heap OOB in `SparseFillEmptyRows` (CVE-2021-41224)
- Fixes a segfault due to negative splits in `SplitV` (CVE-2021-41222)
- Fixes segfaults and vulnerabilities caused by accesses to invalid memory during shape inference in `Cudnn*` ops (CVE-2021-41221)
- Fixes a null pointer exception when the `Exit` node is not preceded by an `Enter` op (CVE-2021-41217)
- Fixes an integer division by 0 in `tf.raw_ops.AllToAll` (CVE-2021-41218)
- Fixes an undefined behavior via `nullptr` reference binding in sparse matrix multiplication (CVE-2021-41219)
- Fixes a heap buffer overflow in `Transpose` (CVE-2021-41216)
- Prevents deadlocks arising from mutually recursive `tf.function` objects (CVE-2021-41213)
- Fixes a null pointer exception in `DeserializeSparse` (CVE-2021-41215)
- Fixes an undefined behavior arising from reference binding to `nullptr` in `tf.ragged.cross` (CVE-2021-41214)
- Fixes a heap OOB read in `tf.ragged.cross` (CVE-2021-41212)
- Fixes a heap OOB read in all `tf.raw_ops.QuantizeAndDequantizeV*` ops (CVE-2021-41205)
- Fixes an FPE in `ParallelConcat` (CVE-2021-41207)
- Fixes FPE issues in convolutions with zero size filters (CVE-2021-41209)
- Fixes a heap OOB read in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-41210)
- Fixes vulnerabilities caused by incomplete validation in boosted trees code (CVE-2021-41208)
- Fixes vulnerabilities caused by incomplete validation of shapes in multiple TF ops (CVE-2021-41206)
- Fixes a segfault produced while copying a constant resource tensor (CVE-2021-41204)
- Fixes a vulnerability caused by uninitialized access in `EinsumHelper::ParseEquation` (CVE-2021-41201)
- Fixes several vulnerabilities and segfaults caused by missing validation during checkpoint loading (CVE-2021-41203)
- Fixes an overflow producing a crash in `tf.range` (CVE-2021-41202)
- Fixes an overflow producing a crash in `tf.image.resize` when size is large (CVE-2021-41199)
- Fixes an overflow producing a crash in `tf.tile` when the tiling tensor is large (CVE-2021-41198)
- Fixes a vulnerability produced due to incomplete validation in `tf.summary.create_file_writer` (CVE-2021-41200)
- Fixes multiple crashes due to overflow and `CHECK`-fail in ops with large tensor shapes (CVE-2021-41197)
- Fixes a crash in `max_pool3d` when the size argument is 0 or negative (CVE-2021-41196)
- Fixes a crash in `tf.math.segment_*` operations (CVE-2021-41195)
- Updates `curl` to `7.78.0` to handle CVE-2021-22922, CVE-2021-22923, CVE-2021-22924, CVE-2021-22925, and CVE-2021-22926.
Release 2.4.3
This release introduces several vulnerability fixes:
- Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
- Fixes a floating point exception in `SparseDenseCwiseDiv` (CVE-2021-37636)
- Fixes a null pointer dereference in `CompressElement` (CVE-2021-37637)
- Fixes a null pointer dereference in `RaggedTensorToTensor` (CVE-2021-37638)
- Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
- Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
- Fixes a division by 0 in `ResourceScatterDiv` (CVE-2021-37642)
- Fixes a heap OOB in `RaggedGather` (CVE-2021-37641)
- Fixes a `std::abort` raised from `TensorListReserve` (CVE-2021-37644)
- Fixes a null pointer dereference in `MatrixDiagPartOp` (CVE-2021-37643)
- Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
- Fixes a bad allocation error in `StringNGrams` caused by integer conversion (CVE-2021-37646)
- Fixes a null pointer dereference in `SparseTensorSliceDataset` (CVE-2021-37647)
- Fixes an incorrect validation of `SaveV2` inputs (CVE-2021-37648)
- Fixes a null pointer dereference in `UncompressElement` (CVE-2021-37649)
- Fixes a segfault and a heap buffer overflow in `{Experimental,}DatasetToTFRecord` (CVE-2021-37650)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-37651)
- Fixes a use after free in boosted trees creation (CVE-2021-37652)
- Fixes a division by 0 in `ResourceGather` (CVE-2021-37653)
- Fixes a heap OOB and a `CHECK` fail in `ResourceGather` (CVE-2021-37654)
- Fixes a heap OOB in `ResourceScatterUpdate` (CVE-2021-37655)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToSparse` (CVE-2021-37656)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixDiagV*` ops (CVE-2021-37657)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixSetDiagV*` ops (CVE-2021-37658)
- Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
- Fixes a division by 0 in inplace operations (CVE-2021-37660)
- Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
- Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
- Fixes a heap OOB in boosted trees (CVE-2021-37664)
- Fixes vulnerabilities arising from incomplete validation in `QuantizeV2` (CVE-2021-37663)
- Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToVariant` (CVE-2021-37666)
- Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
- Fixes an FPE in `tf.raw_ops.UnravelIndex` (CVE-2021-37668)
- Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
- Fixes a heap OOB in `UpperBound` and `LowerBound` (CVE-2021-37670)
- Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
- Fixes a heap OOB in `SdcaOptimizerV2` (CVE-2021-37672)
- Fixes a `CHECK`-fail in `MapStage` (CVE-2021-37673)
- Fixes a vulnerability arising from incomplete validation in `MaxPoolGrad` (CVE-2021-37674)
- Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
- Fixes a division by 0 in most convolution operators (CVE-2021-37675)
- Fixes vulnerabilities arising from missing validation in shape inference for `Dequantize` (CVE-2021-37677)
- Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
- Fixes a heap OOB in nested `tf.map_fn` with `RaggedTensor`s (CVE-2021-37679)
- Fixes a division by zero in TFLite (CVE-2021-37680)
- Fixes an NPE in TFLite (CVE-2021-37681)
- Fixes a vulnerability arising from use of an uninitialized value in TFLite (CVE-2021-37682)
- Fixes an FPE in TFLite division operations (CVE-2021-37683)
- Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
- Fixes an infinite loop in TFLite (CVE-2021-37686)
- Fixes a heap OOB in TFLite (CVE-2021-37685)
- Fixes a heap OOB in TFLite's `Gather*` implementations (CVE-2021-37687)
- Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
- Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
- Fixes a FPE in LSH in TFLite (CVE-2021-37691)
- Fixes a segfault on strings tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
- Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
- Updates
curl
to7.77.0
to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.
Release 2.3.4
This release introduces several vulnerability fixes:
- Fixes a heap out of bounds access in sparse reduction operations (CVE-2021-37635)
- Fixes a floating point exception in `SparseDenseCwiseDiv` (CVE-2021-37636)
- Fixes a null pointer dereference in `CompressElement` (CVE-2021-37637)
- Fixes a null pointer dereference in `RaggedTensorToTensor` (CVE-2021-37638)
- Fixes a null pointer dereference and a heap OOB read arising from operations restoring tensors (CVE-2021-37639)
- Fixes an integer division by 0 in sparse reshaping (CVE-2021-37640)
- Fixes a division by 0 in `ResourceScatterDiv` (CVE-2021-37642)
- Fixes a heap OOB in `RaggedGather` (CVE-2021-37641)
- Fixes a `std::abort` raised from `TensorListReserve` (CVE-2021-37644)
- Fixes a null pointer dereference in `MatrixDiagPartOp` (CVE-2021-37643)
- Fixes an integer overflow due to conversion to unsigned (CVE-2021-37645)
- Fixes a bad allocation error in `StringNGrams` caused by integer conversion (CVE-2021-37646)
- Fixes a null pointer dereference in `SparseTensorSliceDataset` (CVE-2021-37647)
- Fixes an incorrect validation of `SaveV2` inputs (CVE-2021-37648)
- Fixes a null pointer dereference in `UncompressElement` (CVE-2021-37649)
- Fixes a segfault and a heap buffer overflow in `{Experimental,}DatasetToTFRecord` (CVE-2021-37650)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-37651)
- Fixes a use after free in boosted trees creation (CVE-2021-37652)
- Fixes a division by 0 in `ResourceGather` (CVE-2021-37653)
- Fixes a heap OOB and a `CHECK` fail in `ResourceGather` (CVE-2021-37654)
- Fixes a heap OOB in `ResourceScatterUpdate` (CVE-2021-37655)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToSparse` (CVE-2021-37656)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixDiagV*` ops (CVE-2021-37657)
- Fixes an undefined behavior arising from reference binding to nullptr in `MatrixSetDiagV*` ops (CVE-2021-37658)
- Fixes an undefined behavior arising from reference binding to nullptr and heap OOB in binary cwise ops (CVE-2021-37659)
- Fixes a division by 0 in inplace operations (CVE-2021-37660)
- Fixes a crash caused by integer conversion to unsigned (CVE-2021-37661)
- Fixes an undefined behavior arising from reference binding to nullptr in boosted trees (CVE-2021-37662)
- Fixes a heap OOB in boosted trees (CVE-2021-37664)
- Fixes vulnerabilities arising from incomplete validation in `QuantizeV2` (CVE-2021-37663)
- Fixes vulnerabilities arising from incomplete validation in MKL requantization (CVE-2021-37665)
- Fixes an undefined behavior arising from reference binding to nullptr in `RaggedTensorToVariant` (CVE-2021-37666)
- Fixes an undefined behavior arising from reference binding to nullptr in unicode encoding (CVE-2021-37667)
- Fixes an FPE in `tf.raw_ops.UnravelIndex` (CVE-2021-37668)
- Fixes a crash in NMS ops caused by integer conversion to unsigned (CVE-2021-37669)
- Fixes a heap OOB in `UpperBound` and `LowerBound` (CVE-2021-37670)
- Fixes an undefined behavior arising from reference binding to nullptr in map operations (CVE-2021-37671)
- Fixes a heap OOB in `SdcaOptimizerV2` (CVE-2021-37672)
- Fixes a `CHECK`-fail in `MapStage` (CVE-2021-37673)
- Fixes a vulnerability arising from incomplete validation in `MaxPoolGrad` (CVE-2021-37674)
- Fixes an undefined behavior arising from reference binding to nullptr in shape inference (CVE-2021-37676)
- Fixes a division by 0 in most convolution operators (CVE-2021-37675)
- Fixes vulnerabilities arising from missing validation in shape inference for `Dequantize` (CVE-2021-37677)
- Fixes an arbitrary code execution due to YAML deserialization (CVE-2021-37678)
- Fixes a heap OOB in nested `tf.map_fn` with `RaggedTensor`s (CVE-2021-37679)
- Fixes a division by zero in TFLite (CVE-2021-37680)
- Fixes an NPE in TFLite (CVE-2021-37681)
- Fixes a vulnerability arising from use of an uninitialized value in TFLite (CVE-2021-37682)
- Fixes an FPE in TFLite division operations (CVE-2021-37683)
- Fixes an FPE in TFLite pooling operations (CVE-2021-37684)
- Fixes an infinite loop in TFLite (CVE-2021-37686)
- Fixes a heap OOB in TFLite (CVE-2021-37685)
- Fixes a heap OOB in TFLite's `Gather*` implementations (CVE-2021-37687)
- Fixes an undefined behavior arising from null pointer dereference in TFLite (CVE-2021-37688)
- Fixes an undefined behavior arising from null pointer dereference in TFLite MLIR optimizations (CVE-2021-37689)
- Fixes an FPE in LSH in TFLite (CVE-2021-37691)
- Fixes a segfault on strings tensors with mismatched dimensions, arising in Go code (CVE-2021-37692)
- Fixes a use after free and a potential segfault in shape inference functions (CVE-2021-37690)
- Updates `curl` to `7.77.0` to handle CVE-2021-22876, CVE-2021-22897, CVE-2021-22898, and CVE-2021-22901.
Release 2.4.2
This release introduces several vulnerability fixes:
- Fixes a heap buffer overflow in `RaggedBinCount` (CVE-2021-29512)
- Fixes a heap out of bounds write in `RaggedBinCount` (CVE-2021-29514)
- Fixes a type confusion during tensor casts which leads to dereferencing null pointers (CVE-2021-29513)
- Fixes a reference binding to null pointer in `MatrixDiag*` ops (CVE-2021-29515)
- Fixes a null pointer dereference via invalid Ragged Tensors (CVE-2021-29516)
- Fixes a division by zero in `Conv3D` (CVE-2021-29517)
- Fixes vulnerabilities where session operations in eager mode lead to null pointer dereferences (CVE-2021-29518)
- Fixes a `CHECK`-fail in `SparseCross` caused by type confusion (CVE-2021-29519)
- Fixes a segfault in `SparseCountSparseOutput` (CVE-2021-29521)
- Fixes a heap buffer overflow in `Conv3DBackprop*` (CVE-2021-29520)
- Fixes a division by 0 in `Conv3DBackprop*` (CVE-2021-29522)
- Fixes a `CHECK`-fail in `AddManySparseToTensorsMap` (CVE-2021-29523)
- Fixes a division by 0 in `Conv2DBackpropFilter` (CVE-2021-29524)
- Fixes a division by 0 in `Conv2DBackpropInput` (CVE-2021-29525)
- Fixes a division by 0 in `Conv2D` (CVE-2021-29526)
- Fixes a division by 0 in `QuantizedConv2D` (CVE-2021-29527)
- Fixes a division by 0 in `QuantizedMul` (CVE-2021-29528)
- Fixes vulnerabilities caused by invalid validation in `SparseMatrixSparseCholesky` (CVE-2021-29530)
- Fixes a heap buffer overflow caused by rounding (CVE-2021-29529)
- Fixes a `CHECK`-fail in `tf.raw_ops.EncodePng` (CVE-2021-29531)
- Fixes a heap out of bounds read in `RaggedCross` (CVE-2021-29532)
- Fixes a `CHECK`-fail in `DrawBoundingBoxes` (CVE-2021-29533)
- Fixes a heap buffer overflow in `QuantizedMul` (CVE-2021-29535)
- Fixes a `CHECK`-fail in `SparseConcat` (CVE-2021-29534)
- Fixes a heap buffer overflow in `QuantizedResizeBilinear` (CVE-2021-29537)
- Fixes a heap buffer overflow in `QuantizedReshape` (CVE-2021-29536)
- Fixes a division by zero in `Conv2DBackpropFilter` (CVE-2021-29538)
- Fixes a heap buffer overflow in `Conv2DBackpropFilter` (CVE-2021-29540)
- Fixes a heap buffer overflow in `StringNGrams` (CVE-2021-29542)
- Fixes a null pointer dereference in `StringNGrams` (CVE-2021-29541)
- Fixes a `CHECK`-fail in `QuantizeAndDequantizeV4Grad` (CVE-2021-29544)
- Fixes a `CHECK`-fail in `CTCGreedyDecoder` (CVE-2021-29543)
- Fixes a heap buffer overflow in `SparseTensorToCSRSparseMatrix` (CVE-2021-29545)
- Fixes a division by 0 in `QuantizedBiasAdd` (CVE-2021-29546)
- Fixes a heap out of bounds in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29547)
- Fixes a division by 0 in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29548)
- Fixes a division by 0 in `QuantizedAdd` (CVE-2021-29549)
- Fixes a division by 0 in `FractionalAvgPool` (CVE-2021-29550)
- Fixes an OOB read in `MatrixTriangularSolve` (CVE-2021-29551)
- Fixes a heap OOB in `QuantizeAndDequantizeV3` (CVE-2021-29553)
- Fixes a `CHECK`-failure in `UnsortedSegmentJoin` (CVE-2021-29552)
- Fixes a division by 0 in `DenseCountSparseOutput` (CVE-2021-29554)
- Fixes a division by 0 in `FusedBatchNorm` (CVE-2021-29555)
- Fixes a division by 0 in `SparseMatMul` (CVE-2021-29557)
- Fixes a division by 0 in `Reverse` (CVE-2021-29556)
- Fixes a heap buffer overflow in `SparseSplit` (CVE-2021-29558)
- Fixes a heap OOB access in unicode ops (CVE-2021-29559)
- Fixes a heap buffer overflow in `RaggedTensorToTensor` (CVE-2021-29560)
- Fixes a `CHECK`-fail in `LoadAndRemapMatrix` (CVE-2021-29561)
- Fixes a `CHECK`-fail in `tf.raw_ops.IRFFT` (CVE-2021-29562)
- Fixes a `CHECK`-fail in `tf.raw_ops.RFFT` (CVE-2021-29563)
- Fixes a null pointer dereference in `EditDistance` (CVE-2021-29564)
- Fixes a null pointer dereference in `SparseFillEmptyRows` (CVE-2021-29565)
- Fixes a heap OOB access in `Dilation2DBackpropInput` (CVE-2021-29566)
- Fixes a reference binding to null in `ParameterizedTruncatedNormal` (CVE-2021-29568)
- Fixes a set of vulnerabilities caused by lack of validation in `SparseDenseCwiseMul` (CVE-2021-29567)
- Fixes a heap out of bounds read in `MaxPoolGradWithArgmax` (CVE-2021-29570)
- Fixes a heap out of bounds read in `RequantizationRange` (CVE-2021-29569)
- Fixes a memory corruption in `DrawBoundingBoxesV2` (CVE-2021-29571)
- Fixes a reference binding to nullptr in `SdcaOptimizer` (CVE-2021-29572)
- Fixes an overflow and a denial of service in `tf.raw_ops.ReverseSequence` (CVE-2021-29575)
- Fixes a division by 0 in `MaxPoolGradWithArgmax` (CVE-2021-29573)
- Fixes an undefined behavior in `MaxPool3DGradGrad` (CVE-2021-29574)
- Fixes a heap buffer overflow in `MaxPool3DGradGrad` (CVE-2021-29576)
- Fixes a heap buffer overflow in `AvgPool3DGrad` (CVE-2021-29577)
- Fixes an undefined behavior and a `CHECK`-fail in `FractionalMaxPoolGrad` (CVE-2021-29580)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-29578)
- Fixes a heap buffer overflow in `MaxPoolGrad` (CVE-2021-29579)
- Fixes a segfault in `CTCBeamSearchDecoder` (CVE-2021-29581)
- Fixes a heap OOB read in `tf.raw_ops.Dequantize` (CVE-2021-29582)
- Fixes a `CHECK`-fail due to integer overflow (CVE-2021-29584)
- Fixes a heap buffer overflow and undefined behavior in `FusedBatchNorm` (CVE-2021-29583)
- Fixes a division by zero in padding computation in TFLite (CVE-2021-29585)
- Fixes a division by zero in optimized pooling implementations in TFLite (CVE-2021-29586)
- Fixes a division by zero in TFLite's implementation of `SpaceToDepth` (CVE-2021-29587)
- Fixes a division by zero in TFLite's implementation of `GatherNd` (CVE-2021-29589)
- Fixes a division by zero in TFLite's implementation of `TransposeConv` (CVE-2021-29588)
- Fixes a heap OOB read in TFLite's implementation of `Minimum` or `Maximum` (CVE-2021-29590)
- Fixes a null pointer dereference in TFLite's `Reshape` operator (CVE-2021-29592)
- Fixes a stack overflow due to looping TFLite subgraph (CVE-2021-29591)
- Fixes a division by zero in TFLite's implementation of `DepthToSpace` (CVE-2021-29595)
- Fixes a division by zero in TFLite's convolution code (CVE-2021-29594)
- Fixes a division by zero in TFLite's implementation of `EmbeddingLookup` (CVE-2021-29596)
- Fixes a division by zero in TFLite's implementation of `BatchToSpaceNd` (CVE-2021-29593)
- Fixes a division by zero in TFLite's implementation of `SpaceToBatchNd` (CVE-2021-29597)
- Fixes a division by zero in TFLite's implementation of `SVDF` (CVE-2021-29598)
- Fixes a division by zero in TFLite's implementation of `Split` (CVE-2021-29599)
- Fixes a division by zero in TFLite's implementation of `OneHot` (CVE-2021-29600)
- Fixes a division by zero in TFLite's implementation of `DepthwiseConv` (CVE-2021-29602)
- Fixes a division by zero in TFLite's implementation of hashtable lookup (CVE-2021-29604)
- Fixes an integer overflow in TFLite concatenation (CVE-2021-29601)
- Fixes an integer overflow in TFLite memory allocation (CVE-2021-29605)
- Fixes a heap OOB write in TFLite (CVE-2021-29603)
- Fixes a heap OOB read in TFLite (CVE-2021-29606)
- Fixes a heap OOB and null pointer dereference in `RaggedTensorToTensor` (CVE-2021-29608)
- Fixes vulnerabilities caused by incomplete validation in `SparseAdd` (CVE-2021-29609)
- Fixes vulnerabilities caused by incomplete validation in `SparseSparseMinimum` (CVE-2021-29607)
- Fixes vulnerabilities caused by incomplete validation in `SparseReshape` (CVE-2021-29611)
- Fixes vulnerabilities caused by invalid validation in `QuantizeAndDequantizeV2` (CVE-2021-29610)
- Fixes a heap buffer overflow in `BandedTriangularSolve` (CVE-2021-29612)
- Fixes vulnerabilities caused by incomplete validation in `tf.raw_ops.CTCLoss` (CVE-2021-29613)
- Fixes an interpreter crash from vulnerabilities in `tf.io.decode_raw` (CVE-2021-29614)
- Fixes a stack overflow in `ParseAttrValue` with nested tensors (CVE-2021-29615)
- Fixes a null dereference in Grappler's `TrySimplify` (CVE-2021-29616)
- Fixes a crash in `tf.transpose` with complex inputs (CVE-2021-29618)
- Fixes a crash in `tf.strings.substr` due to `CHECK`-fail (CVE-2021-29617)
- Fixes a segfault in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-29619)
- Fixes a segfault in `tf.raw_ops.ImmutableConst` (CVE-2021-29539)
- Updates `curl` to `7.76.0` to handle CVE-2020-8169, CVE-2020-8177, CVE-2020-8231, CVE-2020-8284, CVE-2020-8285 and CVE-2020-8286.
Release 2.3.3
This release introduces several vulnerability fixes:
- Fixes a heap buffer overflow in `RaggedBinCount` (CVE-2021-29512)
- Fixes a heap out of bounds write in `RaggedBinCount` (CVE-2021-29514)
- Fixes a type confusion during tensor casts which leads to dereferencing null pointers (CVE-2021-29513)
- Fixes a reference binding to null pointer in `MatrixDiag*` ops (CVE-2021-29515)
- Fixes a null pointer dereference via invalid Ragged Tensors (CVE-2021-29516)
- Fixes a division by zero in `Conv3D` (CVE-2021-29517)
- Fixes vulnerabilities where session operations in eager mode lead to null pointer dereferences (CVE-2021-29518)
- Fixes a `CHECK`-fail in `SparseCross` caused by type confusion (CVE-2021-29519)
- Fixes a segfault in `SparseCountSparseOutput` (CVE-2021-29521)
- Fixes a heap buffer overflow in `Conv3DBackprop*` (CVE-2021-29520)
- Fixes a division by 0 in `Conv3DBackprop*` (CVE-2021-29522)
- Fixes a `CHECK`-fail in `AddManySparseToTensorsMap` (CVE-2021-29523)
- Fixes a division by 0 in `Conv2DBackpropFilter` (CVE-2021-29524)
- Fixes a division by 0 in `Conv2DBackpropInput` (CVE-2021-29525)
- Fixes a division by 0 in `Conv2D` (CVE-2021-29526)
- Fixes a division by 0 in `QuantizedConv2D` (CVE-2021-29527)
- Fixes a division by 0 in `QuantizedMul` (CVE-2021-29528)
- Fixes vulnerabilities caused by invalid validation in `SparseMatrixSparseCholesky` (CVE-2021-29530)
- Fixes a heap buffer overflow caused by rounding (CVE-2021-29529)
- Fixes a `CHECK`-fail in `tf.raw_ops.EncodePng` (CVE-2021-29531)
- Fixes a heap out of bounds read in `RaggedCross` (CVE-2021-29532)
- Fixes a `CHECK`-fail in `DrawBoundingBoxes` (CVE-2021-29533)
- Fixes a heap buffer overflow in `QuantizedMul` (CVE-2021-29535)
- Fixes a `CHECK`-fail in `SparseConcat` (CVE-2021-29534)
- Fixes a heap buffer overflow in `QuantizedResizeBilinear` (CVE-2021-29537)
- Fixes a heap buffer overflow in `QuantizedReshape` (CVE-2021-29536)
- Fixes a division by zero in `Conv2DBackpropFilter` (CVE-2021-29538)
- Fixes a heap buffer overflow in `Conv2DBackpropFilter` (CVE-2021-29540)
- Fixes a heap buffer overflow in `StringNGrams` (CVE-2021-29542)
- Fixes a null pointer dereference in `StringNGrams` (CVE-2021-29541)
- Fixes a `CHECK`-fail in `QuantizeAndDequantizeV4Grad` (CVE-2021-29544)
- Fixes a `CHECK`-fail in `CTCGreedyDecoder` (CVE-2021-29543)
- Fixes a heap buffer overflow in `SparseTensorToCSRSparseMatrix` (CVE-2021-29545)
- Fixes a division by 0 in `QuantizedBiasAdd` (CVE-2021-29546)
- Fixes a heap out of bounds in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29547)
- Fixes a division by 0 in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29548)
- Fixes a division by 0 in `QuantizedAdd` (CVE-2021-29549)
- Fixes a division by 0 in `FractionalAvgPool` (CVE-2021-29550)
- Fixes an OOB read in `MatrixTriangularSolve` (CVE-2021-29551)
- Fixes a heap OOB in `QuantizeAndDequantizeV3` (CVE-2021-29553)
- Fixes a `CHECK`-failure in `UnsortedSegmentJoin` (CVE-2021-29552)
- Fixes a division by 0 in `DenseCountSparseOutput` (CVE-2021-29554)
- Fixes a division by 0 in `FusedBatchNorm` (CVE-2021-29555)
- Fixes a division by 0 in `SparseMatMul` (CVE-2021-29557)
- Fixes a division by 0 in `Reverse` (CVE-2021-29556)
- Fixes a heap buffer overflow in `SparseSplit` (CVE-2021-29558)
- Fixes a heap OOB access in unicode ops (CVE-2021-29559)
- Fixes a heap buffer overflow in `RaggedTensorToTensor` (CVE-2021-29560)
- Fixes a `CHECK`-fail in `LoadAndRemapMatrix` (CVE-2021-29561)
- Fixes a `CHECK`-fail in `tf.raw_ops.IRFFT` (CVE-2021-29562)
- Fixes a `CHECK`-fail in `tf.raw_ops.RFFT` (CVE-2021-29563)
- Fixes a null pointer dereference in `EditDistance` (CVE-2021-29564)
- Fixes a null pointer dereference in `SparseFillEmptyRows` (CVE-2021-29565)
- Fixes a heap OOB access in `Dilation2DBackpropInput` (CVE-2021-29566)
- Fixes a reference binding to null in `ParameterizedTruncatedNormal` (CVE-2021-29568)
- Fixes a set of vulnerabilities caused by lack of validation in `SparseDenseCwiseMul` (CVE-2021-29567)
- Fixes a heap out of bounds read in `MaxPoolGradWithArgmax` (CVE-2021-29570)
- Fixes a heap out of bounds read in `RequantizationRange` (CVE-2021-29569)
- Fixes a memory corruption in `DrawBoundingBoxesV2` (CVE-2021-29571)
- Fixes a reference binding to nullptr in `SdcaOptimizer` (CVE-2021-29572)
- Fixes an overflow and a denial of service in `tf.raw_ops.ReverseSequence` (CVE-2021-29575)
- Fixes a division by 0 in `MaxPoolGradWithArgmax` (CVE-2021-29573)
- Fixes an undefined behavior in `MaxPool3DGradGrad` (CVE-2021-29574)
- Fixes a heap buffer overflow in `MaxPool3DGradGrad` (CVE-2021-29576)
- Fixes a heap buffer overflow in `AvgPool3DGrad` (CVE-2021-29577)
- Fixes an undefined behavior and a `CHECK`-fail in `FractionalMaxPoolGrad` (CVE-2021-29580)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-29578)
- Fixes a heap buffer overflow in `MaxPoolGrad` (CVE-2021-29579)
- Fixes a segfault in `CTCBeamSearchDecoder` (CVE-2021-29581)
- Fixes a heap OOB read in `tf.raw_ops.Dequantize` (CVE-2021-29582)
- Fixes a `CHECK`-fail due to integer overflow (CVE-2021-29584)
- Fixes a heap buffer overflow and undefined behavior in `FusedBatchNorm` (CVE-2021-29583)
- Fixes a division by zero in padding computation in TFLite (CVE-2021-29585)
- Fixes a division by zero in optimized pooling implementations in TFLite (CVE-2021-29586)
- Fixes a division by zero in TFLite's implementation of `SpaceToDepth` (CVE-2021-29587)
- Fixes a division by zero in TFLite's implementation of `GatherNd` (CVE-2021-29589)
- Fixes a division by zero in TFLite's implementation of `TransposeConv` (CVE-2021-29588)
- Fixes a heap OOB read in TFLite's implementation of `Minimum` or `Maximum` (CVE-2021-29590)
- Fixes a null pointer dereference in TFLite's `Reshape` operator (CVE-2021-29592)
- Fixes a stack overflow due to looping TFLite subgraph (CVE-2021-29591)
- Fixes a division by zero in TFLite's implementation of `DepthToSpace` (CVE-2021-29595)
- Fixes a division by zero in TFLite's convolution code (CVE-2021-29594)
- Fixes a division by zero in TFLite's implementation of `EmbeddingLookup` (CVE-2021-29596)
- Fixes a division by zero in TFLite's implementation of `BatchToSpaceNd` (CVE-2021-29593)
- Fixes a division by zero in TFLite's implementation of `SpaceToBatchNd` (CVE-2021-29597)
- Fixes a division by zero in TFLite's implementation of `SVDF` (CVE-2021-29598)
- Fixes a division by zero in TFLite's implementation of `Split` (CVE-2021-29599)
- Fixes a division by zero in TFLite's implementation of `OneHot` (CVE-2021-29600)
- Fixes a division by zero in TFLite's implementation of `DepthwiseConv` (CVE-2021-29602)
- Fixes a division by zero in TFLite's implementation of hashtable lookup (CVE-2021-29604)
- Fixes an integer overflow in TFLite concatenation (CVE-2021-29601)
- Fixes an integer overflow in TFLite memory allocation (CVE-2021-29605)
- Fixes a heap OOB write in TFLite (CVE-2021-29603)
- Fixes a heap OOB read in TFLite (CVE-2021-29606)
- Fixes a heap OOB and null pointer dereference in `RaggedTensorToTensor` (CVE-2021-29608)
- Fixes vulnerabilities caused by incomplete validation in `SparseAdd` (CVE-2021-29609)
- Fixes vulnerabilities caused by incomplete validation in `SparseSparseMinimum` (CVE-2021-29607)
- Fixes vulnerabilities caused by incomplete validation in `SparseReshape` (CVE-2021-29611)
- Fixes vulnerabilities caused by invalid validation in `QuantizeAndDequantizeV2` (CVE-2021-29610)
- Fixes a heap buffer overflow in `BandedTriangularSolve` (CVE-2021-29612)
- Fixes vulnerabilities caused by incomplete validation in `tf.raw_ops.CTCLoss` (CVE-2021-29613)
- Fixes an interpreter crash from vulnerabilities in `tf.io.decode_raw` (CVE-2021-29614)
- Fixes a stack overflow in `ParseAttrValue` with nested tensors (CVE-2021-29615)
- Fixes a null dereference in Grappler's `TrySimplify` (CVE-2021-29616)
- Fixes a crash in `tf.transpose` with complex inputs (CVE-2021-29618)
- Fixes a crash in `tf.strings.substr` due to `CHECK`-fail (CVE-2021-29617)
- Fixes a segfault in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-29619)
- Fixes a segfault in `tf.raw_ops.ImmutableConst` (CVE-2021-29539)
- Updates `curl` to `7.76.0` to handle CVE-2020-8169, CVE-2020-8177, CVE-2020-8231, CVE-2020-8284, CVE-2020-8285 and CVE-2020-8286.
Release 2.2.3
This release introduces several vulnerability fixes:
- Fixes a heap buffer overflow in `RaggedBinCount` (CVE-2021-29512)
- Fixes a heap out of bounds write in `RaggedBinCount` (CVE-2021-29514)
- Fixes a type confusion during tensor casts which leads to dereferencing null pointers (CVE-2021-29513)
- Fixes a reference binding to null pointer in `MatrixDiag*` ops (CVE-2021-29515)
- Fixes a null pointer dereference via invalid Ragged Tensors (CVE-2021-29516)
- Fixes a division by zero in `Conv3D` (CVE-2021-29517)
- Fixes vulnerabilities where session operations in eager mode lead to null pointer dereferences (CVE-2021-29518)
- Fixes a `CHECK`-fail in `SparseCross` caused by type confusion (CVE-2021-29519)
- Fixes a segfault in `SparseCountSparseOutput` (CVE-2021-29521)
- Fixes a heap buffer overflow in `Conv3DBackprop*` (CVE-2021-29520)
- Fixes a division by 0 in `Conv3DBackprop*` (CVE-2021-29522)
- Fixes a `CHECK`-fail in `AddManySparseToTensorsMap` (CVE-2021-29523)
- Fixes a division by 0 in `Conv2DBackpropFilter` (CVE-2021-29524)
- Fixes a division by 0 in `Conv2DBackpropInput` (CVE-2021-29525)
- Fixes a division by 0 in `Conv2D` (CVE-2021-29526)
- Fixes a division by 0 in `QuantizedConv2D` (CVE-2021-29527)
- Fixes a division by 0 in `QuantizedMul` (CVE-2021-29528)
- Fixes vulnerabilities caused by invalid validation in `SparseMatrixSparseCholesky` (CVE-2021-29530)
- Fixes a heap buffer overflow caused by rounding (CVE-2021-29529)
- Fixes a `CHECK`-fail in `tf.raw_ops.EncodePng` (CVE-2021-29531)
- Fixes a heap out of bounds read in `RaggedCross` (CVE-2021-29532)
- Fixes a `CHECK`-fail in `DrawBoundingBoxes` (CVE-2021-29533)
- Fixes a heap buffer overflow in `QuantizedMul` (CVE-2021-29535)
- Fixes a `CHECK`-fail in `SparseConcat` (CVE-2021-29534)
- Fixes a heap buffer overflow in `QuantizedResizeBilinear` (CVE-2021-29537)
- Fixes a heap buffer overflow in `QuantizedReshape` (CVE-2021-29536)
- Fixes a division by zero in `Conv2DBackpropFilter` (CVE-2021-29538)
- Fixes a heap buffer overflow in `Conv2DBackpropFilter` (CVE-2021-29540)
- Fixes a heap buffer overflow in `StringNGrams` (CVE-2021-29542)
- Fixes a null pointer dereference in `StringNGrams` (CVE-2021-29541)
- Fixes a `CHECK`-fail in `QuantizeAndDequantizeV4Grad` (CVE-2021-29544)
- Fixes a `CHECK`-fail in `CTCGreedyDecoder` (CVE-2021-29543)
- Fixes a heap buffer overflow in `SparseTensorToCSRSparseMatrix` (CVE-2021-29545)
- Fixes a division by 0 in `QuantizedBiasAdd` (CVE-2021-29546)
- Fixes a heap out of bounds in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29547)
- Fixes a division by 0 in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29548)
- Fixes a division by 0 in `QuantizedAdd` (CVE-2021-29549)
- Fixes a division by 0 in `FractionalAvgPool` (CVE-2021-29550)
- Fixes an OOB read in `MatrixTriangularSolve` (CVE-2021-29551)
- Fixes a heap OOB in `QuantizeAndDequantizeV3` (CVE-2021-29553)
- Fixes a `CHECK`-failure in `UnsortedSegmentJoin` (CVE-2021-29552)
- Fixes a division by 0 in `DenseCountSparseOutput` (CVE-2021-29554)
- Fixes a division by 0 in `FusedBatchNorm` (CVE-2021-29555)
- Fixes a division by 0 in `SparseMatMul` (CVE-2021-29557)
- Fixes a division by 0 in `Reverse` (CVE-2021-29556)
- Fixes a heap buffer overflow in `SparseSplit` (CVE-2021-29558)
- Fixes a heap OOB access in unicode ops (CVE-2021-29559)
- Fixes a heap buffer overflow in `RaggedTensorToTensor` (CVE-2021-29560)
- Fixes a `CHECK`-fail in `LoadAndRemapMatrix` (CVE-2021-29561)
- Fixes a `CHECK`-fail in `tf.raw_ops.IRFFT` (CVE-2021-29562)
- Fixes a `CHECK`-fail in `tf.raw_ops.RFFT` (CVE-2021-29563)
- Fixes a null pointer dereference in `EditDistance` (CVE-2021-29564)
- Fixes a null pointer dereference in `SparseFillEmptyRows` (CVE-2021-29565)
- Fixes a heap OOB access in `Dilation2DBackpropInput` (CVE-2021-29566)
- Fixes a reference binding to null in `ParameterizedTruncatedNormal` (CVE-2021-29568)
- Fixes a set of vulnerabilities caused by lack of validation in `SparseDenseCwiseMul` (CVE-2021-29567)
- Fixes a heap out of bounds read in `MaxPoolGradWithArgmax` (CVE-2021-29570)
- Fixes a heap out of bounds read in `RequantizationRange` (CVE-2021-29569)
- Fixes a memory corruption in `DrawBoundingBoxesV2` (CVE-2021-29571)
- Fixes a reference binding to nullptr in `SdcaOptimizer` (CVE-2021-29572)
- Fixes an overflow and a denial of service in `tf.raw_ops.ReverseSequence` (CVE-2021-29575)
- Fixes a division by 0 in `MaxPoolGradWithArgmax` (CVE-2021-29573)
- Fixes an undefined behavior in `MaxPool3DGradGrad` (CVE-2021-29574)
- Fixes a heap buffer overflow in `MaxPool3DGradGrad` (CVE-2021-29576)
- Fixes a heap buffer overflow in `AvgPool3DGrad` (CVE-2021-29577)
- Fixes an undefined behavior and a `CHECK`-fail in `FractionalMaxPoolGrad` (CVE-2021-29580)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-29578)
- Fixes a heap buffer overflow in `MaxPoolGrad` (CVE-2021-29579)
- Fixes a segfault in `CTCBeamSearchDecoder` (CVE-2021-29581)
- Fixes a heap OOB read in `tf.raw_ops.Dequantize` (CVE-2021-29582)
- Fixes a `CHECK`-fail due to integer overflow (CVE-2021-29584)
- Fixes a heap buffer overflow and undefined behavior in `FusedBatchNorm` (CVE-2021-29583)
- Fixes a division by zero in padding computation in TFLite (CVE-2021-29585)
- Fixes a division by zero in optimized pooling implementations in TFLite (CVE-2021-29586)
- Fixes a division by zero in TFLite's implementation of `SpaceToDepth` (CVE-2021-29587)
- Fixes a division by zero in TFLite's implementation of `GatherNd` (CVE-2021-29589)
- Fixes a division by zero in TFLite's implementation of `TransposeConv` (CVE-2021-29588)
- Fixes a heap OOB read in TFLite's implementation of `Minimum` or `Maximum` (CVE-2021-29590)
- Fixes a null pointer dereference in TFLite's `Reshape` operator (CVE-2021-29592)
- Fixes a stack overflow due to looping TFLite subgraph (CVE-2021-29591)
- Fixes a division by zero in TFLite's implementation of `DepthToSpace` (CVE-2021-29595)
- Fixes a division by zero in TFLite's convolution code (CVE-2021-29594)
- Fixes a division by zero in TFLite's implementation of `EmbeddingLookup` (CVE-2021-29596)
- Fixes a division by zero in TFLite's implementation of `BatchToSpaceNd` (CVE-2021-29593)
- Fixes a division by zero in TFLite's implementation of `SpaceToBatchNd` (CVE-2021-29597)
- Fixes a division by zero in TFLite's implementation of `SVDF` (CVE-2021-29598)
- Fixes a division by zero in TFLite's implementation of `Split` (CVE-2021-29599)
- Fixes a division by zero in TFLite's implementation of `OneHot` (CVE-2021-29600)
- Fixes a division by zero in TFLite's implementation of `DepthwiseConv` (CVE-2021-29602)
- Fixes a division by zero in TFLite's implementation of hashtable lookup (CVE-2021-29604)
- Fixes an integer overflow in TFLite concatenation (CVE-2021-29601)
- Fixes an integer overflow in TFLite memory allocation (CVE-2021-29605)
- Fixes a heap OOB write in TFLite (CVE-2021-29603)
- Fixes a heap OOB read in TFLite (CVE-2021-29606)
- Fixes a heap OOB and null pointer dereference in `RaggedTensorToTensor` (CVE-2021-29608)
- Fixes vulnerabilities caused by incomplete validation in `SparseAdd` (CVE-2021-29609)
- Fixes vulnerabilities caused by incomplete validation in `SparseSparseMinimum` (CVE-2021-29607)
- Fixes vulnerabilities caused by incomplete validation in `SparseReshape` (CVE-2021-29611)
- Fixes vulnerabilities caused by invalid validation in `QuantizeAndDequantizeV2` (CVE-2021-29610)
- Fixes a heap buffer overflow in `BandedTriangularSolve` (CVE-2021-29612)
- Fixes vulnerabilities caused by incomplete validation in `tf.raw_ops.CTCLoss` (CVE-2021-29613)
- Fixes an interpreter crash from vulnerabilities in `tf.io.decode_raw` (CVE-2021-29614)
- Fixes a stack overflow in `ParseAttrValue` with nested tensors (CVE-2021-29615)
- Fixes a null dereference in Grappler's `TrySimplify` (CVE-2021-29616)
- Fixes a crash in `tf.transpose` with complex inputs (CVE-2021-29618)
- Fixes a crash in `tf.strings.substr` due to `CHECK`-fail (CVE-2021-29617)
- Fixes a segfault in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-29619)
- Fixes a segfault in `tf.raw_ops.ImmutableConst` (CVE-2021-29539)
- Updates `curl` to `7.76.0` to handle CVE-2020-8169, CVE-2020-8177, CVE-2020-8231, CVE-2020-8284, CVE-2020-8285 and CVE-2020-8286.
Release 2.1.4
This release introduces several vulnerability fixes:
- Fixes a heap buffer overflow in `RaggedBinCount` (CVE-2021-29512)
- Fixes a heap out of bounds write in `RaggedBinCount` (CVE-2021-29514)
- Fixes a type confusion during tensor casts which leads to dereferencing null pointers (CVE-2021-29513)
- Fixes a reference binding to null pointer in `MatrixDiag*` ops (CVE-2021-29515)
- Fixes a null pointer dereference via invalid Ragged Tensors (CVE-2021-29516)
- Fixes a division by zero in `Conv3D` (CVE-2021-29517)
- Fixes vulnerabilities where session operations in eager mode lead to null pointer dereferences (CVE-2021-29518)
- Fixes a `CHECK`-fail in `SparseCross` caused by type confusion (CVE-2021-29519)
- Fixes a segfault in `SparseCountSparseOutput` (CVE-2021-29521)
- Fixes a heap buffer overflow in `Conv3DBackprop*` (CVE-2021-29520)
- Fixes a division by 0 in `Conv3DBackprop*` (CVE-2021-29522)
- Fixes a `CHECK`-fail in `AddManySparseToTensorsMap` (CVE-2021-29523)
- Fixes a division by 0 in `Conv2DBackpropFilter` (CVE-2021-29524)
- Fixes a division by 0 in `Conv2DBackpropInput` (CVE-2021-29525)
- Fixes a division by 0 in `Conv2D` (CVE-2021-29526)
- Fixes a division by 0 in `QuantizedConv2D` (CVE-2021-29527)
- Fixes a division by 0 in `QuantizedMul` (CVE-2021-29528)
- Fixes vulnerabilities caused by invalid validation in `SparseMatrixSparseCholesky` (CVE-2021-29530)
- Fixes a heap buffer overflow caused by rounding (CVE-2021-29529)
- Fixes a `CHECK`-fail in `tf.raw_ops.EncodePng` (CVE-2021-29531)
- Fixes a heap out of bounds read in `RaggedCross` (CVE-2021-29532)
- Fixes a `CHECK`-fail in `DrawBoundingBoxes` (CVE-2021-29533)
- Fixes a heap buffer overflow in `QuantizedMul` (CVE-2021-29535)
- Fixes a `CHECK`-fail in `SparseConcat` (CVE-2021-29534)
- Fixes a heap buffer overflow in `QuantizedResizeBilinear` (CVE-2021-29537)
- Fixes a heap buffer overflow in `QuantizedReshape` (CVE-2021-29536)
- Fixes a division by zero in `Conv2DBackpropFilter` (CVE-2021-29538)
- Fixes a heap buffer overflow in `Conv2DBackpropFilter` (CVE-2021-29540)
- Fixes a heap buffer overflow in `StringNGrams` (CVE-2021-29542)
- Fixes a null pointer dereference in `StringNGrams` (CVE-2021-29541)
- Fixes a `CHECK`-fail in `QuantizeAndDequantizeV4Grad` (CVE-2021-29544)
- Fixes a `CHECK`-fail in `CTCGreedyDecoder` (CVE-2021-29543)
- Fixes a heap buffer overflow in `SparseTensorToCSRSparseMatrix` (CVE-2021-29545)
- Fixes a division by 0 in `QuantizedBiasAdd` (CVE-2021-29546)
- Fixes a heap out of bounds in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29547)
- Fixes a division by 0 in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29548)
- Fixes a division by 0 in `QuantizedAdd` (CVE-2021-29549)
- Fixes a division by 0 in `FractionalAvgPool` (CVE-2021-29550)
- Fixes an OOB read in `MatrixTriangularSolve` (CVE-2021-29551)
- Fixes a heap OOB in `QuantizeAndDequantizeV3` (CVE-2021-29553)
- Fixes a `CHECK`-failure in `UnsortedSegmentJoin` (CVE-2021-29552)
- Fixes a division by 0 in `DenseCountSparseOutput` (CVE-2021-29554)
- Fixes a division by 0 in `FusedBatchNorm` (CVE-2021-29555)
- Fixes a division by 0 in `SparseMatMul` (CVE-2021-29557)
- Fixes a division by 0 in `Reverse` (CVE-2021-29556)
- Fixes a heap buffer overflow in `SparseSplit` (CVE-2021-29558)
- Fixes a heap OOB access in unicode ops (CVE-2021-29559)
- Fixes a heap buffer overflow in `RaggedTensorToTensor` (CVE-2021-29560)
- Fixes a `CHECK`-fail in `LoadAndRemapMatrix` (CVE-2021-29561)
- Fixes a `CHECK`-fail in `tf.raw_ops.IRFFT` (CVE-2021-29562)
- Fixes a `CHECK`-fail in `tf.raw_ops.RFFT` (CVE-2021-29563)
- Fixes a null pointer dereference in `EditDistance` (CVE-2021-29564)
- Fixes a null pointer dereference in `SparseFillEmptyRows` (CVE-2021-29565)
- Fixes a heap OOB access in `Dilation2DBackpropInput` (CVE-2021-29566)
- Fixes a reference binding to null in `ParameterizedTruncatedNormal` (CVE-2021-29568)
- Fixes a set of vulnerabilities caused by lack of validation in `SparseDenseCwiseMul` (CVE-2021-29567)
- Fixes a heap out of bounds read in `MaxPoolGradWithArgmax` (CVE-2021-29570)
- Fixes a heap out of bounds read in `RequantizationRange` (CVE-2021-29569)
- Fixes a memory corruption in `DrawBoundingBoxesV2` (CVE-2021-29571)
- Fixes a reference binding to nullptr in `SdcaOptimizer` (CVE-2021-29572)
- Fixes an overflow and a denial of service in `tf.raw_ops.ReverseSequence` (CVE-2021-29575)
- Fixes a division by 0 in `MaxPoolGradWithArgmax` (CVE-2021-29573)
- Fixes an undefined behavior in `MaxPool3DGradGrad` (CVE-2021-29574)
- Fixes a heap buffer overflow in `MaxPool3DGradGrad` (CVE-2021-29576)
- Fixes a heap buffer overflow in `AvgPool3DGrad` (CVE-2021-29577)
- Fixes an undefined behavior and a `CHECK`-fail in `FractionalMaxPoolGrad` (CVE-2021-29580)
- Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-29578)
- Fixes a heap buffer overflow in `MaxPoolGrad` (CVE-2021-29579)
- Fixes a segfault in `CTCBeamSearchDecoder` (CVE-2021-29581)
- Fixes a heap OOB read in `tf.raw_ops.Dequantize` (CVE-2021-29582)
- Fixes a `CHECK`-fail due to integer overflow (CVE-2021-29584)
- Fixes a heap buffer overflow and undefined behavior in `FusedBatchNorm` (CVE-2021-29583)
- Fixes a division by zero in padding computation in TFLite (CVE-2021-29585)
- Fixes a division by zero in optimized pooling implementations in TFLite (CVE-2021-29586)
- Fixes a division by zero in TFLite's implementation of `SpaceToDepth` (CVE-2021-29587)
- Fixes a division by zero in TFLite's implementation of `GatherNd` (CVE-2021-29589)
- Fixes a division by zero in TFLite's implementation of `TransposeConv` (CVE-2021-29588)
- Fixes a heap OOB read in TFLite's implementation of `Minimum` or `Maximum` (CVE-2021-29590)
- Fixes a null pointer dereference in TFLite's `Reshape` operator (CVE-2021-29592)
- Fixes a stack overflow due to looping TFLite subgraph (CVE-2021-29591)
- Fixes a division by zero in TFLite's implementation of `DepthToSpace` (CVE-2021-29595)
- Fixes a division by zero in TFLite's convolution code (CVE-2021-29594)
- Fixes a division by zero in TFLite's implementation of `EmbeddingLookup` (CVE-2021-29596)
- Fixes a division by zero in TFLite's implementation of `BatchToSpaceNd` (CVE-2021-29593)
- Fixes a division by zero in TFLite's implementation of `SpaceToBatchNd` (CVE-2021-29597)
- Fixes a division by zero in TFLite's implementation of `SVDF` (CVE-2021-29598)
- Fixes a division by zero in TFLite's implementation of `Split` (CVE-2021-29599)
- Fixes a division by zero in TFLite's implementation of `OneHot` (CVE-2021-29600)
- Fixes a division by zero in TFLite's implementation of `DepthwiseConv` (CVE-2021-29602)
- Fixes a division by zero in TFLite's implementation of hashtable lookup (CVE-2021-29604)
- Fixes an integer overflow in TFLite concatenation (CVE-2021-29601)
- Fixes an integer overflow in TFLite memory allocation (CVE-2021-29605)
- Fixes a heap OOB write in TFLite (CVE-2021-29603)
- Fixes a heap OOB read in TFLite (CVE-2021-29606)
- Fixes a heap OOB and null pointer dereference in `RaggedTensorToTensor` (CVE-2021-29608)
- Fixes vulnerabilities caused by incomplete validation in `SparseAdd` (CVE-2021-29609)
- Fixes vulnerabilities caused by incomplete validation in `SparseSparseMinimum` (CVE-2021-29607)
- Fixes vulnerabilities caused by incomplete validation in `SparseReshape` (CVE-2021-29611)
- Fixes vulnerabilities caused by invalid validation in `QuantizeAndDequantizeV2` (CVE-2021-29610)
- Fixes a heap buffer overflow in `BandedTriangularSolve` (CVE-2021-29612)
- Fixes vulnerabilities caused by incomplete validation in `tf.raw_ops.CTCLoss` (CVE-2021-29613)
- Fixes an interpreter crash from vulnerabilities in `tf.io.decode_raw` (CVE-2021-29614)
- Fixes a stack overflow in `ParseAttrValue` with nested tensors (CVE-2021-29615)
- Fixes a null dereference in Grappler's `TrySimplify` (CVE-2021-29616)
- Fixes a crash in `tf.transpose` with complex inputs (CVE-2021-29618)
- Fixes a crash in `tf.strings.substr` due to `CHECK`-fail (CVE-2021-29617)
- Fixes a segfault in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-29619)
- Fixes a segfault in `tf.raw_ops.ImmutableConst` (CVE-2021-29539)
- Updates `curl` to `7.76.0` to handle CVE-2020-8169, CVE-2020-8177, CVE-2020-8231, CVE-2020-8284, CVE-2020-8285 and CVE-2020-8286.
Release 2.5.0
Major Features and Improvements
- Support for Python 3.9 has been added.
- `tf.data`:
  - `tf.data` service now supports strict round-robin reads, which is useful for synchronous training workloads where example sizes vary. With strict round-robin reads, users can guarantee that consumers get similar-sized examples in the same step.
  - `tf.data` service now supports optional compression. Previously data would always be compressed, but now you can disable compression by passing `compression=None` to `tf.data.experimental.service.distribute(...)`.
  - `tf.data.Dataset.batch()` now supports `num_parallel_calls` and `deterministic` arguments. `num_parallel_calls` is used to indicate that multiple input batches should be computed in parallel. With `num_parallel_calls` set, `deterministic` is used to indicate that outputs can be obtained in a non-deterministic order.
  - Options returned by `tf.data.Dataset.options()` are no longer mutable.
  - `tf.data` input pipelines can now be executed in debug mode, which disables any asynchrony, parallelism, or non-determinism and forces Python execution (as opposed to trace-compiled graph execution) of user-defined functions passed into transformations such as `map`. The debug mode can be enabled through `tf.data.experimental.enable_debug_mode()`.
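The new `Dataset.batch()` arguments can be combined as in this minimal sketch, which parallelizes batch computation while preserving output order:

```python
import tensorflow as tf

# Batch an 8-element dataset, computing batches in parallel but
# preserving output order via deterministic=True.
ds = tf.data.Dataset.range(8)
ds = ds.batch(4, num_parallel_calls=tf.data.AUTOTUNE, deterministic=True)
batches = [b.numpy().tolist() for b in ds]
# batches == [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Setting `deterministic=False` instead allows batches to be yielded as soon as they are ready, trading reproducible order for throughput.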
- `tf.lite`:
  - Enabled the new MLIR-based quantization backend by default.
    - The new backend is used for 8-bit full integer post-training quantization.
    - The new backend removes the redundant rescales and fixes some bugs (shared weight/bias, extremely small scales, etc.).
    - Set `experimental_new_quantizer` in `tf.lite.TFLiteConverter` to `False` to disable this change.
- `tf.keras`:
  - `tf.keras.metrics.AUC` now supports logit predictions.
  - Enabled a new supported input type in `Model.fit`, `tf.keras.utils.experimental.DatasetCreator`, which takes a callable, `dataset_fn`. `DatasetCreator` is intended to work across all `tf.distribute` strategies, and is the only input type supported for Parameter Server strategy.
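As a quick illustration of the logit support, `AUC` can now consume raw logits directly (a minimal sketch; `from_logits=True` applies a sigmoid internally before bucketing):

```python
import tensorflow as tf

# AUC computed directly from logits instead of probabilities.
m = tf.keras.metrics.AUC(from_logits=True)
m.update_state([0, 0, 1, 1], [-2.0, -1.0, 1.0, 2.0])  # labels, logits
# Perfectly separated logits give an AUC of 1.0.
print(float(m.result()))
```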
- `tf.distribute`:
  - `tf.distribute.experimental.ParameterServerStrategy` now supports training with Keras `Model.fit` when used with `DatasetCreator`.
  - Creating `tf.random.Generator` under `tf.distribute.Strategy` scopes is now allowed (except for `tf.distribute.experimental.CentralStorageStrategy` and `tf.distribute.experimental.ParameterServerStrategy`). Different replicas will get different random-number streams.
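A minimal sketch of the new behavior, using a single-device `MirroredStrategy` so it runs anywhere (with multiple devices, each replica would draw from its own stream):

```python
import tensorflow as tf

# A single-device MirroredStrategy is enough to demonstrate the scope rule.
strategy = tf.distribute.MirroredStrategy(["CPU:0"])
with strategy.scope():
    gen = tf.random.Generator.from_seed(1234)  # now legal under a strategy scope

# With one replica, strategy.run returns the tensor directly.
sample = strategy.run(lambda: gen.normal(shape=[2, 3]))
print(sample.shape)
```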
- TPU embedding support
  - Added `profile_data_directory` to `EmbeddingConfigSpec` in `_tpu_estimator_embedding.py`. This allows embedding lookup statistics gathered at runtime to be used in embedding layer partitioning decisions.
- PluggableDevice
  - Third-party devices can now connect to TensorFlow as plug-ins through the StreamExecutor C API and the PluggableDevice interface.
    - Add custom ops and kernels through the kernel and op registration C API.
    - Register custom graph optimization passes with the graph optimization C API.
- oneAPI Deep Neural Network Library (oneDNN) CPU performance optimizations from Intel-optimized TensorFlow are now available in the official x86-64 Linux and Windows builds.
  - They are off by default. Enable them by setting the environment variable `TF_ENABLE_ONEDNN_OPTS=1`.
  - We do not recommend using them on GPU systems, as they have not been sufficiently tested with GPUs yet.
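The oneDNN optimizations can be toggled per run from the shell (a sketch; the `train.py` script name is hypothetical):

```shell
# Enable oneDNN optimizations for a single run.
TF_ENABLE_ONEDNN_OPTS=1 python train.py

# Or export it for the whole shell session.
export TF_ENABLE_ONEDNN_OPTS=1
```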
- TensorFlow pip packages are now built with CUDA 11.2 and cuDNN 8.1.0.
Breaking Changes
- The `TF_CPP_MIN_VLOG_LEVEL` environment variable has been renamed to `TF_CPP_MAX_VLOG_LEVEL`, which correctly describes its effect.
Bug Fixes and Other Changes
- `tf.keras`:
  - Preprocessing layers API consistency changes:
    - `StringLookup` added `output_mode`, `sparse`, and `pad_to_max_tokens` arguments with the same semantics as `TextVectorization`.
    - `IntegerLookup` added `output_mode`, `sparse`, and `pad_to_max_tokens` arguments with the same semantics as `TextVectorization`. Renamed `max_values`, `oov_value` and `mask_value` to `max_tokens`, `oov_token` and `mask_token` to align with `StringLookup` and `TextVectorization`.
    - `TextVectorization` default for `pad_to_max_tokens` switched to `False`.
    - `CategoryEncoding` no longer supports `adapt`; `IntegerLookup` now supports equivalent functionality. The `max_tokens` argument was renamed to `num_tokens`.
    - `Discretization` added a `num_bins` argument for learning bin boundaries through calling `adapt` on a dataset. Renamed the `bins` argument to `bin_boundaries` for specifying bins without `adapt`.
  - Improvements to model saving/loading:
    - `model.load_weights` now accepts paths to saved models.
  - Keras inputs can now be created directly from arbitrary `tf.TypeSpec`s.
  - Two new learning rate schedules added: `tf.keras.optimizers.schedules.CosineDecay` and `tf.keras.optimizers.schedules.CosineDecayRestarts`.
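A minimal sketch of the `StringLookup` layer for orientation (with the current defaults, index 0 is reserved for the single out-of-vocabulary bucket and vocabulary indices start at 1):

```python
import tensorflow as tf

# Map known strings to integer ids; unknown strings fall into the OOV bucket.
layer = tf.keras.layers.StringLookup(vocabulary=["a", "b", "c"])
ids = layer(tf.constant(["a", "c", "z"]))  # "z" is out of vocabulary
print(ids.numpy().tolist())  # [1, 3, 0]
```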
- `tf.data`:
  - Exposing `tf.data.experimental.ExternalStatePolicy`, which can be used to control how external state should be handled during dataset serialization or iterator checkpointing.
  - Changing `tf.data.experimental.save` to store the type specification of the dataset elements. This avoids the need for explicitly specifying the `element_spec` argument of `tf.data.experimental.load` when loading the previously saved dataset.
  - Add `.element_spec` property to `tf.data.DatasetSpec` to access the inner spec. This can be used to extract the structure of nested datasets.
  - Add `tf.data.experimental.AutoShardingPolicy.HINT`, which can be used to provide hints to tf.distribute-based auto-sharding as to where in the input pipeline to insert sharding transformations.
  - Make `tf.data.Options` persistent across `tf.function` and `GraphDef` boundaries.
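Because the element type specification is now stored alongside the data, a save/load round trip no longer needs `element_spec` (a sketch using a temporary directory):

```python
import tempfile
import tensorflow as tf

path = tempfile.mkdtemp()
tf.data.experimental.save(tf.data.Dataset.range(5), path)

# element_spec no longer has to be passed explicitly on load.
loaded = tf.data.experimental.load(path)
print([int(x) for x in loaded])  # [0, 1, 2, 3, 4]
```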
- XLA compilation:
  - `tf.function(experimental_compile=True)` has become a stable API, renamed `tf.function(jit_compile=True)`.
  - XLA can now compile MirroredStrategy: the step function passed to `strategy.run` can now be annotated with `jit_compile=True`.
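The stable spelling of XLA compilation for a function is a one-line change (a minimal sketch):

```python
import tensorflow as tf

# jit_compile=True replaces the old experimental_compile=True.
@tf.function(jit_compile=True)
def f(x):
    return x * x + 1.0

print(float(f(tf.constant(2.0))))  # 5.0
```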
- `tf.distribute`:
  - Rename `experimental_prefetch_to_device` in `tf.distribute.InputOptions` to `experimental_fetch_to_device` to better reflect the purpose.
- `tf.lite`:
  - class `tflite::Subgraph`:
    - Removed the `tensors()` method and the non-const overload of the `nodes_and_registration()` method, both of which were previously documented as temporary and to be removed.
      - Uses of `tensors()` can be replaced by calling the existing methods `tensors_size()` and `tensor(int)`.
      - Uses of the non-const overload of `nodes_and_registration` can be replaced by calling the existing methods `nodes_size()` and `context()`, and then calling the `GetNodeAndRegistration` method in the `TfLiteContext` returned by `context()`.
  - NNAPI
    - Removed deprecated `Interpreter::UseNNAPI(bool)` C++ API.
      - Use `NnApiDelegate()` and related delegate configuration methods directly.
    - Replaced the model cache key computation algorithm with one guaranteed to be stable across runs.
  - 16-bit quantization
    - Added int16x8 support for ABS, REDUCE_MAX and REDUCE_MIN operators.
    - Additional tests and fixes for ADD and SUB operators.
  - Added support for saved model's session initializer through `TFLiteConverter.from_saved_model`.
  - Added DEPTH_TO_SPACE support in post-training quantization.
  - Added dynamic range quantization support for the BatchMatMul op.
    - Both symmetric and asymmetric quantized input tensors are supported.
  - Add `RFFT2D` as builtin op. (`RFFT2D` also supports `RFFTD`.) Currently only supports float32 input.
  - Add 5D support to `SLICE` op.
  - TFLite supports SignatureDef:
    - TFLiteConverter exports models with SignatureDef.
    - Interpreter supports getting a list of signatures and getting a callable function for a given SignatureDef.
  - Add int8 support for `ReshapeV2`.
  - Add experimental support for optimization with sparsity.
  - Add nominal support for unsigned 32-bit integer tensor types. Note that very few TFLite kernels support this type natively, so its use in mobile ML authoring is generally discouraged.
  - Add support for static hash tables through `TFLiteConverter.from_saved_model`.
  - The Python TF Lite Interpreter bindings now have an option `experimental_preserve_all_tensors` to aid in debugging conversion.
  - Quantized x86 execution defaults to Ruy GEMM library for platforms with AVX support.
  - Deprecate `tf.compat.v1.lite.experimental.get_potentially_supported_ops`. Use `tf.lite.TFLiteConverter` directly to check whether a model is convertible.
  - Add support to select one of three different built-in op resolvers.
  - Enabled post-training with calibrations for models that require user-provided TensorFlow Lite custom op libraries via `converter.target_spec._experimental_custom_op_registerers`, used in the Python Interpreter API.
- TF Core:
  - Corrected higher-order gradients of control flow constructs (`tf.cond`, `tf.while_loop`, and compositions like `tf.foldl`) computed with `tf.GradientTape` inside a `tf.function`.
  - Changed the default step size in `gradient_checker_v2.compute_gradients` to be exactly representable as a binary floating point number. This avoids polluting gradient approximations needlessly, which in some cases leads to false negatives in op gradient tests.
  - Added `tf.config.experimental.get_memory_info`, returning a dict with the current and peak memory usage. Deprecated `tf.config.experimental.get_memory_usage` in favor of this new function.
  - Extended `tf.config.experimental.enable_tensor_float_32_execution` to control Tensor-Float-32 evaluation in RNNs.
  - Added an `experimental_payloads` field to `tf.errors.OpError` and its subclasses to support more detailed error reporting. This is inspired by Abseil Status payloads: https://github.com/abseil/abseil-cpp/blob/master/absl/status/status.h
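The corrected higher-order control-flow gradients can be exercised with nested tapes around a `tf.cond` inside a `tf.function` (a minimal sketch; for y = x³ the second derivative is 6x):

```python
import tensorflow as tf

@tf.function
def second_derivative(x):
    with tf.GradientTape() as outer:
        outer.watch(x)
        with tf.GradientTape() as inner:
            inner.watch(x)
            # A control-flow construct whose higher-order gradient was corrected.
            y = tf.cond(x > 0.0, lambda: x * x * x, lambda: -x)
        dy_dx = inner.gradient(y, x)  # 3x^2 on the positive branch
    return outer.gradient(dy_dx, x)   # 6x

print(float(second_derivative(tf.constant(2.0))))  # 12.0
```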
- `tf.summary`:
  - New `tf.summary.graph` allows manual write of a TensorFlow graph (`tf.Graph` or `tf.compat.v1.GraphDef`) as a summary. This is not a replacement for the trace-based API.
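A minimal sketch of writing a graph summary manually from a concrete function (using a temporary log directory):

```python
import glob
import os
import tempfile
import tensorflow as tf

@tf.function
def add_one(x):
    return x + 1.0

logdir = tempfile.mkdtemp()
writer = tf.summary.create_file_writer(logdir)
graph = add_one.get_concrete_function(tf.TensorSpec([], tf.float32)).graph

# Manually write the graph as a summary, outside any trace-based profiling.
with writer.as_default():
    tf.summary.graph(graph)
writer.close()

print(bool(glob.glob(os.path.join(logdir, "events.*"))))  # True
```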
- Set `/d2ReducedOptimizeHugeFunctions` by default for Windows builds. This provides a big compile-time speedup, and effectively raises the minimum supported MSVC version to 16.4 (current: 16.8).
- TensorRT
  - Removed the deprecated `session_config` parameter for the TF1-TRT converter `TrtGraphConverter`. Previously, we issued a warning when the value of the parameter was not None.
  - The TF2-TRT converter `TrtGraphConverterV2` takes an object of class `TrtConversionParams` as a parameter. Removed three deprecated fields from this class: `rewriter_config_template`, `is_dynamic_op`, and `max_batch_size`. Previously, we issued a warning when the value of `rewriter_config_template` was not None, and an error when the value of `is_dynamic_op` was not True; the value of `max_batch_size` was not used for building TensorRT engines. Added the parameter `use_dynamic_shape` to enable dynamic shape support (disabled by default) and `dynamic_shape_profile_strategy` for selecting a dynamic shape profile strategy (the default profile strategy is `Range`).
  - Issue a warning when the function `get_tensorrt_rewriter_config` is used.
- TF XLA
  - Added new enum value `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED` to `tf.config.experimental.mlir_bridge_rollout` to enable a "safe" mode. This runs the MLIR bridge only when an analysis of the graph determines that it is safe to run.
  - Added new enum value `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED` to `tf.config.experimental.mlir_bridge_rollout` to enable a fallback for the MLIR bridge in a "safe" mode. This runs the MLIR bridge in a FallbackEnabled mode when an analysis of the graph determines that the graph does not have unsupported features.
- Deterministic Op Functionality:
  - Add determinism-unimplemented exception-throwing to the segment-sum ops. When the environment variable `TF_DETERMINISTIC_OPS` is set to `"true"` or `"1"` (when op-determinism is expected), an attempt to run the following ops on a GPU will throw `tf.errors.UnimplementedError` (with an understandable message) when `data` is a floating-point type, including complex types (if supported): `tf.math.segment_prod`, `tf.math.segment_sum`, `tf.math.unsorted_segment_mean`, `tf.math.unsorted_segment_sqrt_n`, `tf.math.unsorted_segment_prod`, `tf.math.unsorted_segment_sum`, and therefore also `tf.convert_to_tensor` when `value` is of type `tf.IndexedSlices` (such as in the backprop through `tf.gather` into a dense embedding). See issue 39751, which this change addresses but does not solve. This exception-throwing behavior can be disabled by setting the environment variable `TF_DISABLE_SEGMENT_REDUCTION_OP_DETERMINISM_EXCEPTIONS` to `"true"` or `"1"`. For more information about these changes, see the description in pull request 47772.
  - In previous versions of TensorFlow, when a GPU was available, `tf.sparse.sparse_dense_matmul` introduced truly random noise in the forward path for data of type `tf.float32` but not for data of type `tf.float64` (for which there was no GPU implementation). In this current release, GPU support for other floating-point types (`tf.float16`, `tf.float64`, `tf.complex64`, and `tf.complex128`) has been added for this op. If you were relying on the determinism of the `tf.float64` CPU implementation being automatically selected because of the absence of the `tf.float64` GPU implementation, you will either need to force the op to run on the CPU or use a different data type.
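The relevant environment variables can be set before launching a job (a sketch; the `train.py` script name is hypothetical):

```shell
# Request op-determinism; unsupported GPU segment ops will then raise
# tf.errors.UnimplementedError instead of running non-deterministically.
export TF_DETERMINISTIC_OPS=1

# Optionally suppress those exceptions while keeping determinism requested.
export TF_DISABLE_SEGMENT_REDUCTION_OP_DETERMINISM_EXCEPTIONS=1

python train.py
```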
- Security
  - Fixes a heap buffer overflow in `RaggedBinCount` (CVE-2021-29512)
  - Fixes a heap out of bounds write in `RaggedBinCount` (CVE-2021-29514)
  - Fixes a type confusion during tensor casts which leads to dereferencing null pointers (CVE-2021-29513)
  - Fixes a reference binding to null pointer in `MatrixDiag*` ops (CVE-2021-29515)
  - Fixes a null pointer dereference via invalid Ragged Tensors (CVE-2021-29516)
  - Fixes a division by zero in `Conv3D` (CVE-2021-29517)
  - Fixes vulnerabilities where session operations in eager mode lead to null pointer dereferences (CVE-2021-29518)
  - Fixes a `CHECK`-fail in `SparseCross` caused by type confusion (CVE-2021-29519)
  - Fixes a segfault in `SparseCountSparseOutput` (CVE-2021-29521)
  - Fixes a heap buffer overflow in `Conv3DBackprop*` (CVE-2021-29520)
  - Fixes a division by 0 in `Conv3DBackprop*` (CVE-2021-29522)
  - Fixes a `CHECK`-fail in `AddManySparseToTensorsMap` (CVE-2021-29523)
  - Fixes a division by 0 in `Conv2DBackpropFilter` (CVE-2021-29524)
  - Fixes a division by 0 in `Conv2DBackpropInput` (CVE-2021-29525)
  - Fixes a division by 0 in `Conv2D` (CVE-2021-29526)
  - Fixes a division by 0 in `QuantizedConv2D` (CVE-2021-29527)
  - Fixes a division by 0 in `QuantizedMul` (CVE-2021-29528)
  - Fixes vulnerabilities caused by invalid validation in `SparseMatrixSparseCholesky` (CVE-2021-29530)
  - Fixes a heap buffer overflow caused by rounding (CVE-2021-29529)
  - Fixes a `CHECK`-fail in `tf.raw_ops.EncodePng` (CVE-2021-29531)
  - Fixes a heap out of bounds read in `RaggedCross` (CVE-2021-29532)
  - Fixes a `CHECK`-fail in `DrawBoundingBoxes` (CVE-2021-29533)
  - Fixes a heap buffer overflow in `QuantizedMul` (CVE-2021-29535)
  - Fixes a `CHECK`-fail in `SparseConcat` (CVE-2021-29534)
  - Fixes a heap buffer overflow in `QuantizedResizeBilinear` (CVE-2021-29537)
  - Fixes a heap buffer overflow in `QuantizedReshape` (CVE-2021-29536)
  - Fixes a division by zero in `Conv2DBackpropFilter` (CVE-2021-29538)
  - Fixes a heap buffer overflow in `Conv2DBackpropFilter` (CVE-2021-29540)
  - Fixes a heap buffer overflow in `StringNGrams` (CVE-2021-29542)
  - Fixes a null pointer dereference in `StringNGrams` (CVE-2021-29541)
  - Fixes a `CHECK`-fail in `QuantizeAndDequantizeV4Grad` (CVE-2021-29544)
  - Fixes a `CHECK`-fail in `CTCGreedyDecoder` (CVE-2021-29543)
  - Fixes a heap buffer overflow in `SparseTensorToCSRSparseMatrix` (CVE-2021-29545)
  - Fixes a division by 0 in `QuantizedBiasAdd` (CVE-2021-29546)
  - Fixes a heap out of bounds in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29547)
  - Fixes a division by 0 in `QuantizedBatchNormWithGlobalNormalization` (CVE-2021-29548)
  - Fixes a division by 0 in `QuantizedAdd` (CVE-2021-29549)
  - Fixes a division by 0 in `FractionalAvgPool` (CVE-2021-29550)
  - Fixes an OOB read in `MatrixTriangularSolve` (CVE-2021-29551)
  - Fixes a heap OOB in `QuantizeAndDequantizeV3` (CVE-2021-29553)
  - Fixes a `CHECK`-failure in `UnsortedSegmentJoin` (CVE-2021-29552)
  - Fixes a division by 0 in `DenseCountSparseOutput` (CVE-2021-29554)
  - Fixes a division by 0 in `FusedBatchNorm` (CVE-2021-29555)
  - Fixes a division by 0 in `SparseMatMul` (CVE-2021-29557)
  - Fixes a division by 0 in `Reverse` (CVE-2021-29556)
  - Fixes a heap buffer overflow in `SparseSplit` (CVE-2021-29558)
  - Fixes a heap OOB access in unicode ops (CVE-2021-29559)
  - Fixes a heap buffer overflow in `RaggedTensorToTensor` (CVE-2021-29560)
  - Fixes a `CHECK`-fail in `LoadAndRemapMatrix` (CVE-2021-29561)
  - Fixes a `CHECK`-fail in `tf.raw_ops.IRFFT` (CVE-2021-29562)
  - Fixes a `CHECK`-fail in `tf.raw_ops.RFFT` (CVE-2021-29563)
  - Fixes a null pointer dereference in `EditDistance` (CVE-2021-29564)
  - Fixes a null pointer dereference in `SparseFillEmptyRows` (CVE-2021-29565)
  - Fixes a heap OOB access in `Dilation2DBackpropInput` (CVE-2021-29566)
  - Fixes a reference binding to null in `ParameterizedTruncatedNormal` (CVE-2021-29568)
  - Fixes a set of vulnerabilities caused by lack of validation in `SparseDenseCwiseMul` (CVE-2021-29567)
  - Fixes a heap out of bounds read in `MaxPoolGradWithArgmax` (CVE-2021-29570)
  - Fixes a heap out of bounds read in `RequantizationRange` (CVE-2021-29569)
  - Fixes a memory corruption in `DrawBoundingBoxesV2` (CVE-2021-29571)
  - Fixes a reference binding to nullptr in `SdcaOptimizer` (CVE-2021-29572)
  - Fixes an overflow and a denial of service in `tf.raw_ops.ReverseSequence` (CVE-2021-29575)
  - Fixes a division by 0 in `MaxPoolGradWithArgmax` (CVE-2021-29573)
  - Fixes an undefined behavior in `MaxPool3DGradGrad` (CVE-2021-29574)
  - Fixes a heap buffer overflow in `MaxPool3DGradGrad` (CVE-2021-29576)
  - Fixes a heap buffer overflow in `AvgPool3DGrad` (CVE-2021-29577)
  - Fixes an undefined behavior and a `CHECK`-fail in `FractionalMaxPoolGrad` (CVE-2021-29580)
  - Fixes a heap buffer overflow in `FractionalAvgPoolGrad` (CVE-2021-29578)
  - Fixes a heap buffer overflow in `MaxPoolGrad` (CVE-2021-29579)
  - Fixes a segfault in `CTCBeamSearchDecoder` (CVE-2021-29581)
  - Fixes a heap OOB read in `tf.raw_ops.Dequantize` (CVE-2021-29582)
  - Fixes a `CHECK`-fail due to integer overflow (CVE-2021-29584)
  - Fixes a heap buffer overflow and undefined behavior in `FusedBatchNorm` (CVE-2021-29583)
  - Fixes a division by zero in padding computation in TFLite (CVE-2021-29585)
  - Fixes a division by zero in optimized pooling implementations in TFLite (CVE-2021-29586)
  - Fixes a division by zero in TFLite's implementation of `SpaceToDepth` (CVE-2021-29587)
  - Fixes a division by zero in TFLite's implementation of `GatherNd` (CVE-2021-29589)
  - Fixes a division by zero in TFLite's implementation of `TransposeConv` (CVE-2021-29588)
  - Fixes a heap OOB read in TFLite's implementation of `Minimum` or `Maximum` (CVE-2021-29590)
  - Fixes a null pointer dereference in TFLite's `Reshape` operator (CVE-2021-29592)
  - Fixes a stack overflow due to looping TFLite subgraph (CVE-2021-29591)
  - Fixes a division by zero in TFLite's implementation of `DepthToSpace` (CVE-2021-29595)
  - Fixes a division by zero in TFLite's convolution code (CVE-2021-29594)
  - Fixes a division by zero in TFLite's implementation of `EmbeddingLookup` (CVE-2021-29596)
  - Fixes a division by zero in TFLite's implementation of `BatchToSpaceNd` (CVE-2021-29593)
  - Fixes a division by zero in TFLite's implementation of `SpaceToBatchNd` (CVE-2021-29597)
  - Fixes a division by zero in TFLite's implementation of `SVDF` (CVE-2021-29598)
  - Fixes a division by zero in TFLite's implementation of `Split` (CVE-2021-29599)
  - Fixes a division by zero in TFLite's implementation of `OneHot` (CVE-2021-29600)
  - Fixes a division by zero in TFLite's implementation of `DepthwiseConv` (CVE-2021-29602)
  - Fixes a division by zero in TFLite's implementation of hashtable lookup (CVE-2021-29604)
  - Fixes an integer overflow in TFLite concatenation (CVE-2021-29601)
  - Fixes an integer overflow in TFLite memory allocation (CVE-2021-29605)
  - Fixes a heap OOB write in TFLite (CVE-2021-29603)
  - Fixes a heap OOB read in TFLite (CVE-2021-29606)
  - Fixes a heap OOB and null pointer dereference in `RaggedTensorToTensor` (CVE-2021-29608)
  - Fixes vulnerabilities caused by incomplete validation in `SparseAdd` (CVE-2021-29609)
  - Fixes vulnerabilities caused by incomplete validation in `SparseSparseMinimum` (CVE-2021-29607)
  - Fixes vulnerabilities caused by incomplete validation in `SparseReshape` (CVE-2021-29611)
  - Fixes vulnerabilities caused by invalid validation in `QuantizeAndDequantizeV2` (CVE-2021-29610)
  - Fixes a heap buffer overflow in `BandedTriangularSolve` (CVE-2021-29612)
  - Fixes vulnerabilities caused by incomplete validation in `tf.raw_ops.CTCLoss` (CVE-2021-29613)
  - Fixes an interpreter crash from vulnerabilities in `tf.io.decode_raw` (CVE-2021-29614)
  - Fixes a stack overflow in `ParseAttrValue` with nested tensors (CVE-2021-29615)
  - Fixes a null dereference in Grappler's `TrySimplify` (CVE-2021-29616)
  - Fixes a crash in `tf.transpose` with complex inputs (CVE-2021-29618)
  - Fixes a crash in `tf.strings.substr` due to `CHECK`-fail (CVE-2021-29617)
  - Fixes a segfault in `tf.raw_ops.SparseCountSparseOutput` (CVE-2021-29619)
  - Fixes a segfault in `tf.raw_ops.ImmutableConst` (CVE-2021-29539)
  - Updates `curl` to `7.76.0` to handle CVE-2020-8169, CVE-2020-8177, CVE-2020-8231, CVE-2020-8284, CVE-2020-8285 and CVE-2020-8286.
- Other
  - Added `show_debug_info` to `mlir.convert_graph_def` and `mlir.convert_function`.
  - Added Arm Compute Library (ACL) support to the `--config=mkl_aarch64` build.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Aaron S. Mondal, Abhilash Mahendrakar, Abhinav Upadhyay, Abhishek Kulkarni, Abolfazl Shahbazi, Adam Hillier, Aditya Kane, Ag Ramesh, ahmedsabie, Albert Villanova Del Moral, Aleksey Vitebskiy, Alex Hoffman, Alexander Bayandin, Alfie Edwards, Aman Kishore, Amogh Joshi, andreABbauer, Andrew Goodbody, Andrzej Pomirski, Artemiy Ryabinkov, Ashish Jha, ather, Ayan Moitra, Bairen Yi, Bart Ribbers, Bas Aarts, Behzad Abghari, Ben Arnao, Ben Barsdell, Benjamin Klimczak, bhack, Brendan Collins, Can Wang, Cheng Ren, Chris Leary, Chris Olivier, Clemens Giuliani, Cloud Han, Corey Cole, Cui, Yifeng, Cuong V. Nguyen, Daniel Moore, Dawid Wojciechowski, Ddavis-2015, Dean Wyatte, Denisa Roberts, dependabot[bot], Dmitry Volodin, Dominic Jack, Duncan Riach, dushuai, Elena Zhelezina, Eli Osherovich, Erik Smistad, ewsn1593, Felix Fent, fo40225, François Chollet, Frederic Bastien, Freedom" Koan-Sin Tan, fsx950223, ganand1, gbaned, Georgiy Manuilov, gerbauz, Guillaume Klein, Guozhong Zhuang, Harry Slatyer, Harsh188, henri, Henri Woodcock, Hiran Sarkar, Hollow Man, Håkon Sandsmark, I Wayan Dharmana, icysapphire, Ikko Ashimine, Jab Hofmeier, Jack Hessel, Jacob Valdez, Jakub Jatczak, James Bernardi, Jared Smolens, Jason Zaman, jedlimlx, Jenny Plunkett, Jens Elofsson, Jerry Shih, jgehw, Jia Fu Low, Jim Fisher, jpodivin, Julien Stephan, Jungsub Lim, Junha Park, Junhyuk So, justkw, Kaixi Hou, kashyapraval, Kasra Bigdeli, Kazuaki Ishizaki, Keith Mok, Kevin Cheng, kopytjuk, Kristian Hartikainen, ksood12345, Kulin Seth, kushanam, latyas, Lequn Chen, Leslie-Fang, Long M. Lưu, Lukas Geiger, machineko, Mahmoud Abuzaina, Manish, Mao Yunfei, Maozhou, Ge, Marcin Juszkiewicz, Marcin Owsiany, Marconi Jiang, Marcos Pereira, Maria Romanenko Vexlard, Maria Vexlard, Marius Brehler, marload, Martin Kubovčík, Matej, Mateusz Holenko, Maxiwell S. 
Garcia, Mazhar, mazharul, mbhuiyan, mdfaijul, Michael Gielda, Michael Kuchnik, Michal Szutenberg, Mikhail Stepanov, Milan Straka, Mitchel Humpherys, Mohamed Moselhy, Mohamed Nour Abouelseoud, Måns Bermell, Måns Nilsson, Nathan Luehr, Nico Jahn, Niroop Ammbashankar, Oceania2018, Omri Steiner, Orivej Desh, Oskar Flordal, oujiafan, Patrik Laurell, Paul B. Isaac'S, Paul Klinger, Pawel Piskorski, Pedro Marques, Phat Tran, Piotr Zierhoffer, piyushdatta, Pnikam-Cad, Prashant Kumar, Prateek Gupta, PratsBhatt, Pravin Karandikar, qqq.jq, QQ喵, Quintin, Rama Ketineni, ravikyram, Rehan Guha, rhdong, rmothukuru, Roger Cheng, Rohit Santhanam, rposts, Rsanthanam-Amd, rsun, Rsun-Bdti, Ryan Kuester, ryanking13, Saduf2019, Sami Kama, Samuel Marks, Scott Tseng, Sean Moriarity, Sergey Popov, Sergii Khomenko, Sheng, Yang, shwetaoj, Sidong-Wei, Simon Maurer, Simrit Kaur, Srini511, Srinivasan Narayanamoorthy, Stephan, Stephen Matthews, Sungmann Cho, Sunoru, Suraj Sudhir, Suraj Upadhyay, Taebum Kim, Takayoshi Koizumi, Tamas Bela Feher, Teng Lu, Thibaut Goetghebuer-Planchon, Tomwildenhain-Microsoft, Tony, Traun Leyden, Trent Lo, TVLIgnacy, Tzu-Wei Sung, vaibhav, Vignesh Kothapalli, Vikram Dattu, viktprog, Vinayaka Bandishti, Vincent Abriou, Vishakha Agrawal, Vivek Panyam, Vladimir Silyaev, Võ Văn Nghĩa, wamuir, Wang, Yanzhang, wangsiyu, Waqar Hameed, wxinix, Xiao Yang, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yair Ehrenwald, Yajush Vyas, Yasir Modak, Yimei Sun, Yong Tang, Yosshi999, youshenmebutuo, yqtianust, Yuan Tang, yuanbopeng, Yuriy Chernyshov, Yuta Fukasawa, Zachary Deane-Mayer, Zeno Gantner, Zhoulong Jiang, zhuyie, zilinzhu, 彭震东
# Release 2.4.1

- This release removes the AVX2 requirement from TF 2.4.0.
# Release 2.3.2

## Bug Fixes and Other Changes

- Fixes an access to uninitialized memory in Eigen code (CVE-2020-26266)
- Fixes a security vulnerability caused by lack of validation in
  `tf.raw_ops.DataFormatVecPermute` and `tf.raw_ops.DataFormatDimMap`
  (CVE-2020-26267)
- Fixes a vulnerability caused by attempting to write to immutable memory
  region in `tf.raw_ops.ImmutableConst` (CVE-2020-26268)
- Fixes a `CHECK`-fail in LSTM with zero-length input (CVE-2020-26270)
- Fixes a security vulnerability caused by accessing heap data outside of
  bounds when loading a specially crafted `SavedModel` (CVE-2020-26271)
- Solves an OOM issue on TPUs when XLA contexts use fused average updates
- Updates `libjpeg-turbo` to `2.0.5` to handle CVE-2020-13790.
- Updates `junit` to `4.13.1` to handle CVE-2020-15250.
- Updates `PCRE` to `8.44` to handle CVE-2019-20838 and CVE-2020-14155.
- Updates `sqlite3` to `3.44.0` to keep in sync with master branch.
# Release 2.2.2

## Bug Fixes and Other Changes

- Fixes an access to uninitialized memory in Eigen code (CVE-2020-26266)
- Fixes a security vulnerability caused by lack of validation in
  `tf.raw_ops.DataFormatVecPermute` and `tf.raw_ops.DataFormatDimMap`
  (CVE-2020-26267)
- Fixes a vulnerability caused by attempting to write to immutable memory
  region in `tf.raw_ops.ImmutableConst` (CVE-2020-26268)
- Fixes a `CHECK`-fail in LSTM with zero-length input (CVE-2020-26270)
- Fixes a security vulnerability caused by accessing heap data outside of
  bounds when loading a specially crafted `SavedModel` (CVE-2020-26271)
- Prevents memory leaks in loading `SavedModel`s that import functions
- Updates `libjpeg-turbo` to `2.0.5` to handle CVE-2020-13790.
- Updates `junit` to `4.13.1` to handle CVE-2020-15250.
- Updates `PCRE` to `8.44` to handle CVE-2019-20838 and CVE-2020-14155.
- Updates `sqlite3` to `3.44.0` to keep in sync with master branch.
# Release 2.1.3

## Bug Fixes and Other Changes

- Fixes an access to uninitialized memory in Eigen code (CVE-2020-26266)
- Fixes a security vulnerability caused by lack of validation in
  `tf.raw_ops.DataFormatVecPermute` and `tf.raw_ops.DataFormatDimMap`
  (CVE-2020-26267)
- Fixes a vulnerability caused by attempting to write to immutable memory
  region in `tf.raw_ops.ImmutableConst` (CVE-2020-26268)
- Fixes a `CHECK`-fail in LSTM with zero-length input (CVE-2020-26270)
- Fixes a security vulnerability caused by accessing heap data outside of
  bounds when loading a specially crafted `SavedModel` (CVE-2020-26271)
- Updates `libjpeg-turbo` to `2.0.5` to handle CVE-2020-13790.
- Updates `junit` to `4.13.1` to handle CVE-2020-15250.
- Updates `PCRE` to `8.44` to handle CVE-2019-20838 and CVE-2020-14155.
- Updates `sqlite3` to `3.44.0` to keep in sync with master branch.
- Newer ROCm versions are supported on the 2.1 branch.
# Release 2.0.4

Note that this is the last patch release for the TensorFlow 2.0.x series.

## Bug Fixes and Other Changes

- Fixes an access to uninitialized memory in Eigen code (CVE-2020-26266)
- Fixes a security vulnerability caused by lack of validation in
  `tf.raw_ops.DataFormatVecPermute` and `tf.raw_ops.DataFormatDimMap`
  (CVE-2020-26267)
- Fixes a vulnerability caused by attempting to write to immutable memory
  region in `tf.raw_ops.ImmutableConst` (CVE-2020-26268)
- Fixes a `CHECK`-fail in LSTM with zero-length input (CVE-2020-26270)
- Fixes a security vulnerability caused by accessing heap data outside of
  bounds when loading a specially crafted `SavedModel` (CVE-2020-26271)
- Updates `libjpeg-turbo` to `2.0.5` to handle CVE-2020-13790.
- Updates `junit` to `4.13.1` to handle CVE-2020-15250.
- Updates `PCRE` to `8.44` to handle CVE-2019-20838 and CVE-2020-14155.
- Updates `sqlite3` to `3.44.0` to keep in sync with master branch.
# Release 1.15.5

Note that this is the last patch release for the TensorFlow 1.x series.

## Bug Fixes and Other Changes

- Fixes an access to uninitialized memory in Eigen code (CVE-2020-26266)
- Fixes a security vulnerability caused by lack of validation in
  `tf.raw_ops.DataFormatVecPermute` and `tf.raw_ops.DataFormatDimMap`
  (CVE-2020-26267)
- Fixes a vulnerability caused by attempting to write to immutable memory
  region in `tf.raw_ops.ImmutableConst` (CVE-2020-26268)
- Fixes a `CHECK`-fail in LSTM with zero-length input (CVE-2020-26270)
- Fixes a security vulnerability caused by accessing heap data outside of
  bounds when loading a specially crafted `SavedModel` (CVE-2020-26271)
- Updates `libjpeg-turbo` to `2.0.5` to handle CVE-2020-13790.
- Updates `junit` to `4.13.1` to handle CVE-2020-15250.
- Updates `PCRE` to `8.44` to handle CVE-2019-20838 and CVE-2020-14155.
- Updates `sqlite3` to `3.44.0` to keep in sync with master branch.
# Release 2.4.0

## Major Features and Improvements

- `tf.distribute` introduces experimental support for asynchronous training of
  models via the `tf.distribute.experimental.ParameterServerStrategy` API.
  Please see the tutorial to learn more.
- `MultiWorkerMirroredStrategy` is now a stable API and is no longer
  considered experimental. Some of the major improvements involve handling
  peer failure and many bug fixes. Please check out the detailed tutorial on
  Multi-worker training with Keras.
- Introduces experimental support for a new module named
  `tf.experimental.numpy`, which is a NumPy-compatible API for writing TF
  programs. See the detailed guide to learn more. Additional details below.
- Adds support for TensorFloat-32 on Ampere-based GPUs. TensorFloat-32, or
  TF32 for short, is a math mode for NVIDIA Ampere-based GPUs and is enabled
  by default.
- A major refactoring of the internals of the Keras Functional API has been
  completed, which should improve the reliability, stability, and performance
  of constructing Functional models.
- The Keras mixed precision API `tf.keras.mixed_precision` is no longer
  experimental and allows the use of 16-bit floating point formats during
  training, improving performance by up to 3x on GPUs and 60% on TPUs. Please
  see below for additional details.
- TensorFlow Profiler now supports profiling `MultiWorkerMirroredStrategy`
  and tracing multiple workers using the sampling mode API.
- TFLite Profiler for Android is available. See the detailed guide to learn
  more.
- TensorFlow pip packages are now built with CUDA 11 and cuDNN 8.0.2.
## Breaking Changes

- TF Core:
  - Certain float32 ops run in lower precision on Ampere-based GPUs,
    including matmuls and convolutions, due to the use of TensorFloat-32.
    Specifically, inputs to such ops are rounded from 23 bits of precision to
    10 bits of precision. This is unlikely to cause issues in practice for
    deep learning models. In some cases, TensorFloat-32 is also used for
    complex64 ops. TensorFloat-32 can be disabled by running
    `tf.config.experimental.enable_tensor_float_32_execution(False)`.
  - The byte layout for string tensors across the C-API has been updated to
    match TF Core/C++; i.e., a contiguous array of
    `tensorflow::tstring`/`TF_TString`s.
  - C-API functions `TF_StringDecode`, `TF_StringEncode`, and
    `TF_StringEncodedSize` are no longer relevant and have been removed; see
    `core/platform/ctstring.h` for string access/modification in C.
  - `tensorflow.python`, `tensorflow.core` and `tensorflow.compiler` modules
    are now hidden. These modules are not part of TensorFlow public API.
  - `tf.raw_ops.Max` and `tf.raw_ops.Min` no longer accept inputs of type
    `tf.complex64` or `tf.complex128`, because the behavior of these ops is
    not well defined for complex types.
  - XLA:CPU and XLA:GPU devices are no longer registered by default. Use
    `TF_XLA_FLAGS=--tf_xla_enable_xla_devices` if you really need them, but
    this flag will eventually be removed in subsequent releases.
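For users who need full float32 precision, opting out is a one-line global switch. A minimal sketch (the call is valid on any TF 2.4+ install; on non-Ampere hardware it is effectively a no-op):

```python
import tensorflow as tf

# Globally disable TensorFloat-32 execution, restoring full float32
# precision for matmuls and convolutions on Ampere GPUs.
tf.config.experimental.enable_tensor_float_32_execution(False)
print(tf.config.experimental.tensor_float_32_execution_enabled())
```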
- `tf.keras`:
  - The `steps_per_execution` argument in `model.compile()` is no longer
    experimental; if you were passing `experimental_steps_per_execution`,
    rename it to `steps_per_execution` in your code. This argument controls
    the number of batches to run during each `tf.function` call when calling
    `model.fit()`. Running multiple batches inside a single `tf.function`
    call can greatly improve performance on TPUs or small models with a large
    Python overhead.
  - A major refactoring of the internals of the Keras Functional API may
    affect code that is relying on certain internal details:
    - Code that uses `isinstance(x, tf.Tensor)` instead of `tf.is_tensor`
      when checking Keras symbolic inputs/outputs should switch to using
      `tf.is_tensor`.
    - Code that is overly dependent on the exact names attached to symbolic
      tensors (e.g. assumes there will be ":0" at the end of the inputs,
      treats names as unique identifiers instead of using `tensor.ref()`,
      etc.) may break.
    - Code that uses full path for `get_concrete_function` to trace Keras
      symbolic inputs directly should switch to building matching
      `tf.TensorSpec`s directly and tracing the `TensorSpec` objects.
    - Code that relies on the exact number and names of the op layers that
      TensorFlow operations were converted into may have changed.
    - Code that uses `tf.map_fn`/`tf.cond`/`tf.while_loop`/control flow as op
      layers and happens to work before TF 2.4. These will explicitly be
      unsupported now. Converting these ops to Functional API op layers was
      unreliable before TF 2.4, and prone to erroring incomprehensibly or
      being silently buggy.
    - Code that directly asserts on a Keras symbolic value in cases where ops
      like `tf.rank` used to return a static or symbolic value depending on
      whether the input had a fully static shape or not. Now these ops always
      return symbolic values.
    - Code already susceptible to leaking tensors outside of graphs becomes
      slightly more likely to do so now.
    - Code that tries directly getting gradients with respect to symbolic
      Keras inputs/outputs. Use `GradientTape` on the actual Tensors passed
      to the already-constructed model instead.
    - Code that requires very tricky shape manipulation via converted op
      layers in order to work, where the Keras symbolic shape inference
      proves insufficient.
    - Code that tries manually walking a `tf.keras.Model` layer by layer and
      assumes layers only ever have one positional argument. This assumption
      doesn't hold true before TF 2.4 either, but is more likely to cause
      issues now.
    - Code that manually enters `keras.backend.get_graph()` before building a
      functional model is no longer needed.
  - Start enforcing input shape assumptions when calling Functional API Keras
    models. This may potentially break some users, in case there is a
    mismatch between the shape used when creating `Input` objects in a
    Functional model, and the shape of the data passed to that model. You can
    fix this mismatch by either calling the model with correctly-shaped data,
    or by relaxing `Input` shape assumptions (note that you can pass shapes
    with `None` entries for axes that are meant to be dynamic). You can also
    disable the input checking entirely by setting `model.input_spec = None`.
  - Several changes have been made to
    `tf.keras.mixed_precision.experimental`. Note that it is now recommended
    to use the non-experimental `tf.keras.mixed_precision` API.
    - `AutoCastVariable.dtype` now refers to the actual variable dtype, not
      the dtype it will be casted to.
    - When mixed precision is enabled, `tf.keras.layers.Embedding` now
      outputs a float16 or bfloat16 tensor instead of a float32 tensor.
    - The property
      `tf.keras.mixed_precision.experimental.LossScaleOptimizer.loss_scale`
      is now a tensor, not a `LossScale` object. This means to get a loss
      scale of a `LossScaleOptimizer` as a tensor, you must now call
      `opt.loss_scale` instead of `opt.loss_scale()`.
    - The property `should_cast_variables` has been removed from
      `tf.keras.mixed_precision.experimental.Policy`.
    - When passing a `tf.mixed_precision.experimental.DynamicLossScale` to
      `tf.keras.mixed_precision.experimental.LossScaleOptimizer`, the
      `DynamicLossScale`'s multiplier must be 2.
    - When passing a `tf.mixed_precision.experimental.DynamicLossScale` to
      `tf.keras.mixed_precision.experimental.LossScaleOptimizer`, the weights
      of the `DynamicLossScale` are copied into the `LossScaleOptimizer`
      instead of being reused. This means modifying the weights of the
      `DynamicLossScale` will no longer affect the weights of the
      `LossScaleOptimizer`, and vice versa.
    - The global policy can no longer be set to a non-floating point policy
      in `tf.keras.mixed_precision.experimental.set_policy`.
    - In `Layer.call`, `AutoCastVariable`s will no longer be casted within
      `MirroredStrategy.run` or `ReplicaContext.merge_call`. This is because
      a thread local variable is used to determine whether
      `AutoCastVariable`s are casted, and those two functions run with a
      different thread. Note this only applies if one of these two functions
      is called within `Layer.call`; if one of those two functions calls
      `Layer.call`, `AutoCastVariable`s will still be casted.
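The `steps_per_execution` rename amounts to a one-line change at compile time. A minimal sketch (the model and data here are hypothetical placeholders):

```python
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
outputs = tf.keras.layers.Dense(1)(inputs)
model = tf.keras.Model(inputs, outputs)

# Formerly `experimental_steps_per_execution`: run 4 batches inside each
# tf.function call to amortize Python overhead.
model.compile(optimizer="sgd", loss="mse", steps_per_execution=4)
history = model.fit(np.ones((64, 4)), np.ones((64, 1)),
                    batch_size=8, epochs=1, verbose=0)
```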
- `tf.data`:
  - `tf.data.experimental.service.DispatchServer` now takes a config tuple
    instead of individual arguments. Usages should be updated to
    `tf.data.experimental.service.DispatchServer(dispatcher_config)`.
  - `tf.data.experimental.service.WorkerServer` now takes a config tuple
    instead of individual arguments. Usages should be updated to
    `tf.data.experimental.service.WorkerServer(worker_config)`.
- `tf.distribute`:
  - Removes `tf.distribute.Strategy.experimental_make_numpy_dataset`. Please
    use `tf.data.Dataset.from_tensor_slices` instead.
  - Renames `experimental_hints` in
    `tf.distribute.StrategyExtended.reduce_to`,
    `tf.distribute.StrategyExtended.batch_reduce_to`, and
    `tf.distribute.ReplicaContext.all_reduce` to `options`.
  - Renames `tf.distribute.experimental.CollectiveHints` to
    `tf.distribute.experimental.CommunicationOptions`.
  - Renames `tf.distribute.experimental.CollectiveCommunication` to
    `tf.distribute.experimental.CommunicationImplementation`.
  - Renames
    `tf.distribute.Strategy.experimental_distribute_datasets_from_function`
    to `distribute_datasets_from_function` as it is no longer experimental.
  - Removes the `tf.distribute.Strategy.experimental_run_v2` method, which
    was deprecated in TF 2.2.
- `tf.lite`:
  - `tf.quantization.quantize_and_dequantize_v2` has been introduced, which
    updates the gradient definition for quantization that is outside the
    range to be 0. To simulate the V1 behavior of
    `tf.quantization.quantize_and_dequantize(...)`, use
    `tf.grad_pass_through(tf.quantization.quantize_and_dequantize_v2)(...)`.
- Building TensorFlow:
  - Windows platform builds: TensorFlow on Windows under MSVC is now built
    with
    `--copt=/experimental:preprocessor --host_copt=/experimental:preprocessor`
    (see `.bazelrc` for more details). Builds including TensorFlow may fail
    with unexpected syntax errors if these flags are absent. See also this
    thread on SIG Build.
## Known Caveats

- `tf.keras.mixed_precision`:
  - When using mixed precision, calling `RMSprop.apply_gradients` or
    `Nadam.apply_gradients` outside a `tf.function` does not work and will
    raise the AttributeError "Tensor.op is meaningless when eager execution
    is enabled". See this issue for details and a workaround.
## Bug Fixes and Other Changes

- TF Core:
  - Introduces experimental support for a new module named
    `tf.experimental.numpy`, which is a NumPy-compatible API for writing TF
    programs. This module provides class `ndarray`, which mimics the
    `ndarray` class in NumPy, and wraps an immutable `tf.Tensor` under the
    hood. A subset of NumPy functions (e.g. `numpy.add`) are provided. Their
    inter-operation with TF facilities is seamless in most cases. See
    tensorflow/python/ops/numpy_ops/README.md for details of what operations
    are supported and what are the differences from NumPy.
  - `tf.types.experimental.TensorLike` is a new `Union` type that can be
    used as type annotation for variables representing a Tensor or a value
    that can be converted to Tensor by `tf.convert_to_tensor`.
  - Calling ops with Python constants or NumPy values is now consistent with
    `tf.convert_to_tensor` behavior. This avoids operations like
    `tf.reshape` truncating inputs such as from int64 to int32.
  - Adds `tf.sparse.map_values` to apply a function to the `.values` of
    `SparseTensor` arguments.
  - The Python bitwise operators for `Tensor` (`__and__`, `__or__`,
    `__xor__` and `__invert__`) now support non-`bool` arguments and apply
    the corresponding bitwise ops. `bool` arguments continue to be supported
    and dispatch to logical ops. This brings them more in line with Python
    and NumPy behavior.
  - Adds `tf.SparseTensor.with_values`. This returns a new SparseTensor with
    the same sparsity pattern, but with new provided values. It is similar
    to the `with_values` function of `RaggedTensor`.
  - Adds `StatelessCase` op, and uses it if none of case branches has
    stateful ops.
  - Adds `tf.config.experimental.get_memory_usage` to return total memory
    usage of the device.
  - Adds gradients for `RaggedTensorToVariant` and
    `RaggedTensorFromVariant`.
  - Improves shape inference of nested function calls by supporting constant
    folding across Arg nodes, which makes more static values available to
    shape inference functions.
- `tf.debugging`:
  - `tf.debugging.assert_shapes()` now works on `SparseTensor`s
    (Fixes #36268).
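As a quick illustration of the new NumPy-compatible module and its interop with regular TF ops, a minimal sketch:

```python
import tensorflow as tf
import tensorflow.experimental.numpy as tnp

x = tnp.asarray([[1.0, 2.0], [3.0, 4.0]])  # backed by an immutable tf.Tensor
y = tnp.add(x, 1.0)                        # NumPy-style function
total = tf.reduce_sum(y)                   # seamless interop with TF ops
```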
- GPU:
  - Adds support for TensorFloat-32 on Ampere-based GPUs. TensorFloat-32, or
    TF32 for short, is a math mode for NVIDIA Ampere-based GPUs which causes
    certain float32 ops, such as matrix multiplications and convolutions, to
    run much faster on Ampere GPUs but with reduced precision. This reduced
    precision has not been found to affect convergence quality of deep
    learning models in practice. TensorFloat-32 is enabled by default, but
    can be disabled with
    `tf.config.experimental.enable_tensor_float_32_execution`.
- `tf.math`:
  - Adds `tf.math.erfcinv`, the inverse to `tf.math.erfc`.
- `tf.nn`:
  - `tf.nn.max_pool2d` now supports explicit padding.
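Explicit padding is passed as a per-dimension list of `[before, after]` pairs rather than the usual `"SAME"`/`"VALID"` strings. A minimal sketch:

```python
import tensorflow as tf

x = tf.reshape(tf.range(16, dtype=tf.float32), [1, 4, 4, 1])
# One [before, after] pair per dimension: batch, height, width, channels
# (batch and channel pads must be [0, 0] in NHWC).
y = tf.nn.max_pool2d(x, ksize=2, strides=2,
                     padding=[[0, 0], [1, 1], [1, 1], [0, 0]])
# 4x4 input padded to 6x6, then pooled with a 2x2 window at stride 2 -> 3x3.
```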
- `tf.image`:
  - Adds deterministic `tf.image.stateless_random_*` functions for each
    `tf.image.random_*` function. Added a new op
    `stateless_sample_distorted_bounding_box` which is a deterministic
    version of the `sample_distorted_bounding_box` op. Given the same seed,
    these stateless functions/ops produce the same results independent of
    how many times the function is called, and independent of global seed
    settings.
  - Adds deterministic `tf.image.resize` backprop CUDA kernels for
    `method=ResizeMethod.BILINEAR` (the default method). Enable by setting
    the environment variable `TF_DETERMINISTIC_OPS` to `"true"` or `"1"`.
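A minimal sketch of the stateless seed contract, using `stateless_random_brightness` as one representative of the `stateless_random_*` family:

```python
import tensorflow as tf

image = tf.ones([8, 8, 3])
seed = (1, 2)  # stateless ops take an explicit pair of ints as the seed

a = tf.image.stateless_random_brightness(image, max_delta=0.5, seed=seed)
b = tf.image.stateless_random_brightness(image, max_delta=0.5, seed=seed)
# Same seed -> identical results, regardless of global-seed state or how
# many times the function has been called.
```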
- `tf.print`:
  - Bug fix in `tf.print()` with `OrderedDict` where if an `OrderedDict`
    didn't have the keys sorted, the keys and values were not being printed
    in accordance with their correct mapping.
- `tf.train.Checkpoint`:
  - Now accepts a `root` argument in the initialization, which generates a
    checkpoint with a root object. This allows users to create a
    `Checkpoint` object that is compatible with Keras `model.save_weights()`
    and `model.load_weights`. The checkpoint is also compatible with the
    checkpoint saved in the `variables/` folder in the SavedModel.
  - When restoring, `save_path` can be a path to a SavedModel. The function
    will automatically find the checkpoint in the SavedModel.
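A minimal sketch of the new `root` argument, using a plain `tf.Module` as a stand-in for a Keras model:

```python
import os
import tempfile

import tensorflow as tf

module = tf.Module()
module.v = tf.Variable(3.0)

# `root` attaches the object graph at the checkpoint root rather than under
# a named attribute, matching the layout of Keras model.save_weights().
ckpt = tf.train.Checkpoint(root=module)
prefix = os.path.join(tempfile.mkdtemp(), "ckpt")
path = ckpt.save(prefix)

module.v.assign(0.0)
ckpt.restore(path)  # module.v is restored to 3.0
```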
- `tf.data`:
  - Adds new `tf.data.experimental.service.register_dataset` and
    `tf.data.experimental.service.from_dataset_id` APIs to enable one
    process to register a dataset with the tf.data service, and another
    process to consume data from the dataset.
  - Adds support for dispatcher fault tolerance. To enable fault tolerance,
    configure a `work_dir` when running your dispatcher server and set
    `dispatcher_fault_tolerance=True`. The dispatcher will store its state
    to `work_dir`, so that on restart it can continue from its previous
    state.
  - Adds support for sharing dataset graphs via shared filesystem instead of
    over RPC. This reduces load on the dispatcher, improving performance of
    distributing datasets. For this to work, the dispatcher's `work_dir`
    must be accessible from workers. If the worker fails to read from the
    `work_dir`, it falls back to using RPC for dataset graph transfer.
  - Adds support for a new "distributed_epoch" processing mode. This
    processing mode distributes a dataset across all tf.data workers,
    instead of having each worker process the full dataset. See the tf.data
    service docs to learn more.
  - Adds optional `exclude_cols` parameter to `CsvDataset`. This parameter
    is the complement of `select_cols`; at most one of these should be
    specified.
  - Implements an optimization which reorders data-discarding
    transformations such as `take` and `shard` to happen earlier in the
    dataset when it is safe to do so. The optimization can be disabled via
    the `experimental_optimization.reorder_data_discarding_ops` dataset
    option.
  - `tf.data.Options` were previously immutable and can now be overridden.
  - `tf.data.Dataset.from_generator` now supports Ragged and Sparse tensors
    with a new `output_signature` argument, which allows `from_generator` to
    produce any type describable by a `tf.TypeSpec`.
  - `tf.data.experimental.AUTOTUNE` is now available in the core API as
    `tf.data.AUTOTUNE`.
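A minimal sketch of yielding ragged values through the new `output_signature` argument:

```python
import tensorflow as tf

def gen():
    yield tf.ragged.constant([[1, 2], [3]])
    yield tf.ragged.constant([[4], [5, 6]])

# The signature can be any tf.TypeSpec, here a RaggedTensorSpec describing
# two rows of varying length.
ds = tf.data.Dataset.from_generator(
    gen,
    output_signature=tf.RaggedTensorSpec(shape=[2, None], dtype=tf.int32))

rows = [rt.to_list() for rt in ds]
```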
- `tf.distribute`:
  - Introduces experimental support for asynchronous training of models via
    `tf.distribute.experimental.ParameterServerStrategy`:
    - Replaces the existing
      `tf.distribute.experimental.ParameterServerStrategy` symbol with a new
      class that is for parameter server training in TF2. Usage of the old
      symbol, usually with Estimator API, should be replaced with
      `tf.compat.v1.distribute.experimental.ParameterServerStrategy`.
    - Adds the `tf.distribute.experimental.coordinator.*` namespace,
      including the main API `ClusterCoordinator` for coordinating the
      training cluster, and the related data structures `RemoteValue` and
      `PerWorkerValue`.
  - [`MultiWorkerMirroredStrategy`](https://www.tensorflow.org/api_docs/python/tf/distribute/MultiWorkerMirroredStrategy)
    is now a stable API and is no longer considered experimental. Some of
    the major improvements involve handling peer failure and many bug fixes.
    Please check out the detailed tutorial on Multi-worker training with
    Keras.
  - Adds `tf.distribute.Strategy.gather` and
    `tf.distribute.ReplicaContext.all_gather` APIs to support gathering
    dense distributed values.
  - Fixes various issues with saving a distributed model.
- `tf.keras`:
  - Improvements from the Functional API refactoring:
    - Functional model construction does not need to maintain a global
      workspace graph, removing memory leaks especially when building many
      models or very large models.
    - Functional model construction should be ~8-10% faster on average.
    - Functional models can now contain non-symbolic values in their call
      inputs inside of the first positional argument.
    - Several classes of TF ops that were not reliably converted to Keras
      layers during functional API construction should now work, e.g.
      `tf.image.ssim_multiscale`.
    - Error messages when Functional API construction goes wrong (and when
      ops cannot be converted to Keras layers automatically) should be
      clearer and easier to understand.
  - `Optimizer.minimize` can now accept a loss `Tensor` and a
    `GradientTape` as an alternative to accepting a `callable` loss.
  - Adds `beta` hyperparameter to FTRL optimizer classes (Keras and others)
    to match the FTRL paper.
  - `Optimizer.__init__` now accepts a `gradient_aggregator` to allow for
    customization of how gradients are aggregated across devices, as well as
    `gradients_transformers` to allow for custom gradient transformations
    (such as gradient clipping).
  - Improvements to Keras preprocessing layers:
    - `TextVectorization` can now accept a vocabulary list or file as an
      init arg.
    - `Normalization` can now accept mean and variance values as init args.
  - In `Attention` and `AdditiveAttention` layers, the `call()` method now
    accepts a `return_attention_scores` argument. When set to True, the
    layer returns the attention scores as an additional output argument.
  - Adds `tf.metrics.log_cosh` and `tf.metrics.logcosh` API entrypoints with
    the same implementation as their `tf.losses` equivalents.
  - For Keras models, the individual call of `Model.evaluate` uses no cached
    data for evaluation, while `Model.fit` uses cached data when the
    `validation_data` arg is provided, for better performance.
  - Adds a `save_traces` argument to
    `model.save`/`tf.keras.models.save_model` which determines whether the
    SavedModel format stores the Keras model/layer call functions. The
    traced functions allow Keras to revive custom models and layers without
    the original class definition, but if this isn't required the tracing
    can be disabled with the added option.
  - The `tf.keras.mixed_precision` API is now non-experimental. The
    non-experimental API differs from the experimental API in several ways.
    - `tf.keras.mixed_precision.Policy` no longer takes in a
      `tf.mixed_precision.experimental.LossScale` in the constructor, and no
      longer has a `LossScale` associated with it. Instead, `Model.compile`
      will automatically wrap the optimizer with a `LossScaleOptimizer`
      using dynamic loss scaling if `Policy.name` is "mixed_float16".
    - `tf.keras.mixed_precision.LossScaleOptimizer`'s constructor takes in
      different arguments. In particular, it no longer takes in a
      `LossScale`, and there is no longer a `LossScale` associated with the
      `LossScaleOptimizer`. Instead, `LossScaleOptimizer` directly
      implements fixed or dynamic loss scaling. See the documentation of
      `tf.keras.mixed_precision.experimental.LossScaleOptimizer` for details
      on the differences between the experimental `LossScaleOptimizer` and
      the new non-experimental `LossScaleOptimizer`.
    - `tf.mixed_precision.experimental.LossScale` and its subclasses are
      deprecated, as all of its functionality now exists within
      `tf.keras.mixed_precision.LossScaleOptimizer`.
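A minimal sketch of the now non-experimental mixed precision API (runs on CPU with a warning; the performance benefit needs GPU/TPU hardware):

```python
import tensorflow as tf

policy = tf.keras.mixed_precision.Policy("mixed_float16")
# Computations run in float16 while variables stay float32 for stability.
print(policy.compute_dtype, policy.variable_dtype)

tf.keras.mixed_precision.set_global_policy(policy)
# ... build and compile the model here; with "mixed_float16", Model.compile
# wraps the optimizer in a LossScaleOptimizer automatically ...
tf.keras.mixed_precision.set_global_policy("float32")  # reset for clarity
```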
- `tf.lite`:
  - `TFLiteConverter`:
    - Supports optional flags `inference_input_type` and
      `inference_output_type` for full integer quantized models. This allows
      users to modify the model input and output type to integer types
      (`tf.int8`, `tf.uint8`) instead of defaulting to float type
      (`tf.float32`).
  - NNAPI:
    - Adds NNAPI delegation support for requantization use cases by
      converting the operation into a dequantize-quantize pair.
    - Removes deprecated `Interpreter.setUseNNAPI(boolean)` Java API. Use
      `Interpreter.Options.setUseNNAPI` instead.
    - Deprecates `Interpreter::UseNNAPI(bool)` C++ API. Use
      `NnApiDelegate()` and related delegate configuration methods directly.
    - Deprecates `Interpreter::SetAllowFp16PrecisionForFp32(bool)` C++ API.
      Prefer controlling this via delegate options, e.g.
      `tflite::StatefulNnApiDelegate::Options::allow_fp16` or
      `TfLiteGpuDelegateOptionsV2::is_precision_loss_allowed`.
  - GPU:
    - GPU acceleration now supports quantized models by default.
  - `DynamicBuffer::AddJoinedString()` will now add a separator if the first
    string to be joined is empty.
  - Adds support for cumulative sum (cumsum), both as builtin op and MLIR
    conversion.
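A sketch of the new converter flags in a full-integer quantization flow (the tiny model and representative dataset here are illustrative placeholders):

```python
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
model = tf.keras.Model(inputs, tf.keras.layers.Dense(2)(inputs))

def representative_data():
    # Calibration samples for full-integer quantization.
    for _ in range(8):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
# New in 2.4: force integer types at the model boundary as well.
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```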
- TensorRT:
  - Issues a warning when the `session_config` parameter for the TF1
    converter is used or the `rewrite_config_template` field in the TF2
    converter parameter object is used.
- TPU Enhancements:
  - Adds support for the `beta` parameter of the FTRL optimizer for TPU
    embeddings. Users of other TensorFlow platforms can implement equivalent
    behavior by adjusting the `l2` parameter.
- XLA Support:
  - `xla.experimental.compile` is deprecated; use
    `tf.function(experimental_compile=True)` instead.
  - Adds `tf.function.experimental_get_compiler_ir` which returns compiler
    IR (currently 'hlo' and 'optimized_hlo') for the given input for the
    given function.
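A minimal sketch of the replacement for `xla.experimental.compile` (note that later releases spell this argument `jit_compile=True`):

```python
import tensorflow as tf

@tf.function(experimental_compile=True)  # compile the function with XLA
def f(x):
    return x * x + 1.0

result = f(tf.constant(2.0))
```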
- Security:
  - Fixes an undefined behavior causing a segfault in `tf.raw_ops.Switch`
    (CVE-2020-15190)
  - Fixes three vulnerabilities in conversion to DLPack format
  - Fixes two vulnerabilities in `SparseFillEmptyRowsGrad`
  - Fixes several vulnerabilities in `RaggedCountSparseOutput` and
    `SparseCountSparseOutput` operations
  - Fixes an integer truncation vulnerability in code using the work sharder
    API (CVE-2020-15202)
  - Fixes a format string vulnerability in `tf.strings.as_string`
    (CVE-2020-15203)
  - Fixes segfault raised by calling session-only ops in eager mode
    (CVE-2020-15204)
  - Fixes data leak and potential ASLR violation from
    `tf.raw_ops.StringNGrams` (CVE-2020-15205)
  - Fixes segfaults caused by incomplete `SavedModel` validation
    (CVE-2020-15206)
  - Fixes a data corruption due to a bug in negative indexing support in
    TFLite (CVE-2020-15207)
  - Fixes a data corruption due to dimension mismatch in TFLite
    (CVE-2020-15208)
  - Fixes several vulnerabilities in TFLite saved model format
  - Fixes several vulnerabilities in TFLite implementation of segment sum
  - Fixes a segfault in `tf.quantization.quantize_and_dequantize`
    (CVE-2020-15265)
  - Fixes an undefined behavior float cast causing a crash (CVE-2020-15266)
  - Fixes a lack of validation in `tf.raw_ops.DataFormatVecPermute` and
    `tf.raw_ops.DataFormatDimMap` which can cause uninitialized memory
    access, read outside bounds of arrays, data corruption and segmentation
    faults (CVE-2020-26267)
  - Fixes a crash caused by writing to read-only memory region
    (CVE-2020-26268)
  - Fixes a heap out of bounds access in filesystem globbing implementation
    (CVE-2020-26269)
- Other:
  - We have replaced uses of "whitelist" and "blacklist" with "allowlist"
    and "denylist" where possible. Please see this list for more context.
  - Adds `tf.config.experimental.mlir_bridge_rollout` which will help us
    roll out the new MLIR TPU bridge.
  - Adds `tf.experimental.register_filesystem_plugin` to load modular
    filesystem plugins from Python.
## Thanks to our Contributors

This release contains contributions from many people at Google as well as the following external contributors:
8bitmp3, aaa.jq, Abhineet Choudhary, Abolfazl Shahbazi, acxz, Adam Hillier, Adrian Garcia Badaracco, Ag Ramesh, ahmedsabie, Alan Anderson, Alexander Grund, Alexandre Lissy, Alexey Ivanov, Amedeo Cavallo, anencore94, Aniket Kumar Singh, Anthony Platanios, Ashwin Phadke, Balint Cristian, Basit Ayantunde, bbbboom, Ben Barsdell, Benjamin Chetioui, Benjamin Peterson, bhack, Bhanu Prakash Bandaru Venkata, Biagio Montaruli, Brent M. Spell, bubblebooy, bzhao, cfRod, Cheng Chen, Cheng(Kit) Chen, Chris Tessum, Christian, chuanqiw, codeadmin_peritiae, COTASPAR, CuiYifeng, danielknobe, danielyou0230, dannyfriar, daria, DarrenZhang01, Denisa Roberts, dependabot[bot], Deven Desai, Dmitry Volodin, Dmitry Zakharov, drebain, Duncan Riach, Eduard Feicho, Ehsan Toosi, Elena Zhelezina, emlaprise2358, Eugene Kuznetsov, Evaderan-Lab, Evgeniy Polyakov, Fausto Morales, Felix Johnny, fo40225, Frederic Bastien, Fredrik Knutsson, fsx950223, Gaurav Singh, Gauri1 Deshpande, George Grzegorz Pawelczak, gerbauz, Gianluca Baratti, Giorgio Arena, Gmc2, Guozhong Zhuang, Hannes Achleitner, Harirai, HarisWang, Harsh188, hedgehog91, Hemal Mamtora, Hideto Ueno, Hugh Ku, Ian Beauregard, Ilya Persky, jacco, Jakub Beránek, Jan Jongboom, Javier Montalt Tordera, Jens Elofsson, Jerry Shih, jerryyin, jgehw, Jinjing Zhou, jma, jmsmdy, Johan Nordström, John Poole, Jonah Kohn, Jonathan Dekhtiar, jpodivin, Jung Daun, Kai Katsumata, Kaixi Hou, Kamil Rakoczy, Kaustubh Maske Patil, Kazuaki Ishizaki, Kedar Sovani, Koan-Sin Tan, Koki Ibukuro, Krzysztof Laskowski, Kushagra Sharma, Kushan Ahmadian, Lakshay Tokas, Leicong Li, levinxo, Lukas Geiger, Maderator, Mahmoud Abuzaina, Mao Yunfei, Marius Brehler, markf, Martin Hwasser, Martin Kubovčík, Matt Conley, Matthias, mazharul, mdfaijul, Michael137, MichelBr, Mikhail Startsev, Milan Straka, Ml-0, Myung-Hyun Kim, Måns Nilsson, Nathan Luehr, ngc92, nikochiko, Niranjan Hasabnis, nyagato_00, Oceania2018, Oleg Guba, Ongun Kanat, OscarVanL, Patrik Laurell, Paul Tanger, Peter 
Sobot, Phil Pearl, PlusPlusUltra, Poedator, Prasad Nikam, Rahul-Kamat, Rajeshwar Reddy T, redwrasse, Rickard, Robert Szczepanski, Rohan Lekhwani, Sam Holt, Sami Kama, Samuel Holt, Sandeep Giri, sboshin, Sean Settle, settle, Sharada Shiddibhavi, Shawn Presser, ShengYang1, Shi,Guangyong, Shuxiang Gao, Sicong Li, Sidong-Wei, Srihari Humbarwadi, Srinivasan Narayanamoorthy, Steenu Johnson, Steven Clarkson, stjohnso98, Tamas Bela Feher, Tamas Nyiri, Tarandeep Singh, Teng Lu, Thibaut Goetghebuer-Planchon, Tim Bradley, Tomasz Strejczek, Tongzhou Wang, Torsten Rudolf, Trent Lo, Ty Mick, Tzu-Wei Sung, Varghese, Jojimon, Vignesh Kothapalli, Vishakha Agrawal, Vividha, Vladimir Menshakov, Vladimir Silyaev, VoVAllen, Võ Văn Nghĩa, wondertx, xiaohong1031, Xiaoming (Jason) Cui, Xinan Jiang, Yair Ehrenwald, Yasir Modak, Yasuhiro Matsumoto, Yimei Sun, Yiwen Li, Yixing, Yoav Ramon, Yong Tang, Yong Wu, yuanbopeng, Yunmo Koo, Zhangqiang, Zhou Peng, ZhuBaohe, zilinzhu, zmx
# Release 2.3.1

## Bug Fixes and Other Changes

- Fixes an undefined behavior causing a segfault in `tf.raw_ops.Switch` (CVE-2020-15190)
- Fixes three vulnerabilities in conversion to DLPack format (CVE-2020-15191, CVE-2020-15192, CVE-2020-15193)
- Fixes two vulnerabilities in `SparseFillEmptyRowsGrad` (CVE-2020-15194, CVE-2020-15195)
- Fixes several vulnerabilities in `RaggedCountSparseOutput` and `SparseCountSparseOutput` operations (CVE-2020-15196, CVE-2020-15197, CVE-2020-15198, CVE-2020-15199, CVE-2020-15200, CVE-2020-15201)
- Fixes an integer truncation vulnerability in code using the work sharder API (CVE-2020-15202)
- Fixes a format string vulnerability in `tf.strings.as_string` (CVE-2020-15203)
- Fixes a segfault raised by calling session-only ops in eager mode (CVE-2020-15204)
- Fixes a data leak and potential ASLR violation from `tf.raw_ops.StringNGrams` (CVE-2020-15205)
- Fixes segfaults caused by incomplete `SavedModel` validation (CVE-2020-15206)
- Fixes a data corruption due to a bug in negative indexing support in TFLite (CVE-2020-15207)
- Fixes a data corruption due to dimension mismatch in TFLite (CVE-2020-15208)
- Fixes several vulnerabilities in the TFLite saved model format (CVE-2020-15209, CVE-2020-15210, CVE-2020-15211)
- Fixes several vulnerabilities in the TFLite implementation of segment sum (CVE-2020-15212, CVE-2020-15213, CVE-2020-15214)
- Updates `sqlite3` to `3.33.00` to handle CVE-2020-15358
- Fixes deprecated usage of the `collections` API
- Removes the `scipy` dependency from `setup.py` since TensorFlow does not need it to install the pip package
# Release 2.2.1

## Bug Fixes and Other Changes

- Fixes an undefined behavior causing a segfault in `tf.raw_ops.Switch` (CVE-2020-15190)
- Fixes three vulnerabilities in conversion to DLPack format (CVE-2020-15191, CVE-2020-15192, CVE-2020-15193)
- Fixes two vulnerabilities in `SparseFillEmptyRowsGrad` (CVE-2020-15194, CVE-2020-15195)
- Fixes an integer truncation vulnerability in code using the work sharder API (CVE-2020-15202)
- Fixes a format string vulnerability in `tf.strings.as_string` (CVE-2020-15203)
- Fixes a segfault raised by calling session-only ops in eager mode (CVE-2020-15204)
- Fixes a data leak and potential ASLR violation from `tf.raw_ops.StringNGrams` (CVE-2020-15205)
- Fixes segfaults caused by incomplete `SavedModel` validation (CVE-2020-15206)
- Fixes a data corruption due to a bug in negative indexing support in TFLite (CVE-2020-15207)
- Fixes a data corruption due to dimension mismatch in TFLite (CVE-2020-15208)
- Fixes several vulnerabilities in the TFLite saved model format (CVE-2020-15209, CVE-2020-15210, CVE-2020-15211)
- Fixes several vulnerabilities in the TFLite implementation of segment sum (CVE-2020-15212, CVE-2020-15213, CVE-2020-15214)
- Updates `sqlite3` to `3.33.00` to handle CVE-2020-9327, CVE-2020-11655, CVE-2020-11656, CVE-2020-13434, CVE-2020-13435, CVE-2020-13630, CVE-2020-13631, CVE-2020-13871, and CVE-2020-15358
- Fixes deprecated usage of the `collections` API
- Removes the `scipy` dependency from `setup.py` since TensorFlow does not need it to install the pip package
# Release 2.1.2

## Bug Fixes and Other Changes

- Fixes an undefined behavior causing a segfault in `tf.raw_ops.Switch` (CVE-2020-15190)
- Fixes three vulnerabilities in conversion to DLPack format (CVE-2020-15191, CVE-2020-15192, CVE-2020-15193)
- Fixes two vulnerabilities in `SparseFillEmptyRowsGrad` (CVE-2020-15194, CVE-2020-15195)
- Fixes an integer truncation vulnerability in code using the work sharder API (CVE-2020-15202)
- Fixes a format string vulnerability in `tf.strings.as_string` (CVE-2020-15203)
- Fixes a segfault raised by calling session-only ops in eager mode (CVE-2020-15204)
- Fixes a data leak and potential ASLR violation from `tf.raw_ops.StringNGrams` (CVE-2020-15205)
- Fixes segfaults caused by incomplete `SavedModel` validation (CVE-2020-15206)
- Fixes a data corruption due to a bug in negative indexing support in TFLite (CVE-2020-15207)
- Fixes a data corruption due to dimension mismatch in TFLite (CVE-2020-15208)
- Fixes several vulnerabilities in the TFLite saved model format (CVE-2020-15209, CVE-2020-15210, CVE-2020-15211)
- Updates `sqlite3` to `3.33.00` to handle CVE-2020-9327, CVE-2020-11655, CVE-2020-11656, CVE-2020-13434, CVE-2020-13435, CVE-2020-13630, CVE-2020-13631, CVE-2020-13871, and CVE-2020-15358
- Removes the `scipy` dependency from `setup.py` since TensorFlow does not need it to install the pip package
- Switches ROCM builds to use ROCM 3.7
# Release 2.0.3

## Bug Fixes and Other Changes

- Fixes an undefined behavior causing a segfault in `tf.raw_ops.Switch` (CVE-2020-15190)
- Fixes three vulnerabilities in conversion to DLPack format (CVE-2020-15191, CVE-2020-15192, CVE-2020-15193)
- Fixes two vulnerabilities in `SparseFillEmptyRowsGrad` (CVE-2020-15194, CVE-2020-15195)
- Fixes an integer truncation vulnerability in code using the work sharder API (CVE-2020-15202)
- Fixes a format string vulnerability in `tf.strings.as_string` (CVE-2020-15203)
- Fixes a segfault raised by calling session-only ops in eager mode (CVE-2020-15204)
- Fixes a data leak and potential ASLR violation from `tf.raw_ops.StringNGrams` (CVE-2020-15205)
- Fixes segfaults caused by incomplete `SavedModel` validation (CVE-2020-15206)
- Fixes a data corruption due to a bug in negative indexing support in TFLite (CVE-2020-15207)
- Fixes a data corruption due to dimension mismatch in TFLite (CVE-2020-15208)
- Fixes several vulnerabilities in the TFLite saved model format (CVE-2020-15209, CVE-2020-15210, CVE-2020-15211)
- Updates `sqlite3` to `3.33.00` to handle CVE-2020-9327, CVE-2020-11655, CVE-2020-11656, CVE-2020-13434, CVE-2020-13435, CVE-2020-13630, CVE-2020-13631, CVE-2020-13871, and CVE-2020-15358
- Pins `numpy` to 1.18.5 to prevent ABI breakage when compiling code that uses both NumPy and TensorFlow headers
# Release 1.15.4

## Bug Fixes and Other Changes

- Fixes an undefined behavior causing a segfault in `tf.raw_ops.Switch` (CVE-2020-15190)
- Fixes three vulnerabilities in conversion to DLPack format (CVE-2020-15191, CVE-2020-15192, CVE-2020-15193)
- Fixes two vulnerabilities in `SparseFillEmptyRowsGrad` (CVE-2020-15194, CVE-2020-15195)
- Fixes an integer truncation vulnerability in code using the work sharder API (CVE-2020-15202)
- Fixes a format string vulnerability in `tf.strings.as_string` (CVE-2020-15203)
- Fixes a segfault raised by calling session-only ops in eager mode (CVE-2020-15204)
- Fixes a data leak and potential ASLR violation from `tf.raw_ops.StringNGrams` (CVE-2020-15205)
- Fixes segfaults caused by incomplete `SavedModel` validation (CVE-2020-15206)
- Fixes a data corruption due to a bug in negative indexing support in TFLite (CVE-2020-15207)
- Fixes a data corruption due to dimension mismatch in TFLite (CVE-2020-15208)
- Fixes several vulnerabilities in the TFLite saved model format (CVE-2020-15209, CVE-2020-15210, CVE-2020-15211)
- Updates `sqlite3` to `3.33.00` to handle CVE-2020-9327, CVE-2020-11655, CVE-2020-11656, CVE-2020-13434, CVE-2020-13435, CVE-2020-13630, CVE-2020-13631, CVE-2020-13871, and CVE-2020-15358
- Fixes #41630 by including `max_seq_length` in the CuDNN descriptor cache key
- Pins `numpy` to 1.18.5 to prevent ABI breakage when compiling code that uses both NumPy and TensorFlow headers
# Release 2.3.0

## Major Features and Improvements

- `tf.data` adds two new mechanisms to solve input pipeline bottlenecks and save resources: snapshot and tf.data service. In addition, check out the detailed guide for analyzing input pipeline performance with TF Profiler.
- `tf.distribute.TPUStrategy` is now a stable API and no longer considered experimental for TensorFlow (previously `tf.distribute.experimental.TPUStrategy`).
- TF Profiler introduces two new tools: a memory profiler to visualize your model’s memory usage over time, and a Python tracer which allows you to trace Python function calls in your model. Usability improvements include better diagnostic messages and profile options to customize the host and device trace verbosity level.
- Introduces experimental support for the Keras Preprocessing Layers API (`tf.keras.layers.experimental.preprocessing.*`) to handle data preprocessing operations, with support for composite tensor inputs. Please see below for additional details on these layers.
- TFLite now properly supports dynamic shapes during conversion and inference. We’ve also added opt-in support on Android and iOS for XNNPACK, a highly optimized set of CPU kernels, as well as opt-in support for executing quantized models on the GPU.
- Libtensorflow packages are available in GCS starting this release. We have also started to release a nightly version of these packages.
- The experimental Python API `tf.debugging.experimental.enable_dump_debug_info()` now allows you to instrument a TensorFlow program and dump debugging information to a directory on the file system. The directory can be read and visualized by a new interactive dashboard in TensorBoard 2.3 called Debugger V2, which reveals the details of the TensorFlow program, including graph structures, the history of op executions at the Python (eager) and intra-graph levels, the runtime dtype, shape, and numerical composition of tensors, as well as their code locations.
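As a minimal sketch of the instrumentation described above (the dump directory path here is arbitrary), enabling Debugger V2 dumps looks like this:

```python
import tensorflow as tf

# Enable Debugger V2 instrumentation; subsequent TF ops in this process
# dump debug information to the chosen directory.
tf.debugging.experimental.enable_dump_debug_info(
    "/tmp/tfdbg2_logdir",
    tensor_debug_mode="FULL_HEALTH",
    circular_buffer_size=1000,
)

# Run some computation so there is something to dump.
x = tf.constant([1.0, 2.0, 3.0])
y = tf.reduce_sum(x * x)
```

The dump directory can then be opened in the Debugger V2 dashboard with `tensorboard --logdir /tmp/tfdbg2_logdir`.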
## Breaking Changes

- Increases the minimum bazel version required to build TF to 3.1.0.
- `tf.data`:
  - Makes the following (breaking) changes to the `tf.data` C++ API:
    - `IteratorBase::RestoreInternal`, `IteratorBase::SaveInternal`, and `DatasetBase::CheckExternalState` become pure-virtual and subclasses are now expected to provide an implementation.
    - The deprecated `DatasetBase::IsStateful` method is removed in favor of `DatasetBase::CheckExternalState`.
    - Deprecated overrides of `DatasetBase::MakeIterator` and `MakeIteratorFromInputElement` are removed.
  - The signature of `tensorflow::data::IteratorBase::SaveInternal` and `tensorflow::data::IteratorBase::SaveInput` has been extended with a `SerializationContext` argument to enable overriding the default policy for handling external state during iterator checkpointing. This is not a backwards compatible change and all subclasses of `IteratorBase` need to be updated accordingly.
- `tf.keras`:
  - Add a new `BackupAndRestore` callback for handling distributed training failures & restarts. Please take a look at this tutorial for details on how to use the callback.
- `tf.image.extract_glimpse` has been updated to correctly process the case where `centered=False` and `normalized=False`. This is a breaking change as the output is different from (incorrect) previous versions. Note this breaking change only impacts the `tf.image.extract_glimpse` and `tf.compat.v2.image.extract_glimpse` API endpoints. The behavior of `tf.compat.v1.image.extract_glimpse` does not change. The behavior of the existing C++ kernel `ExtractGlimpse` does not change either, so saved models using `tf.raw_ops.ExtractGlimpse` will not be impacted.
## Known Caveats

- `tf.lite`:
  - Keras-based LSTM models must be converted with an explicit batch size in the input layer.
## Bug Fixes and Other Changes

- TF Core:
  - Set `tf2_behavior` to 1 to enable V2 for early loading cases.
  - Add an `execute_fn_for_device` function to dynamically choose the implementation based on underlying device placement.
  - Eager:
    - Add a `reduce_logsumexp` benchmark with experimental compile.
    - Give `EagerTensor`s a meaningful `__array__` implementation.
    - Add another version of defun matmul for performance analysis.
  - `tf.function`/AutoGraph:
    - `AutoGraph` now includes into TensorFlow loops any variables that are closed over by local functions. Previously, such variables were sometimes incorrectly ignored.
    - Functions returned by the `get_concrete_function` method of `tf.function` objects can now be called with arguments consistent with the original arguments or type specs passed to `get_concrete_function`. This calling convention is now the preferred way to use concrete functions with nested values and composite tensors. Please check the guide for more details on `concrete_function`.
    - Update `tf.function`'s `experimental_relax_shapes` to handle composite tensors appropriately.
    - Optimize `tf.function` invocation by removing a redundant list converter.
    - `tf.function` will retrace when called with a different variable instead of simply using the `dtype` & `shape`.
    - Improve support for dynamically-sized TensorArray inside `tf.function`.
  - `tf.math`:
    - Narrow down the `argmin`/`argmax` contract to always return the smallest index for ties.
    - `tf.math.reduce_variance` and `tf.math.reduce_std` return correct computation for complex types and no longer support integer types.
    - Add Bessel functions of order 0, 1 to `tf.math.special`.
    - `tf.divide` now always returns a tensor to be consistent with documentation and other APIs.
  - `tf.image`:
    - Replaced `tf.image.non_max_suppression_padded` with a new implementation that supports batched inputs, which is considerably faster on TPUs and GPUs. Boxes with area=0 will be ignored. Existing usage with single inputs should still work as before.
  - `tf.linalg`:
    - Add `tf.linalg.banded_triangular_solve`.
  - `tf.random`:
    - Add `tf.random.stateless_parameterized_truncated_normal`.
  - `tf.ragged`:
    - Add `tf.ragged.cross` and `tf.ragged.cross_hashed` operations.
  - `tf.RaggedTensor`:
    - `RaggedTensor.to_tensor()` now preserves static shape.
    - Add `tf.strings.format()` and `tf.print()` support for RaggedTensors.
  - `tf.saved_model`:
    - A `@tf.function` from SavedModel no longer ignores args after a `RaggedTensor` when selecting the concrete function to run.
    - Fix a save model issue for ops with a list of functions.
    - Add `tf.saved_model.LoadOptions` with `experimental_io_device` as an arg with default value `None` to choose the I/O device for loading models and weights.
    - Update `tf.saved_model.SaveOptions` with `experimental_io_device` as an arg with default value `None` to choose the I/O device for saving models and weights.
    - Mutable tables now restore checkpointed values when loaded from SavedModel.
    - The user object metadata field in the SavedModel proto has been deprecated as part of the updates to Keras SavedModel. Keras was the only consumer of this field prior to the update.
  - GPU:
    - TF 2.3 includes PTX kernels only for compute capability 7.0 to reduce the TF pip binary size. Earlier releases included PTX for a variety of older compute capabilities.
    - Remove the environment variable `TF_USE_CUDNN`.
  - Others:
    - Retain the parent namescope for ops added inside `tf.while_loop`/`tf.cond`/`tf.switch_case`.
    - Update `tf.vectorized_map` to support vectorizing `tf.while_loop` and TensorList operations.
    - `tf.custom_gradient` can now be applied to functions that accept nested structures of tensors as inputs (instead of just a list of tensors). Note that Python structures such as tuples and lists now won't be treated as tensors, so if you still want them to be treated that way, you need to wrap them with `tf.convert_to_tensor`.
    - No lowering on the gradient case op when the input is a `DeviceIndex` op.
    - Extend the ragged version of `tf.gather` to support `batch_dims` and `axis` args.
    - Update `tf.map_fn` to support RaggedTensors and SparseTensors.
    - Deprecate `tf.group`. It is not useful in eager mode.
    - Add CPU and GPU implementations of a modified variation of `FTRL`/`FTRLV2` that can be triggered by `multiply_linear_by_lr`, allowing a learning rate of zero.
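A minimal sketch of `tf.custom_gradient` over a nested (here, dict) input structure; the function and variable names are illustrative:

```python
import tensorflow as tf

@tf.custom_gradient
def scaled_product(inputs):
    """inputs is a nested structure (a dict) of tensors."""
    a, b = inputs["a"], inputs["b"]

    def grad(upstream):
        # The returned gradients must mirror the input structure.
        return {"a": upstream * b, "b": upstream * a}

    return a * b, grad

a = tf.Variable(2.0)
b = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = scaled_product({"a": a, "b": b})
grads = tape.gradient(y, [a, b])
```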
- `tf.data`:
  - `tf.data.experimental.dense_to_ragged_batch` works correctly with tuples.
  - `tf.data.experimental.dense_to_ragged_batch` to output variable ragged rank.
  - `tf.data.experimental.cardinality` is now a method on `tf.data.Dataset`.
  - `tf.data.Dataset` now supports `len(Dataset)` when the cardinality is finite.
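A quick sketch of the new cardinality conveniences (assuming TF 2.3 or later):

```python
import tensorflow as tf

ds = tf.data.Dataset.range(5).batch(2)

# Cardinality is now available as a method on the dataset itself.
n_batches = ds.cardinality()  # scalar int64 tensor: 3 batches ([0,1], [2,3], [4])

# len() works whenever the cardinality is finite and statically known.
assert len(ds) == 3
```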
- `tf.distribute`:
  - Expose experimental `tf.distribute.DistributedDataset` and `tf.distribute.DistributedIterator` to distribute input data when using `tf.distribute` to scale training on multiple devices.
    - Added a `get_next_as_optional` method for the `tf.distribute.DistributedIterator` class to return a `tf.experimental.Optional` instance that contains the next value for all replicas, or none, instead of raising an out-of-range error. Also see the new guide on input distribution.
  - Allow `var.assign` on MirroredVariables with `aggregation=NONE` in replica context. Previously this would raise an error. We now allow this because many users and library writers find using `.assign` in replica context more convenient, instead of having to use `Strategy.extended.update`, which was the previous way of updating variables in this situation.
  - `tf.distribute.experimental.MultiWorkerMirroredStrategy` adds support for partial batches. Workers running out of data now continue to participate in the training with empty inputs, instead of raising an error. Learn more about partial batches here.
  - Improve the performance of reading metrics eagerly under `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
  - Fix the issue that `strategy.reduce()` inside `tf.function` may raise exceptions when the values to reduce are from loops or if-clauses.
  - Fix the issue that `tf.distribute.MirroredStrategy` cannot be used together with `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
  - Add a `tf.distribute.cluster_resolver.TPUClusterResolver.connect` API to simplify TPU initialization.
  - Add `tf.distribute.Strategy.gather` and `tf.distribute.ReplicaContext.all_gather` methods to gather and concatenate `tf.distribute.DistributedValues` across workers and devices.
- `tf.keras`:
  - Introduces the experimental preprocessing layers API (`tf.keras.layers.experimental.preprocessing`) to handle data preprocessing operations such as categorical feature encoding, text vectorization, data normalization, and data discretization (binning). The newly added layers provide a replacement for the legacy feature column API, and support composite tensor inputs.
  - Added categorical data processing layers:
    - `IntegerLookup` & `StringLookup`: build an index of categorical feature values
    - `CategoryEncoding`: turn integer-encoded categories into one-hot, multi-hot, or tf-idf encoded representations
    - `CategoryCrossing`: create new categorical features representing co-occurrences of previous categorical feature values
    - `Hashing`: the hashing trick, for large-vocabulary categorical features
    - `Discretization`: turn continuous numerical features into categorical features by binning their values
  - Improved image preprocessing layers: `CenterCrop`, `Rescaling`
  - Improved image augmentation layers: `RandomCrop`, `RandomFlip`, `RandomTranslation`, `RandomRotation`, `RandomHeight`, `RandomWidth`, `RandomZoom`, `RandomContrast`
  - Improved the `TextVectorization` layer, which handles string tokenization, n-gram generation, and token encoding:
    - The `TextVectorization` layer now accounts for the `mask_token` as part of the vocabulary size when `output_mode='int'`. This means that, if you have a `max_tokens` value of 5000, your output will have 5000 unique values (not 5001 as before).
    - Changed the return value of `TextVectorization.get_vocabulary()` from `byte` to `string`. Users who previously called `decode` on the output of this method should no longer need to do so.
  - Introduced new Keras dataset generation utilities:
    - `image_dataset_from_directory` is a utility based on `tf.data.Dataset`, meant to replace the legacy `ImageDataGenerator`. It takes you from a structured directory of images to a labeled dataset, in one function call. Note that it doesn't perform image data augmentation (which is meant to be done using preprocessing layers).
    - `text_dataset_from_directory` takes you from a structured directory of text files to a labeled dataset, in one function call.
    - `timeseries_dataset_from_array` is a `tf.data.Dataset`-based replacement of the legacy `TimeseriesGenerator`. It takes you from an array of timeseries data to a dataset of shifting windows with their targets.
  - Added an `experimental_steps_per_execution` arg to `model.compile` to indicate the number of batches to run per `tf.function` call. This can speed up Keras Models on TPUs up to 3x.
  - Extends `tf.keras.layers.Lambda` layers to support multi-argument lambdas, and keyword arguments when calling the layer.
  - Functional models now get constructed if any tensor in a layer call's arguments/keyword arguments comes from a Keras input. Previously the functional API would only work if all of the elements in the first argument to the layer came from a Keras input.
  - Clean up the `BatchNormalization` layer's `trainable` property to act like standard Python state when it's used inside `tf.function`s (frozen at tracing time), instead of acting like a pseudo-variable whose updates sometimes get reflected in already-traced `tf.function` traces.
  - Added the `Conv1DTranspose` layer.
  - Refine the semantics of `SensitivitySpecificityBase`-derived metrics. See the updated API docstrings for `tf.keras.metrics.SensitivityAtSpecificity` and `tf.keras.metrics.SpecificityAtSensitivity`.
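As a brief illustration of the new `Conv1DTranspose` layer (the shapes and parameters below are arbitrary): with `padding="same"` and stride 2, the layer upsamples the time dimension by 2x.

```python
import tensorflow as tf

# Transposed 1D convolution: upsamples (batch, steps, channels) along steps.
layer = tf.keras.layers.Conv1DTranspose(
    filters=4, kernel_size=3, strides=2, padding="same")

x = tf.random.normal([2, 10, 8])  # (batch=2, steps=10, channels=8)
y = layer(x)                      # (2, 20, 4): steps doubled, channels=filters
```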
- `tf.lite`:
  - Converter:
    - Restored the `inference_input_type` and `inference_output_type` flags in the TF 2.x TFLiteConverter (backward compatible with TF 1.x) to support integer (`tf.int8`, `tf.uint8`) input and output types in post-training full-integer quantized models.
    - Added support for converting and resizing models with dynamic (placeholder) dimensions. Previously, there was only limited support for dynamic batch size, and even that did not guarantee that the model could be properly resized at runtime.
    - Enabled experimental support for a new quantization mode with 16-bit activations and 8-bit weights. See `lite.OpsSet.EXPERIMENTAL_TFLITE_BUILTINS_ACTIVATIONS_INT16_WEIGHTS_INT8`.
  - CPU:
    - Fix an issue with dynamic weights and `Conv2D` on x86.
    - Add a runtime Android flag for enabling `XNNPACK` for optimized CPU performance.
    - Add a runtime iOS flag for enabling `XNNPACK` for optimized CPU performance.
    - Add a compiler flag to enable building a TFLite library that applies the `XNNPACK` delegate automatically when the model has a `fp32` operation.
  - GPU:
    - Allow GPU acceleration starting with internal graph nodes.
    - Experimental support for quantized models with the Android GPU delegate.
    - Add GPU delegate whitelist.
    - Rename GPU whitelist -> compatibility (list).
    - Improve GPU compatibility list entries from crash reports.
  - NNAPI:
    - Set the default value for `StatefulNnApiDelegate::Options::max_number_delegated_partitions` to 3.
    - Add capability to disable `NNAPI` CPU and check `NNAPI` Errno.
    - Fix crashes when using `NNAPI` with a target accelerator specified with a model containing Conv2d, FullyConnected, or LSTM nodes with quantized weights.
    - Fix `ANEURALNETWORKS_BAD_DATA` execution failures with `sum`/`max`/`min`/`reduce` operations with `scalar` inputs.
  - Hexagon:
    - TFLite Hexagon Delegate out of experimental.
    - Experimental `int8` support for most Hexagon ops.
    - Experimental per-channel quantization support for `conv` in the Hexagon delegate.
    - Support dynamic batch size in the C++ API.
  - CoreML:
    - Open-source the CoreML delegate.
  - Misc:
    - Enable building Android TFLite targets on Windows.
    - Add support for `BatchMatMul`.
    - Add support for `half_pixel_centers` with `ResizeNearestNeighbor`.
    - Add 3D support for `BatchToSpaceND`.
    - Add 5D support for `BroadcastSub`, `Maximum`, `Minimum`, `Transpose` and `BroadcastDiv`.
    - Rename `kTfLiteActRelu1` to `kTfLiteActReluN1To1`.
    - Enable the flex delegate on the `tensorflow.lite.Interpreter` Python package.
    - Add `Buckettize`, `SparseCross` and `BoostedTreesBucketize` to the flex whitelist.
    - Add support for selective registration of flex ops.
    - Add missing kernels for flex delegate whitelisted ops.
    - Fix an issue when using direct `ByteBuffer` inputs with graphs that have dynamic shapes.
    - Fix error checking of supported operations in a model containing `HardSwish`.
- Packaging Support:
  - Added `tf.sysconfig.get_build_info()`. Returns a dict that describes the build environment of the currently installed TensorFlow package, e.g., the NVIDIA CUDA and NVIDIA CuDNN versions used when TensorFlow was built.
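For instance, the build metadata can be inspected as a plain dict; note that the exact keys present (CUDA/CuDNN versions, etc.) depend on how the installed wheel was built:

```python
import tensorflow as tf

info = tf.sysconfig.get_build_info()  # plain dict describing the build
# Print whatever the wheel recorded, e.g. CUDA/CuDNN versions on GPU builds.
for key, value in sorted(info.items()):
    print(key, "=", value)
```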
- Profiler:
  - Fix a subtle use-after-free issue in `XStatVisitor::RefValue()`.
- TPU Enhancements:
  - Adds 3D mesh support in TPU configuration ops.
  - Added TPU code for `FTRL` with `multiply_linear_by_lr`.
  - Silently adds a new file system registry at `gstpu`.
  - Support `restartType` in the cloud TPU client.
  - Depend on a specific version of `google-api-python-client`.
  - Fixes the apiclient import.
- Tracing and Debugging:
  - Add a `TFE_Py_Execute` traceme.
- XLA Support:
  - Implement stable `argmin` and `argmax`.
## Thanks to our Contributors

This release contains contributions from many people at Google, as well as:
902449@58880@bigcat_chen@ASIC, Abdul Baseer Khan, Abhineet Choudhary, Abolfazl Shahbazi, Adam Hillier, ag.ramesh, Agoniii, Ajay P, Alex Hoffman, Alexander Bayandin, Alexander Grund, Alexandre Abadie, Alexey Rogachevskiy, amoitra, Andrew Stevens, Angus-Luo, Anshuman Tripathy, Anush Elangovan, Artem Mavrin, Ashutosh Hathidara, autoih, Ayushman Kumar, ayushmankumar7, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben Barsdell, bhack, Bharat Raghunathan, Biagio Montaruli, Bigcat-Himax, blueyi, Bryan Cutler, Byambaa, Carlos Hernandez-Vaquero, Chen Lei, Chris Knorowski, Christian Clauss, chuanqiw, CuiYifeng, Daniel Situnayake, Daria Zhuravleva, Dayananda-V, Deven Desai, Devi Sandeep Endluri, Dmitry Zakharov, Dominic Jack, Duncan Riach, Edgar Liberis, Ehsan Toosi, ekuznetsov139, Elena Zhelezina, Eugene Kuznetsov, Eugene Mikhantiev, Evgenii Zheltonozhskii, Fabio Di Domenico, Fausto Morales, Fei Sun, feihugis, Felix E. Klee, flyingcat, Frederic Bastien, Fredrik Knutsson, frreiss, fsx950223, ganler, Gaurav Singh, Georgios Pinitas, Gian Marco Iodice, Giorgio Arena, Giuseppe Rossini, Gregory Keith, Guozhong Zhuang, gurushantj, Hahn Anselm, Harald Husum, Harjyot Bagga, Hristo Vrigazov, Ilya Persky, Ir1d, Itamar Turner-Trauring, jacco, Jake Tae, Janosh Riebesell, Jason Zaman, jayanth, Jeff Daily, Jens Elofsson, Jinzhe Zeng, JLZ, Jonas Skog, Jonathan Dekhtiar, Josh Meyer, Joshua Chia, Judd, justkw, Kaixi Hou, Kam D Kasravi, Kamil Rakoczy, Karol Gugala, Kayou, Kazuaki Ishizaki, Keith Smiley, Khaled Besrour, Kilaru Yasaswi Sri Chandra Gandhi, Kim, Young Soo, Kristian Hartikainen, Kwabena W. 
Agyeman, Leslie-Fang, Leslie-Fang-Intel, Li, Guizi, Lukas Geiger, Lutz Roeder, Måns Nilsson, Mahmoud Abuzaina, Manish, Marcel Koester, Marcin Sielski, marload, Martin Jul, Matt Conley, mdfaijul, Meng, Peng, Meteorix, Michael Käufl, Michael137, Milan Straka, Mitchell Vitez, Ml-0, Mokke Meguru, Mshr-H, nammbash, Nathan Luehr, naumkin, Neeraj Bhadani, ngc92, Nick Morgan, nihui, Niranjan Hasabnis, Niranjan Yadla, Nishidha Panpaliya, Oceania2018, oclyke, Ouyang Jin, OverLordGoldDragon, Owen Lyke, Patrick Hemmer, Paul Andrey, Peng Sun, periannath, Phil Pearl, Prashant Dandriyal, Prashant Kumar, Rahul Huilgol, Rajan Singh, Rajeshwar Reddy T, rangjiaheng, Rishit Dagli, Rohan Reddy, rpalakkal, rposts, Ruan Kunliang, Rushabh Vasani, Ryohei Ikegami, Semun Lee, Seo-Inyoung, Sergey Mironov, Sharada Shiddibhavi, ShengYang1, Shraiysh Vaishay, Shunya Ueta, shwetaoj, Siyavash Najafzade, Srinivasan Narayanamoorthy, Stephan Uphoff, storypku, sunchenggen, sunway513, Sven-Hendrik Haase, Swapnil Parekh, Tamas Bela Feher, Teng Lu, tigertang, tomas, Tomohiro Ubukata, tongxuan.ltx, Tony Tonev, Tzu-Wei Huang, Téo Bouvard, Uday Bondhugula, Vaibhav Jade, Vijay Tadikamalla, Vikram Dattu, Vincent Abriou, Vishnuvardhan Janapati, Vo Van Nghia, VoVAllen, Will Battel, William D. Irons, wyzhao, Xiaoming (Jason) Cui, Xiaoquan Kong, Xinan Jiang, xutianming, Yair Ehrenwald, Yasir Modak, Yasuhiro Matsumoto, Yixing Fu, Yong Tang, Yuan Tang, zhaozheng09, Zilin Zhu, zilinzhu, 张志豪
# Release 2.1.1

## Bug Fixes and Other Changes

- Updates `sqlite3` to `3.31.01` to handle CVE-2019-19880, CVE-2019-19244 and CVE-2019-19645
- Updates `curl` to `7.69.1` to handle CVE-2019-15601
- Updates `libjpeg-turbo` to `2.0.4` to handle CVE-2018-19664, CVE-2018-20330 and CVE-2019-13960
- Updates Apache Spark to `2.4.5` to handle CVE-2019-10099, CVE-2018-17190 and CVE-2018-11770
- Fixes a versioning bug which causes Keras layers from TF 1.x to be used instead of those from TF 2.x
# Release 2.0.2

## Bug Fixes and Other Changes

- Updates `sqlite3` to `3.31.01` to handle CVE-2019-19880, CVE-2019-19244 and CVE-2019-19645
- Updates `curl` to `7.69.1` to handle CVE-2019-15601
- Updates `libjpeg-turbo` to `2.0.4` to handle CVE-2018-19664, CVE-2018-20330 and CVE-2019-13960
- Updates Apache Spark to `2.4.5` to handle CVE-2019-10099, CVE-2018-17190 and CVE-2018-11770
# Release 1.15.3

## Bug Fixes and Other Changes

- Updates `sqlite3` to `3.31.01` to handle CVE-2019-19880, CVE-2019-19244 and CVE-2019-19645
- Updates `curl` to `7.69.1` to handle CVE-2019-15601
- Updates `libjpeg-turbo` to `2.0.4` to handle CVE-2018-19664, CVE-2018-20330 and CVE-2019-13960
- Updates Apache Spark to `2.4.5` to handle CVE-2019-10099, CVE-2018-17190 and CVE-2018-11770
# Release 2.2.0

TensorFlow 2.2 discontinues support for Python 2, previously announced as following Python 2's EOL on January 1, 2020.

Coinciding with this change, new releases of TensorFlow's Docker images provide Python 3 exclusively. Because all images now use Python 3, Docker tags containing `-py3` will no longer be provided and existing `-py3` tags like `latest-py3` will not be updated.
## Major Features and Improvements

- Replaced the scalar type for string tensors from `std::string` with `tensorflow::tstring`, which is now ABI stable.
- A new Profiler for TF 2 for CPU/GPU/TPU. It offers both device and host performance analysis, including input pipeline and TF Ops. Optimization advisory is provided whenever possible. Please see this tutorial and guide for usage guidelines.
- Export C++ functions to Python using `pybind11` as opposed to `SWIG` as a part of our deprecation-of-swig efforts.
- `tf.distribute`:
  - Support added for global sync `BatchNormalization` by using the newly added `tf.keras.layers.experimental.SyncBatchNormalization` layer. This layer will sync `BatchNormalization` statistics every step across all replicas taking part in sync training.
  - Performance improvements for GPU multi-worker distributed training using `tf.distribute.experimental.MultiWorkerMirroredStrategy`.
  - Update NVIDIA `NCCL` to `2.5.7-1` for better performance and performance tuning. Please see the NCCL developer guide for more information.
  - Support gradient `allreduce` in `float16`. See this example usage.
  - Experimental support for all-reduce gradient packing to allow overlapping gradient aggregation with backward-path computation.
  - Deprecated the `experimental_run_v2` method for distribution strategies and renamed the method `run` as it is no longer experimental.
  - Add CompositeTensor support for DistributedIterators. This should help prevent unnecessary function retracing and memory leaks.
- `tf.keras`:
  - `Model.fit` major improvements:
    - You can now use custom training logic with `Model.fit` by overriding `Model.train_step`.
    - Easily write state-of-the-art training loops without worrying about all of the features `Model.fit` handles for you (distribution strategies, callbacks, data formats, looping logic, etc.).
    - See the default `Model.train_step` for an example of what this function should look like. The same applies for validation and inference via `Model.test_step` and `Model.predict_step`.
    - SavedModel now uses its own `Model._saved_model_inputs_spec` attribute instead of relying on `Model.inputs` and `Model.input_names`, which are no longer set for subclass Models. This attribute is set in eager, `tf.function`, and graph modes. This gets rid of the need for users to manually call `Model._set_inputs` when using custom training loops (CTLs).
    - Dynamic shapes are supported for generators by calling the Model on the first batch we "peek" from the generator. This used to happen implicitly in `Model._standardize_user_data`. Long-term, a solution where the `DataAdapter` doesn't need to call the Model is probably preferable.
  - The SavedModel format now supports all Keras built-in layers (including metrics, preprocessing layers, and stateful RNN layers).
  - Update the Keras batch normalization layer to use the running mean and average computation in `fused_batch_norm`. You should see significant performance improvements when using `fused_batch_norm` in eager mode.
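A minimal sketch of overriding `Model.train_step`; to keep the example self-contained, the loss is computed inline as a plain MSE rather than through the compiled loss:

```python
import tensorflow as tf

class CustomModel(tf.keras.Model):
    """Tiny model whose training logic is customized via train_step."""

    def __init__(self):
        super().__init__()
        self.dense = tf.keras.layers.Dense(1)

    def call(self, inputs):
        return self.dense(inputs)

    def train_step(self, data):
        x, y = data
        with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = tf.reduce_mean(tf.square(y - y_pred))  # plain MSE
        grads = tape.gradient(loss, self.trainable_variables)
        self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
        return {"loss": loss}

model = CustomModel()
model.compile(optimizer="sgd")
history = model.fit(tf.ones([8, 4]), tf.ones([8, 1]), epochs=1, verbose=0)
```

Everything else (`fit`'s batching, callbacks, and distribution handling) continues to work around the overridden step.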
- `tf.lite`:
  - Enable the TFLite experimental new converter by default.
- XLA:
  - XLA now builds and works on Windows. All prebuilt packages come with XLA available.
  - XLA can be enabled for a `tf.function` with "compile or throw exception" semantics on CPU and GPU.
Breaking Changes
- `tf.keras`:
  - In `tf.keras.applications` the name of the "top" layer has been standardized to "predictions". This is only a problem if your code relies on the exact name of the layer.
  - The Huber loss function has been updated to be consistent with other Keras losses. It now computes the mean over the last axis of per-sample losses before applying the reduction function.
- AutoGraph no longer converts functions passed to `tf.py_function`, `tf.py_func` and `tf.numpy_function`.
- Deprecating `XLA_CPU` and `XLA_GPU` devices with this release.
- Increasing the minimum bazel version to build TF to 2.0.0 to use Bazel's `cc_experimental_shared_library`.
- Keras compile/fit behavior for functional and subclassed models has been unified. Model properties such as `metrics` and `metrics_names` will now be available only after training/evaluating the model on actual data for functional models. `metrics` will now include model `loss` and output losses. The `loss_functions` property has been removed from the model. This was an undocumented property that was accidentally public and has now been removed.
Known Caveats
- The current TensorFlow release now requires `gast` version 0.3.3.
Bug Fixes and Other Changes
- `tf.data`:
  - Removed `autotune_algorithm` from experimental optimization options.
- TF Core:
  - `tf.constant` always creates CPU tensors irrespective of the current device context.
  - Eager `TensorHandle`s maintain a list of mirrors for any copies to local or remote devices. This avoids any redundant copies due to op execution.
  - For `tf.Tensor` & `tf.Variable`, `.experimental_ref()` is no longer experimental and is available as simply `.ref()`.
  - `pfor`/`vectorized_map`: Added support for vectorizing 56 more ops. Vectorizing `tf.cond` is also supported now.
  - Set as much partial shape as we can infer statically within the gradient impl of the gather op.
  - Gradient of `tf.while_loop` emits a `StatelessWhile` op if `cond` and body functions are stateless. This allows multiple gradient while ops to run in parallel under distribution strategy.
  - Speed up `GradientTape` in eager mode by auto-generating the list of op inputs/outputs which are unused and hence not cached for gradient functions.
  - Support `back_prop=False` in `while_v2` but mark it as deprecated.
  - Improve error message when attempting to use `None` in data-dependent control flow.
  - Add `RaggedTensor.numpy()`.
  - Update `RaggedTensor.__getitem__` to preserve uniform dimensions & allow indexing into uniform dimensions.
  - Update `tf.expand_dims` to always insert the new dimension as a non-ragged dimension.
  - Update `tf.embedding_lookup` to use `partition_strategy` and `max_norm` when `ids` is ragged.
  - Allow `batch_dims==rank(indices)` in `tf.gather`.
  - Add support for bfloat16 in `tf.print`.
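The ragged-tensor changes above can be sketched briefly; the tensor values below are illustrative:

```python
import tensorflow as tf

rt = tf.ragged.constant([[1, 2], [3]])

# .numpy() is no longer experimental on RaggedTensor; it returns an
# object array whose elements are the (variable-length) row arrays.
values = rt.numpy()

# tf.expand_dims now always inserts the new dimension as a uniform
# (non-ragged) dimension, here giving shape [2, 1, None].
expanded = tf.expand_dims(rt, axis=1)
```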
- `tf.distribute`:
  - Support `embedding_column` with variable-length input features for `MultiWorkerMirroredStrategy`.
- `tf.keras`:
  - Added `experimental_aggregate_gradients` argument to `tf.keras.optimizer.Optimizer.apply_gradients`. This allows custom gradient aggregation and processing of aggregated gradients in a custom training loop.
  - Allow `pathlib.Path` paths for loading models via the Keras API.
- `tf.function`/AutoGraph:
  - AutoGraph is now available in `ReplicaContext.merge_call`, `Strategy.extended.update` and `Strategy.extended.update_non_slot`.
  - Experimental support for shape invariants has been enabled in `tf.function`. See the API docs for `tf.autograph.experimental.set_loop_options` for additional info.
  - AutoGraph error messages now exclude frames corresponding to APIs internal to AutoGraph.
  - Improve shape inference for `tf.function` input arguments to unlock more Grappler optimizations in TensorFlow 2.x.
  - Improve automatic control dependency management of resources by allowing resource reads to occur in parallel and synchronizing only on writes.
  - Fix execution order of multiple stateful calls to `experimental_run_v2` in `tf.function`.
  - You can now iterate over `RaggedTensor`s using a for loop inside `tf.function`.
- `tf.lite`:
  - Migrated the `tf.lite` C inference API out of experimental into lite/c.
  - Add an option to disallow `NNAPI` CPU / partial acceleration on Android 10.
  - TFLite Android AARs now include the C headers and APIs required to use TFLite from native code.
  - Refactors the delegate and delegate kernel sources to allow usage in the linter.
  - Limit delegated ops to actually supported ones if a device name is specified or `NNAPI` CPU Fallback is disabled.
  - TFLite now supports `tf.math.reciprocal1` op by lowering to `tf.div` op.
  - TFLite's unpack op now supports boolean tensor inputs.
  - Microcontroller and embedded code moved from experimental to the main TensorFlow Lite folder.
  - Check for large TFLite tensors.
  - Fix GPU delegate crash with C++17.
  - Add 5D support to TFLite `strided_slice`.
  - Fix error in delegation of `DEPTH_TO_SPACE` to `NNAPI` causing the op not to be accelerated.
  - Fix segmentation fault when running a model with LSTM nodes using the `NNAPI` delegate.
  - Fix `NNAPI` delegate failure when an operand for a Maximum/Minimum operation is a scalar.
  - Fix `NNAPI` delegate failure when the axis input for a reduce operation is a scalar.
  - Expose option to limit the number of partitions that will be delegated to `NNAPI`.
  - If a target accelerator is specified, use its feature level to determine operations to delegate instead of the SDK version.
- `tf.random`:
  - Various random number generation improvements:
    - Add a fast path for default `random_uniform`.
    - `random_seed` documentation improvement.
    - `RandomBinomial` broadcasts and appends the sample shape to the left rather than the right.
  - Added `tf.random.stateless_binomial`, `tf.random.stateless_gamma`, `tf.random.stateless_poisson`.
  - `tf.random.stateless_uniform` now supports unbounded sampling of `int` types.
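The stateless RNG additions take an explicit seed and are deterministic for a given seed, which a short sketch makes concrete (seed values and shapes below are arbitrary):

```python
import tensorflow as tf

seed = [7, 42]  # stateless RNG ops take an explicit seed of shape [2]

# Same op + same seed => identical draws, with no hidden global state.
a = tf.random.stateless_uniform([4], seed=seed)
b = tf.random.stateless_uniform([4], seed=seed)

# One of the newly added stateless distributions.
binom = tf.random.stateless_binomial(shape=[3], seed=seed,
                                     counts=10.0, probs=0.5)
```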
- Math and Linear Algebra:
  - Add `tf.linalg.LinearOperatorTridiag`.
  - Add `LinearOperatorBlockLowerTriangular`.
  - Add broadcasting support to `tf.linalg.triangular_solve` (#26204) and `tf.math.invert_permutation`.
  - Add `tf.math.sobol_sample` op.
  - Add `tf.math.xlog1py`.
  - Add `tf.math.special.{dawsn,expi,fresnel_cos,fresnel_sin,spence}`.
  - Add a Modified Discrete Cosine Transform (MDCT) and its inverse to `tf.signal`.
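Two of the new math ops can be sketched as follows; the particular inputs are illustrative:

```python
import math
import tensorflow as tf

# xlog1py(x, y) computes x * log1p(y), returning 0 when x == 0 even if
# log1p(y) would be -inf or nan.
v = tf.math.xlog1py(2.0, 1.0)    # 2 * log(2)
z = tf.math.xlog1py(0.0, -1.0)   # 0, not nan

# sobol_sample draws low-discrepancy points from the unit hypercube.
pts = tf.math.sobol_sample(dim=2, num_results=4)
```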
- TPU Enhancements:
  - Refactor `TpuClusterResolver` to move shared logic to a separate pip package.
  - Support configuring the TPU software version from the Cloud TPU client.
  - Allowed the TPU embedding weight decay factor to be multiplied by the learning rate.
- XLA Support:
  - Add standalone XLA AOT runtime target + relevant .cc sources to pip package.
  - Add check for memory alignment to `MemoryAllocation::MemoryAllocation()` on 32-bit ARM. This ensures a deterministic early exit instead of a hard-to-debug bus error later.
  - `saved_model_cli aot_compile_cpu` allows you to compile saved models to XLA header+object files and include them in your C++ programs.
  - Enable `Igamma`, `Igammac` for XLA.
- Deterministic Op Functionality:
  - The XLA reduction emitter is deterministic when the environment variable `TF_DETERMINISTIC_OPS` is set to "true" or "1". This extends deterministic `tf.nn.bias_add` back-prop functionality (and therefore also deterministic back-prop of bias-addition in Keras layers) to include when XLA JIT compilation is enabled.
  - Fix problem, when running on a CUDA GPU and when either environment variable `TF_DETERMINISTIC_OPS` or environment variable `TF_CUDNN_DETERMINISTIC` is set to "true" or "1", in which some layer configurations led to an exception with the message "No algorithm worked!".
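Opting in to the deterministic-op behavior described above is an environment-variable toggle; a minimal sketch (the variable must be set before TensorFlow initializes its kernels, i.e. before the first `import tensorflow` in the process):

```python
import os

# Request deterministic (reproducible) op behavior, e.g. for bias_add
# back-prop and cuDNN convolution/max-pooling, as described above.
os.environ["TF_DETERMINISTIC_OPS"] = "1"

# import tensorflow as tf  # import only after the variable is set
```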
- Tracing and Debugging:
  - Add source and destination names to the `_send` traceme to allow easier debugging.
  - Add traceme event to `fastpathexecute`.
- Other:
  - Fix an issue with `AUC.reset_states` for multi-label AUC (#35852).
  - Fix the TF upgrade script to not delete files when there is a parsing error and the output mode is `in-place`.
  - Move `tensorflow/core:framework/*_pyclif` rules to `tensorflow/core/framework:*_pyclif`.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
372046933, 8bitmp3, aaronhma, Abin Shahab, Aditya Patwardhan, Agoniii, Ahti Kitsik, Alan Yee, Albin Joy, Alex Hoffman, Alexander Grund, Alexandre E. Eichenberger, Amit Kumar Jaiswal, amoitra, Andrew Anderson, Angus-Luo, Anthony Barbier, Anton Kachatkou, Anuj Rawat, archis, Arpan-Dhatt, Arvind Sundararajan, Ashutosh Hathidara, autoih, Bairen Yi, Balint Cristian, Bas Aarts, BashirSbaiti, Basit Ayantunde, Ben Barsdell, Benjamin Gaillard, boron, Brett Koonce, Bryan Cutler, Christian Goll, Christian Sachs, Clayne Robison, comet, Daniel Falbel, Daria Zhuravleva, darsh8200, David Truby, Dayananda-V, deepakm, Denis Khalikov, Devansh Singh, Dheeraj R Reddy, Diederik Van Liere, Diego Caballero, Dominic Jack, dothinking, Douman, Drake Gens, Duncan Riach, Ehsan Toosi, ekuznetsov139, Elena Zhelezina, elzino, Ending2015a, Eric Schweitz, Erik Zettel, Ethan Saadia, Eugene Kuznetsov, Evgeniy Zheltonozhskiy, Ewout Ter Hoeven, exfalso, FAIJUL, Fangjun Kuang, Fei Hu, Frank Laub, Frederic Bastien, Fredrik Knutsson, frreiss, Frédéric Rechtenstein, fsx950223, Gaurav Singh, gbaned, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, Hans Gaiser, Hans Pabst, Haoyu Wu, Harry Slatyer, hsahovic, Hugo, Hugo Sjöberg, IrinaM21, jacco, Jake Tae, Jean-Denis Lesage, Jean-Michel Gorius, Jeff Daily, Jens Elofsson, Jerry Shih, jerryyin, Jin Mingjian, Jinjing Zhou, JKIsaacLee, jojimonv, Jonathan Dekhtiar, Jose Ignacio Gomez, Joseph-Rance, Judd, Julian Gross, Kaixi Hou, Kaustubh Maske Patil, Keunwoo Choi, Kevin Hanselman, Khor Chean Wei, Kilaru Yasaswi Sri Chandra Gandhi, Koan-Sin Tan, Koki Ibukuro, Kristian Holsheimer, kurileo, Lakshay Tokas, Lee Netherton, leike666666, Leslie-Fang-Intel, Li, Guizi, LIUJIAN435, Lukas Geiger, Lyo Nguyen, madisetti, Maher Jendoubi, Mahmoud Abuzaina, Manuel Freiberger, Marcel Koester, Marco Jacopo Ferrarotti, Markus Franke, marload, Mbah-Javis, mbhuiyan, Meng Zhang, Michael Liao, MichaelKonobeev, Michal Tarnowski, Milan Straka, minoring, Mohamed 
Nour Abouelseoud, MoussaMM, Mrinal Jain, mrTsjolder, Måns Nilsson, Namrata Bhave, Nicholas Gao, Niels Ole Salscheider, nikochiko, Niranjan Hasabnis, Nishidha Panpaliya, nmostafa, Noah Trenaman, nuka137, Officium, Owen L - Sfe, Pallavi G, Paul Andrey, Peng Sun, Peng Wu, Phil Pearl, PhilipMay, pingsutw, Pooya Davoodi, PragmaTwice, pshiko, Qwerty71, R Gomathi, Rahul Huilgol, Richard Xiao, Rick Wierenga, Roberto Rosmaninho, ruchit2801, Rushabh Vasani, Sami, Sana Damani, Sarvesh Dubey, Sasan Jafarnejad, Sergii Khomenko, Shane Smiskol, Shaochen Shi, sharkdtu, Shawn Presser, ShengYang1, Shreyash Patodia, Shyam Sundar Dhanabalan, Siju Samuel, Somyajit Chakraborty Sam, Srihari Humbarwadi, srinivasan.narayanamoorthy, Srishti Yadav, Steph-En-M, Stephan Uphoff, Stephen Mugisha, SumanSudhir, Taehun Kim, Tamas Bela Feher, TengLu, Tetragramm, Thierry Herrmann, Tian Jin, tigertang, Tom Carchrae, Tom Forbes, Trent Lo, Victor Peng, vijayphoenix, Vincent Abriou, Vishal Bhola, Vishnuvardhan Janapati, vladbataev, VoVAllen, Wallyss Lima, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, William Zhang, Xiaoming (Jason) Cui, Xiaoquan Kong, Xinan Jiang, Yasir Modak, Yasuhiro Matsumoto, Yaxun (Sam) Liu, Yong Tang, Ytyt-Yt, yuan, Yuan Mingshuai, Yuan Tang, Yuki Ueda, Yusup, zhangshijin, zhuwenxi
Release 2.0.1
Bug Fixes and Other Changes
- Fixes a security vulnerability where converting a Python string to a `tf.float16` value produces a segmentation fault (CVE-2020-5215)
- Updates `curl` to `7.66.0` to handle CVE-2019-5482 and CVE-2019-5481
- Updates `sqlite3` to `3.30.01` to handle CVE-2019-19646, CVE-2019-19645 and CVE-2019-16168
Release 1.15.2
Bug Fixes and Other Changes
- Fixes a security vulnerability where converting a Python string to a `tf.float16` value produces a segmentation fault (CVE-2020-5215)
- Updates `curl` to `7.66.0` to handle CVE-2019-5482 and CVE-2019-5481
- Updates `sqlite3` to `3.30.01` to handle CVE-2019-19646, CVE-2019-19645 and CVE-2019-16168
Release 2.1.0
TensorFlow 2.1 will be the last TF release supporting Python 2. Python 2 support officially ends on January 1, 2020. As announced earlier, TensorFlow will also stop supporting Python 2 starting January 1, 2020, and no more releases are expected in 2019.
Major Features and Improvements
- The `tensorflow` pip package now includes GPU support by default (same as `tensorflow-gpu`) for both Linux and Windows. This runs on machines with and without NVIDIA GPUs. `tensorflow-gpu` is still available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size.
- Windows users: Officially-released `tensorflow` Pip packages are now built with Visual Studio 2019 version 16.4 in order to take advantage of the new `/d2ReducedOptimizeHugeFunctions` compiler flag. To use these new packages, you must install "Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019", available from Microsoft's website here.
  - This does not change the minimum required version for building TensorFlow from source on Windows, but builds enabling `EIGEN_STRONG_INLINE` can take over 48 hours to compile without this flag. Refer to `configure.py` for more information about `EIGEN_STRONG_INLINE` and `/d2ReducedOptimizeHugeFunctions`.
  - If either of the required DLLs, `msvcp140.dll` (old) or `msvcp140_1.dll` (new), is missing on your machine, `import tensorflow` will print a warning message.
- The `tensorflow` pip package is built with CUDA 10.1 and cuDNN 7.6.
- `tf.keras`:
  - Experimental support for mixed precision is available on GPUs and Cloud TPUs. See the usage guide.
  - Introduced the `TextVectorization` layer, which takes as input raw strings and takes care of text standardization, tokenization, n-gram generation, and vocabulary indexing. See this end-to-end text classification example.
  - Keras `.compile`, `.fit`, `.evaluate` and `.predict` are allowed to be outside of the DistributionStrategy scope, as long as the model was constructed inside of a scope.
  - Experimental support for Keras `.compile`, `.fit`, `.evaluate`, and `.predict` is available for Cloud TPUs and Cloud TPU pods, for all types of Keras models (sequential, functional and subclassing models).
  - Automatic outside compilation is now enabled for Cloud TPUs. This allows `tf.summary` to be used more conveniently with Cloud TPUs.
  - Dynamic batch sizes with DistributionStrategy and Keras are supported on Cloud TPUs.
  - Support for `.fit`, `.evaluate`, `.predict` on TPU using numpy data, in addition to `tf.data.Dataset`.
  - Keras reference implementations for many popular models are available in the TensorFlow Model Garden.
- `tf.data`:
  - Changes rebatching for `tf.data` datasets + DistributionStrategy for better performance. Note that the dataset also behaves slightly differently, in that the rebatched dataset cardinality will always be a multiple of the number of replicas.
  - `tf.data.Dataset` now supports automatic data distribution and sharding in distributed environments, including on TPU pods.
  - Distribution policies for `tf.data.Dataset` can now be tuned with 1. `tf.data.experimental.AutoShardPolicy(OFF, AUTO, FILE, DATA)` 2. `tf.data.experimental.ExternalStatePolicy(WARN, IGNORE, FAIL)`
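Tuning the auto-shard policy goes through `tf.data.Options`; a minimal sketch (the dataset contents are arbitrary):

```python
import tensorflow as tf

dataset = tf.data.Dataset.range(8).batch(2)

# Ask the runtime to shard by data (each worker reads every file but
# processes a disjoint slice of elements).
options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.DATA)
dataset = dataset.with_options(options)
```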
- `tf.debugging`:
  - Add `tf.debugging.enable_check_numerics()` and `tf.debugging.disable_check_numerics()` to help debug the root causes of issues involving infinities and `NaN`s.
- `tf.distribute`:
  - Custom training loop support on TPUs and TPU pods is available through `strategy.experimental_distribute_dataset`, `strategy.experimental_distribute_datasets_from_function`, `strategy.experimental_run_v2`, `strategy.reduce`.
  - Support for a global distribution strategy through `tf.distribute.experimental_set_strategy()`, in addition to `strategy.scope()`.
- TensorRT:
  - TensorRT 6.0 is now supported and enabled by default. This adds support for more TensorFlow ops including Conv3D, Conv3DBackpropInputV2, AvgPool3D, MaxPool3D, ResizeBilinear, and ResizeNearestNeighbor. In addition, the TensorFlow-TensorRT python conversion API is exported as `tf.experimental.tensorrt.Converter`.
- Environment variable `TF_DETERMINISTIC_OPS` has been added. When set to "true" or "1", this environment variable makes `tf.nn.bias_add` operate deterministically (i.e. reproducibly), but currently only when XLA JIT compilation is not enabled. Setting `TF_DETERMINISTIC_OPS` to "true" or "1" also makes cuDNN convolution and max-pooling operate deterministically. This makes Keras Conv\*D and MaxPool\*D layers operate deterministically in both the forward and backward directions when running on a CUDA-enabled GPU.
Breaking Changes
- Deletes `Operation.traceback_with_start_lines`, for which we know of no usages.
- Removed `id` from `tf.Tensor.__repr__()`, as `id` is not useful other than for internal debugging.
- Some `tf.assert_*` methods now raise assertions at operation creation time if the input tensors' values are known at that time, not during the `session.run()`. This only changes behavior when the graph execution would have resulted in an error. When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
- The following APIs are no longer experimental: `tf.config.list_logical_devices`, `tf.config.list_physical_devices`, `tf.config.get_visible_devices`, `tf.config.set_visible_devices`, `tf.config.get_logical_device_configuration`, `tf.config.set_logical_device_configuration`.
- `tf.config.experimental.VirtualDeviceConfiguration` has been renamed to `tf.config.LogicalDeviceConfiguration`.
- `tf.config.experimental_list_devices` has been removed; please use `tf.config.list_logical_devices`.
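The now-stable device-configuration APIs compose as in this sketch, which splits the single physical CPU into two logical devices (the device count is arbitrary; this must run before the TensorFlow runtime initializes):

```python
import tensorflow as tf

# list_physical_devices does not initialize the runtime, so the logical
# device configuration can still be changed afterwards.
cpu = tf.config.list_physical_devices("CPU")[0]
tf.config.set_logical_device_configuration(
    cpu,
    [tf.config.LogicalDeviceConfiguration(),
     tf.config.LogicalDeviceConfiguration()])

# Listing logical devices initializes the runtime with that configuration.
logical_cpus = tf.config.list_logical_devices("CPU")
```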
Bug Fixes and Other Changes
- `tf.data`:
  - Fixes concurrency issue with `tf.data.experimental.parallel_interleave` with `sloppy=True`.
  - Add `tf.data.experimental.dense_to_ragged_batch()`.
  - Extend `tf.data` parsing ops to support `RaggedTensor`s.
- `tf.distribute`:
  - Fix issue where GRU would crash or give incorrect output when a `tf.distribute.Strategy` was used.
- `tf.estimator`:
  - Added option in `tf.estimator.CheckpointSaverHook` to not save the `GraphDef`.
  - Moving the checkpoint reader from swig to pybind11.
- `tf.keras`:
  - Export `depthwise_conv2d` in `tf.keras.backend`.
  - In Keras Layers and Models, Variables in `trainable_weights`, `non_trainable_weights`, and `weights` are explicitly deduplicated.
  - Keras `model.load_weights` now accepts `skip_mismatch` as an argument. This was available in external Keras, and has now been copied over to `tf.keras`.
  - Fix the input shape caching behavior of Keras convolutional layers.
  - `Model.fit_generator`, `Model.evaluate_generator`, `Model.predict_generator`, `Model.train_on_batch`, `Model.test_on_batch`, and `Model.predict_on_batch` methods now respect the `run_eagerly` property, and will correctly run using `tf.function` by default. Note that `Model.fit_generator`, `Model.evaluate_generator`, and `Model.predict_generator` are deprecated endpoints. They are subsumed by `Model.fit`, `Model.evaluate`, and `Model.predict`, which now support generators and Sequences.
- `tf.lite`:
  - Legalization for `NMS` ops in TFLite.
  - Add `narrow_range` and `axis` to `quantize_v2` and `dequantize` ops.
  - Added support for `FusedBatchNormV3` in converter.
  - Add an `errno`-like field to the `NNAPI` delegate for detecting `NNAPI` errors for fallback behaviour.
  - Refactors `NNAPI` Delegate to support detailed reason why an operation is not accelerated.
  - Converts hardswish subgraphs into atomic ops.
- Other:
  - Critical stability updates for TPUs, especially in cases where the XLA compiler produces compilation errors.
  - TPUs can now be re-initialized multiple times, using `tf.tpu.experimental.initialize_tpu_system`.
  - Add `RaggedTensor.merge_dims()`.
  - Added new `uniform_row_length` row-partitioning tensor to `RaggedTensor`.
  - Add `shape` arg to `RaggedTensor.to_tensor`; improve speed of `RaggedTensor.to_tensor`.
  - `tf.io.parse_sequence_example` and `tf.io.parse_single_sequence_example` now support ragged features.
  - Fix `while_v2` with variables in custom gradient.
  - Support taking gradients of V2 `tf.cond` and `tf.while_loop` using `LookupTable`.
  - Fix bug where `vectorized_map` failed on inputs with unknown static shape.
  - Add preliminary support for sparse CSR matrices.
  - Tensor equality with `None` now behaves as expected.
  - Make calls to `tf.function(f)()`, `tf.function(f).get_concrete_function` and `tf.function(f).get_initialization_function` thread-safe.
  - Extend `tf.identity` to work with CompositeTensors (such as SparseTensor).
  - Added more `dtypes` and zero-sized inputs to `Einsum` op and improved its performance.
  - Enable multi-worker `NCCL` `all-reduce` inside functions executing eagerly.
  - Added complex128 support to `RFFT`, `RFFT2D`, `RFFT3D`, `IRFFT`, `IRFFT2D`, and `IRFFT3D`.
  - Add `pfor` converter for `SelfAdjointEigV2`.
  - Add `tf.math.ndtri` and `tf.math.erfinv`.
  - Add `tf.config.experimental.enable_mlir_bridge` to allow using the MLIR compiler bridge in eager mode.
  - Added support for MatrixSolve on Cloud TPU / XLA.
  - Added `tf.autodiff.ForwardAccumulator` for forward-mode autodiff.
  - Add `LinearOperatorPermutation`.
  - A few performance optimizations on `tf.reduce_logsumexp`.
  - Added multilabel handling to `AUC` metric.
  - Optimization on `zeros_like`.
  - Dimension constructor now requires `None` or types with an `__index__` method.
  - Add `tf.random.uniform` microbenchmark.
  - Use `_protogen` suffix for proto library targets instead of `_cc_protogen` suffix.
  - Moving the checkpoint reader from `swig` to `pybind11`.
  - `tf.device` & `MirroredStrategy` now support passing in a `tf.config.LogicalDevice`.
  - If you're building TensorFlow from source, consider using bazelisk to automatically download and use the correct Bazel version. Bazelisk reads the `.bazelversion` file at the root of the project directory.
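Two of the ragged-tensor additions above, `RaggedTensor.merge_dims()` and the `shape` argument to `to_tensor`, can be sketched with illustrative values:

```python
import tensorflow as tf

rt = tf.ragged.constant([[[1, 2], [3]], [[4, 5]]])

# merge_dims collapses the outer..inner axes into a single ragged axis.
flat = rt.merge_dims(outer_axis=0, inner_axis=1)

# to_tensor now accepts a target shape; missing entries are filled with
# default_value.
dense = tf.ragged.constant([[1, 2], [3]]).to_tensor(default_value=0,
                                                    shape=[2, 3])
```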
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Aaron Ma, AbdüLhamit Yilmaz, Abhai Kollara, aflc, Ag Ramesh, Albert Z. Guo, Alex Torres, amoitra, Andrii Prymostka, angeliand, Anshuman Tripathy, Anthony Barbier, Anton Kachatkou, Anubh-V, Anuja Jakhade, Artem Ryabov, autoih, Bairen Yi, Bas Aarts, Basit Ayantunde, Ben Barsdell, Bhavani Subramanian, Brett Koonce, candy.dc, Captain-Pool, caster, cathy, Chong Yan, Choong Yin Thong, Clayne Robison, Colle, Dan Ganea, David Norman, David Refaeli, dengziming, Diego Caballero, Divyanshu, djshen, Douman, Duncan Riach, EFanZh, Elena Zhelezina, Eric Schweitz, Evgenii Zheltonozhskii, Fei Hu, fo40225, Fred Reiss, Frederic Bastien, Fredrik Knutsson, fsx950223, fwcore, George Grzegorz Pawelczak, George Sterpu, Gian Marco Iodice, Giorgio Arena, giuros01, Gomathi Ramamurthy, Guozhong Zhuang, Haifeng Jin, Haoyu Wu, HarikrishnanBalagopal, HJYOO, Huang Chen-Yi, Ilham Firdausi Putra, Imran Salam, Jared Nielsen, Jason Zaman, Jasper Vicenti, Jeff Daily, Jeff Poznanovic, Jens Elofsson, Jerry Shih, jerryyin, Jesper Dramsch, jim.meyer, Jongwon Lee, Jun Wan, Junyuan Xie, Kaixi Hou, kamalkraj, Kan Chen, Karthik Muthuraman, Keiji Ariyama, Kevin Rose, Kevin Wang, Koan-Sin Tan, kstuedem, Kwabena W. 
Agyeman, Lakshay Tokas, latyas, Leslie-Fang-Intel, Li, Guizi, Luciano Resende, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manuel Freiberger, Mark Ryan, Martin Mlostek, Masaki Kozuki, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Muhwan Kim, Nagy Mostafa, nammbash, Nathan Luehr, Nathan Wells, Niranjan Hasabnis, Oleksii Volkovskyi, Olivier Moindrot, olramde, Ouyang Jin, OverLordGoldDragon, Pallavi G, Paul Andrey, Paul Wais, pkanwar23, Pooya Davoodi, Prabindh Sundareson, Rajeshwar Reddy T, Ralovich, Kristof, Refraction-Ray, Richard Barnes, richardbrks, Robert Herbig, Romeo Kienzler, Ryan Mccormick, saishruthi, Saket Khandelwal, Sami Kama, Sana Damani, Satoshi Tanaka, Sergey Mironov, Sergii Khomenko, Shahid, Shawn Presser, ShengYang1, Siddhartha Bagaria, Simon Plovyt, skeydan, srinivasan.narayanamoorthy, Stephen Mugisha, sunway513, Takeshi Watanabe, Taylor Jakobson, TengLu, TheMindVirus, ThisIsIsaac, Tim Gates, Timothy Liu, Tomer Gafner, Trent Lo, Trevor Hickey, Trevor Morris, vcarpani, Wei Wang, Wen-Heng (Jack) Chung, wenshuai, Wenshuai-Xiaomi, wenxizhu, william, William D. Irons, Xinan Jiang, Yannic, Yasir Modak, Yasuhiro Matsumoto, Yong Tang, Yongfeng Gu, Youwei Song, Zaccharie Ramzi, Zhang, Zhenyu Guo, 王振华 (Zhenhua Wang), 韩董, 이중건 Isaac Lee
Release 1.15.0
This is the last 1.x release for TensorFlow. We do not expect to update the 1.x branch with features, although we will issue patch releases to fix vulnerabilities for at least one year.
Major Features and Improvements
- As announced, the `tensorflow` pip package will by default include GPU support (same as `tensorflow-gpu` now) for the platforms for which we currently have GPU support (Linux and Windows). It will work on machines with and without Nvidia GPUs. `tensorflow-gpu` will still be available, and CPU-only packages can be downloaded at `tensorflow-cpu` for users who are concerned about package size.
- TensorFlow 1.15 contains a complete implementation of the 2.0 API in its `compat.v2` module. It contains a copy of the 1.15 main module (without `contrib`) in the `compat.v1` module. TensorFlow 1.15 is able to emulate 2.0 behavior using the `enable_v2_behavior()` function. This enables writing forward compatible code: by explicitly importing either `tensorflow.compat.v1` or `tensorflow.compat.v2`, you can ensure that your code works without modifications against an installation of 1.15 or 2.0.
- EagerTensor now supports the numpy buffer interface for tensors.
- Add toggles `tf.enable_control_flow_v2()` and `tf.disable_control_flow_v2()` for enabling/disabling v2 control flow.
- Enable v2 control flow as part of `tf.enable_v2_behavior()` and `TF2_BEHAVIOR=1`.
- AutoGraph translates Python control flow into TensorFlow expressions, allowing users to write regular Python inside `tf.function`-decorated functions. AutoGraph is also applied in functions used with `tf.data`, `tf.distribute` and `tf.keras` APIs.
- Adds `enable_tensor_equality()`, which switches the behavior such that:
  - Tensors are no longer hashable.
  - Tensors can be compared with `==` and `!=`, yielding a Boolean Tensor with element-wise comparison results. This will be the default behavior in 2.0.
Breaking Changes
- TensorFlow code now produces 2 different pip packages: `tensorflow_core` containing all the code (in the future it will contain only the private implementation) and `tensorflow` which is a virtual pip package doing forwarding to `tensorflow_core` (and in the future will contain only the public API of TensorFlow). We don't expect this to be breaking, unless you were importing directly from the implementation.
- TensorFlow 1.15 is built using devtoolset7 (GCC7) on Ubuntu 16. This may lead to ABI incompatibilities with extensions built against earlier versions of TensorFlow.
- Deprecated the use of `constraint=` and `.constraint` with ResourceVariable.
- `tf.keras`:
  - `OMP_NUM_THREADS` is no longer used by the default Keras config. To configure the number of threads, use the `tf.config.threading` APIs.
  - `tf.keras.model.save_model` and `model.save` now default to saving a TensorFlow SavedModel.
  - `keras.backend.resize_images` (and consequently, `keras.layers.Upsampling2D`) behavior has changed: a bug in the resizing implementation was fixed.
  - Layers now default to `float32`, and automatically cast their inputs to the layer's dtype. If you had a model that used `float64`, it will probably silently use `float32` in TensorFlow 2, and a warning will be issued that starts with Layer "layer-name" is casting an input tensor from dtype float64 to the layer's dtype of float32. To fix, either set the default dtype to float64 with `tf.keras.backend.set_floatx('float64')`, or pass `dtype='float64'` to each of the Layer constructors. See `tf.keras.layers.Layer` for more information.
- Some `tf.assert_*` methods now raise assertions at operation creation time (i.e. when this Python line executes) if the input tensors' values are known at that time, not during the `session.run()`. When this happens, a noop is returned and the input tensors are marked non-feedable. In other words, if they are used as keys in the `feed_dict` argument to `session.run()`, an error will be raised. Also, because some assert ops don't make it into the graph, the graph structure changes. A different graph can result in different per-op random seeds when they are not given explicitly (most often).
Bug Fixes and Other Changes
- `tf.estimator`:
  - `tf.keras.estimator.model_to_estimator` now supports exporting to the `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  - Fix tests in canned estimators.
  - Expose Head as public API.
  - Fixes critical bugs that help with `DenseFeatures` usability in TF2.
- `tf.data`:
  - Promoting `unbatch` from experimental to core API.
  - Adding support for datasets as inputs to `from_tensors` and `from_tensor_slices` and batching and unbatching of nested datasets.
- `tf.keras`:
  - `tf.keras.estimator.model_to_estimator` now supports exporting to the `tf.train.Checkpoint` format, which allows the saved checkpoints to be compatible with `model.load_weights`.
  - Saving a Keras Model using `tf.saved_model.save` now saves the list of variables, trainable variables, regularization losses, and the call function.
  - Deprecated `tf.keras.experimental.export_saved_model` and `tf.keras.experimental.function`. Please use `tf.keras.models.save_model(..., save_format='tf')` and `tf.keras.models.load_model` instead.
  - Add an `implementation=3` mode for `tf.keras.layers.LocallyConnected2D` and `tf.keras.layers.LocallyConnected1D` layers using `tf.SparseTensor` to store weights, allowing a dramatic speedup for large sparse models.
  - Enable the Keras compile API `experimental_run_tf_function` flag by default. This flag enables a single training/eval/predict execution path. With this: 1. All input types are converted to `Dataset`. 2. When a distribution strategy is not specified, this goes through the no-op distribution strategy path. 3. Execution is wrapped in `tf.function` unless `run_eagerly=True` is set in compile.
  - Raise error if `batch_size` argument is used when input is dataset/generator/keras sequence.
- `tf.lite`:
  - Add `GATHER` support to NN API delegate.
  - tflite object detection script has a debug mode.
  - Add delegate support for `QUANTIZE`.
  - Added evaluation script for COCO minival.
  - Add delegate support for `QUANTIZED_16BIT_LSTM`.
  - Converts hardswish subgraphs into atomic ops.
- Add support for defaulting the value of
cycle_length
argument oftf.data.Dataset.interleave
to the number of schedulable CPU cores. parallel_for
: Add converter forMatrixDiag
.- Add
narrow_range
attribute toQuantizeAndDequantizeV2
and V3. - Added new op:
tf.strings.unsorted_segment_join
. - Add HW acceleration support for
topK_v2
. - Add new
TypeSpec
classes. - CloudBigtable version updated to v0.10.0.
- Expose
Head
as public API. - Update docstring for gather to properly describe the non-empty
batch_dims
case. - Added
tf.sparse.from_dense
utility function. - Improved ragged tensor support in
TensorFlowTestCase
. - Makes the a-normal form transformation in Pyct configurable as to which nodes are converted to variables and which are not.
- `ResizeInputTensor` now works for all delegates.
- Add `EXPAND_DIMS` support to the NN API delegate (TEST: expand_dims_test).
- `tf.cond` emits a `StatelessIf` op if the branch functions are stateless and do not touch any resources.
- `tf.cond`, `tf.while`, and `if` and `while` in AutoGraph now accept a non-scalar predicate if it has a single element. This does not affect non-V2 control flow.
- `tf.while_loop` emits a `StatelessWhile` op if the cond and body functions are stateless and do not touch any resources.
- Refactors code in Quant8 LSTM support to reduce TFLite binary size.
- Add support for local soft device placement for eager ops.
- Add HW acceleration support for `LogSoftMax`.
- Added a function `nested_value_rowids` for ragged tensors.
- Add a guard to avoid acceleration of L2 Normalization with input rank != 4.
- Add `tf.math.cumulative_logsumexp` operation.
- Add `tf.ragged.stack`.
- Fix memory allocation problem when calling `AddNewInputConstantTensor`.
- Delegate application failure leaves the interpreter in a valid state.
- Add a check for correct memory alignment to `MemoryAllocation::MemoryAllocation()`.
- Extracts `NNAPIDelegateKernel` from nnapi_delegate.cc.
- Added support for `FusedBatchNormV3` in converter.
- Added a ragged-to-dense op for directly calculating tensors.
- Fix accidental quadratic graph construction cost in graph-mode `tf.gradients()`.
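The new `tf.math.cumulative_logsumexp` computes the running `log(exp(x[0]) + ... + exp(x[i]))` along an axis without overflowing for large inputs. A numerically stable pure-Python sketch of the 1-D case (illustrative only, not the TensorFlow kernel):

```python
import math

def cumulative_logsumexp(xs):
    """Running log-sum-exp: out[i] = log(sum(exp(xs[:i+1]))), computed stably."""
    out = []
    running = None  # log-sum-exp of the elements seen so far
    for x in xs:
        if running is None:
            running = x
        else:
            # Factor out the max so neither exp() can overflow.
            m = max(running, x)
            running = m + math.log(math.exp(running - m) + math.exp(x - m))
        out.append(running)
    return out

# For [0, 0] the second entry is log(e^0 + e^0) = log(2).
print(cumulative_logsumexp([0.0, 0.0]))
```

A naive `log(sum(exp(...)))` would overflow for inputs like `[1000.0, 1000.0]`; the max-shifted form returns `1000 + log(2)` instead.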
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
a6802739, Aaron Ma, Abdullah Selek, Abolfazl Shahbazi, Ag Ramesh, Albert Z. Guo, Albin Joy, Alex Itkes, Alex Sergeev, Alexander Pivovarov, Alexey Romanov, alhkad, Amit Srivastava, amoitra, Andrew Lihonosov, Andrii Prymostka, Anuj Rawat, Astropeak, Ayush Agrawal, Bairen Yi, Bas Aarts, Bastian Eichenberger, Ben Barsdell, Benjamin Peterson, bhack, Bharat Raghunathan, Bhavani Subramanian, Bryan Cutler, candy.dc, Cao Zongyan, Captain-Pool, Casper Da Costa-Luis, Chen Guoyin, Cheng Chang, chengchingwen, Chong Yan, Choong Yin Thong, Christopher Yeh, Clayne Robison, Coady, Patrick, Dan Ganea, David Norman, Denis Khalikov, Deven Desai, Diego Caballero, Duncan Dean, Duncan Riach, Dwight J Lyle, Eamon Ito-Fisher, eashtian3, EFanZh, ejot, Elroy Ashtian Jr, Eric Schweitz, Fangjun Kuang, Fei Hu, fo40225, formath, Fred Reiss, Frederic Bastien, Fredrik Knutsson, G. Hussain Chinoy, Gabriel, gehring, George Grzegorz Pawelczak, Gianluca Varisco, Gleb Popov, Greg Peatfield, Guillaume Klein, Gurpreet Singh, Gustavo Lima Chaves, haison, Haraldur TóMas HallgríMsson, HarikrishnanBalagopal, HåKon Sandsmark, I-Hong, Ilham Firdausi Putra, Imran Salam, Jason Zaman, Jason Zavaglia, jayhpark530, jefby, Jeff Daily, Jeffrey Poznanovic, Jekyll Lai, Jeroen BéDorf, Jerry Shih, jerryyin, jiakai, JiangXIAO, Joe Bowser, Joel Shapiro, Johan Gunnarsson, Jojimon Varghese, Joon, Josh Beal, Julian Niedermeier, Jun Wan, Junqin Zhang, Junyuan Xie, Justin Tunis, Kaixi Hou, Karl Lessard, Karthik Muthuraman, Kbhute-Ibm, khanhlvg, Koock Yoon, kstuedem, Kyuwon Kim, Lakshay Tokas, leike666666, leonard951, Leslie-Fang, Leslie-Fang-Intel, Li, Guizi, Lukas Folle, Lukas Geiger, Mahmoud Abuzaina, Manraj Singh Grover, Margaret Maynard-Reid, Mark Ryan, Matt Conley, Matthew Bentham, Matthew Denton, mbhuiyan, mdfaijul, Mei Jie, merturl, MichaelKonobeev, Michal W. 
Tarnowski, minds, mpppk, musikisomorphie, Nagy Mostafa, Nayana Thorat, Neil, Niels Ole Salscheider, Niklas SilfverströM, Niranjan Hasabnis, ocjosen, olramde, Pariksheet Pinjari, Patrick J. Lopresti, Patrik Gustavsson, per1234, PeterLee, Phan Van Nguyen Duc, Phillip Kravtsov, Pooya Davoodi, Pranav Marathe, Putra Manggala, Qingqing Cao, Rajeshwar Reddy T, Ramon ViñAs, Rasmus Diederichsen, Reuben Morais, richardbrks, robert, RonLek, Ryan Jiang, saishruthi, Saket Khandelwal, Saleem Abdulrasool, Sami Kama, Sana-Damani, Sergii Khomenko, Severen Redwood, Shubham Goyal, Sigrid Keydana, Siju Samuel, sleighsoft, smilu97, Son Tran, Srini511, srinivasan.narayanamoorthy, Sumesh Udayakumaran, Sungmann Cho, Tae-Hwan Jung, Taehoon Lee, Takeshi Watanabe, TengLu, terryky, TheMindVirus, ThisIsIsaac, Till Hoffmann, Timothy Liu, Tomer Gafner, Tongxuan Liu, Trent Lo, Trevor Morris, Uday Bondhugula, Vasileios Lioutas, vbvg2008, Vishnuvardhan Janapati, Vivek Suryamurthy, Wei Wang, Wen-Heng (Jack) Chung, wenxizhu, William D. Irons, winstonq, wyzhao, Xiaoming (Jason) Cui, Xinan Jiang, Xinping Wang, Yann-Yy, Yasir Modak, Yong Tang, Yongfeng Gu, Yuchen Ying, Yuxin Wu, zyeric, 王振华 (Zhenhua Wang)
Release 2.0.0
Major Features and Improvements
TensorFlow 2.0 focuses on simplicity and ease of use, featuring updates like:
- Easy model building with Keras and eager execution.
- Robust model deployment in production on any platform.
- Powerful experimentation for research.
- API simplification by reducing duplication and removing deprecated endpoints.
For details on best practices with 2.0, see the Effective 2.0 guide.
For information on upgrading your