Releases: tensorflow/tensorflow
TensorFlow 2.14.0
Release 2.14.0
TensorFlow
Breaking Changes
- Support for Python 3.8 has been removed starting with TF 2.14. The TensorFlow 2.13.1 patch release will still have Python 3.8 support.
- `tf.Tensor`
  - The class hierarchy for `tf.Tensor` has changed: there are now explicit `EagerTensor` and `SymbolicTensor` classes for eager execution and `tf.function`, respectively. Users who relied on the exact type of a Tensor (e.g. `type(t) == tf.Tensor`) will need to update their code to use `isinstance(t, tf.Tensor)`. The `tf.is_symbolic_tensor` helper added in 2.13 may be used when it is necessary to determine whether a value is specifically a symbolic tensor.
- `tf.compat.v1.Session`
  - `tf.compat.v1.Session.partial_run` and `tf.compat.v1.Session.partial_run_setup` will be deprecated in the next release.
Known Caveats
- `tf.lite`
  - When the converter flag `_experimental_use_buffer_offset` is enabled, additional metadata is automatically excluded from the generated model. The behaviour is the same as when `exclude_conversion_metadata` is set.
  - If the model is larger than 2GB, the `exclude_conversion_metadata` flag must also be set.
Major Features and Improvements
- The `tensorflow` pip package has a new, optional installation method for Linux that installs necessary Nvidia CUDA libraries through pip. As long as the Nvidia driver is already installed on the system, you may now run `pip install tensorflow[and-cuda]` to install TensorFlow's Nvidia CUDA library dependencies in the Python environment. Aside from the Nvidia driver, no other pre-existing Nvidia CUDA packages are necessary.
- Enabled JIT-compiled i64-indexed kernels on GPU for large tensors with more than 2**32 elements.
  - Unary GPU kernels: Abs, Atanh, Acos, Acosh, Asin, Asinh, Atan, Cos, Cosh, Sin, Sinh, Tan, Tanh.
  - Binary GPU kernels: AddV2, Sub, Div, DivNoNan, Mul, MulNoNan, FloorDiv, Equal, NotEqual, Greater, GreaterEqual, LessEqual, Less.
- `tf.lite`
  - Added experimental support for converting models that may be larger than 2GB before buffer deduplication.
Bug Fixes and Other Changes
- `tf.py_function` and `tf.numpy_function` can now be used as function decorators for clearer code:

  ```python
  @tf.py_function(Tout=tf.float32)
  def my_fun(x):
      print("This always executes eagerly.")
      return x + 1
  ```

- `tf.lite`
  - `Strided_Slice` now supports `UINT32`.
- `tf.config.experimental.enable_tensor_float_32_execution`
  - Disabling TensorFloat-32 execution now causes TPUs to use float32 precision for float32 matmuls and other ops. TPUs have always used bfloat16 precision for certain ops, like matmul, when such ops had float32 inputs. Now, disabling TensorFloat-32 by calling `tf.config.experimental.enable_tensor_float_32_execution(False)` will cause TPUs to use float32 precision for such ops instead of bfloat16.
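A minimal sketch of the toggle itself; the TPU precision change only manifests on TPU hardware, but the flag can be flipped and inspected anywhere:

```python
import tensorflow as tf

# TensorFloat-32 is enabled by default; disabling it now also affects TPUs,
# which will use full float32 precision for float32 matmuls and similar ops.
tf.config.experimental.enable_tensor_float_32_execution(False)
assert not tf.config.experimental.tensor_float_32_execution_enabled()

# Restore the default behavior.
tf.config.experimental.enable_tensor_float_32_execution(True)
assert tf.config.experimental.tensor_float_32_execution_enabled()
```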
- `tf.experimental.dtensor`
  - API changes for Relayout: added a new API, `dtensor.relayout_like`, for relayouting a tensor according to the layout of another tensor.
  - Added `dtensor.get_default_mesh` for retrieving the current default mesh under the dtensor context.
  - `*fft*` ops now support dtensors with any layout. Fixed a bug in `fft2d/fft3d`, `ifft2d/ifft3d`, `rfft2d/rfft3d`, and `irfft2d/irfft3d` for sharded input. Refer to this blog post for details.
- `tf.experimental.strict_mode`
  - Added a new API, `strict_mode`, which converts all deprecation warnings into runtime errors with instructions on switching to a recommended substitute.
- TensorFlow Debugger (tfdbg) CLI: the ncurses-based CLI for tfdbg v1 was removed.
- TensorFlow now supports C++ RTTI on mobile and Android. To enable this feature, pass the flag `--define=tf_force_rtti=true` to Bazel when building TensorFlow. This may be needed when linking TensorFlow into RTTI-enabled programs, since mixing RTTI and non-RTTI code can cause ABI issues.
- `tf.ones`, `tf.zeros`, `tf.fill`, `tf.ones_like`, and `tf.zeros_like` now take an additional Layout argument that controls the output layout of their results.
- `tf.nest` and `tf.data` now support user-defined classes implementing `__tf_flatten__` and `__tf_unflatten__` methods. See the nest_util code examples for an example.
- TensorFlow IO support is now available for Apple Silicon packages.
- Refactored CpuExecutable to propagate LLVM errors.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Major Features and Improvements
- `tf.keras`
  - `Model.compile` now supports `steps_per_execution='auto'` as a parameter, allowing automatic tuning of steps per execution during `Model.fit`, `Model.predict`, and `Model.evaluate` for a significant performance boost.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aakar Dwivedi, Adrian Popescu, ag.ramesh, Akhil Goel, Albert Zeyer, Alex Rosen, Alexey Vishnyakov, Andrew Goodbody, angerson, Ashiq Imran, Ayan Moitra, Ben Barsdell, Bhavani Subramanian, Boian Petkantchin, BrianWieder, Chris Mc, cloudhan, Connor Flanagan, Daniel Lang, Daniel Yudelevich, Darya Parygina, David Korczynski, David Svantesson, dingyuqing05, Dragan Mladjenovic, dskkato, Eli Kobrin, Erick Ochoa, Erik Schultheis, Frédéric Bastien, gaikwadrahul8, Gauri1 Deshpande, guozhong.zhuang, H. Vetinari, Isaac Cilia Attard, Jake Hall, Jason Furmanek, Jerry Ge, Jinzhe Zeng, JJ, johnnkp, Jonathan Albrecht, jongkweh, justkw, Kanvi Khanna, kikoxia, Koan-Sin Tan, Kun-Lu, ltsai1, Lu Teng, luliyucoordinate, Mahmoud Abuzaina, mdfaijul, Milos Puzovic, Nathan Luehr, Om Thakkar, pateldeev, Peng Sun, Philipp Hack, pjpratik, Poliorcetics, rahulbatra85, rangjiaheng, Renato Arantes, Robert Kalmar, roho, Rylan Justice, Sachin Muradi, samypr100, Saoirse Stewart, Shanbin Ke, Shivam Mishra, shuw, Song Ziming, Stephan Hartmann, Sulav, sushreebarsa, T Coxon, Tai Ly, talyz, Thibaut Goetghebuer-Planchon, Thomas Preud'Homme, tilakrayal, Tirumalesh, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, Wen Chen, Yaohui Liu, Yimei Sun, Zhoulong Jiang, Zhoulong, Jiang
TensorFlow 2.13.1
Release 2.13.1
Bug Fixes and Other Changes
- Refactor CpuExecutable to propagate LLVM errors.
TensorFlow 2.14.0-rc1
Release 2.14.0
TensorFlow
Breaking Changes
- Support for Python 3.8 has been removed starting with TF 2.14. The TensorFlow 2.13.1 patch release will still have Python 3.8 support.
- `tf.Tensor`
  - The class hierarchy for `tf.Tensor` has changed: there are now explicit `EagerTensor` and `SymbolicTensor` classes for eager execution and `tf.function`, respectively. Users who relied on the exact type of a Tensor (e.g. `type(t) == tf.Tensor`) will need to update their code to use `isinstance(t, tf.Tensor)`. The `tf.is_symbolic_tensor` helper added in 2.13 may be used when it is necessary to determine whether a value is specifically a symbolic tensor.
- `tf.compat.v1.Session`
  - `tf.compat.v1.Session.partial_run` and `tf.compat.v1.Session.partial_run_setup` will be deprecated in the next release.
- `tf.estimator`
  - The `tf.estimator` API will be removed in the next release. The TF Estimator Python package will no longer be released.
Known Caveats
- `tf.lite`
  - When the converter flag `_experimental_use_buffer_offset` is enabled, additional metadata is automatically excluded from the generated model. The behaviour is the same as when `exclude_conversion_metadata` is set.
  - If the model is larger than 2GB, the `exclude_conversion_metadata` flag must also be set.
Major Features and Improvements
- The `tensorflow` pip package has a new, optional installation method for Linux that installs necessary Nvidia CUDA libraries through pip. As long as the Nvidia driver is already installed on the system, you may now run `pip install tensorflow[and-cuda]` to install TensorFlow's Nvidia CUDA library dependencies in the Python environment. Aside from the Nvidia driver, no other pre-existing Nvidia CUDA packages are necessary.
- Enabled JIT-compiled i64-indexed kernels on GPU for large tensors with more than 2**32 elements.
  - Unary GPU kernels: Abs, Atanh, Acos, Acosh, Asin, Asinh, Atan, Cos, Cosh, Sin, Sinh, Tan, Tanh.
  - Binary GPU kernels: AddV2, Sub, Div, DivNoNan, Mul, MulNoNan, FloorDiv, Equal, NotEqual, Greater, GreaterEqual, LessEqual, Less.
- `tf.lite`
  - Added experimental support for converting models that may be larger than 2GB before buffer deduplication.
Bug Fixes and Other Changes
- `tf.py_function` and `tf.numpy_function` can now be used as function decorators for clearer code:

  ```python
  @tf.py_function(Tout=tf.float32)
  def my_fun(x):
      print("This always executes eagerly.")
      return x + 1
  ```

- `tf.lite`
  - `Strided_Slice` now supports `UINT32`.
- `tf.config.experimental.enable_tensor_float_32_execution`
  - Disabling TensorFloat-32 execution now causes TPUs to use float32 precision for float32 matmuls and other ops. TPUs have always used bfloat16 precision for certain ops, like matmul, when such ops had float32 inputs. Now, disabling TensorFloat-32 by calling `tf.config.experimental.enable_tensor_float_32_execution(False)` will cause TPUs to use float32 precision for such ops instead of bfloat16.
- `tf.experimental.dtensor`
  - API changes for Relayout: added a new API, `dtensor.relayout_like`, for relayouting a tensor according to the layout of another tensor.
  - Added `dtensor.get_default_mesh` for retrieving the current default mesh under the dtensor context.
  - `*fft*` ops now support dtensors with any layout. Fixed a bug in `fft2d/fft3d`, `ifft2d/ifft3d`, `rfft2d/rfft3d`, and `irfft2d/irfft3d` for sharded input.
- `tf.experimental.strict_mode`
  - Added a new API, `strict_mode`, which converts all deprecation warnings into runtime errors with instructions on switching to a recommended substitute.
- TensorFlow Debugger (tfdbg) CLI: the ncurses-based CLI for tfdbg v1 was removed.
- TensorFlow now supports C++ RTTI on mobile and Android. To enable this feature, pass the flag `--define=tf_force_rtti=true` to Bazel when building TensorFlow. This may be needed when linking TensorFlow into RTTI-enabled programs, since mixing RTTI and non-RTTI code can cause ABI issues.
- `tf.ones`, `tf.zeros`, `tf.fill`, `tf.ones_like`, and `tf.zeros_like` now take an additional Layout argument that controls the output layout of their results.
- `tf.nest` and `tf.data` now support user-defined classes implementing `__tf_flatten__` and `__tf_unflatten__` methods. See the nest_util code examples for an example.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Major Features and Improvements
- `tf.keras`
  - `Model.compile` now supports `steps_per_execution='auto'` as a parameter, allowing automatic tuning of steps per execution during `Model.fit`, `Model.predict`, and `Model.evaluate` for a significant performance boost.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aakar Dwivedi, Adrian Popescu, ag.ramesh, Akhil Goel, Albert Zeyer, Alex Rosen, Alexey Vishnyakov, Andrew Goodbody, angerson, Ashiq Imran, Ayan Moitra, Ben Barsdell, Bhavani Subramanian, Boian Petkantchin, BrianWieder, Chris Mc, cloudhan, Connor Flanagan, Daniel Lang, Daniel Yudelevich, Darya Parygina, David Korczynski, David Svantesson, dingyuqing05, Dragan Mladjenovic, dskkato, Eli Kobrin, Erick Ochoa, Erik Schultheis, Frédéric Bastien, gaikwadrahul8, Gauri1 Deshpande, georgiie, guozhong.zhuang, H. Vetinari, Isaac Cilia Attard, Jake Hall, Jason Furmanek, Jerry Ge, Jinzhe Zeng, JJ, johnnkp, Jonathan Albrecht, jongkweh, justkw, Kanvi Khanna, kikoxia, Koan-Sin Tan, Kun-Lu, Learning-To-Play, ltsai1, Lu Teng, luliyucoordinate, Mahmoud Abuzaina, mdfaijul, Milos Puzovic, Nathan Luehr, Om Thakkar, pateldeev, Peng Sun, Philipp Hack, pjpratik, Poliorcetics, rahulbatra85, rangjiaheng, Renato Arantes, Robert Kalmar, roho, Rylan Justice, Sachin Muradi, samypr100, Saoirse Stewart, Shanbin Ke, Shivam Mishra, shuw, Song Ziming, Stephan Hartmann, Sulav, sushreebarsa, T Coxon, Tai Ly, talyz, Tensorflow Jenkins, Thibaut Goetghebuer-Planchon, Thomas Preud'Homme, tilakrayal, Tirumalesh, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, Wen Chen, Yaohui Liu, Yimei Sun, Zhoulong Jiang, Zhoulong, Jiang
TensorFlow 2.14.0-rc0
Release 2.14.0
TensorFlow
Breaking Changes
- `tf.Tensor`
  - The class hierarchy for `tf.Tensor` has changed: there are now explicit `EagerTensor` and `SymbolicTensor` classes for eager execution and `tf.function`, respectively. Users who relied on the exact type of a Tensor (e.g. `type(t) == tf.Tensor`) will need to update their code to use `isinstance(t, tf.Tensor)`. The `tf.is_symbolic_tensor` helper added in 2.13 may be used when it is necessary to determine whether a value is specifically a symbolic tensor.
- `tf.compat.v1.Session`
  - `tf.compat.v1.Session.partial_run` and `tf.compat.v1.Session.partial_run_setup` will be deprecated in the next release.
Known Caveats
- `tf.lite`
  - When the converter flag `_experimental_use_buffer_offset` is enabled, additional metadata is automatically excluded from the generated model. The behaviour is the same as when `exclude_conversion_metadata` is set.
  - If the model is larger than 2GB, the `exclude_conversion_metadata` flag must also be set.
Major Features and Improvements
- Enabled JIT-compiled i64-indexed kernels on GPU for large tensors with more than 2**32 elements.
  - Unary GPU kernels: Abs, Atanh, Acos, Acosh, Asin, Asinh, Atan, Cos, Cosh, Sin, Sinh, Tan, Tanh.
  - Binary GPU kernels: AddV2, Sub, Div, DivNoNan, Mul, MulNoNan, FloorDiv, Equal, NotEqual, Greater, GreaterEqual, LessEqual, Less.
- `tf.lite`
  - Added experimental support for converting models that may be larger than 2GB before buffer deduplication.
Bug Fixes and Other Changes
- `tf.py_function` and `tf.numpy_function` can now be used as function decorators for clearer code:

  ```python
  @tf.py_function(Tout=tf.float32)
  def my_fun(x):
      print("This always executes eagerly.")
      return x + 1
  ```

- `tf.lite`
  - `Strided_Slice` now supports `UINT32`.
- `tf.config.experimental.enable_tensor_float_32_execution`
  - Disabling TensorFloat-32 execution now causes TPUs to use float32 precision for float32 matmuls and other ops. TPUs have always used bfloat16 precision for certain ops, like matmul, when such ops had float32 inputs. Now, disabling TensorFloat-32 by calling `tf.config.experimental.enable_tensor_float_32_execution(False)` will cause TPUs to use float32 precision for such ops instead of bfloat16.
- `tf.experimental.dtensor`
  - API changes for Relayout: added a new API, `dtensor.relayout_like`, for relayouting a tensor according to the layout of another tensor.
  - Added `dtensor.get_default_mesh` for retrieving the current default mesh under the dtensor context.
  - `*fft*` ops now support dtensors with any layout. Fixed a bug in `fft2d/fft3d`, `ifft2d/ifft3d`, `rfft2d/rfft3d`, and `irfft2d/irfft3d` for sharded input.
- `tf.experimental.strict_mode`
  - Added a new API, `strict_mode`, which converts all deprecation warnings into runtime errors with instructions on switching to a recommended substitute.
- TensorFlow Debugger (tfdbg) CLI: the ncurses-based CLI for tfdbg v1 was removed.
- TensorFlow now supports C++ RTTI on mobile and Android. To enable this feature, pass the flag `--define=tf_force_rtti=true` to Bazel when building TensorFlow. This may be needed when linking TensorFlow into RTTI-enabled programs, since mixing RTTI and non-RTTI code can cause ABI issues.
- `tf.ones`, `tf.zeros`, `tf.fill`, `tf.ones_like`, and `tf.zeros_like` now take an additional Layout argument that controls the output layout of their results.
- `tf.nest` and `tf.data` now support user-defined classes implementing `__tf_flatten__` and `__tf_unflatten__` methods. See the nest_util code examples for an example.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Major Features and Improvements
- `tf.keras`
  - `Model.compile` now supports `steps_per_execution='auto'` as a parameter, allowing automatic tuning of steps per execution during `Model.fit`, `Model.predict`, and `Model.evaluate` for a significant performance boost.
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aakar Dwivedi, Adrian Popescu, ag.ramesh, Akhil Goel, Albert Zeyer, Alex Rosen, Alexey Vishnyakov, Andrew Goodbody, angerson, Ashiq Imran, Ayan Moitra, Ben Barsdell, Bhavani Subramanian, Boian Petkantchin, BrianWieder, Chris Mc, cloudhan, Connor Flanagan, Daniel Lang, Daniel Yudelevich, Darya Parygina, David Korczynski, David Svantesson, dingyuqing05, Dragan Mladjenovic, dskkato, Eli Kobrin, Erick Ochoa, Erik Schultheis, Frédéric Bastien, gaikwadrahul8, Gauri1 Deshpande, georgiie, guozhong.zhuang, H. Vetinari, Isaac Cilia Attard, Jake Hall, Jason Furmanek, Jerry Ge, Jinzhe Zeng, JJ, johnnkp, Jonathan Albrecht, jongkweh, justkw, Kanvi Khanna, kikoxia, Koan-Sin Tan, Kun-Lu, Learning-To-Play, ltsai1, Lu Teng, luliyucoordinate, Mahmoud Abuzaina, mdfaijul, Milos Puzovic, Nathan Luehr, Om Thakkar, pateldeev, Peng Sun, Philipp Hack, pjpratik, Poliorcetics, rahulbatra85, rangjiaheng, Renato Arantes, Robert Kalmar, roho, Rylan Justice, Sachin Muradi, samypr100, Saoirse Stewart, Shanbin Ke, Shivam Mishra, shuw, Song Ziming, Stephan Hartmann, Sulav, sushreebarsa, T Coxon, Tai Ly, talyz, Tensorflow Jenkins, Thibaut Goetghebuer-Planchon, Thomas Preud'Homme, tilakrayal, Tirumalesh, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, Wen Chen, Yaohui Liu, Yimei Sun, Zhoulong Jiang, Zhoulong, Jiang
TensorFlow 2.13.0
Release 2.13.0
TensorFlow
Breaking Changes
- The LMDB kernels have been changed to return an error. This is in preparation for completely removing them from TensorFlow. The LMDB dependency that these kernels are bringing to TensorFlow has been dropped, thus making the build slightly faster and more secure.
Major Features and Improvements
- `tf.lite`
  - Added 16-bit and 64-bit float type support for built-in op `cast`.
  - The Python TF Lite Interpreter bindings now have an option `experimental_disable_delegate_clustering` to turn off delegate clustering.
  - Added int16x8 support for the built-in op `exp`.
  - Added int16x8 support for the built-in op `mirror_pad`.
  - Added int16x8 support for the built-in ops `space_to_batch_nd` and `batch_to_space_nd`.
  - Added 16-bit int type support for built-in ops `less`, `greater_than`, and `equal`.
  - Added 8-bit and 16-bit support for `floor_div` and `floor_mod`.
  - Added 16-bit and 32-bit int support for the built-in op `bitcast`.
  - Added 8-bit/16-bit/32-bit int/uint support for the built-in op `bitwise_xor`.
  - Added int16 indices support for built-in ops `gather` and `gather_nd`.
  - Added 8-bit/16-bit/32-bit int/uint support for the built-in op `right_shift`.
  - Added a reference implementation for 16-bit int unquantized `add`.
  - Added a reference implementation for 16-bit int and 32-bit unsigned int unquantized `mul`.
  - `add_op` supports broadcasting up to 6 dimensions.
  - Added 16-bit support for `top_k`.
- `tf.function`
  - ConcreteFunction (`tf.types.experimental.ConcreteFunction`) as generated through `get_concrete_function` now performs holistic input validation similar to calling `tf.function` directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
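A minimal sketch of the stricter validation; the function and spec below are our own illustration:

```python
import tensorflow as tf

@tf.function
def double(x):
    return x * 2

# A concrete function specialized to float32 inputs of shape [2].
cf = double.get_concrete_function(tf.TensorSpec([2], tf.float32))

out = cf(tf.constant([1.0, 2.0]))  # matches the spec
# Passing a tensor of a different shape, e.g. tf.constant([1.0, 2.0, 3.0]),
# is now rejected by the same input validation tf.function itself performs.
```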
- `tf.nn`
  - `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse` now support ids and weights described by `tf.RaggedTensor`s.
  - Added a new boolean argument `allow_fast_lookup` to `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse`, which enables a simplified and typically faster lookup procedure.
- `tf.data`
  - `tf.data.Dataset.zip` now supports Python-style zipping, i.e. `Dataset.zip(a, b, c)`.
  - `tf.data.Dataset.shuffle` now supports `tf.data.UNKNOWN_CARDINALITY` when doing a "full shuffle" using `dataset = dataset.shuffle(dataset.cardinality())`. But remember, a "full shuffle" will load the full dataset into memory so that it can be shuffled, so make sure to only use this with small datasets or datasets of small objects (like filenames).
- `tf.math`
  - `tf.nn.top_k` now supports specifying the output index type via the parameter `index_type`. Supported types are `tf.int16`, `tf.int32` (default), and `tf.int64`.
- `tf.SavedModel`
  - Introduced class method `tf.saved_model.experimental.Fingerprint.from_proto(proto)`, which can be used to construct a `Fingerprint` object directly from a protobuf.
  - Introduced member method `tf.saved_model.experimental.Fingerprint.singleprint()`, which provides a convenient way to uniquely identify a SavedModel.
Bug Fixes and Other Changes
- `tf.Variable`
  - Changed resource variables to inherit from `tf.compat.v2.Variable` instead of `tf.compat.v1.Variable`. Some checks for `isinstance(v, tf.compat.v1.Variable)` that previously returned True may now return False.
- `tf.distribute`
  - Opened an experimental API, `tf.distribute.experimental.coordinator.get_current_worker_index`, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
- `tf.experimental.dtensor`
  - Deprecated `dtensor.run_on` in favor of `dtensor.default_mesh` to correctly indicate that the context does not override the mesh that the ops and functions will run on; it only sets a fallback default mesh.
  - The list of members of `dtensor.Layout` and `dtensor.Mesh` has slightly changed as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably, `dtensor.Layout.serialized_string` is removed.
  - Minor API changes to represent Single Device Layout for non-distributed Tensors inside DTensor functions. Runtime support will be added soon.
- `tf.experimental.ExtensionType`
  - `tf.experimental.ExtensionType` now supports Python `tuple` as the type annotation of its fields.
- `tf.nest`
  - The deprecated API `tf.nest.is_sequence` has now been deleted. Please use `tf.nest.is_nested` instead.
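The replacement can be sketched as:

```python
import tensorflow as tf

# tf.nest.is_sequence is gone; tf.nest.is_nested is the replacement.
assert tf.nest.is_nested([1, 2])
assert tf.nest.is_nested({"a": 1})
assert not tf.nest.is_nested(tf.constant([1, 2]))  # a Tensor is not a nest
assert not tf.nest.is_nested("abc")                # strings count as scalars
```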
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
- Removed the Keras scikit-learn API wrappers (`KerasClassifier` and `KerasRegressor`), which had been deprecated in August 2021. We recommend using SciKeras instead.
- The default Keras model saving format is now the Keras v3 format: calling `model.save("xyz.keras")` will no longer create an H5 file; it will create a native Keras model file. This will only be breaking for you if you were manually inspecting or modifying H5 files saved by Keras under a `.keras` extension. If this breaks you, simply add `save_format="h5"` to your `.save()` call to revert to the prior behavior.
- Added the `keras.utils.TimedThread` utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code.
- In the `keras` PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were using `import keras` and you used `keras` functions that were not public APIs, but were accessible in earlier versions with direct imports. In those cases, please use the following guidelines:
  - The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or in the TensorFlow docs and switch to the public version.
  - It could also be a simple Python or TF utility that you could easily copy over to your own codebase. In those cases, just make it your own!
  - If you believe it should definitely be a public Keras API, please open a feature request in the Keras GitHub repo.
  - As a workaround, you could import the same private symbol from `keras.src`, but keep in mind the `src` namespace is not stable and those APIs may change or be removed in the future.
Major Features and Improvements
- Added F-Score metrics: `tf.keras.metrics.FBetaScore`, `tf.keras.metrics.F1Score`, and `tf.keras.metrics.R2Score`.
- Added the activation function `tf.keras.activations.mish`.
- Added the experimental `keras.metrics.experimental.PyMetric` API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics.
- Added the `tf.keras.optimizers.Lion` optimizer.
- Added the `tf.keras.layers.SpectralNormalization` layer wrapper to perform spectral normalization on the weights of a target layer.
- The `SidecarEvaluatorModelExport` callback has been added to Keras as `keras.callbacks.SidecarEvaluatorModelExport`. This callback allows for exporting the best-scoring model as evaluated by a `SidecarEvaluator` evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement.
- Added warmup capabilities to the `tf.keras.optimizers.schedules.CosineDecay` learning rate scheduler. You can now specify an initial and target learning rate, and the scheduler will perform a linear interpolation between the two, after which it will begin a decay phase.
- Added experimental support for an exactly-once visitation guarantee for evaluating Keras models trained with `tf.distribute.ParameterServerStrategy`, via the `exact_evaluation_shards` argument in `Model.fit` and `Model.evaluate`.
- Added `tf.keras.__internal__.KerasTensor`, `tf.keras.__internal__.SparseKerasTensor`, and `tf.keras.__internal__.RaggedKerasTensor` classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs.
- All the `tf.keras.dtensor.experimental.optimizers` classes have been merged with `tf.keras.optimizers`. You can migrate your code to use `tf.keras.optimizers` directly. The API namespace for `tf.keras.dtensor.experimental.optimizers` will be removed in future releases.
- Added support for `class_weight` for 3+ dimensional targets (e.g. image segmentation masks) in `Model.fit`.
- Added a new loss, `keras.losses.CategoricalFocalCrossentropy`.
- Removed `tf.keras.dtensor.experimental.layout_map_scope()`. You can use `tf.keras.dtensor.experimental.LayoutMap.scope()` instead.
Security
- Fixed the values rank check in `UpperBound` and `LowerBound` (CVE-2023-33976).
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Pl...
TensorFlow 2.12.1
Release 2.12.1
Bug Fixes and Other Changes
- The use of the ambe config to build and test aarch64 is no longer needed. The ambe config will be removed in the future. Made `cpu_arm64_pip.sh` and `cpu_arm64_nonpip.sh` more similar for easier future maintenance.
TensorFlow 2.13.0-rc2
Release 2.13.0
TensorFlow
Breaking Changes
- The LMDB kernels have been changed to return an error. This is in preparation for completely removing them from TensorFlow. The LMDB dependency that these kernels are bringing to TensorFlow has been dropped, thus making the build slightly faster and more secure.
Major Features and Improvements
- `tf.lite`
  - Added 16-bit and 64-bit float type support for built-in op `cast`.
  - The Python TF Lite Interpreter bindings now have an option `experimental_disable_delegate_clustering` to turn off delegate clustering.
  - Added int16x8 support for the built-in op `exp`.
  - Added int16x8 support for the built-in op `mirror_pad`.
  - Added int16x8 support for the built-in ops `space_to_batch_nd` and `batch_to_space_nd`.
  - Added 16-bit int type support for built-in ops `less`, `greater_than`, and `equal`.
  - Added 8-bit and 16-bit support for `floor_div` and `floor_mod`.
  - Added 16-bit and 32-bit int support for the built-in op `bitcast`.
  - Added 8-bit/16-bit/32-bit int/uint support for the built-in op `bitwise_xor`.
  - Added int16 indices support for built-in ops `gather` and `gather_nd`.
  - Added 8-bit/16-bit/32-bit int/uint support for the built-in op `right_shift`.
  - Added a reference implementation for 16-bit int unquantized `add`.
  - Added a reference implementation for 16-bit int and 32-bit unsigned int unquantized `mul`.
  - `add_op` supports broadcasting up to 6 dimensions.
  - Added 16-bit support for `top_k`.
- `tf.function`
  - ConcreteFunction (`tf.types.experimental.ConcreteFunction`) as generated through `get_concrete_function` now performs holistic input validation similar to calling `tf.function` directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
- `tf.nn`
  - `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse` now support ids and weights described by `tf.RaggedTensor`s.
  - Added a new boolean argument `allow_fast_lookup` to `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse`, which enables a simplified and typically faster lookup procedure.
- `tf.data`
  - `tf.data.Dataset.zip` now supports Python-style zipping, i.e. `Dataset.zip(a, b, c)`.
  - `tf.data.Dataset.shuffle` now supports `tf.data.UNKNOWN_CARDINALITY` when doing a "full shuffle" using `dataset = dataset.shuffle(dataset.cardinality())`. But remember, a "full shuffle" will load the full dataset into memory so that it can be shuffled, so make sure to only use this with small datasets or datasets of small objects (like filenames).
- `tf.math`
  - `tf.nn.top_k` now supports specifying the output index type via the parameter `index_type`. Supported types are `tf.int16`, `tf.int32` (default), and `tf.int64`.
- `tf.SavedModel`
  - Introduced class method `tf.saved_model.experimental.Fingerprint.from_proto(proto)`, which can be used to construct a `Fingerprint` object directly from a protobuf.
  - Introduced member method `tf.saved_model.experimental.Fingerprint.singleprint()`, which provides a convenient way to uniquely identify a SavedModel.
Bug Fixes and Other Changes
- `tf.Variable`
  - Changed resource variables to inherit from `tf.compat.v2.Variable` instead of `tf.compat.v1.Variable`. Some checks for `isinstance(v, tf.compat.v1.Variable)` that previously returned True may now return False.
- `tf.distribute`
  - Opened an experimental API, `tf.distribute.experimental.coordinator.get_current_worker_index`, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
- `tf.experimental.dtensor`
  - Deprecated `dtensor.run_on` in favor of `dtensor.default_mesh` to correctly indicate that the context does not override the mesh that the ops and functions will run on; it only sets a fallback default mesh.
  - The list of members of `dtensor.Layout` and `dtensor.Mesh` has slightly changed as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably, `dtensor.Layout.serialized_string` is removed.
  - Minor API changes to represent Single Device Layout for non-distributed Tensors inside DTensor functions. Runtime support will be added soon.
- `tf.experimental.ExtensionType`
  - `tf.experimental.ExtensionType` now supports Python `tuple` as the type annotation of its fields.
-
tf.nest- Deprecated API
tf.nest.is_sequencehas now been deleted. Please usetf.nest.is_nestedinstead.
- Deprecated API
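For code migrating off the deleted `tf.nest.is_sequence`, the replacement `tf.nest.is_nested` is a drop-in check for nested structures:

```python
import tensorflow as tf

# tf.nest.is_nested replaces the deleted tf.nest.is_sequence.
print(tf.nest.is_nested([1, 2, 3]))       # True: a list is a nested structure
print(tf.nest.is_nested({"a": 1}))        # True: dicts are nested structures
print(tf.nest.is_nested(tf.constant(1)))  # False: a Tensor is a leaf
```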
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
- Removed the Keras scikit-learn API wrappers (`KerasClassifier` and `KerasRegressor`), which had been deprecated in August 2021. We recommend using SciKeras instead.
- The default Keras model saving format is now the Keras v3 format: calling `model.save("xyz.keras")` will no longer create an H5 file; it will create a native Keras model file. This is only breaking if you were manually inspecting or modifying H5 files saved by Keras under a `.keras` extension. If this breaks you, simply add `save_format="h5"` to your `.save()` call to revert to the prior behavior.
- Added the `keras.utils.TimedThread` utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code.
- In the `keras` PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were using `import keras` and you used `keras` functions that were not public APIs but were accessible in earlier versions with direct imports. In those cases, please use the following guidelines:
  - The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or in the TensorFlow docs and switch to the public version.
  - It could also be a simple Python or TF utility that you could easily copy over to your own codebase. In those cases, just make it your own!
  - If you believe it should definitely be a public Keras API, please open a feature request in the Keras GitHub repo.
  - As a workaround, you could import the same private symbol from `keras.src`, but keep in mind the `src` namespace is not stable and those APIs may change or be removed in the future.
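The saving-format change above can be checked with a small round-trip; this is a sketch assuming TensorFlow >= 2.13, with the `.h5` extension used to request the legacy HDF5 format (equivalent to passing `save_format="h5"` in TF 2.x):

```python
import os
import tempfile
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
model.build(input_shape=(None, 4))

d = tempfile.mkdtemp()
keras_path = os.path.join(d, "model.keras")
model.save(keras_path)  # writes the native Keras v3 format, not HDF5

# A .h5 path keeps the legacy HDF5 behavior.
h5_path = os.path.join(d, "legacy.h5")
model.save(h5_path)

reloaded = tf.keras.models.load_model(keras_path)
```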
Major Features and Improvements
- Added the F-Score metrics `tf.keras.metrics.FBetaScore`, `tf.keras.metrics.F1Score`, and `tf.keras.metrics.R2Score`.
- Added the activation function `tf.keras.activations.mish`.
- Added the experimental `keras.metrics.experimental.PyMetric` API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics.
- Added the `tf.keras.optimizers.Lion` optimizer.
- Added the `tf.keras.layers.SpectralNormalization` layer wrapper to perform spectral normalization on the weights of a target layer.
- The `SidecarEvaluatorModelExport` callback has been added to Keras as `keras.callbacks.SidecarEvaluatorModelExport`. This callback allows for exporting the best-scoring model as evaluated by a `SidecarEvaluator` evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement.
- Added warmup capabilities to the `tf.keras.optimizers.schedules.CosineDecay` learning rate scheduler. You can now specify an initial and target learning rate, and the scheduler will perform a linear interpolation between the two, after which it will begin a decay phase.
- Added experimental support for an exactly-once visitation guarantee for evaluating Keras models trained with `tf.distribute.ParameterServerStrategy`, via the `exact_evaluation_shards` argument in `Model.fit` and `Model.evaluate`.
- Added the `tf.keras.__internal__.KerasTensor`, `tf.keras.__internal__.SparseKerasTensor`, and `tf.keras.__internal__.RaggedKerasTensor` classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs.
- All the `tf.keras.dtensor.experimental.optimizers` classes have been merged with `tf.keras.optimizers`. You can migrate your code to use `tf.keras.optimizers` directly. The API namespace for `tf.keras.dtensor.experimental.optimizers` will be removed in future releases.
- Added support for `class_weight` for 3+ dimensional targets (e.g. image segmentation masks) in `Model.fit`.
- Added a new loss, `keras.losses.CategoricalFocalCrossentropy`.
- Removed `tf.keras.dtensor.experimental.layout_map_scope()`. You can use `tf.keras.dtensor.experimental.LayoutMap.scope()` instead.
Security
- N/A
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, K...
TensorFlow 2.13.0-rc1
Release 2.13.0
TensorFlow
Breaking Changes
- The LMDB kernels have been changed to return an error. This is in preparation for completely removing them from TensorFlow. The LMDB dependency that these kernels are bringing to TensorFlow has been dropped, thus making the build slightly faster and more secure.
Major Features and Improvements
- `tf.lite`
  - Added 16-bit and 64-bit float type support for the built-in op `cast`.
  - The Python TF Lite Interpreter bindings now have an option `experimental_disable_delegate_clustering` to turn off delegate clustering.
  - Added int16x8 support for the built-in op `exp`.
  - Added int16x8 support for the built-in op `mirror_pad`.
  - Added int16x8 support for the built-in ops `space_to_batch_nd` and `batch_to_space_nd`.
  - Added 16-bit int type support for the built-in ops `less`, `greater_than`, and `equal`.
  - Added 8-bit and 16-bit support for `floor_div` and `floor_mod`.
  - Added 16-bit and 32-bit int support for the built-in op `bitcast`.
  - Added 8-bit/16-bit/32-bit int/uint support for the built-in op `bitwise_xor`.
  - Added int16 indices support for the built-in ops `gather` and `gather_nd`.
  - Added 8-bit/16-bit/32-bit int/uint support for the built-in op `right_shift`.
  - Added a reference implementation for 16-bit int unquantized `add`.
  - Added a reference implementation for 16-bit int and 32-bit unsigned int unquantized `mul`.
  - `add_op` supports broadcasting up to 6 dimensions.
  - Added 16-bit support for `top_k`.
- `tf.function`
  - ConcreteFunction (`tf.types.experimental.ConcreteFunction`) as generated through `get_concrete_function` now performs holistic input validation similar to calling `tf.function` directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
- `tf.nn`
  - `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse` now support ids and weights described by `tf.RaggedTensor`s.
  - Added a new boolean argument `allow_fast_lookup` to `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse`, which enables a simplified and typically faster lookup procedure.
- `tf.data`
  - `tf.data.Dataset.zip` now supports Python-style zipping, i.e. `Dataset.zip(a, b, c)`.
  - `tf.data.Dataset.shuffle` now supports `tf.data.UNKNOWN_CARDINALITY` when doing a "full shuffle" using `dataset = dataset.shuffle(dataset.cardinality())`. Keep in mind that a "full shuffle" loads the full dataset into memory so that it can be shuffled, so make sure to only use this with small datasets or datasets of small objects (like filenames).
- `tf.math`
  - `tf.nn.top_k` now supports specifying the output index type via the parameter `index_type`. Supported types are `tf.int16`, `tf.int32` (default), and `tf.int64`.
- `tf.SavedModel`
  - Introduced the class method `tf.saved_model.experimental.Fingerprint.from_proto(proto)`, which can be used to construct a `Fingerprint` object directly from a protobuf.
  - Introduced the member method `tf.saved_model.experimental.Fingerprint.singleprint()`, which provides a convenient way to uniquely identify a SavedModel.
Bug Fixes and Other Changes
- `tf.Variable`
  - Changed resource variables to inherit from `tf.compat.v2.Variable` instead of `tf.compat.v1.Variable`. Some checks for `isinstance(v, tf.compat.v1.Variable)` that previously returned True may now return False.
- `tf.distribute`
  - Opened an experimental API, `tf.distribute.experimental.coordinator.get_current_worker_index`, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
- `tf.experimental.dtensor`
  - Deprecated `dtensor.run_on` in favor of `dtensor.default_mesh` to correctly indicate that the context does not override the mesh that the ops and functions will run on; it only sets a fallback default mesh.
  - The lists of members of `dtensor.Layout` and `dtensor.Mesh` have changed slightly as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably, `dtensor.Layout.serialized_string` is removed.
  - Minor API changes to represent Single Device Layout for non-distributed Tensors inside DTensor functions. Runtime support will be added soon.
- `tf.experimental.ExtensionType`
  - `tf.experimental.ExtensionType` now supports Python `tuple` as the type annotation of its fields.
- `tf.nest`
  - The deprecated API `tf.nest.is_sequence` has now been deleted. Please use `tf.nest.is_nested` instead.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
- Removed the Keras scikit-learn API wrappers (`KerasClassifier` and `KerasRegressor`), which had been deprecated in August 2021. We recommend using SciKeras instead.
- The default Keras model saving format is now the Keras v3 format: calling `model.save("xyz.keras")` will no longer create an H5 file; it will create a native Keras model file. This is only breaking if you were manually inspecting or modifying H5 files saved by Keras under a `.keras` extension. If this breaks you, simply add `save_format="h5"` to your `.save()` call to revert to the prior behavior.
- Added the `keras.utils.TimedThread` utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code.
- In the `keras` PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were using `import keras` and you used `keras` functions that were not public APIs but were accessible in earlier versions with direct imports. In those cases, please use the following guidelines:
  - The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or in the TensorFlow docs and switch to the public version.
  - It could also be a simple Python or TF utility that you could easily copy over to your own codebase. In those cases, just make it your own!
  - If you believe it should definitely be a public Keras API, please open a feature request in the Keras GitHub repo.
  - As a workaround, you could import the same private symbol from `keras.src`, but keep in mind the `src` namespace is not stable and those APIs may change or be removed in the future.
Major Features and Improvements
- Added the F-Score metrics `tf.keras.metrics.FBetaScore`, `tf.keras.metrics.F1Score`, and `tf.keras.metrics.R2Score`.
- Added the activation function `tf.keras.activations.mish`.
- Added the experimental `keras.metrics.experimental.PyMetric` API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics.
- Added the `tf.keras.optimizers.Lion` optimizer.
- Added the `tf.keras.layers.SpectralNormalization` layer wrapper to perform spectral normalization on the weights of a target layer.
- The `SidecarEvaluatorModelExport` callback has been added to Keras as `keras.callbacks.SidecarEvaluatorModelExport`. This callback allows for exporting the best-scoring model as evaluated by a `SidecarEvaluator` evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement.
- Added warmup capabilities to the `tf.keras.optimizers.schedules.CosineDecay` learning rate scheduler. You can now specify an initial and target learning rate, and the scheduler will perform a linear interpolation between the two, after which it will begin a decay phase.
- Added experimental support for an exactly-once visitation guarantee for evaluating Keras models trained with `tf.distribute.ParameterServerStrategy`, via the `exact_evaluation_shards` argument in `Model.fit` and `Model.evaluate`.
- Added the `tf.keras.__internal__.KerasTensor`, `tf.keras.__internal__.SparseKerasTensor`, and `tf.keras.__internal__.RaggedKerasTensor` classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs.
- All the `tf.keras.dtensor.experimental.optimizers` classes have been merged with `tf.keras.optimizers`. You can migrate your code to use `tf.keras.optimizers` directly. The API namespace for `tf.keras.dtensor.experimental.optimizers` will be removed in future releases.
- Added support for `class_weight` for 3+ dimensional targets (e.g. image segmentation masks) in `Model.fit`.
- Added a new loss, `keras.losses.CategoricalFocalCrossentropy`.
- Removed `tf.keras.dtensor.experimental.layout_map_scope()`. You can use `tf.keras.dtensor.experimental.LayoutMap.scope()` instead.
Security
- N/A
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, K...
TensorFlow 2.13.0-rc0
Release 2.13.0
TensorFlow
Breaking Changes
- The LMDB kernels have been changed to return an error. This is in preparation for completely removing them from TensorFlow. The LMDB dependency that these kernels are bringing to TensorFlow has been dropped, thus making the build slightly faster and more secure.
Major Features and Improvements
- `tf.lite`
  - Add 16-bit and 64-bit float type support for the built-in op `cast`.
  - The Python TF Lite Interpreter bindings now have an option `experimental_disable_delegate_clustering` to turn off delegate clustering.
  - Add int16x8 support for the built-in op `exp`.
  - Add int16x8 support for the built-in op `mirror_pad`.
  - Add int16x8 support for the built-in ops `space_to_batch_nd` and `batch_to_space_nd`.
  - Add 16-bit int type support for the built-in ops `less`, `greater_than`, and `equal`.
  - Add 8-bit and 16-bit support for `floor_div` and `floor_mod`.
  - Add 16-bit and 32-bit int support for the built-in op `bitcast`.
  - Add 8-bit/16-bit/32-bit int/uint support for the built-in op `bitwise_xor`.
  - Add int16 indices support for the built-in ops `gather` and `gather_nd`.
  - Add 8-bit/16-bit/32-bit int/uint support for the built-in op `right_shift`.
  - Add a reference implementation for 16-bit int unquantized `add`.
  - Add a reference implementation for 16-bit int and 32-bit unsigned int unquantized `mul`.
  - `add_op` supports broadcasting up to 6 dimensions.
  - Add 16-bit support for `top_k`.
- `tf.function`
  - ConcreteFunction (`tf.types.experimental.ConcreteFunction`) as generated through `get_concrete_function` now performs holistic input validation similar to calling `tf.function` directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
- `tf.nn`
  - `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse` now support ids and weights described by `tf.RaggedTensor`s.
  - Added a new boolean argument `allow_fast_lookup` to `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse`, which enables a simplified and typically faster lookup procedure.
- `tf.data`
  - `tf.data.Dataset.zip` now supports Python-style zipping, i.e. `Dataset.zip(a, b, c)`.
  - `tf.data.Dataset.shuffle` now supports full shuffling. To specify that data should be fully shuffled, use `dataset = dataset.shuffle(dataset.cardinality())`. This will load the full dataset into memory so that it can be shuffled, so make sure to only use this with datasets of filenames or other small datasets.
- `tf.math`
  - `tf.nn.top_k` now supports specifying the output index type via the parameter `index_type`. Supported types are `tf.int16`, `tf.int32` (default), and `tf.int64`.
- `tf.SavedModel`
  - Introduce the class method `tf.saved_model.experimental.Fingerprint.from_proto(proto)`, which can be used to construct a `Fingerprint` object directly from a protobuf.
  - Introduce the member method `tf.saved_model.experimental.Fingerprint.singleprint()`, which provides a convenient way to uniquely identify a SavedModel.
Bug Fixes and Other Changes
- `tf.Variable`
  - Changed resource variables to inherit from `tf.compat.v2.Variable` instead of `tf.compat.v1.Variable`. Some checks for `isinstance(v, tf.compat.v1.Variable)` that previously returned True may now return False.
- `tf.distribute`
  - Opened an experimental API, `tf.distribute.experimental.coordinator.get_current_worker_index`, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
- `tf.experimental.dtensor`
  - Deprecated `dtensor.run_on` in favor of `dtensor.default_mesh` to correctly indicate that the context does not override the mesh that the ops and functions will run on; it only sets a fallback default mesh.
  - The lists of members of `dtensor.Layout` and `dtensor.Mesh` have changed slightly as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably, `Layout.serialized_string` is removed.
  - Minor API changes to represent Single Device Layout for non-distributed Tensors inside DTensor functions. Runtime support will be added soon.
- `tf.experimental.ExtensionType`
  - `tf.experimental.ExtensionType` now supports Python `tuple` as the type annotation of its fields.
- `tf.nest`
  - The deprecated API `tf.nest.is_sequence` has now been deleted. Please use `tf.nest.is_nested` instead.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
- `tf.keras`
  - Removed the Keras scikit-learn API wrappers (`KerasClassifier` and `KerasRegressor`), which had been deprecated in August 2021. We recommend using SciKeras instead.
  - The default Keras model saving format is now the Keras v3 format: calling `model.save("xyz.keras")` will no longer create an H5 file; it will create a native Keras model file. This is only breaking if you were manually inspecting or modifying H5 files saved by Keras under a `.keras` extension. If this breaks you, simply add `save_format="h5"` to your `.save()` call to revert to the prior behavior.
  - Added the `keras.utils.TimedThread` utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code.
  - In the `keras` PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were using `import keras` and you used `keras` functions that were not public APIs but were accessible in earlier versions with direct imports. In those cases, please use the following guidelines:
    - The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or in the TensorFlow docs and switch to the public version.
    - It could also be a simple Python or TF utility that you could easily copy over to your own codebase. In those cases, just make it your own!
    - If you believe it should definitely be a public Keras API, please open a feature request in the Keras GitHub repo.
    - As a workaround, you could import the same private symbol from `keras.src`, but keep in mind the `src` namespace is not stable and those APIs may change or be removed in the future.
Major Features and Improvements
- `tf.keras`
  - Added the F-Score metrics `tf.keras.metrics.FBetaScore`, `tf.keras.metrics.F1Score`, and `tf.keras.metrics.R2Score`.
  - Added the activation function `tf.keras.activations.mish`.
  - Added the experimental `keras.metrics.experimental.PyMetric` API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics.
  - Added the `tf.keras.optimizers.Lion` optimizer.
  - Added the `tf.keras.layers.SpectralNormalization` layer wrapper to perform spectral normalization on the weights of a target layer.
  - The `SidecarEvaluatorModelExport` callback has been added to Keras as `keras.callbacks.SidecarEvaluatorModelExport`. This callback allows for exporting the best-scoring model as evaluated by a `SidecarEvaluator` evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement.
  - Added warmup capabilities to the `tf.keras.optimizers.schedules.CosineDecay` learning rate scheduler. You can now specify an initial and target learning rate, and the scheduler will perform a linear interpolation between the two, after which it will begin a decay phase.
  - Added experimental support for an exactly-once visitation guarantee for evaluating Keras models trained with `tf.distribute.ParameterServerStrategy`, via the `exact_evaluation_shards` argument in `Model.fit` and `Model.evaluate`.
  - Added the `tf.keras.__internal__.KerasTensor`, `tf.keras.__internal__.SparseKerasTensor`, and `tf.keras.__internal__.RaggedKerasTensor` classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs.
  - All the `tf.keras.dtensor.experimental.optimizers` classes have been merged with `tf.keras.optimizers`. You can migrate your code to use `tf.keras.optimizers` directly. The API namespace for `tf.keras.dtensor.experimental.optimizers` will be removed in future releases.
  - Added support for `class_weight` for 3+ dimensional targets (e.g. image segmentation masks) in `Model.fit`.
  - Added a new loss, `keras.losses.CategoricalFocalCrossentropy`.
  - Removed `tf.keras.dtensor.experimental.layout_map_scope()`. You can use `tf.keras.dtensor.experimental.LayoutMap.scope()` instead.
Security
- N/A
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstanti...
TensorFlow 2.12.0
Release 2.12.0
TensorFlow
Breaking Changes
- Build, Compilation and Packaging
  - Removed the redundant packages `tensorflow-gpu` and `tf-nightly-gpu`. These packages were removed and replaced with packages that direct users to switch to `tensorflow` or `tf-nightly` respectively. Since TensorFlow 2.1, the only difference between these two sets of packages was their names, so there is no loss of functionality or GPU support. See https://pypi.org/project/tensorflow-gpu for more details.
- `tf.function`:
  - `tf.function` now uses the Python inspect library directly for parsing the signature of the Python function it is decorated on. This change may break code where the function signature is malformed but was previously ignored, such as:
    - Using `functools.wraps` on a function with a different signature
    - Using `functools.partial` with an invalid `tf.function` input
  - `tf.function` now enforces input parameter names to be valid Python identifiers. Incompatible names are automatically sanitized similarly to existing SavedModel signature behavior.
  - Parameterless `tf.function`s are assumed to have an empty `input_signature` instead of an undefined one, even if the `input_signature` is unspecified.
  - `tf.types.experimental.TraceType` now requires an additional `placeholder_value` method to be defined.
  - `tf.function` now traces with placeholder values generated by TraceType instead of the value itself.
- Experimental APIs
  - `tf.config.experimental.enable_mlir_graph_optimization` and `tf.config.experimental.disable_mlir_graph_optimization` were removed.
Major Features and Improvements
- Support for Python 3.11 has been added.
- Support for Python 3.7 has been removed. We are not releasing any more patches for Python 3.7.
- `tf.lite`:
  - Added 16-bit float type support for the built-in op `fill`.
  - Transpose now supports 6D tensors.
  - Float LSTM now supports diagonal recurrent tensors: https://arxiv.org/abs/1903.08023
- `tf.experimental.dtensor`:
  - The coordination service now works with `dtensor.initialize_accelerator_system`, and is enabled by default.
  - Added `tf.experimental.dtensor.is_dtensor` to check if a tensor is a DTensor instance.
- `tf.data`:
  - Added support for an alternative checkpointing protocol, which makes it possible to checkpoint the state of the input pipeline without having to store the contents of internal buffers. The new functionality can be enabled through the `experimental_symbolic_checkpoint` option of `tf.data.Options()`.
  - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.random()` operation, which controls whether the sequence of generated random numbers should be re-randomized every epoch or not (the default behavior). If `seed` is set and `rerandomize_each_iteration=True`, the `random()` operation will produce a different (deterministic) sequence of numbers every epoch.
  - Added a new `rerandomize_each_iteration` argument for the `tf.data.Dataset.sample_from_datasets()` operation, which controls whether the sequence of generated random numbers used for sampling should be re-randomized every epoch or not. If `seed` is set and `rerandomize_each_iteration=True`, the `sample_from_datasets()` operation will use a different (deterministic) sequence of numbers every epoch.
- `tf.test`:
  - Added `tf.test.experimental.sync_devices`, which is useful for accurately measuring performance in benchmarks.
- `tf.experimental.dtensor`:
  - Added experimental support for ReduceScatter fusion on GPU (NCCL).
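The `rerandomize_each_iteration` behavior described for `tf.data.Dataset.random()` can be sketched as follows (the seed value is illustrative; assumes a TensorFlow >= 2.12 runtime):

```python
import tensorflow as tf

# With a fixed seed plus rerandomize_each_iteration=True, each pass over the
# dataset yields a different, yet still reproducible, random sequence.
ds = tf.data.Dataset.random(seed=7, rerandomize_each_iteration=True).take(3)
first_epoch = [int(x) for x in ds]
second_epoch = [int(x) for x in ds]  # differs from first_epoch
```

Omitting `rerandomize_each_iteration` (or setting it to False) keeps the default behavior of repeating the same seeded sequence every epoch.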
Bug Fixes and Other Changes
- `tf.SavedModel`:
  - Introduced a new class, `tf.saved_model.experimental.Fingerprint`, that contains the fingerprint of the SavedModel. See the SavedModel Fingerprinting RFC for details.
  - Introduced the API `tf.saved_model.experimental.read_fingerprint(export_dir)` for reading the fingerprint of a SavedModel.
- `tf.random`
  - Added non-experimental aliases for `tf.random.split` and `tf.random.fold_in`; the experimental endpoints are still available, so no code changes are necessary.
- `tf.experimental.ExtensionType`
  - Added the function `experimental.extension_type.as_dict()`, which converts an instance of `tf.experimental.ExtensionType` to a `dict` representation.
- `stream_executor`
  - The top-level `stream_executor` directory has been deleted; users should use equivalent headers and targets under `compiler/xla/stream_executor`.
- `tf.nn`
  - Added `tf.nn.experimental.general_dropout`, which is similar to `tf.random.experimental.stateless_dropout` but accepts a custom sampler function.
- `tf.types.experimental.GenericFunction`
  - The `experimental_get_compiler_ir` method supports `tf.TensorSpec` compilation arguments.
- `tf.config.experimental.mlir_bridge_rollout`
  - Removed the enums `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_ENABLED` and `MLIR_BRIDGE_ROLLOUT_SAFE_MODE_FALLBACK_ENABLED`, which are no longer used by the tf2xla bridge.
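The new non-experimental `tf.random.split` / `tf.random.fold_in` aliases above are key utilities for stateless RNG ops; a minimal sketch, assuming a TensorFlow >= 2.12 runtime (seed values are illustrative):

```python
import tensorflow as tf

seed = tf.constant([1, 2], dtype=tf.int32)

# Split one seed into two independent seeds for stateless random ops.
sub_seeds = tf.random.split(seed, num=2)  # shape [2, 2]

# Derive a new seed deterministically from data (e.g. a loop index).
folded = tf.random.fold_in(seed, 42)

x = tf.random.stateless_uniform([3], seed=sub_seeds[0])
```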
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
tf.keras:
- Moved all saving-related utilities to a new namespace, `keras.saving`, for example: `keras.saving.load_model`, `keras.saving.save_model`, `keras.saving.custom_object_scope`, `keras.saving.get_custom_objects`, `keras.saving.register_keras_serializable`, `keras.saving.get_registered_name`, and `keras.saving.get_registered_object`. The previous API locations (in `keras.utils` and `keras.models`) will remain available indefinitely, but we recommend you update your code to point to the new API locations.
- Improvements and fixes in Keras loss masking:
  - Whether you represent a ragged tensor as a `tf.RaggedTensor` or using Keras masking, the returned loss values should be identical to each other. In previous versions Keras may have silently ignored the mask.
  - If you use masked losses with Keras, the loss values may be different in TensorFlow 2.12 compared to previous versions.
  - In cases where the mask was previously ignored, you will now get an error if you pass a mask with an incompatible shape.
Major Features and Improvements
tf.keras:
- The new Keras model saving format (`.keras`) is available. You can start using it via `model.save(f"{fname}.keras", save_format="keras_v3")`. In the future it will become the default for all files with the `.keras` extension. This file format targets the Python runtime only and makes it possible to reload Python objects identical to the saved originals. The format supports non-numerical state such as vocabulary files and lookup tables, and it is easy to customize in the case of custom layers with exotic elements of state (e.g. a FIFOQueue). The format does not rely on bytecode or pickling, and is safe by default. Note that, as a result, Python `lambda`s are disallowed at loading time. If you want to use `lambda`s, you can pass `safe_mode=False` to the loading method (only do this if you trust the source of the model).
- Added a `model.export(filepath)` API to create a lightweight SavedModel artifact that can be used for inference (e.g. with TF-Serving).
- Added the `keras.export.ExportArchive` class for low-level customization of the process of exporting SavedModel artifacts for inference. Both ways of exporting models are based on `tf.function` tracing and produce a TF program composed of TF ops. They are meant primarily for environments where the TF runtime is available, but not the Python interpreter, as is typical for production with TF Serving.
- Added the utility `tf.keras.utils.FeatureSpace`, a one-stop shop for structured data preprocessing and encoding.
- Added `tf.SparseTensor` input support to the `tf.keras.layers.Embedding` layer. The layer now accepts a new boolean argument `sparse`. If `sparse` is set to True, the layer returns a SparseTensor instead of a dense Tensor. Defaults to False.
- Added `jit_compile` as a settable property to `tf.keras.Model`.
- Added a `synchronized` optional parameter to `layers.BatchNormalization`.
- Added a deprecation warning to `layers.experimental.SyncBatchNormalization`, suggesting the use of `layers.BatchNormalization` with `synchronized=True` instead.
- Updated `tf.keras.layers.BatchNormalization` to support masking of the inputs (`mask` argument) when computing the mean and variance.
- Added `tf.keras.layers.Identity`, a placeholder pass-through layer.
- Added a `show_trainable` option to `tf.keras.utils.model_to_dot` to display layer trainable status in model plots.
- Added the ability to save a `tf.keras.utils.FeatureSpace` object, via `feature_space.save("myfeaturespace.keras")`, and reload it via `feature_space = tf.keras.models.load_model("myfeaturespace.keras")`.
- Added the utility `tf.keras.utils.to_ordinal` to convert a class vector to an ordinal regression / classification matrix.
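A small sketch of `tf.keras.utils.to_ordinal`, assuming a TensorFlow >= 2.12 runtime: each integer label `k` out of `num_classes` classes is encoded as `k` leading ones over `num_classes - 1` threshold columns.

```python
import numpy as np
import tensorflow as tf

y = np.array([0, 1, 3])  # illustrative class labels
ordinal = tf.keras.utils.to_ordinal(y, num_classes=4)
# label 0 -> [0, 0, 0], label 1 -> [1, 0, 0], label 3 -> [1, 1, 1]
```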
Bug Fixes and Other Changes
- N/A
Security
- Fixes an FPE in TFLite in conv kernel CVE-2023-27579
- Fixes a double free in Fractional(Max/Avg)Pool CVE-2023-25801
- Fixes a null dereference on ParallelConcat with XLA CVE-2023-25676
- Fixes a segfault in Bincount with XLA CVE-2023-25675
- Fixes an NPE in RandomShuffle with XLA enabled CVE-2023-25674
- Fixes an FPE in TensorListSplit with XLA CVE-2023-25673
- Fixes segment...