Release 2.13.0
TensorFlow
Breaking Changes
- The LMDB kernels have been changed to return an error. This is in preparation for completely removing them from TensorFlow. The LMDB dependency that these kernels brought to TensorFlow has been dropped, making the build slightly faster and more secure.
Major Features and Improvements
tf.lite
- Added 16-bit and 64-bit float type support for the built-in op `cast`.
- The Python TF Lite Interpreter bindings now have an option `experimental_disable_delegate_clustering` to turn off delegate clustering (a hedged sketch follows this list).
- Added int16x8 support for the built-in op `exp`.
- Added int16x8 support for the built-in op `mirror_pad`.
- Added int16x8 support for the built-in ops `space_to_batch_nd` and `batch_to_space_nd`.
- Added 16-bit int type support for the built-in ops `less`, `greater_than`, and `equal`.
- Added 8-bit and 16-bit support for `floor_div` and `floor_mod`.
- Added 16-bit and 32-bit int support for the built-in op `bitcast`.
- Added 8-bit/16-bit/32-bit int/uint support for the built-in op `bitwise_xor`.
- Added int16 indices support for the built-in ops `gather` and `gather_nd`.
- Added 8-bit/16-bit/32-bit int/uint support for the built-in op `right_shift`.
- Added a reference implementation for 16-bit int unquantized `add`.
- Added a reference implementation for 16-bit int and 32-bit unsigned int unquantized `mul`.
- `add_op` supports broadcasting up to 6 dimensions.
- Added 16-bit support for `top_k`.
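Below is a hedged sketch of the new Interpreter option. It assumes `experimental_disable_delegate_clustering` is exposed as a keyword argument on the `tf.lite.Interpreter` constructor (worth confirming against the Interpreter docs); the tiny Keras model and conversion step exist only to produce a flatbuffer to load.

```python
import tensorflow as tf

# Build and convert a tiny Keras model so there is a flatbuffer to load.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
tflite_model = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Assumption: the new option is a constructor keyword argument of the
# Python Interpreter bindings.
interpreter = tf.lite.Interpreter(
    model_content=tflite_model,
    experimental_disable_delegate_clustering=True,
)
interpreter.allocate_tensors()
print(interpreter.get_input_details()[0]["shape"])
```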
tf.function
- ConcreteFunction (`tf.types.experimental.ConcreteFunction`) as generated through `get_concrete_function` now performs holistic input validation similar to calling `tf.function` directly. This can cause breakages where existing calls pass Tensors with the wrong shape or omit certain non-Tensor arguments (including default values).
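A minimal sketch of the stricter validation, using a trivial `tf.function` of our own (`double` below is illustrative, not a TensorFlow API):

```python
import tensorflow as tf

@tf.function
def double(x):
    return x * 2.0

# A ConcreteFunction specialized to float32 vectors of length 3.
cf = double.get_concrete_function(tf.TensorSpec(shape=[3], dtype=tf.float32))
print(cf(tf.constant([1.0, 2.0, 3.0])))  # matches the traced signature

# With holistic input validation, a call with a mismatched shape is rejected
# up front rather than being accepted by the looser pre-2.13 checks.
try:
    cf(tf.constant([1.0, 2.0]))
except (TypeError, ValueError, tf.errors.InvalidArgumentError) as e:
    print("rejected:", type(e).__name__)
```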
tf.nn
- `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse` now support ids and weights described by `tf.RaggedTensor`s (see the sketch after this list).
- Added a new boolean argument `allow_fast_lookup` to `tf.nn.embedding_lookup_sparse` and `tf.nn.safe_embedding_lookup_sparse`, which enables a simplified and typically faster lookup procedure.
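A small sketch of the ragged lookup, with an illustrative embedding table; passing `None` for the weights means an unweighted combine, and `allow_fast_lookup=True` could be added per the note above.

```python
import tensorflow as tf

params = tf.random.normal([10, 4])  # embedding table: 10 rows of width 4

# Ragged ids: two examples with a different number of ids each.
ids = tf.ragged.constant([[0, 3, 5], [2]], dtype=tf.int64)

# None for the weights means every id gets weight 1.0.
emb = tf.nn.embedding_lookup_sparse(params, ids, None, combiner="mean")
print(emb.shape)  # (2, 4): one combined embedding per example
```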
tf.data
- `tf.data.Dataset.zip` now supports Python-style zipping, i.e. `Dataset.zip(a, b, c)` (see the sketch after this list).
- `tf.data.Dataset.shuffle` now supports `tf.data.UNKNOWN_CARDINALITY` when doing a "full shuffle" using `dataset = dataset.shuffle(dataset.cardinality())`. But remember, a "full shuffle" will load the full dataset into memory so that it can be shuffled, so make sure to only use this with small datasets or datasets of small objects (like filenames).
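A short sketch of both additions, using throwaway range datasets:

```python
import tensorflow as tf

a = tf.data.Dataset.range(3)        # 0, 1, 2
b = tf.data.Dataset.range(10, 13)   # 10, 11, 12

# Python-style zipping: positional datasets instead of a single tuple.
for x, y in tf.data.Dataset.zip(a, b):
    print(int(x), int(y))

# Full shuffle of a dataset whose cardinality is unknown (e.g. after filter).
ds = a.filter(lambda x: x % 2 == 0)
ds = ds.shuffle(ds.cardinality())   # loads the whole dataset into memory
print(list(ds.as_numpy_iterator()))
```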
tf.math
- `tf.nn.top_k` now supports specifying the output index type via the parameter `index_type`. Supported types are `tf.int16`, `tf.int32` (default), and `tf.int64`.
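A minimal sketch, assuming `index_type` is accepted as a keyword argument as described above:

```python
import tensorflow as tf

values = tf.constant([1.0, 5.0, 3.0, 4.0])

# Request int16 indices instead of the default int32.
top = tf.nn.top_k(values, k=2, index_type=tf.int16)
print(top.values.numpy())   # [5. 4.]
print(top.indices.dtype)    # int16
```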
tf.SavedModel
- Introduced the class method `tf.saved_model.experimental.Fingerprint.from_proto(proto)`, which can be used to construct a `Fingerprint` object directly from a protobuf.
- Introduced the member method `tf.saved_model.experimental.Fingerprint.singleprint()`, which provides a convenient way to uniquely identify a SavedModel.
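A hedged sketch of `singleprint()`. It assumes `tf.saved_model.experimental.read_fingerprint` (present in recent releases) as the way to obtain a `Fingerprint`; the saved module and path are illustrative only.

```python
import tensorflow as tf

# Save a trivial module so there is a SavedModel directory to fingerprint.
tf.saved_model.save(tf.Module(), "/tmp/fingerprint_demo")

# Read the fingerprint and condense it into a single identifying string.
fp = tf.saved_model.experimental.read_fingerprint("/tmp/fingerprint_demo")
print(fp.singleprint())

# Fingerprint.from_proto(proto) would build the same object directly from a
# FingerprintDef protobuf, e.g. one parsed from the SavedModel's fingerprint.pb.
```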
Bug Fixes and Other Changes
tf.Variable
- Changed resource variables to inherit from `tf.compat.v2.Variable` instead of `tf.compat.v1.Variable`. Some checks for `isinstance(v, tf.compat.v1.Variable)` that previously returned True may now return False.
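A quick sketch of how the inheritance change can surface:

```python
import tensorflow as tf

v = tf.Variable(1.0)  # a resource variable

# Resource variables now derive from the v2 Variable class, so v1-specific
# isinstance checks may no longer match.
print(isinstance(v, tf.compat.v2.Variable))  # True
print(isinstance(v, tf.compat.v1.Variable))  # may now be False
```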
tf.distribute
- Opened an experimental API, `tf.distribute.experimental.coordinator.get_current_worker_index`, for retrieving the worker index from within a worker, when using parameter server training with a custom training loop.
tf.experimental.dtensor
- Deprecated `dtensor.run_on` in favor of `dtensor.default_mesh`, to correctly indicate that the context does not override the mesh that ops and functions will run on; it only sets a fallback default mesh.
- The list of members of `dtensor.Layout` and `dtensor.Mesh` has changed slightly as part of efforts to consolidate the C++ and Python source code with pybind11. Most notably, `dtensor.Layout.serialized_string` is removed.
- Minor API changes to represent Single Device Layout for non-distributed Tensors inside DTensor functions. Runtime support will be added soon.
tf.experimental.ExtensionType
- `tf.experimental.ExtensionType` now supports Python `tuple` as the type annotation of its fields (see the sketch below).
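A hedged sketch with an illustrative type; the field names and values are made up, and the exact element types accepted under a bare `tuple` annotation should be checked against the ExtensionType docs.

```python
import tensorflow as tf

class Batch(tf.experimental.ExtensionType):
    ids: tuple        # plain Python `tuple` annotation, newly supported
    data: tf.Tensor

b = Batch(ids=(1, 2, 3), data=tf.zeros([3, 4]))
print(b.ids)
```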
tf.nest
- The deprecated API `tf.nest.is_sequence` has now been deleted. Please use `tf.nest.is_nested` instead.
Keras
Keras is a framework built on top of TensorFlow. See more details on the Keras website.
Breaking Changes
- Removed the Keras scikit-learn API wrappers (`KerasClassifier` and `KerasRegressor`), which had been deprecated in August 2021. We recommend using SciKeras instead.
- The default Keras model saving format is now the Keras v3 format: calling `model.save("xyz.keras")` will no longer create an H5 file; it will create a native Keras model file. This is only breaking if you were manually inspecting or modifying H5 files saved by Keras under a `.keras` extension. If this breaks you, simply add `save_format="h5"` to your `.save()` call to revert to the prior behavior (see the sketch after this list).
- Added the `keras.utils.TimedThread` utility to run a timed thread every x seconds. It can be used to run a threaded function alongside model training or any other snippet of code.
- In the `keras` PyPI package, accessible symbols are now restricted to symbols that are intended to be public. This may affect your code if you were using `import keras` and you used `keras` functions that were not public APIs but were accessible in earlier versions with direct imports. In those cases, please use the following guidelines:
  - The API may be available in the public Keras API under a different name, so make sure to look for it on keras.io or in the TensorFlow docs and switch to the public version.
  - It could also be a simple Python or TF utility that you could easily copy over to your own codebase. In those cases, just make it your own!
  - If you believe it should definitely be a public Keras API, please open a feature request in the Keras GitHub repo.
  - As a workaround, you could import the same private symbol from `keras.src`, but keep in mind the `src` namespace is not stable and those APIs may change or be removed in the future.
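A minimal sketch of the new default and the legacy opt-out, using a throwaway model:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

# New default: a native Keras v3 file.
model.save("model.keras")

# To keep producing HDF5 files, request the legacy format explicitly.
model.save("model.h5", save_format="h5")
```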
Major Features and Improvements
- Added the F-Score metrics `tf.keras.metrics.FBetaScore`, `tf.keras.metrics.F1Score`, and `tf.keras.metrics.R2Score`.
- Added the activation function `tf.keras.activations.mish`.
- Added the experimental `keras.metrics.experimental.PyMetric` API for metrics that run Python code on the host CPU (compiled outside of the TensorFlow graph). This can be used for integrating metrics from external Python libraries (like sklearn or pycocotools) into Keras as first-class Keras metrics.
- Added the `tf.keras.optimizers.Lion` optimizer.
- Added the `tf.keras.layers.SpectralNormalization` layer wrapper to perform spectral normalization on the weights of a target layer.
- The `SidecarEvaluatorModelExport` callback has been added to Keras as `keras.callbacks.SidecarEvaluatorModelExport`. This callback allows exporting the best-scoring model as evaluated by a `SidecarEvaluator` evaluator. The evaluator regularly evaluates the model and exports it if the user-defined comparison function determines that it is an improvement.
- Added warmup capabilities to the `tf.keras.optimizers.schedules.CosineDecay` learning rate scheduler. You can now specify an initial and target learning rate, and the scheduler will perform a linear interpolation between the two, after which it will begin a decay phase (a combined sketch follows this list).
- Added experimental support for an exactly-once visitation guarantee for evaluating Keras models trained with `tf.distribute.ParameterServerStrategy`, via the `exact_evaluation_shards` argument in `Model.fit` and `Model.evaluate`.
- Added the `tf.keras.__internal__.KerasTensor`, `tf.keras.__internal__.SparseKerasTensor`, and `tf.keras.__internal__.RaggedKerasTensor` classes. You can use these classes to do instance type checking and type annotations for layer/model inputs and outputs.
- All the `tf.keras.dtensor.experimental.optimizers` classes have been merged with `tf.keras.optimizers`. You can migrate your code to use `tf.keras.optimizers` directly. The API namespace for `tf.keras.dtensor.experimental.optimizers` will be removed in future releases.
- Added support for `class_weight` for 3+ dimensional targets (e.g. image segmentation masks) in `Model.fit`.
- Added a new loss, `keras.losses.CategoricalFocalCrossentropy`.
- Removed `tf.keras.dtensor.experimental.layout_map_scope()`. You can use `tf.keras.dtensor.experimental.LayoutMap.scope()` instead.
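A combined, hedged sketch touching several of the additions above (SpectralNormalization, mish, Lion, CosineDecay warmup, CategoricalFocalCrossentropy, F1Score); the layer sizes, steps, and rates are illustrative, and the warmup parameter names follow the note above.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # Spectral normalization wrapping an ordinary Dense layer, with the new
    # mish activation.
    tf.keras.layers.SpectralNormalization(
        tf.keras.layers.Dense(8, activation=tf.keras.activations.mish)),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.build(input_shape=(None, 4))

# Cosine decay with the new linear warmup phase.
schedule = tf.keras.optimizers.schedules.CosineDecay(
    initial_learning_rate=0.0,  # learning rate at the start of warmup
    warmup_target=1e-3,         # learning rate reached after warmup_steps
    warmup_steps=100,
    decay_steps=1000,
)

model.compile(
    optimizer=tf.keras.optimizers.Lion(learning_rate=schedule),
    loss=tf.keras.losses.CategoricalFocalCrossentropy(),
    metrics=[tf.keras.metrics.F1Score(average="macro")],
)
model.summary()
```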
Security
- Fixes correct values rank in `UpperBound` and `LowerBound` (CVE-2023-33976).
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar, Aakar Dwivedi, Abinash Satapathy, Aditya Kane, ag.ramesh, Alexander Grund, Andrei Pikas, andreii, Andrew Goodbody, angerson, Anthony_256, Ashay Rane, Ashiq Imran, Awsaf, Balint Cristian, Banikumar Maiti (Intel Aipg), Ben Barsdell, bhack, cfRod, Chao Chen, chenchongsong, Chris Mc, Daniil Kutz, David Rubinstein, dianjiaogit, dixr, Dongfeng Yu, dongfengy, drah, Eric Kunze, Feiyue Chen, Frederic Bastien, Gauri1 Deshpande, guozhong.zhuang, hDn248, HYChou, ingkarat, James Hilliard, Jason Furmanek, Jaya, Jens Glaser, Jerry Ge, Jiao Dian'S Power Plant, Jie Fu, Jinzhe Zeng, Jukyy, Kaixi Hou, Kanvi Khanna, Karel Ha, karllessard, Koan-Sin Tan, Konstantin Beluchenko, Kulin Seth, Kun Lu, Kyle Gerard Felker, Leopold Cambier, Lianmin Zheng, linlifan, liuyuanqiang, Lukas Geiger, Luke Hutton, Mahmoud Abuzaina, Manas Mohanty, Mateo Fidabel, Maxiwell S. Garcia, Mayank Raunak, mdfaijul, meatybobby, Meenakshi Venkataraman, Michael Holman, Nathan John Sircombe, Nathan Luehr, nitins17, Om Thakkar, Patrice Vignola, Pavani Majety, per1234, Philipp Hack, pollfly, Prianka Liz Kariat, Rahul Batra, rahulbatra85, ratnam.parikh, Rickard Hallerbäck, Roger Iyengar, Rohit Santhanam, Roman Baranchuk, Sachin Muradi, sanadani, Saoirse Stewart, seanshpark, Shawn Wang, shuw, Srinivasan Narayanamoorthy, Stewart Miles, Sunita Nadampalli, SuryanarayanaY, Takahashi Shuuji, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tirumalesh, TJ, Tony Sung, Trevor Morris, unda, Vertexwahn, venkat2469, William Muir, Xavier Bonaventura, xiang.zhang, Xiao-Yong Jin, yleeeee, Yong Tang, Yuriy Chernyshov, Zhang, Xiangze, zhaozheng09