# Release 2.16.0

## TensorFlow

- TensorFlow Windows Build:
  - Clang is now the default compiler for building TensorFlow CPU wheels on the Windows platform, starting with this release. The currently supported version is LLVM/Clang 17. The official wheels published on PyPI will be built with Clang; however, users retain the option to build wheels with the MSVC compiler by following the steps described at https://www.tensorflow.org/install/source_windows, as before.
### Breaking Changes

- `tf.summary.trace_on` now takes a `profiler_outdir` argument. This must be set if the `profiler` arg is set to `True`.
- `tf.summary.trace_export`'s `profiler_outdir` arg is now a no-op. Enabling the profiler now requires setting `profiler_outdir` in `trace_on`.
- `tf.estimator`
  - The `tf.estimator` API is removed.
  - To continue using `tf.estimator`, you will need to use TF 2.15 or an earlier version.
- Keras 3 is now the default Keras version. You may need to update your script to use Keras 3. Please refer to the new Keras documentation for Keras 3 (https://keras.io/keras_3). To continue using Keras 2, do the following:
  - Install `tf-keras` via `pip install tf-keras~=2.16`.
  - To switch `tf.keras` to use Keras 2 (`tf-keras`), set the environment variable `TF_USE_LEGACY_KERAS=1` directly, or in your Python program with `import os; os.environ["TF_USE_LEGACY_KERAS"] = "1"`. Please note that this will set it for all packages in your Python runtime.
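One detail worth showing explicitly: the variable must be set before TensorFlow is first imported, and the value must be a string. A minimal sketch:

```python
import os

# TF_USE_LEGACY_KERAS must be set before TensorFlow is first imported,
# and os.environ values must be strings ("1"), not the integer 1.
os.environ["TF_USE_LEGACY_KERAS"] = "1"

# A subsequent `import tensorflow as tf` will then resolve tf.keras to
# Keras 2 (the tf-keras package) for the whole Python process.
```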
- Apple Silicon users: if you previously installed TensorFlow using `pip install tensorflow-macos`, please update your installation method. Use `pip install tensorflow` from now on. Starting with TF 2.17, the `tensorflow-macos` package will no longer receive updates.
### Known Caveats

- Full aarch64 Linux and Arm64 macOS wheels are now published to the `tensorflow` PyPI repository and no longer redirect to a separate package.
### Major Features and Improvements

- Support for Python 3.12 has been added.
- The `tensorflow-tpu` package is now available for easier TPU-based installs.
- TensorFlow pip packages are now built with CUDA 12.3 and cuDNN 8.9.7.
### Bug Fixes and Other Changes

- `tf.lite`
  - Added support for `stablehlo.gather`.
  - Added support for `stablehlo.add`.
  - Added support for `stablehlo.multiply`.
  - Added support for `stablehlo.maximum`.
  - Added support for `stablehlo.minimum`.
  - Added boolean parameter support for `tfl.gather_nd`.
- `tf.train.CheckpointOptions` and `tf.saved_model.SaveOptions`
  - These now take a new argument called `experimental_sharding_callback`. This is a callback-function wrapper that is executed to determine how tensors are split into shards when the saver writes the checkpoint shards to disk. `tf.train.experimental.ShardByTaskPolicy` is the default sharding behavior, but `tf.train.experimental.MaxShardSizePolicy` can be used to shard the checkpoint with a maximum shard file size. Users with advanced use cases can also write their own custom `tf.train.experimental.ShardingCallback`s.
- `tf.train.CheckpointOptions`
  - Added `experimental_skip_slot_variables` (a boolean option) to skip restoring optimizer slot variables from a checkpoint.
- `tf.saved_model.SaveOptions`
  - `SaveOptions` now takes a new argument called `experimental_debug_stripper`. When enabled, this strips the debug nodes from both the node defs and the function defs of the graph. Note that this currently only strips `Assert` nodes from the graph, converting them into `NoOp`s.
## Keras

- `keras.layers.experimental.DynamicEmbedding`
  - Added the `DynamicEmbedding` Keras layer.
  - Added the `UpdateEmbeddingCallback`.
  - The `DynamicEmbedding` layer allows for continuous updating of the vocabulary and embeddings during training. The layer maintains a hash table to track the most up-to-date vocabulary, based on the inputs received by the layer and the eviction policy. When this layer is used with an `UpdateEmbeddingCallback`, which is a time-based callback, the vocabulary lookup tensor is updated at the time interval set in the `UpdateEmbeddingCallback`, based on the most up-to-date vocabulary hash table maintained by the layer. If this layer is not used in conjunction with `UpdateEmbeddingCallback`, its behavior is the same as `keras.layers.Embedding`.
- `keras.optimizers.Adam`
  - Added the option to set adaptive epsilon to match implementations with JAX and PyTorch equivalents.
## Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aakar Dwivedi, Akhil Goel, Alexander Grund, Alexander Pivovarov, Andrew Goodbody, Andrey Portnoy, Aneta Kaczyńska, AnetaKaczynska, ArkadebMisra, Ashiq Imran, Ayan Moitra, Ben Barsdell, Ben Creech, Benedikt Lorch, Bhavani Subramanian, Bianca Van Schaik, Chao, Chase Riley Roberts, Connor Flanagan, David Hall, David Svantesson, David Svantesson-Yeung, dependabot[bot], Dr. Christoph Mittendorf, Dragan Mladjenovic, ekuznetsov139, Eli Kobrin, Eugene Kuznetsov, Faijul Amin, Frédéric Bastien, fsx950223, gaoyiyeah, Gauri1 Deshpande, Gautam, Giulio C.N, guozhong.zhuang, Harshit Monish, James Hilliard, Jane Liu, Jaroslav Sevcik, jeffhataws, Jerome Massot, Jerry Ge, jglaser, jmaksymc, Kaixi Hou, kamaljeeti, Kamil Magierski, Koan-Sin Tan, lingzhi98, looi, Mahmoud Abuzaina, Malik Shahzad Muzaffar, Meekail Zain, mraunak, Neil Girdhar, Olli Lupton, Om Thakkar, Paul Strawder, Pavel Emeliyanenko, Pearu Peterson, pemeliya, Philipp Hack, Pierluigi Urru, Pratik Joshi, radekzc, Rafik Saliev, Ragu, Rahul Batra, rahulbatra85, Raunak, redwrasse, Rodrigo Gomes, ronaghy, Sachin Muradi, Shanbin Ke, shawnwang18, Sheng Yang, Shivam Mishra, Shu Wang, Strawder, Paul, Surya, sushreebarsa, Tai Ly, talyz, Thibaut Goetghebuer-Planchon, Tj Xu, Tom Allsop, Trevor Morris, Varghese, Jojimon, weihanmines, wenchenvincent, Wenjie Zheng, Who Who Who, Yasir Ashfaq, yasiribmcon, Yoshio Soma, Yuanqiang Liu, Yuriy Chernyshov