Release 2.9.0
Breaking Changes
- Due to security issues in TF 2.8, all boosted trees code has now been removed (after being deprecated in TF 2.8). Users should switch to TensorFlow Decision Forests.
- Build, Compilation and Packaging
  - TensorFlow is now compiled with `_GLIBCXX_USE_CXX11_ABI=1`. Downstream projects that encounter `std::__cxx11` or `[abi:cxx11]` linker errors will need to adopt this compiler option. See the GNU C++ Library docs on Dual ABI.
  - TensorFlow Python wheels now specifically conform to manylinux2014, an upgrade from manylinux2010. The minimum Pip version supporting manylinux2014 is Pip 19.3 (see pypa/manylinux). This change may affect you if you have been using TensorFlow on a very old platform equivalent to CentOS 6, as manylinux2014 targets CentOS 7 as a compatibility base. Note that TensorFlow does not officially support either platform.
  - Discussion of these changes can be found on SIG Build's TensorFlow Community Forum thread.
- The `tf.keras.mixed_precision.experimental` API has been removed. The non-experimental symbols under `tf.keras.mixed_precision` have been available since TensorFlow 2.4 and should be used instead.
- The non-experimental API has some minor differences from the experimental API. In most cases, you only need to make three minor changes (a before/after sketch follows this list):
  - Remove the word "experimental" from `tf.keras.mixed_precision` symbols. E.g., replace `tf.keras.mixed_precision.experimental.global_policy` with `tf.keras.mixed_precision.global_policy`.
  - Replace `tf.keras.mixed_precision.experimental.set_policy` with `tf.keras.mixed_precision.set_global_policy`. The experimental symbol `set_policy` was renamed to `set_global_policy` in the non-experimental API.
  - Replace `LossScaleOptimizer(opt, "dynamic")` with `LossScaleOptimizer(opt)`. If you pass anything other than `"dynamic"` to the second argument, see (1) of the next section.
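A minimal before/after sketch of these three changes (the SGD optimizer here is only illustrative):

```python
import tensorflow as tf

# Before (TF <= 2.8, experimental API):
#   tf.keras.mixed_precision.experimental.set_policy("mixed_float16")
#   opt = tf.keras.mixed_precision.experimental.LossScaleOptimizer(
#       tf.keras.optimizers.SGD(), "dynamic")

# After (non-experimental API, available since TF 2.4):
tf.keras.mixed_precision.set_global_policy("mixed_float16")
opt = tf.keras.mixed_precision.LossScaleOptimizer(tf.keras.optimizers.SGD())
```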
- In the following rare cases, you need to make more changes when switching to the non-experimental API:
  - If you passed anything other than `"dynamic"` to the `loss_scale` argument (the second argument) of `LossScaleOptimizer`: the non-experimental `LossScaleOptimizer` no longer accepts a `LossScale`. Instead, configure loss scaling directly through its constructor arguments (e.g. `dynamic` and `initial_scale`).
  - If you passed a value to the `loss_scale` argument (the second argument) of `Policy`:
    - The experimental version of `Policy` optionally took in a `tf.compat.v1.mixed_precision.LossScale` in the constructor, which defaulted to a dynamic loss scale for the `"mixed_float16"` policy and no loss scale for other policies. In `Model.compile`, if the model's policy had a loss scale, the optimizer would be wrapped with a `LossScaleOptimizer`. With the non-experimental `Policy`, there is no loss scale associated with the `Policy`, and `Model.compile` wraps the optimizer with a `LossScaleOptimizer` if and only if the policy is a `"mixed_float16"` policy. If you previously passed a `LossScale` to the experimental `Policy`, consider just removing it, as the default loss scaling behavior is usually what you want. If you really want to customize the loss scaling behavior, you can wrap your optimizer with a `LossScaleOptimizer` before passing it to `Model.compile`, as sketched below.
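A minimal sketch of wrapping the optimizer yourself, assuming you want a fixed loss scale of 1024 (the model and optimizer are illustrative; `Model.compile` does not re-wrap an optimizer that is already a `LossScaleOptimizer`):

```python
import tensorflow as tf

tf.keras.mixed_precision.set_global_policy("mixed_float16")
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])

# dynamic=False with initial_scale gives a fixed loss scale instead of
# the default dynamic scaling.
opt = tf.keras.mixed_precision.LossScaleOptimizer(
    tf.keras.optimizers.SGD(), dynamic=False, initial_scale=1024)
model.compile(optimizer=opt, loss="mse")
```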
  - If you use the very rarely used function `tf.keras.mixed_precision.experimental.get_layer_policy`: replace `tf.keras.mixed_precision.experimental.get_layer_policy(layer)` with `layer.dtype_policy`, as in the snippet below.
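For example:

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(10)
# Before: tf.keras.mixed_precision.experimental.get_layer_policy(layer)
policy = layer.dtype_policy  # the non-experimental replacement
```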
- `tf.mixed_precision.experimental.LossScale` and its subclasses have been removed from the TF2 namespace. These symbols were very rarely used and were only useful in TF2 for use in the now-removed `tf.keras.mixed_precision.experimental` API. The symbols are still available under `tf.compat.v1.mixed_precision`.
- The `experimental_relax_shapes` heuristic for `tf.function` has been deprecated and replaced with `reduce_retracing`, which encompasses broader heuristics to reduce the number of retraces (see below).
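A minimal sketch of the renamed flag:

```python
import tensorflow as tf

@tf.function(reduce_retracing=True)  # was: experimental_relax_shapes=True
def f(x):
  return x + 1

f(tf.constant([1.0, 2.0]))
f(tf.constant([1.0, 2.0, 3.0]))  # may reuse a relaxed trace instead of retracing
```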
Major Features and Improvements
Bug Fixes and Other Changes
`tf.data`:
- Fixed a bug in `tf.data.experimental.parse_example_dataset` when a `tf.io.RaggedFeature` would specify `value_key` but no `partitions`. Before the fix, setting `value_key` but no `partitions` would result in the feature key being replaced by the value key, e.g. `{'value_key': <RaggedTensor>}` instead of `{'key': <RaggedTensor>}`. Now the correct feature key will be used. This aligns the behavior of `tf.data.experimental.parse_example_dataset` with that of `tf.io.parse_example`.
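A minimal sketch of the corrected behavior (the feature names "key" and "my_values" are illustrative):

```python
import tensorflow as tf

# A serialized Example whose values live under the feature name "my_values".
example = tf.train.Example(features=tf.train.Features(feature={
    "my_values": tf.train.Feature(
        float_list=tf.train.FloatList(value=[1.0, 2.0, 3.0])),
}))
ds = tf.data.Dataset.from_tensors(example.SerializeToString()).batch(1)

# value_key is set but partitions is not; the parsed dict is now keyed by
# the feature name ("key"), not by the value key ("my_values").
ds = ds.apply(tf.data.experimental.parse_example_dataset({
    "key": tf.io.RaggedFeature(dtype=tf.float32, value_key="my_values"),
}))
```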
- Added a new field, `filter_parallelization`, to `tf.data.experimental.OptimizationOptions`. If it is set to `True`, tf.data will run the `Filter` transformation with multiple threads. Its default value is `False` if not specified.
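A minimal sketch of opting in via `tf.data.Options`:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(1_000).filter(lambda x: x % 2 == 0)

options = tf.data.Options()
# Run the Filter transformation with multiple threads (defaults to False).
options.experimental_optimization.filter_parallelization = True
ds = ds.with_options(options)
```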
`tf.keras`:
- Fixed a bug in optimizers that prevented them from properly checkpointing slot variables when they are `ShardedVariable`s (used for training with `tf.distribute.experimental.ParameterServerStrategy`).
`tf.random`:
- Added `tf.random.experimental.index_shuffle`, for shuffling a sequence without materializing the sequence in memory.
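A minimal sketch, assuming the stateless-style signature `index_shuffle(index, seed, max_index)` with a shape-`[2]` seed:

```python
import tensorflow as tf

seed = [42, 7]  # shape-[2] seed, as with the stateless random ops
max_index = 9   # the implied permutation is over [0, ..., max_index]

# Each call maps one index to its position in the same implied permutation,
# so the permuted sequence itself is never materialized in memory.
permuted = [int(tf.random.experimental.index_shuffle(i, seed, max_index))
            for i in range(max_index + 1)]
```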
`tf.RaggedTensor`:
- Introduced `tf.experimental.RowPartition`, which encodes how one dimension in a RaggedTensor relates to another, into the public API.
- Introduced `tf.experimental.DynamicRaggedShape`, which represents the shape of a RaggedTensor.
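A minimal sketch of both symbols (`from_row_lengths` and `from_tensor` are the factory methods assumed here):

```python
import tensorflow as tf

# RowPartition: how 3 rows partition 5 values (row lengths 2, 0, and 3).
rp = tf.experimental.RowPartition.from_row_lengths([2, 0, 3])

rt = tf.RaggedTensor.from_row_lengths([1, 2, 3, 4, 5], row_lengths=[2, 0, 3])
# DynamicRaggedShape: the (potentially ragged) shape of a RaggedTensor.
shape = tf.experimental.DynamicRaggedShape.from_tensor(rt)
```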
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
Aaron Debattista, Abel Soares Siqueira, Abhishek Varma, Andrei Ivanov, andreii, Andrew Goodbody, apeltop, Arnab Dutta, Ashiq Imran, Banikumar Maiti (Intel Aipg), Ben Greiner, Benjamin Peterson, bhack, Christopher Bate, chunduriv, Copybara-Service, DEKHTIARJonathan, Deven Desai, Duncan Riach, Eric Kunze, Everton Constantino, Faruk D, Fredrik Knutsson, gadagashwini, Gauri1 Deshpande, gtiHibGele, Guozhong Zhuang, Islem-Esi, Ivanov Viktor, Jason Furmanek, Jason Zaman, Jim, Jinzhe Zeng, John Laxson, Jonas Eschle, Jonas Eschle 'Mayou36, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, KaurkerDevourer, Koan-Sin Tan, kushanam, Laramie Leavitt, Li-Wen Chang, lipracer, Louis Sugy, Lu Teng, Mahmoud Abuzaina, Malcolm Slaney, Malik Shahzad Muzaffar, Marek Šuppa, Matt Conley, Michael Melesse, Milos Puzovic, mohantym, Nathan John Sircombe, Nathan Luehr, Nilesh Agarwalla, Patrice Vignola, peterjc123, Philip Turner, Rajeshwar Reddy T, Robert Kalmar, Rodrigo Formigone, Rohit Santhanam, rui, Sachin Muradi, Saduf2019, sandip, Scott Leishman, Serge Panev, Shi,Guangyong, Srinivasan Narayanamoorthy, stanley, Steven I Reeves, stevenireeves, sushreebarsa, Tamas Bela Feher, Tao He, Thomas Schmeyer, Tiago Almeida, Trevor Morris, Uday Bondhugula, Uwe L. Korn, Varghese, Jojimon, Vishnuvardhan Janapati, William Muir, William Raveane, xutianming, Yasuhiro Matsumoto, Yimei Sun, Yong Tang, Yu Feng, Yuriy Chernyshov, zhaozheng09