Release 2.11.0

Breaking Changes

`tf.keras.optimizers.Optimizer` now points to the new Keras optimizer, and the old optimizers have moved to the `tf.keras.optimizers.legacy` namespace.
If you find your workflow failing due to this change, you may be facing one of the following issues:
- Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading, but at the cost of breaking checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to `tf.keras.optimizers.legacy.XXX` (e.g. `tf.keras.optimizers.legacy.Adam`).
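The switch is a one-line change at optimizer construction time. A minimal sketch, assuming a small hypothetical `Sequential` model (the model itself is illustrative, not from the release notes):

```python
import tensorflow as tf

# Hypothetical minimal model; the only change needed to keep loading old
# checkpoints is swapping the optimizer for its legacy counterpart.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])

# Before: tf.keras.optimizers.Adam(...) -- the new optimizer.
# After: the legacy optimizer, which keeps the old checkpoint format.
model.compile(
    optimizer=tf.keras.optimizers.legacy.Adam(learning_rate=1e-3),
    loss="mse",
)
```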
- TF1 compatibility. The new optimizer, `tf.keras.optimizers.Optimizer`, does not support TF1 any more, so please use the legacy optimizer `tf.keras.optimizers.legacy.XXX`. We highly recommend migrating your workflow to TF2 for stable support and new features.
- Old optimizer API not found. The new optimizer, `tf.keras.optimizers.Optimizer`, has a different set of public APIs from the old optimizer. These API changes are mostly related to getting rid of slot variables and TF1 support. Please check the API documentation to find alternatives to the missing API. If you must call the deprecated API, please change your optimizer to the legacy optimizer.
- Learning rate schedule access. When using a `LearningRateSchedule`, the new optimizer's `learning_rate` property returns the current learning rate value instead of a `LearningRateSchedule` object as before. If you need to access the `LearningRateSchedule` object, please use `optimizer._learning_rate`.
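The distinction above can be sketched as follows, assuming a standard `ExponentialDecay` schedule (any `LearningRateSchedule` behaves the same way):

```python
import tensorflow as tf

# A schedule attached to a new-style optimizer.
schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.1, decay_steps=100, decay_rate=0.9)
opt = tf.keras.optimizers.SGD(learning_rate=schedule)

current_lr = opt.learning_rate    # scalar value at the current step
lr_schedule = opt._learning_rate  # the LearningRateSchedule object itself
```

At step 0 the property evaluates the schedule, so `current_lr` is the initial learning rate, while `lr_schedule` is the schedule object needed for, e.g., serialization.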
- Custom optimizers implemented on top of the old optimizer. Please set your optimizer to subclass `tf.keras.optimizers.legacy.XXX`. If you want to migrate to the new optimizer and find it does not support your optimizer, please file an issue in the Keras GitHub repo.
- Errors such as `Cannot recognize variable...`. The new optimizer requires all optimizer variables to be created at the first `apply_gradients()` or `minimize()` call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call `optimizer.build(model.trainable_variables)` before the training loop.
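A sketch of the recommended fix, using a hypothetical two-stage setup where only part of the model is updated at a time:

```python
import tensorflow as tf

# Hypothetical model trained in stages; building the optimizer state up
# front avoids the "Cannot recognize variable" error on later stages.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
    tf.keras.layers.Dense(1),
])
opt = tf.keras.optimizers.Adam()

# Create all optimizer state (momenta, etc.) for every trainable variable.
opt.build(model.trainable_variables)

# Stage 1: update only the first layer's variables.
first_layer_vars = model.layers[0].trainable_variables
kernel_before = model.layers[0].kernel.numpy().copy()
grads = [tf.ones_like(v) for v in first_layer_vars]
opt.apply_gradients(zip(grads, first_layer_vars))
```

Because `build()` already created the slot variables for every variable, later stages can apply gradients to other subsets without error.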
- Timeout or performance loss. We do not anticipate this happening, but if you see such issues, please use the legacy optimizer and file an issue in the Keras GitHub repo.
The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, `tf.keras.optimizers.Adafactor`) will only be implemented based on `tf.keras.optimizers.Optimizer`, the new base class.
Major Features and Improvements
Bug Fixes and Other Changes
`tf.image`:
- Added an optional parameter `return_index_map` to `tf.image.ssim`, which causes the returned value to be the local SSIM map instead of the global mean.
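A small sketch of the new parameter. Comparing an image with itself makes the expected result easy to check: every local SSIM value should be 1.0, matching the global mean.

```python
import tensorflow as tf

# Two identical images, so every local SSIM value should be ~1.0.
img = tf.random.uniform([32, 32, 3], maxval=1.0, seed=0)

global_mean = tf.image.ssim(img, img, max_val=1.0)
ssim_map = tf.image.ssim(img, img, max_val=1.0, return_index_map=True)
```

The map has spatial dimensions (reduced by the SSIM filter window), whereas the default return value is a single scalar per image.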
TF Core:
- `tf.custom_gradient` can now be applied to functions that accept "composite" tensors, such as `tf.RaggedTensor`, as inputs.
- Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
- `experimental_follow_type_hints` for `tf.function` has been deprecated. Please use `input_signature` or `reduce_retracing` to minimize retracing.
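The `tf.custom_gradient` composite-tensor support above can be sketched with a ragged input; the function and its hand-written gradient below are illustrative, not from the release notes:

```python
import tensorflow as tf

# A custom-gradient function whose input is a composite (ragged) tensor.
# Both the forward value and the upstream gradient are RaggedTensors.
@tf.custom_gradient
def triple(rt):
    def grad(upstream):
        # d(3 * rt) / d(rt) = 3, applied elementwise to the ragged values.
        return upstream * 3.0
    return rt * 3.0, grad

x = tf.ragged.constant([[1.0, 2.0], [3.0]])
with tf.GradientTape() as tape:
    tape.watch(x.flat_values)
    y = triple(x)
    loss = tf.reduce_sum(y.flat_values)
dx = tape.gradient(loss, x.flat_values)
```

Gradients flow through the ragged tensor's components, so `dx` is the gradient with respect to the flat values.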
`tf.SparseTensor`:
- Introduced `set_shape`, which sets the static dense shape of the sparse tensor and has the same semantics as `tf.Tensor.set_shape`.
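A minimal sketch of where this helps: inside a `tf.function` traced with an unknown shape, `set_shape` pins the static dense shape the same way `tf.Tensor.set_shape` does for dense tensors (the `densify` function is a hypothetical example):

```python
import tensorflow as tf

# Traced with a fully unknown 2-D shape; set_shape records that the
# column count is statically 4.
@tf.function(input_signature=[
    tf.SparseTensorSpec(shape=[None, None], dtype=tf.float32)])
def densify(sp):
    sp.set_shape([None, 4])
    return tf.sparse.to_dense(sp)

sp = tf.SparseTensor(indices=[[0, 1], [2, 3]],
                     values=[1.0, 2.0],
                     dense_shape=[3, 4])
dense = densify(sp)
```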
Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika