TensorFlow: v2.11.0-rc0 Release

Release date: November 15, 2022
Previous version: v2.10.1 (released November 15, 2022)
Magnitude: 2,338 Diff Delta
Contributors: 35 total committers

106 Features Released with v2.11.0-rc0

Top Contributors in v2.11.0-rc0

vinila21
alankelly
cantonios
milpuz01
nitins17
hawkinsp
tensorflower-gardener
mpcallanan
pak-laura
tensorflow-jenkins


Release Notes Published

Release 2.11.0

Breaking Changes

  • tf.keras.optimizers.Optimizer now points to the new Keras optimizer, and old optimizers have moved to the tf.keras.optimizers.legacy namespace. If you find your workflow failing due to this change, you may be facing one of the following issues:

    • Checkpoint loading failure. The new optimizer handles optimizer state differently from the old optimizer, which simplifies the logic of checkpoint saving/loading but breaks checkpoint backward compatibility in some cases. If you want to keep using an old checkpoint, please change your optimizer to tf.keras.optimizers.legacy.XXX (e.g., tf.keras.optimizers.legacy.Adam); see the sketch after this list.
    • TF1 compatibility. The new optimizer, tf.keras.optimizers.Optimizer, no longer supports TF1, so please use the legacy optimizer tf.keras.optimizers.legacy.XXX. We highly recommend migrating your workflow to TF2 for stable support and new features.
    • Old optimizer API not found. The new optimizer, tf.keras.optimizers.Optimizer, has a different set of public APIs from the old optimizer. These API changes mostly stem from removing slot variables and TF1 support. Please check the API documentation for alternatives to the missing API. If you must call a deprecated API, please switch to the legacy optimizer.
    • Learning rate schedule access. When using a LearningRateSchedule, the new optimizer's learning_rate property returns the current learning rate value rather than the LearningRateSchedule object returned previously. If you need to access the LearningRateSchedule object, please use optimizer._learning_rate.
    • Custom optimizers based on the old optimizer. Please change your optimizer to subclass tf.keras.optimizers.legacy.XXX. If you want to migrate to the new base class and find that it does not support your optimizer, please file an issue in the Keras GitHub repo.
    • Errors such as Cannot recognize variable.... The new optimizer requires all optimizer variables to be created at the first apply_gradients() or minimize() call. If your workflow calls the optimizer to update different parts of the model in multiple stages, please call optimizer.build(model.trainable_variables) before the training loop.
    • Timeout or performance loss. We don't anticipate this happening, but if you see such issues, please use the legacy optimizer and file an issue in the Keras GitHub repo.

    The old Keras optimizer will never be deleted, but will not see any new feature additions. New optimizers (for example, tf.keras.optimizers.Adafactor) will only be implemented based on tf.keras.optimizers.Optimizer, the new base class.
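    A minimal sketch of the most common fixes, assuming a toy Dense model for illustration:

      import tensorflow as tf

      model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])

      # Keep loading pre-2.11 checkpoints by switching to the legacy namespace.
      legacy_opt = tf.keras.optimizers.legacy.Adam(learning_rate=1e-3)

      # If different model parts are updated in separate stages, create all
      # optimizer variables up front, before the training loop.
      new_opt = tf.keras.optimizers.Adam()
      new_opt.build(model.trainable_variables)

      # learning_rate now returns the current value; the schedule object
      # itself is reachable via the private attribute named in the notes.
      schedule = tf.keras.optimizers.schedules.ExponentialDecay(1e-3, 1000, 0.9)
      opt = tf.keras.optimizers.Adam(learning_rate=schedule)
      print(type(opt._learning_rate))  # the ExponentialDecay schedule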

Major Features and Improvements

  • tf.lite:

    • New operations supported: tf.unsorted_segment_min, tf.atan2, and tf.sign.
    • Updates to existing operations:
      • tfl.mul now supports complex32 inputs.
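    A minimal conversion sketch exercising one of the new ops (the function and its signature are illustrative):

      import tensorflow as tf

      @tf.function(input_signature=[tf.TensorSpec([None], tf.float32),
                                    tf.TensorSpec([None], tf.float32)])
      def angle(y, x):
          return tf.atan2(y, x)  # newly supported by the converter

      converter = tf.lite.TFLiteConverter.from_concrete_functions(
          [angle.get_concrete_function()])
      tflite_model = converter.convert()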
  • tf.experimental.StructuredTensor

    • Introduced tf.experimental.StructuredTensor, which provides a flexible and TensorFlow-native way to encode structured data such as protocol buffers or pandas dataframes.
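    For example, proto-like records with ragged fields can be held in a single StructuredTensor (a sketch; the field names are illustrative):

      import tensorflow as tf

      st = tf.experimental.StructuredTensor.from_pyval([
          {"name": "alice", "scores": [1.0, 2.0]},
          {"name": "bob", "scores": [3.0]},
      ])
      print(st.field_value("name"))    # string tensor of names
      print(st.field_value("scores"))  # ragged tensor of per-record scores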
  • tf.keras:

    • Added a new get_metrics_result() method to tf.keras.models.Model.
      • Returns the current metrics values of the model as a dict.
    • Added a new group normalization layer - tf.keras.layers.GroupNormalization.
    • Added weight decay support for all Keras optimizers.
    • Added Adafactor optimizer tf.keras.optimizers.Adafactor.
    • Added warmstart_embedding_matrix to tf.keras.utils.
      • This utility can be used to warm-start an embedding matrix, so you can reuse previously learned word embeddings when moving to a new vocabulary that may include previously unseen words (the embedding vectors for unseen words are randomly initialized).
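    A toy model exercising several of these additions at once (shapes and hyperparameters are illustrative):

      import numpy as np
      import tensorflow as tf

      model = tf.keras.Sequential([
          tf.keras.layers.Dense(8, input_shape=(4,)),
          tf.keras.layers.GroupNormalization(groups=2),  # new layer
          tf.keras.layers.Dense(1),
      ])
      model.compile(
          optimizer=tf.keras.optimizers.Adafactor(),     # new optimizer
          loss="mse",
          metrics=["mae"],
      )
      x, y = np.random.rand(32, 4), np.random.rand(32, 1)
      model.fit(x, y, epochs=1, verbose=0)

      # New: current metric values as a dict, without a separate evaluate().
      print(model.get_metrics_result())  # e.g. {'loss': ..., 'mae': ...}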
  • tf.Variable:

    • Added CompositeTensor as a base class to ResourceVariable.
      • This allows tf.Variables to be nested in tf.experimental.ExtensionTypes.
    • Added a new constructor argument experimental_enable_variable_lifting to tf.Variable, defaulting to True.
      • When it's False, the variable won't be lifted out of tf.function, thus it can be used as a tf.function-local variable: during each execution of the tf.function, the variable will be created and then disposed, similar to a local (that is, stack-allocated) variable in C/C++. Currently, experimental_enable_variable_lifting=False only works on non-XLA devices (for example, under @tf.function(jit_compile=False)).
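    A sketch of a tf.function-local variable (non-XLA only, per the note above):

      import tensorflow as tf

      @tf.function(jit_compile=False)
      def local_counter():
          # Created and disposed on every call, like a C/C++ stack local.
          v = tf.Variable(0, experimental_enable_variable_lifting=False)
          v.assign_add(1)
          return v.read_value()

      print(local_counter())  # tf.Tensor(1, ...), on every call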
  • TF SavedModel:

    • Added fingerprint.pb to the SavedModel directory. The fingerprint.pb file is a protobuf containing the "fingerprint" of the SavedModel. See the RFC for more details regarding its design and properties.
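    A quick way to see the new file after saving (the module and path are illustrative):

      import os
      import tensorflow as tf

      tf.saved_model.save(tf.Module(), "/tmp/demo_savedmodel")
      print(sorted(os.listdir("/tmp/demo_savedmodel")))
      # Expect assets/, variables/, saved_model.pb, and the new fingerprint.pb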
  • TF pip:

    • Windows CPU builds for x86/x64 processors are now built, maintained, tested, and released by a third party: Intel. Installing the Windows-native pip packages for tensorflow or tensorflow-cpu will install Intel's tensorflow-intel package. These packages are provided as-is. TensorFlow will use reasonable efforts to maintain the availability and integrity of this pip package, but there may be delays if the third party fails to release it. To use TensorFlow with GPU support on Windows, you will need to install TensorFlow in WSL2.

Bug Fixes and Other Changes

  • tf.image

    • Added an optional parameter return_index_map to tf.image.ssim which causes the returned value to be the local SSIM map instead of the global mean.
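    For example, with random images for illustration:

      import tensorflow as tf

      a = tf.random.uniform([1, 64, 64, 3])
      b = tf.random.uniform([1, 64, 64, 3])

      mean_ssim = tf.image.ssim(a, b, max_val=1.0)  # global mean, as before
      ssim_map = tf.image.ssim(a, b, max_val=1.0,
                               return_index_map=True)  # new: local SSIM map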
  • TF Core:

    • tf.custom_gradient can now be applied to functions that accept "composite" tensors, such as tf.RaggedTensor, as inputs.
    • Fix device placement issues related to datasets with ragged tensors of strings (i.e. variant encoded data with types not supported on GPU).
    • experimental_follow_type_hints for tf.function has been deprecated. Please use input_signature or reduce_retracing to minimize retracing.
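    A sketch of a custom gradient over a ragged input, clipping the upstream gradient while leaving the forward pass untouched:

      import tensorflow as tf

      @tf.custom_gradient
      def clip_grad_ragged(rt):
          def grad(upstream):  # upstream arrives as a tf.RaggedTensor
              return tf.ragged.map_flat_values(
                  tf.clip_by_value, upstream, -1.0, 1.0)
          return tf.identity(rt), grad

      x = tf.ragged.constant([[1.0, 2.0], [3.0]])
      with tf.GradientTape() as tape:
          tape.watch(x)
          y = tf.reduce_sum(clip_grad_ragged(x) * 5.0)
      print(tape.gradient(y, x))  # ragged gradient, clipped to [-1, 1]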
  • tf.SparseTensor:

    • Introduced set_shape, which sets the static dense shape of the sparse tensor and has the same semantics as tf.Tensor.set_shape.
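    For example, refining a partially known dense shape inside a tf.function (a sketch):

      import tensorflow as tf

      @tf.function(input_signature=[
          tf.SparseTensorSpec(shape=[None, None], dtype=tf.float32)])
      def refine(st):
          st.set_shape([None, 3])  # same semantics as tf.Tensor.set_shape
          return st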

Thanks to our Contributors

This release contains contributions from many people at Google, as well as:

103yiran, 8bitmp3, Aakar Dwivedi, Alexander Grund, alif_elham, Aman Agarwal, amoitra, Andrei Ivanov, andreii, Andrew Goodbody, angerson, Ashay Rane, Azeem Shaikh, Ben Barsdell, bhack, Bhavani Subramanian, Cedric Nugteren, Chandra Kumar Ramasamy, Christopher Bate, CohenAriel, Cotarou, cramasam, Enrico Minack, Francisco Unda, Frederic Bastien, gadagashwini, Gauri1 Deshpande, george, Jake, Jeff, Jerry Ge, Jingxuan He, Jojimon Varghese, Jonathan Dekhtiar, Kaixi Hou, Kanvi Khanna, kcoul, Keith Smiley, Kevin Hu, Kun Lu, kushanam, Lianmin Zheng, liuyuanqiang, Louis Sugy, Mahmoud Abuzaina, Marius Brehler, mdfaijul, Meenakshi Venkataraman, Milos Puzovic, mohantym, Namrata-Ibm, Nathan John Sircombe, Nathan Luehr, Olaf Lipinski, Om Thakkar, Osman F Bayram, Patrice Vignola, Pavani Majety, Philipp Hack, Prianka Liz Kariat, Rahul Batra, RajeshT, Renato Golin, riestere, Roger Iyengar, Rohit Santhanam, Rsanthanam-Amd, Sadeed Pv, Samuel Marks, Shimokawa, Naoaki, Siddhesh Kothadi, Simengliu-Nv, Sindre Seppola, snadampal, Srinivasan Narayanamoorthy, sushreebarsa, syedshahbaaz, Tamas Bela Feher, Tatwai Chong, Thibaut Goetghebuer-Planchon, tilakrayal, Tom Anderson, Tomohiro Endo, Trevor Morris, vibhutisawant, Victor Zhang, Vremold, Xavier Bonaventura, Yanming Wang, Yasir Modak, Yimei Sun, Yong Tang, Yulv-Git, zhuoran.liu, zotanika