# Release 2.7.0

## Breaking Changes

## Major Features and Improvements
* Improvements to the TensorFlow debugging experience:
  - Previously, TensorFlow error stack traces involved many internal frames, which could be challenging to read through, while not being actionable for end users. As of TF 2.7, TensorFlow filters internal frames in most errors that it raises, to keep stack traces short, readable, and focused on what's actionable for end users (their own code). This behavior can be disabled by calling `tf.debugging.disable_traceback_filtering()`, and can be re-enabled via `tf.debugging.enable_traceback_filtering()`. If you are debugging a TensorFlow-internal issue (e.g. to prepare a TensorFlow PR), make sure to disable traceback filtering. You can check whether this feature is currently enabled by calling `tf.debugging.is_traceback_filtering_enabled()`. Note that this feature is only available with Python 3.7 or higher.
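    A minimal sketch of how the three toggles fit together (the API names are from this release; the assertion is only illustrative and assumes Python 3.7+):

    ```python
    import tensorflow as tf

    # Traceback filtering is on by default in TF 2.7.
    assert tf.debugging.is_traceback_filtering_enabled()

    # Turn it off while debugging TensorFlow internals (e.g. preparing a TF PR)...
    tf.debugging.disable_traceback_filtering()

    # ...and back on for day-to-day work, where short traces are more readable.
    tf.debugging.enable_traceback_filtering()
    ```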
* Improve the informativeness of error messages raised by Keras `Layer.__call__()`, by adding the full list of argument values passed to the layer in every exception.
* Introduce the `tf.compat.v1.keras.utils.track_tf1_style_variables` decorator, which enables using large classes of TF1-style `variable_scope`, `get_variable`, and `compat.v1.layers`-based components from within TF2 models running with TF2 behavior enabled.
* `tf.data`:
  - tf.data service now supports auto-sharding. Users specify the sharding policy with the `tf.data.experimental.service.ShardingPolicy` enum. It can be one of `OFF` (equivalent to today's `"parallel_epochs"` mode), `DYNAMIC` (equivalent to today's `"distributed_epoch"` mode), or one of the static sharding policies: `FILE`, `DATA`, `FILE_OR_DATA`, or `HINT` (corresponding to the values of `tf.data.experimental.AutoShardPolicy`).
  - Static sharding (auto-sharding) requires that the number of tf.data service workers be fixed. Users need to specify the worker addresses in `tf.data.experimental.service.DispatcherConfig`.
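    As a rough sketch, the policy is passed as the `processing_mode` when distributing a dataset through the tf.data service (the dispatcher address below is a placeholder):

    ```python
    import tensorflow as tf

    dataset = tf.data.Dataset.range(10)
    # Route the pipeline through the tf.data service with dynamic sharding.
    dataset = dataset.apply(tf.data.experimental.service.distribute(
        processing_mode=tf.data.experimental.service.ShardingPolicy.DYNAMIC,
        service="grpc://localhost:5000"))
    ```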
  - `tf.data.experimental.service.register_dataset` now accepts an optional `compression` argument.
* Keras:
  - `tf.keras.layers.Conv` now includes a public `convolution_op` method. This method can be used to simplify the implementation of `Conv` subclasses. There are two primary ways to use this new method. The first is to use the method directly in your own `call` method:

    ```python
    class StandardizedConv2D(tf.keras.layers.Conv2D):
      def call(self, inputs):
        mean, var = tf.nn.moments(self.kernel, axes=[0, 1, 2], keepdims=True)
        return self.convolution_op(inputs, (self.kernel - mean) / tf.sqrt(var + 1e-10))
    ```
    Alternatively, you can override `convolution_op`:

    ```python
    class StandardizedConv2D(tf.keras.layers.Conv2D):
      def convolution_op(self, inputs, kernel):
        mean, var = tf.nn.moments(kernel, axes=[0, 1, 2], keepdims=True)
        # Author code uses std + 1e-5
        return super().convolution_op(inputs, (kernel - mean) / tf.sqrt(var + 1e-10))
    ```
  - Added `merge_state()` method to `tf.keras.metrics.Metric` for use in distributed computations.
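    A small sketch of the intended use (the metric choice and values are illustrative):

    ```python
    import tensorflow as tf

    # Two metric instances, e.g. accumulated on different workers.
    m1 = tf.keras.metrics.Accuracy()
    m2 = tf.keras.metrics.Accuracy()
    m1.update_state([[1], [2]], [[0], [2]])
    m2.update_state([[3], [4]], [[3], [4]])

    # Fold m1's accumulated state into m2, as if m2 had seen all the data.
    m2.merge_state([m1])
    print(m2.result().numpy())  # 0.75
    ```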
  - Added `sparse` and `ragged` options to `tf.keras.layers.TextVectorization` to allow for `SparseTensor` and `RaggedTensor` outputs from the layer.
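    A minimal sketch, assuming the usual adapt-then-call workflow; `ragged` pairs with the `"int"` output mode and `sparse` with the count-style modes:

    ```python
    import tensorflow as tf

    # ragged=True: "int" mode emits a RaggedTensor instead of padded output.
    vectorizer = tf.keras.layers.TextVectorization(output_mode="int", ragged=True)
    vectorizer.adapt(["the quick brown fox", "jumped over the lazy dog"])
    print(vectorizer(["the fox", "jumped over"]))  # <tf.RaggedTensor ...>

    # sparse=True: count-style modes emit a SparseTensor.
    multi_hot = tf.keras.layers.TextVectorization(output_mode="multi_hot", sparse=True)
    multi_hot.adapt(["the quick brown fox"])
    print(multi_hot(["the fox"]))  # SparseTensor
    ```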
* distribute.experimental.rpc package:
  - distribute.experimental.rpc package introduces APIs to create a gRPC-based server to register tf.function methods and a gRPC client to invoke remote registered methods. RPC APIs are intended for multi-client setups, i.e. server and clients are started in separate binaries independently.

    Example usage to create server:

    ```python
    server = tf.distribute.experimental.rpc.Server.create(
        "grpc", "127.0.0.1:1234")

    @tf.function(input_signature=[
        tf.TensorSpec([], tf.int32),
        tf.TensorSpec([], tf.int32)])
    def _remote_multiply(a, b):
      return tf.math.multiply(a, b)

    server.register("multiply", _remote_multiply)
    server.start()  # begin serving registered functions
    ```
    Example usage to create client:

    ```python
    client = tf.distribute.experimental.rpc.Client.create("grpc", address)
    a = tf.constant(2, dtype=tf.int32)
    b = tf.constant(3, dtype=tf.int32)
    result = client.multiply(a, b)
    ```
* `tf.lite`:
  - Add experimental API `experimental_from_jax` to support conversion from Jax models to TensorFlow Lite.
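    A rough sketch of the conversion path, assuming JAX is installed; the function, input name, and sample array are placeholders used to fix shapes and dtypes:

    ```python
    import numpy as np
    import tensorflow as tf
    import jax.numpy as jnp

    def model(x):
      return jnp.sin(x) + 1.0

    sample = np.random.random((1, 4)).astype(np.float32)
    # Each serving function is paired with a list of named sample inputs.
    converter = tf.lite.TFLiteConverter.experimental_from_jax(
        [model], [[("x", sample)]])
    tflite_model = converter.convert()
    ```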
  - Support uint32 data type for cast op.
  - Add experimental quantization debugger `tf.lite.QuantizationDebugger`.
* Extension Types
  - Add experimental API to define new Python classes that can be handled by TensorFlow APIs. To create an extension type, simply define a Python class with `tf.experimental.ExtensionType` as its base, and use type annotations to specify the type for each field. E.g.:

    ```python
    class MaskedTensor(tf.experimental.ExtensionType):
      values: tf.Tensor
      mask: tf.Tensor
    ```

    The `tf.ExtensionType` base class works similarly to `typing.NamedTuple` and `@dataclasses.dataclass` from the standard Python library.
  - Extension types are supported by Keras, tf.data, TF-hub, SavedModel, tf.function, control flow ops, py_function, and distribution strategy.
  - Add "dispatch decorators" that can be used to override the default behavior of TensorFlow ops (such as `tf.add` or `tf.concat`) when they are applied to ExtensionType values.
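    A minimal sketch of a dispatch decorator, reusing the `MaskedTensor` type from the earlier snippet; the masked-addition semantics are illustrative:

    ```python
    import tensorflow as tf

    class MaskedTensor(tf.experimental.ExtensionType):
      values: tf.Tensor
      mask: tf.Tensor

    @tf.experimental.dispatch_for_api(tf.math.add)
    def masked_add(x: MaskedTensor, y: MaskedTensor):
      # A result element is valid only where both inputs are valid.
      return MaskedTensor(x.values + y.values, x.mask & y.mask)
    ```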
  - The `BatchableExtensionType` API can be used to define extension types that support APIs that make use of batching, such as `tf.data.Dataset` and `tf.map_fn`.
## Bug Fixes and Other Changes

* TF Core:
  - Random number generation (RNG) system:
    - Add argument `alg` to `tf.random.stateless_*` functions to explicitly select the RNG algorithm.
    - Add `tf.nn.experimental.stateless_dropout`, a stateless version of `tf.nn.dropout`.
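      A small sketch of both additions (the choice of `"philox"`, the shape, and the rate are illustrative):

      ```python
      import tensorflow as tf

      seed = [1, 2]
      # Explicitly pick the Philox algorithm rather than letting TF choose.
      x = tf.random.stateless_uniform([4], seed=seed, alg="philox")
      # Stateless dropout: the same seed always yields the same dropout mask.
      y = tf.nn.experimental.stateless_dropout(x, rate=0.5, seed=seed)
      ```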
    - `tf.random.Generator` now can be created inside the scope of `tf.distribute.experimental.ParameterServerStrategy` and `tf.distribute.experimental.CentralStorageStrategy`.
  - Add an experimental session config `tf.experimental.disable_functional_ops_lowering` which disables functional control flow op lowering optimization. This is useful when executing within a portable runtime where control flow op kernels may not be loaded due to selective registration.
  - Add a new experimental argument `experimental_is_anonymous` to `tf.lookup.StaticHashTable.__init__` to create the table in anonymous mode. In this mode, the table resource can only be accessed via resource handles (not resource names) and will be deleted automatically when all resource handles pointing to it are gone.
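    A minimal sketch (the keys and values are illustrative):

    ```python
    import tensorflow as tf

    init = tf.lookup.KeyValueTensorInitializer(
        keys=tf.constant([1, 2]), values=tf.constant([10, 20]))
    # The table lives only as long as handles to it exist.
    table = tf.lookup.StaticHashTable(
        init, default_value=-1, experimental_is_anonymous=True)
    print(table.lookup(tf.constant([1, 3])))  # [10, -1]
    ```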
* `tf.data`:
  - Introduce the `tf.data.experimental.at` API which provides random access for input pipelines that consist of transformations that support random access. The initial set of transformations that support random access includes: `tf.data.Dataset.from_tensor_slices`, `tf.data.Dataset.shuffle`, `tf.data.Dataset.batch`, `tf.data.Dataset.shard`, `tf.data.Dataset.map`, and `tf.data.Dataset.range`.
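    For example (a sketch; the pipeline uses only random-access-capable transformations):

    ```python
    import tensorflow as tf

    dataset = tf.data.Dataset.range(10).map(lambda x: x * 2)
    # Fetch the element at position 3 without iterating the pipeline.
    print(tf.data.experimental.at(dataset, index=3))  # tf.Tensor(6, ...)
    ```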
  - Promote the `tf.data.Options.experimental_deterministic` API to `tf.data.Options.deterministic` and deprecate the experimental endpoint.
  - Move autotuning options from `tf.data.Options.experimental_optimization.autotune*` to a newly created `tf.data.Options.autotune.*` and remove support for `tf.data.Options.experimental_optimization.autotune_buffers`.
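    A sketch of the promoted endpoints in use (the dataset is a placeholder):

    ```python
    import tensorflow as tf

    dataset = tf.data.Dataset.range(100)
    options = tf.data.Options()
    options.deterministic = False    # formerly options.experimental_deterministic
    options.autotune.enabled = True  # formerly options.experimental_optimization.autotune
    dataset = dataset.with_options(options)
    ```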
  - Add support for user-defined names in the tf.data core Python API, which can be used to disambiguate tf.data events in the TF Profiler Trace Viewer.
  - Promote the `tf.data.experimental.sample_from_datasets` API to `tf.data.Dataset.sample_from_datasets` and deprecate the experimental endpoint.
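    For example (a sketch; the datasets and weights are illustrative):

    ```python
    import tensorflow as tf

    ds1 = tf.data.Dataset.range(0, 3)
    ds2 = tf.data.Dataset.range(100, 103)
    # Interleave samples, drawing from each dataset with equal probability.
    mixed = tf.data.Dataset.sample_from_datasets([ds1, ds2], weights=[0.5, 0.5])
    ```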
* TF SavedModel:
  - Custom gradients are now saved by default. See `tf.saved_model.SaveOptions` to disable this.
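    To opt out (a sketch; `model` and the path are placeholders):

    ```python
    import tensorflow as tf

    model = tf.Module()  # placeholder for a real model
    tf.saved_model.save(
        model, "/tmp/my_model",
        options=tf.saved_model.SaveOptions(experimental_custom_gradients=False))
    ```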
* XLA:
  - Add a new API that allows custom call functions to signal errors. The old API will be deprecated in a future release. See https://www.tensorflow.org/xla/custom_call for details.
  - XLA:GPU reductions are deterministic by default (reductions within `jit_compile=True` are now deterministic).
  - XLA:GPU works with Horovod (OSS contribution by Trent Lo from NVIDIA).
* `tf.saved_model.save`:
  - When saving a model, not specifying a namespace whitelist for custom ops with a namespace will now default to allowing rather than rejecting them all.
## Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
8bitmp3, Abhilash Majumder, abhilash1910, AdeshChoudhar, Adrian Garcia Badaracco, Adrian Ratiu, ag.ramesh, Aleksandr Nikolaev, Alexander Bosch, Alexander Grund, Annie Tallund, Anush Elangovan, Artem Sokolovskii, azazhu, Balint Cristian, Bas Aarts, Ben Barsdell, bhack, cfRod, Cheney-Wang, Cheng Ren, Christopher Bate, collin, Danila Bespalov, David Datascientist, Deven Desai, Ehsan Kia, Ellie, Fan Du, fo40225, Frederic Bastien, fsx950223, Gauri1 Deshpande, geetachavan1, Guillaume Klein, guozhong.zhuang, helen, Håkon Sandsmark, japm48, jgehw, Jinzhe Zeng, Jonathan Dekhtiar, Kai Zhu, Kaixi Hou, Kanvi Khanna, Koan-Sin Tan, Koki Ibukuro, Kulin Seth, KumaTea, Kun-Lu, Lemo, lipracer, liuyuanqiang, Mahmoud Abuzaina, Marius Brehler, Maxiwell S. Garcia, mdfaijul, metarutaiga, Michal Szutenberg, nammbash, Neil Girdhar, Nishidha Panpaliya, Nyadla-Sys, Patrice Vignola, Peter Kasting, Philipp Hack, PINTO0309, Prateek Gupta, puneeshkhanna, Rahul Butani, Rajeshwar Reddy T, Reza Rahimi, RinozaJiffry, rmothukuru, Rohit Santhanam, Saduf2019, Samuel Marks, sclarkson, Sergii Khomenko, Sheng, Yang, Sidong-Wei, slowy07, Srinivasan Narayanamoorthy, Srishti Srivastava, stanley, Stella Alice Schlotter, Steven I Reeves, stevenireeves, svobora, Takayoshi Koizumi, Tamas Bela Feher, Thibaut Goetghebuer-Planchon, Trent Lo, Twice, Varghese, Jojimon, Vishnuvardhan Janapati, Wang Yanzhang, Wang,Quintin, William Muir, William Raveane, Yasir Modak, Yasuhiro Matsumoto, Yi Li, Yong Tang, zhaozheng09, Zhoulong Jiang, zzpmiracle