TensorFlow: v2.14.0-rc0 Release

Release date: September 12, 2023
Previous version: v2.13.1 (released September 12, 2023)
674 Diff Delta · 15 total committers

Release 2.14.0


Breaking Changes

  • tf.Tensor

    • The class hierarchy for tf.Tensor has changed: there are now explicit EagerTensor and SymbolicTensor classes for eager execution and tf.function tracing, respectively. Users who relied on the exact type of Tensor (e.g. type(t) == tf.Tensor) will need to update their code to use isinstance(t, tf.Tensor). The tf.is_symbolic_tensor helper added in 2.13 may be used when it is necessary to determine whether a value is specifically a symbolic tensor.
  • tf.compat.v1.Session

    • tf.compat.v1.Session.partial_run and tf.compat.v1.Session.partial_run_setup will be deprecated in the next release.
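The impact of the tf.Tensor hierarchy change can be illustrated with plain Python classes (a stand-alone sketch with no TensorFlow import; the Tensor, EagerTensor, and SymbolicTensor names below are stand-ins for the real classes): an exact-type check stops matching once instances are subclass objects, while isinstance keeps working.

```python
# Stand-in classes mimicking the new hierarchy: tf.Tensor is now a base
# class with concrete EagerTensor / SymbolicTensor subclasses.
class Tensor:                  # plays the role of tf.Tensor
    pass

class EagerTensor(Tensor):     # plays the role of the eager-mode tensor
    pass

class SymbolicTensor(Tensor):  # plays the role of the tf.function tensor
    pass

t = EagerTensor()

# The old exact-type check no longer matches a subclass instance...
print(type(t) == Tensor)      # False
# ...while the recommended isinstance check still does.
print(isinstance(t, Tensor))  # True
```

The same reasoning applies whenever a concrete class is split into a base class with subclasses, which is why isinstance is the recommended check going forward.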

Known Caveats

  • tf.lite
    • When the converter flag "_experimental_use_buffer_offset" is enabled, additional metadata is automatically excluded from the generated model; the behavior is the same as setting "exclude_conversion_metadata".
    • If the model is larger than 2GB, the "exclude_conversion_metadata" flag must also be set.

Major Features and Improvements

  • Enable JIT-compiled i64-indexed kernels on GPU for large tensors with more than 2**32 elements.

    • Unary GPU kernels: Abs, Atanh, Acos, Acosh, Asin, Asinh, Atan, Cos, Cosh, Sin, Sinh, Tan, Tanh.
    • Binary GPU kernels: AddV2, Sub, Div, DivNoNan, Mul, MulNoNan, FloorDiv, Equal, NotEqual, Greater, GreaterEqual, LessEqual, Less.
  • tf.lite

    • Added experimental support for converting models that may be larger than 2GB before buffer deduplication.

Bug Fixes and Other Changes

  • tf.py_function and tf.numpy_function can now be used as function decorators for clearer code:

    ```python
    @tf.py_function(Tout=tf.float32)
    def my_fun(x):
        print("This always executes eagerly.")
        return x + 1
    ```

  • tf.lite

    • Strided_Slice now supports UINT32.
  • tf.config.experimental.enable_tensor_float_32_execution

    • Disabling TensorFloat-32 execution now causes TPUs to use float32 precision for float32 matmuls and other ops. TPUs have always used bfloat16 precision for certain ops, such as matmul, when those ops had float32 inputs; now, calling tf.config.experimental.enable_tensor_float_32_execution(False) makes TPUs use float32 precision for such ops instead of bfloat16.
  • tf.experimental.dtensor

    • API changes for Relayout. Added a new API, dtensor.relayout_like, for relaying out a tensor according to the layout of another tensor.
    • Added dtensor.get_default_mesh, for retrieving the current default mesh under the dtensor context.
    • *fft* ops now support DTensors with any layout. Fixed a bug in fft2d/fft3d, ifft2d/ifft3d, rfft2d/rfft3d, and irfft2d/irfft3d for sharded input.
  • tf.experimental.strict_mode

    • Added a new API, strict_mode, which converts all deprecation warnings into runtime errors with instructions on switching to a recommended substitute.
  • TensorFlow Debugger (tfdbg) CLI: ncurses-based CLI for tfdbg v1 was removed.

  • TensorFlow now supports C++ RTTI on mobile and Android. To enable this feature, pass the flag --define=tf_force_rtti=true to Bazel when building TensorFlow. This may be needed when linking TensorFlow into RTTI-enabled programs since mixing RTTI and non-RTTI code can cause ABI issues.

  • tf.ones, tf.zeros, tf.fill, tf.ones_like, tf.zeros_like now take an additional Layout argument that controls the output layout of their results.

  • tf.nest and tf.data now support user-defined classes implementing __tf_flatten__ and __tf_unflatten__ methods. See nest_util for code examples.
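The flatten/unflatten protocol can be sketched without TensorFlow by calling the two methods directly. This is a hedged illustration, not the authoritative API: the (metadata, components) shape and classmethod form below follow the nest_util examples, and the MaskedValue class is a hypothetical example type; consult nest_util for the exact signatures.

```python
# Hedged sketch of the __tf_flatten__ / __tf_unflatten__ protocol.
# MaskedValue is a made-up example class, not a TensorFlow type.
class MaskedValue:
    def __init__(self, value, mask):
        self.value = value  # dynamic data, the part tf.nest would traverse
        self.mask = mask    # static metadata, carried through unchanged

    def __tf_flatten__(self):
        # metadata: static fields; components: the flattenable leaves.
        metadata = (self.mask,)
        components = (self.value,)
        return metadata, components

    @classmethod
    def __tf_unflatten__(cls, metadata, components):
        # Rebuild an instance from what __tf_flatten__ produced.
        (mask,) = metadata
        (value,) = components
        return cls(value, mask)

# Round-trip without TensorFlow: flatten, then unflatten.
mv = MaskedValue(value=3.0, mask=True)
meta, comps = mv.__tf_flatten__()
restored = MaskedValue.__tf_unflatten__(meta, comps)
print(restored.value, restored.mask)  # 3.0 True
```

With TensorFlow available, tf.nest.flatten and tf.nest.pack_sequence_as would drive these methods; the direct calls above only demonstrate the round-trip shape of the protocol.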


Keras

Keras is a framework built on top of TensorFlow. See more details on the Keras website.

Major Features and Improvements

  • tf.keras
    • Model.compile now supports steps_per_execution='auto' as a parameter, allowing automatic tuning of steps per execution during Model.fit, Model.predict, and Model.evaluate for a significant performance boost.