# Release 0.12.0

## Major Features and Improvements
- TensorFlow now builds and runs on Microsoft Windows (tested on Windows 10,
  Windows 7, and Windows Server 2016). Supported languages include Python (via a
  pip package) and C++. CUDA 8.0 and cuDNN 5.1 are supported for GPU
  acceleration. Known limitations include:
  - It is not currently possible to load a custom op library.
  - The GCS and HDFS file systems are not currently supported.
  - The following ops are not currently implemented:
    `DepthwiseConv2dNative`, `DepthwiseConv2dNativeBackpropFilter`,
    `DepthwiseConv2dNativeBackpropInput`, `Dequantize`, `Digamma`, `Erf`,
    `Erfc`, `Igamma`, `Igammac`, `Lgamma`, `Polygamma`, `QuantizeAndDequantize`,
    `QuantizedAvgPool`, `QuantizedBatchNormWithGlobalNormalization`,
    `QuantizedBiasAdd`, `QuantizedConcat`, `QuantizedConv2D`, `QuantizedMatmul`,
    `QuantizedMaxPool`, `QuantizeDownAndShrinkRange`, `QuantizedRelu`,
    `QuantizedRelu6`, `QuantizedReshape`, `QuantizeV2`, `RequantizationRange`,
    and `Requantize`.
- Go: Experimental API in Go to create and execute graphs
(https://godoc.org/github.com/tensorflow/tensorflow/tensorflow/go)
- A new checkpoint format becomes the default in `tf.train.Saver`. Old V1
  checkpoints continue to be readable. Controlled by the `write_version`
  argument, `tf.train.Saver` now writes in the new V2 format by default, which
  significantly reduces the peak memory required and the latency incurred
  during restore.
- Added a new library of matrix-free (iterative) solvers for linear equations,
  linear least-squares, eigenvalues, and singular values in
  `tensorflow/contrib/solvers`. The initial version includes Lanczos
  bidiagonalization, conjugate gradients, and CGLS.
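"Matrix-free" means the solver never needs the matrix entries themselves, only a function that applies the operator to a vector. As a rough illustration of the idea in plain Python (this is a sketch of the technique, not the `tensorflow/contrib/solvers` API), a conjugate-gradient loop built on a user-supplied matvec might look like:

```python
def matvec(x):
    # Matrix-free operator for the SPD matrix A = [[4, 1], [1, 3]]:
    # only the action A @ x is exposed, never A itself.
    return [4.0 * x[0] + 1.0 * x[1], 1.0 * x[0] + 3.0 * x[1]]

def conjugate_gradient(matvec, b, tol=1e-10, max_iter=50):
    # Solve A x = b for symmetric positive-definite A given only matvec.
    x = [0.0] * len(b)
    r = b[:]                      # residual b - A x0, with x0 = 0
    p = r[:]                      # initial search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new ** 0.5 < tol:
            break
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

x = conjugate_gradient(matvec, [1.0, 2.0])  # exact solution is [1/11, 7/11]
```

For an n-by-n system, CG converges in at most n iterations in exact arithmetic, which is why it finishes this 2-by-2 example almost immediately.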
- Added gradients for `matrix_solve_ls` and `self_adjoint_eig`.
- Large cleanup to add second-order gradients for ops with C++ gradients and
  improve existing gradients such that most ops can now be differentiated
  multiple times.
- Added a solver for ordinary differential equations,
  `tf.contrib.integrate.odeint`.
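For readers unfamiliar with `odeint`-style solvers: they integrate dy/dt = f(y, t) from an initial value across a grid of times. A minimal fixed-step classical Runge-Kutta sketch in plain Python (purely illustrative; it is not the `tf.contrib.integrate` implementation):

```python
def rk4_step(f, y, t, h):
    # One classical fourth-order Runge-Kutta step of size h.
    k1 = f(y, t)
    k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(y + h * k3, t + h)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def odeint_fixed(f, y0, ts):
    # Integrate dy/dt = f(y, t) across the time grid ts, returning the
    # solution at every grid point (like odeint-style interfaces do).
    ys = [y0]
    for t0, t1 in zip(ts, ts[1:]):
        ys.append(rk4_step(f, ys[-1], t0, t1 - t0))
    return ys

# dy/dt = -y with y(0) = 1 has the exact solution exp(-t).
ts = [i / 100.0 for i in range(101)]
ys = odeint_fixed(lambda y, t: -y, 1.0, ts)
```

With step size 0.01 the fourth-order method reproduces exp(-1) at t = 1 to well under 1e-6.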
- New contrib module for tensors with named axes, `tf.contrib.labeled_tensor`.
- Visualization of embeddings in TensorBoard.
## Breaking Changes to the API
- `BusAdjacency` enum replaced with a protocol buffer `DeviceLocality`. PCI bus
  indexing now starts from 1 instead of 0, and `bus_id==0` is used where
  previously `BUS_ANY` was used.
- `Env::FileExists` and `FileSystem::FileExists` now return a
  `tensorflow::Status` instead of a bool. Any caller of these functions can be
  converted to a bool by adding `.ok()` to the call.
- C API: Type `TF_SessionWithGraph` has been renamed to `TF_Session`,
  indicating its preferred use in language bindings for TensorFlow. What was
  previously `TF_Session` has been renamed to `TF_DeprecatedSession`.
- C API: Renamed `TF_Port` to `TF_Output`.
- C API: The caller retains ownership of `TF_Tensor` objects provided to
  `TF_Run`, `TF_SessionRun`, `TF_SetAttrTensor`, etc.
- Renamed `tf.image.per_image_whitening()` to
  `tf.image.per_image_standardization()`.
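The renamed op performs standard per-image normalization. As a sketch of the formula in plain Python over a flat list of pixel values (not the TensorFlow kernel; the lower bound on the divisor matches my understanding of the op's documented behavior): each image is shifted to zero mean and scaled by `max(stddev, 1/sqrt(num_pixels))`, the bound guarding against division by zero on constant images.

```python
def per_image_standardization(pixels):
    # Shift to zero mean, then scale by max(stddev, 1/sqrt(N)) so that
    # a constant image does not trigger a division by zero.
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    adjusted_stddev = max(variance ** 0.5, 1.0 / n ** 0.5)
    return [(p - mean) / adjusted_stddev for p in pixels]

out = per_image_standardization([0.0, 2.0, 4.0, 6.0])
```

For this input the stddev exceeds the 1/sqrt(N) floor, so the output has mean 0 and variance 1 exactly.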
- Moved Summary protobuf constructors to the `tf.summary` submodule.
- Deprecated `histogram_summary`, `audio_summary`, `scalar_summary`,
  `image_summary`, `merge_summary`, and `merge_all_summaries`.
- Combined `batch_*` and regular versions of linear algebra and FFT ops. The
  regular op now handles batches as well. All `batch_*` Python interfaces were
  removed.
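The batch semantics follow the now-common convention that leading dimensions index independent matrices. NumPy's `matmul` behaves the same way, so (as an analogy only, not TensorFlow code) the merged-op behavior can be pictured as:

```python
import numpy as np

# A batch of 2 matrices on each side; a single matmul call multiplies
# slice-wise over the leading (batch) dimension.
a = np.arange(24, dtype=float).reshape(2, 3, 4)
b = np.arange(40, dtype=float).reshape(2, 4, 5)
c = np.matmul(a, b)  # shape (2, 3, 5): one matrix product per batch index
```

This is why the separate `batch_*` entry points became redundant: the unbatched call signature already describes the batched case once leading dimensions are allowed.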
- `tf.all_variables`, `tf.VARIABLES`, and `tf.initialize_all_variables` renamed
  to `tf.global_variables`, `tf.GLOBAL_VARIABLES`, and
  `tf.global_variables_initializer`, respectively.
## Bug Fixes and Other Changes
- Use a thread-safe version of the `lgamma` function.
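For context, `lgamma` computes the natural log of the absolute value of the gamma function; the thread-safety concern is with C library variants that write their result's sign to shared global state (`signgam`), making them non-reentrant. Python's own `math.lgamma`, used here purely for illustration, shows what the function computes:

```python
import math

# For positive integers n, gamma(n) = (n-1)!, so lgamma(5) = ln(4!) = ln(24).
value = math.lgamma(5.0)
```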
- Fixed `tf.sqrt` handling of negative arguments.
- Fixed bug causing incorrect number of threads to be used for multi-threaded
benchmarks.
- Performance optimizations for `batch_matmul` on multi-core CPUs.
- Improved `trace`, `matrix_set_diag`, `matrix_diag_part`, and their gradients
  to work for rectangular matrices.
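For a rectangular matrix the main diagonal has length `min(rows, cols)`, which is the convention these ops now support. A tiny plain-Python illustration of that convention (not the TensorFlow implementation):

```python
def matrix_diag_part(m):
    # Main diagonal of a (possibly rectangular) matrix given as a list of
    # rows; its length is min(rows, cols).
    return [m[i][i] for i in range(min(len(m), len(m[0])))]

wide = [[1, 2, 3],
        [4, 5, 6]]      # 2 x 3 matrix: diagonal has 2 entries
tall = [[1, 2],
        [3, 4],
        [5, 6]]         # 3 x 2 matrix: diagonal has 2 entries
```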
- Support for SVD of complex-valued matrices.
## Thanks to our Contributors
This release contains contributions from many people at Google, as well as:
@a7744hsc, Abhi Agg, @admcrae, Adriano Carmezim, Aki Sukegawa, Alex Kendall,
Alexander Rosenberg Johansen, @amcrae, Amlan Kar, Andre Simpelo, Andreas Eberle,
Andrew Hundt, Arnaud Lenglet, @b0noI, Balachander Ramachandran, Ben Barsdell,
Ben Guidarelli, Benjamin Mularczyk, Burness Duan, @c0g, Changming Sun,
@chanis, Corey Wharton, Dan J, Daniel Trebbien, Darren Garvey, David Brailovsky,
David Jones, Di Zeng, @DjangoPeng, Dr. Kashif Rasul, @drag0, Fabrizio (Misto)
Milo, Fabrício Ceschin, @fp, @Ghedeon, @guschmue, Gökçen Eraslan, Haosdent
Huang, Haroen Viaene, Harold Cooper, Henrik Holst, @hoangmit, Ivan Ukhov, Javier
Dehesa, Jingtian Peng, Jithin Odattu, Joan Pastor, Johan Mathe, Johannes Mayer,
Jongwook Choi, Justus Schwabedal, Kai Wolf, Kamil Hryniewicz, Kamran Amini,
Karen Brems, Karl Lattimer, @kborer, Ken Shirriff, Kevin Rose, Larissa Laich,
Laurent Mazare, Leonard Lee, Liang-Chi Hsieh, Liangliang He, Luke Iwanski,
Marek Kolodziej, Moustafa Alzantot, @MrQianjinsi, @nagachika, Neil Han, Nick
Meehan, Niels Ole Salscheider, Nikhil Mishra, @nschuc, Ondrej Skopek, Ondřej
Filip, @OscarDPan, Pablo Moyano, Przemyslaw Tredak, @qitaishui, @Quarazy,
@raix852, Philipp Helo, Sam Abrahams, @SriramRamesh, Till Hoffmann, Tushar Soni,
@tvn, @tyfkda, Uwe Schmidt, Victor Villas, Vit Stepanovs, Vladislav Gubarev,
@wujingyue, Xuesong Yang, Yi Liu, Yilei Yang, @youyou3, Yuan (Terry) Tang,
Yuming Wang, Zafar Takhirov, @zhongyuk, Ziming Dong, @guotong1988
We are also grateful to all who filed issues or helped resolve them, asked and
answered questions, and were part of inspiring discussions.