Announcement: TensorFlow 2.0 is coming!
And TensorFlow 1.x is going to be…
The eagerly awaited update to the popular machine learning framework TensorFlow was announced earlier in August by Martin Wicke from Google AI.
The exciting news was posted to the project's Google Group and has already caused a buzz around the next major version of the framework, TensorFlow 2.0. If you're as excited as I am and eager to stay up to date with the details of 2.0 development, I strongly encourage you to subscribe to the Google Group!
What makes this even more appealing is that you can take part in the upcoming public design reviews and even shape the features of TensorFlow 2.0 by voicing your concerns and proposing changes! This is exactly why I love open-source development: the community works together and supports one another toward common goals.
So What’s Wrong with TensorFlow 1.x?
If you're new to TensorFlow, chances are you've found the learning curve very steep: unlike Keras, TensorFlow 1.x is a low-level framework, which makes it relatively hard to learn and apply.
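To see what that steepness looks like, here is a minimal sketch of the 1.x graph-and-session workflow, written against the `tf.compat.v1` shim so it also runs under 2.x. Even adding two numbers means building a graph, declaring placeholders, and opening a session:

```python
import tensorflow as tf

# TensorFlow 1.x style: declare a static graph up front, then feed data
# into it through a Session. Nothing actually runs until sess.run().
graph = tf.Graph()
with graph.as_default():
    a = tf.compat.v1.placeholder(tf.float32, name="a")
    b = tf.compat.v1.placeholder(tf.float32, name="b")
    total = a + b  # just a node in the graph, not a value yet

with tf.compat.v1.Session(graph=graph) as sess:
    result = sess.run(total, feed_dict={a: 2.0, b: 3.0})
    print(result)  # 5.0
```

Contrast this with Keras (or eager execution in 2.0), where `2.0 + 3.0` is simply evaluated the moment you write it.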
Nevertheless, TensorFlow has become the world's most widely adopted machine learning framework, catering to a broad spectrum of users and use cases.
This rapid growth has propelled the next major version of TensorFlow, which will be a major milestone with a focus on ease of use.
TensorFlow 1.x will still be maintained and developed during the early releases of TensorFlow 2.0, but it has been announced that 1.x development will stop once the final version of TensorFlow 2.0 is released. The TensorFlow team will continue to issue security patches for the last 1.x release for one year after TensorFlow 2.0's release date.
Here is what has been announced for TensorFlow 2.0 so far:
Stronger integration of Keras, eager execution, and Estimators, so that they use the same data pipelines, APIs, and serialization format (SavedModel).
Canned Estimators for commonly used ML models (such as TimeSeries, RNNs, TensorForest, and additional boosted trees features) and related functionality (like sequence feature columns) in TensorFlow Core (migrated from contrib where they already exist).
Use DistributionStrategy to utilize multiple GPUs and multiple TPU cores.
Distributed training support (multi-machine).
Simpler export to a GraphDef/SavedModel.
Building out a set of models across image recognition, object detection, speech, translation, recommendation, and reinforcement learning that demonstrate best practices and serve as a starting point for high-performance model development.
A growing set of high-performance Cloud TPU reference models.
As much as possible, move large projects inside tf.contrib to separate repositories.
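As a taste of the integration and export items above, here is a minimal sketch assuming the TF 2.x API as it eventually shipped; the `Linear` class and its values are hypothetical placeholders for a real model. One object is called eagerly, traced into a graph with `tf.function`, and exported to the single SavedModel format:

```python
import tempfile
import tensorflow as tf

# A minimal trainable module: y = w * x + b (hypothetical example).
class Linear(tf.Module):
    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)
        self.b = tf.Variable(1.0)

    # tf.function traces this into a graph; it is still callable eagerly.
    @tf.function(input_signature=[tf.TensorSpec([], tf.float32)])
    def __call__(self, x):
        return self.w * x + self.b

model = Linear()
print(model(tf.constant(3.0)).numpy())  # eager call: 7.0

# One serialization format shared across the APIs: SavedModel.
export_dir = tempfile.mkdtemp()
tf.saved_model.save(model, export_dir)
restored = tf.saved_model.load(export_dir)
print(restored(tf.constant(3.0)).numpy())  # 7.0
```

No sessions, no placeholders: the same object works in eager mode, as a graph, and as a deployable SavedModel.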
The tf.contrib module will be discontinued in its current form with TensorFlow 2.0. Experimental development will happen in other repositories in the future.
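The DistributionStrategy item above can be sketched like this, assuming `tf.distribute.MirroredStrategy` as it shipped in TF 2.x (the exact API names were still settling at announcement time, and the model and data here are hypothetical placeholders):

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model onto all visible GPUs and averages
# gradients across replicas; with no GPU it simply runs on the CPU.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Variables must be created inside the strategy's scope.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="sgd", loss="mse")

# Tiny synthetic dataset, just to exercise a distributed training step.
x = np.random.rand(32, 4).astype("float32")
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=8, verbose=0)
```

The appealing part is that the training loop itself does not change: the same `compile`/`fit` code scales from one CPU to many GPUs just by wrapping model creation in the strategy's scope.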
With so many exciting and user-friendly features, I really look forward to the early release of TensorFlow 2.0. A preview version is expected later this year, next month in December!
As always, if you have any questions, feel free to leave your comments below. Thank you for reading, and see you in the next post!