TensorFlow is Google's machine learning framework. It is primarily used for deep learning tasks and integrates seamlessly with other Google APIs. TensorFlow is one of the most widely used deep learning libraries in industry today and well worth learning!
By the end of this tutorial, you will understand the following:
- What is TensorFlow?
- What is new in TF 2.0?
- TensorFlow vs Keras
- Installing TensorFlow
- Image Classifier in TensorFlow
What Is TensorFlow?
TensorFlow started as an open-source deep learning library by Google and has grown into a complete framework for end-to-end machine learning workflows. You might be wondering why Google chose this name and what “Tensor” means.
What Is A Tensor?
Tensors are, at their core, multi-dimensional arrays. However, they are not just N-dimensional arrays of numbers.
A tensor also supports transformations such as dot products, addition, and matrix multiplication.
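To make this concrete, here is a minimal sketch (assuming TensorFlow 2.x is installed) of a few of these operations on small, made-up matrices:

```python
import tensorflow as tf

# Two rank-2 tensors (matrices)
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [1.0, 1.0]])

print(a + b)                       # element-wise addition
print(tf.matmul(a, b))             # matrix multiplication
print(tf.tensordot(a, b, axes=2))  # dot product contracting both axes
```

Each call returns a new tensor; nothing has to be declared or compiled ahead of time.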
But Why Are They Important?
Tensors are not new. They have been in use for a long time, but their characteristics are heavily exploited in deep learning, where data is typically huge and multi-dimensional.
Tensors, just like Numpy arrays, also have a shape and data type. All tensors are immutable like Python numbers and strings: you can never update the contents of a tensor, only create a new one.
But what makes them different from usual Numpy arrays is their ability to utilize GPU memory and compute power which is of the utmost importance when data is high-dimensional and size is in millions or more.
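As a quick sketch (again assuming TensorFlow 2.x), you can inspect a tensor's shape and dtype and check whether TensorFlow can see a GPU:

```python
import tensorflow as tf

x = tf.constant([[1, 2, 3], [4, 5, 6]])
print(x.shape)  # (2, 3)
print(x.dtype)  # <dtype: 'int32'>

# Tensors are immutable: operations build new tensors, x is unchanged
y = x * 10

# Lists the GPUs TensorFlow can see; empty on a CPU-only machine
print(tf.config.list_physical_devices('GPU'))
```

If a GPU is available, TensorFlow places operations on it automatically; no code changes are needed.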
Tensors are widely used in deep learning frameworks such as Facebook’s PyTorch and Google’s TensorFlow, which is even named after them!
Google has also developed an AI accelerator, called the Tensor Processing Unit (TPU), especially for TensorFlow, which takes the optimization to another level altogether!
What’s New In TF 2.0 ?
Google’s Brain Team released the first version of TensorFlow in 2015.
Building neural networks with TensorFlow 1.x was not an easy task, as it required a lot of code to be written.
Lazy Evaluation Vs Eager Evaluation
With TensorFlow 1.x, you had to create a Session and run it to get the output of any “graph”. Let’s understand this with the code below:
import tensorflow as tf
a = tf.constant(1)
b = tf.constant(2)
c = a + b
print(c)
Running the above code won’t give you the output you want, i.e., 3; it prints a Tensor object instead. This is because TensorFlow 1.x worked in sessions.
A session is a type of environment that contains all the variables and the transformations to be performed on them.
A graph of transformations was built but not evaluated until it was explicitly executed by calling sess.run().
Therefore, the above code returns what you expect if you do:
with tf.Session() as sess:
    print(sess.run(c))  # 3
This is called lazy evaluation: the graph lazily waits until it is explicitly told to run.
This lengthy and complicated process needed to be resolved, and hence TensorFlow 2.x was born.
TF 2.x uses eager evaluation by default, which makes it much easier to write and run code.
There are no sessions anymore, and a neural network training script that took roughly 100 lines in TF 1.x takes fewer than 20 with TF 2.x.
TensorFlow’s eager execution is an imperative programming environment that evaluates operations immediately, without building graphs: operations return concrete values instead of constructing a computational graph to run later.
This makes it easy to get started with TensorFlow and debug models, and it reduces boilerplate as well.
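The earlier session example becomes trivial under eager execution; this sketch shows the sum being computed immediately, with no session:

```python
import tensorflow as tf

# Operations run immediately and return concrete values
a = tf.constant(1)
b = tf.constant(2)
c = a + b

print(c)          # tf.Tensor(3, shape=(), dtype=int32)
print(c.numpy())  # 3
```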
TensorFlow Vs Keras
The question is really not TensorFlow vs Keras. It is TensorFlow with Keras. Keras provided a high-level API over TensorFlow 1.x that made it very easy to work with.
Now with TF 2.0, TensorFlow has officially made Keras a part of its API for model designing and training with tf.keras.
All code that was earlier written in standalone Keras is now recommended to be written with tf.keras in TF 2.0, as this gives access to all TensorFlow components and the wider ecosystem, such as:
- TensorFlow Serving which is used to serve/deploy TensorFlow models seamlessly.
- TensorFlow Lite which is the mobile version of TensorFlow capable of running on Android and iOS.
- TensorBoard is a suite of visualization tools to understand, debug, and optimize TensorFlow programs.
Installing TensorFlow
If you are new to Machine Learning, the easiest way to get things rolling is by opening up a Colab notebook. Just go to https://colab.research.google.com/ and click on “New Python 3 Notebook.”
Make sure the kernel says “Connected” at the top right. Good news: TensorFlow comes pre-installed in Google Colab.
Voila! You’re all set.
To check that you’re on the right version, run the snippet below.
import tensorflow as tf
print(tf.__version__)
It should print a version of 2.0.0 or above, and you’re good to go.
Image Classifier In TensorFlow
Let’s now go over the “Hello World” of deep learning problems: the MNIST dataset.
We’ll build a small neural network to classify MNIST digits, following these steps:
- Build a neural network that classifies images.
- Train a neural network.
- Evaluate the accuracy of the model
import tensorflow as tf
Loading the MNIST data.
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]
Building a tf.keras.Sequential model by stacking up the layers.
We’d need to choose an optimizer and a loss function as well for the model to train upon.
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])
Defining the Sparse Categorical Cross Entropy loss function.
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
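To see what this loss measures, here is a small standalone sketch using made-up labels and logits: it takes integer class labels and raw, unnormalized logits, and returns the average negative log-probability of the true classes.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

# Hypothetical batch: integer labels and raw logits for 3 classes
labels = tf.constant([0, 2])
logits = tf.constant([[2.0, 0.5, -1.0],
                      [0.1, 0.3, 3.0]])

# Both rows favor the correct class, so the loss comes out small
print(loss_fn(labels, logits).numpy())
```

With from_logits=True, the softmax is applied inside the loss, so the model's final layer can output raw scores.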
Compiling the model with the Adam optimizer and the loss function defined above.
model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])
Training the model with 5 epochs.
model.fit(x_train, y_train, epochs=5)
Evaluating the model.
model.evaluate(x_test, y_test, verbose=2)
313/313 - 0s - loss: 0.0825 - accuracy: 0.9753
[0.082541823387146, 0.9753000140190125]
The image classifier is now trained to ~98% accuracy on this dataset.
Before You Go
TensorFlow 2 focuses on simplicity and ease of use, with updates like eager execution, intuitive higher-level APIs, and flexible model building on any platform.
TensorFlow is one of the go-to libraries for deep learning tasks these days. The other most popular library is Facebook’s PyTorch.
TensorFlow’s extended ecosystem makes it a great place to begin your Deep Learning journey. It is easy to understand and more importantly, easy to implement.
The best place to start is with the user-friendly Sequential API, which lets you create models by plugging together building blocks.
So, now that you have a detailed idea of TensorFlow and how it fits among the major deep learning frameworks, you can make an informed decision and choose the one that suits your project best.