Implementing Neural Networks From Scratch with Python [With Examples]

In this article, we will learn how to build and train a neural network from scratch.

We will use the Churn dataset to train our neural network. Training a neural network is not complicated, but we first need to pre-process our data so that the model can consume it and train without any obstacles. We will proceed as follows:

  • Install Tensorflow
  • Import Libraries
  • Import the Dataset
  • Transform the input data
  • Split the data
  • Initialize the model
  • Build the model
  • Train the model
  • Evaluate the model

Churn rate measures the proportion of a company’s subscribers or customers who discontinue the service within a specific time period. This rate plays an essential role in estimating profits and in forming plans to gain new customers. In simple terms, company growth can be gauged by its churn rate.

In this dataset, we have thirteen features, but we will use only those that help predict the chance of a customer discontinuing the service.

Install TensorFlow

You can use Google Colab if your PC or laptop doesn’t have a GPU, or you can work locally in a Jupyter Notebook. If you are using your own system, upgrade pip and then install TensorFlow as follows.
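A typical way to do this from a terminal is shown below (the plain `tensorflow` package installs the CPU build by default):

```shell
# Upgrade pip first, then install TensorFlow
pip install --upgrade pip
pip install tensorflow
```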


Import Libraries

First, we import all the libraries we will need in the process.

NumPy →  To perform mathematical operations on arrays.

Pandas →  To load the data file as a Pandas data frame and analyze the data.

Matplotlib →  We import pyplot to plot graphs of the data.
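Put together, the import cell looks like this (the aliases `np`, `pd`, and `plt` are the usual conventions):

```python
import numpy as np               # mathematical operations on arrays
import pandas as pd              # load the data file and analyze the data
import matplotlib.pyplot as plt  # plot graphs of the data
```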

Import Dataset

Our dataset is in CSV format, so we load it using pandas. Then we split the dataset into independent variables (X) and the dependent variable (y).
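A sketch of this step is below. Here a tiny inline CSV stands in for the real churn file; the file name and column layout in your copy of the dataset may differ, and the real file has thirteen feature columns:

```python
import io
import pandas as pd

# Miniature stand-in for the churn CSV; "Exited" (1 = customer left) is the target
csv_data = io.StringIO(
    "CreditScore,Geography,Gender,Age,Balance,Exited\n"
    "619,France,Female,42,0.0,1\n"
    "608,Spain,Female,41,83807.86,0\n"
    "502,France,Male,42,159660.8,1\n"
)
df = pd.read_csv(csv_data)

# Independent variables (X) and the dependent variable (y)
X = df.drop(columns=["Exited"])
y = df["Exited"]
```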

Transform the data

In our dataset, we have two categorical features, Geography and Gender. We need to create dummies for these two features, so we use the get_dummies method and then append them to our Independent Features Data.

Once we are done creating dummies and concatenating them to our data, we will remove the original features, i.e., Gender and Geography, from our train data.
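A minimal sketch of the dummy-encoding step, assuming a frame with Geography and Gender columns (`drop_first=True` is a common choice to avoid redundant dummy columns):

```python
import pandas as pd

X = pd.DataFrame({
    "CreditScore": [619, 608, 502],
    "Geography": ["France", "Spain", "France"],
    "Gender": ["Female", "Female", "Male"],
})

# Create dummy (one-hot) columns for the two categorical features
geography = pd.get_dummies(X["Geography"], drop_first=True)
gender = pd.get_dummies(X["Gender"], drop_first=True)

# Append the dummies, then remove the original categorical columns
X = pd.concat([X, geography, gender], axis=1)
X = X.drop(columns=["Geography", "Gender"])
```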


Split data

From Sklearn’s model_selection sub-library, we import train_test_split, which splits the data into train and test sets. The test_size=0.3 argument indicates that 30% of the data should be held out for testing.
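A small sketch of the split, using a toy feature matrix in place of the real data (`random_state` just makes the split repeatable):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)   # toy feature matrix with 10 rows
y = np.array([0, 1] * 5)           # toy binary target

# Hold out 30% of the rows for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
```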

Normalize the data

It’s essential to make sure that all the feature values lie in a similar range; otherwise, it is difficult for the model to learn the underlying patterns between the features. So we scale our data using the StandardScaler method, which standardizes each feature to zero mean and unit variance.
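A sketch of the scaling step with toy arrays: the scaler is fitted on the training data only, and the same transformation is then applied to the test data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
X_test = np.array([[2.0, 300.0]])

scaler = StandardScaler()
# Fit on the training data, then apply the same scaling to the test data
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```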

Import dependencies

Now, we will import functionalities required to construct a deep neural network.
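Assuming TensorFlow’s Keras API (which the install step above set up), the imports would look like this:

```python
# Keras building blocks: the Sequential container and the Dense (fully connected) layer
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
```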

Build the Model

It’s time to build our model! Let us initialize our sequential model. The Sequential API allows you to create models layer by layer, which suits most problems.

The first thing we need to do before building a model is to create a model object itself. This object will be an instance of the class called Sequential.

Adding the first fully connected layer

If you are unaware of the types of layers and their functionality, I recommend checking my blog on Introduction to Neural Networks, which covers most of the concepts you should be aware of.

This layer’s output has six neurons, to which we apply the ReLU activation function to introduce non-linearity, and the number of input neurons is 11. We add all these hyperparameters using the .add() method.

We will then add a hidden layer with the same configuration, whose output will also have six nodes.

Output Layer

This layer’s output will have only one node, which tells whether the user stays or leaves the subscription. In this layer, we use sigmoid as our activation function.
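Putting the layers together, a sketch of the model could look like this (the layer sizes follow the text: 11 inputs, hidden layers of six neurons with ReLU, and a single sigmoid output):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential()
model.add(Input(shape=(11,)))              # 11 input features
model.add(Dense(6, activation="relu"))     # first fully connected layer
model.add(Dense(6, activation="relu"))     # hidden layer, same configuration
model.add(Dense(1, activation="sigmoid"))  # output: probability the user leaves
```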


Compiling

Now we need to connect our network with an optimizer. An optimizer updates the weights of the network based on the error gradients, which are computed through back-propagation.

Here we will use adam as our optimizer. Since our outcome is binary, we use binary cross-entropy as the loss, and the metric we track is accuracy.
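The compile step, sketched on the same model architecture as above:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

model = Sequential([
    Input(shape=(11,)),
    Dense(6, activation="relu"),
    Dense(6, activation="relu"),
    Dense(1, activation="sigmoid"),
])

# adam optimizer, binary cross-entropy loss for a binary outcome, accuracy metric
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```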

Training the model

This is the crucial stage, where the model learns the underlying patterns and relationships in the data so that it can predict new outcomes from that knowledge.

We use the model.fit() method to train the model. We pass three arguments to the method:

input →  x_train, the data fed to the network.

output →  y_train, the correct answers for x_train.

epochs →  the number of times the network is trained over the entire dataset.
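A runnable sketch of the training call. Random stand-in data shaped like the scaled training set (11 features) replaces the real x_train and y_train, and the epoch count here is kept small purely for illustration:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Input

# Random stand-in data with the same shape as the scaled churn features
rng = np.random.default_rng(0)
x_train = rng.normal(size=(200, 11))
y_train = rng.integers(0, 2, size=200)

model = Sequential([
    Input(shape=(11,)),
    Dense(6, activation="relu"),
    Dense(6, activation="relu"),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# input, output, and the number of epochs (verbose=0 just silences the log)
history = model.fit(x_train, y_train, epochs=5, verbose=0)
```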

Evaluate

You can evaluate the model’s performance by importing accuracy_score from the sklearn library, which takes two arguments: the actual outputs and the predicted outputs.
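A small sketch with hypothetical values: the model’s sigmoid outputs are probabilities, so they are thresholded at 0.5 to get class labels before comparing against the true labels.

```python
from sklearn.metrics import accuracy_score

y_prob = [0.91, 0.12, 0.48, 0.77]            # hypothetical predicted probabilities
y_pred = [1 if p > 0.5 else 0 for p in y_prob]
y_test = [1, 0, 1, 1]                        # hypothetical true labels

print(accuracy_score(y_test, y_pred))        # fraction of correct predictions
```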


Conclusion

That’s all for now. I hope you enjoyed building your first neural network. Happy Learning!

If you’re interested in learning more about neural networks, machine learning & AI, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.
