
Recurrent Neural Networks: Introduction, Problems, LSTMs Explained

Last updated:
8th Feb, 2021
Read Time
5 Mins


In a traditional feed-forward neural network, information flows in only one direction: from the input layer, through the hidden layers, to the output layer.

The output of each layer depends only on the layer immediately before it, so the network retains no memory of earlier inputs as it moves forward. For example, consider a simple neural network and feed in the word “layer” as the input.


The neural network will process the word one character at a time. By the time it reaches the character “e”, it no longer has any memory of the previous characters “l”, “a” and “y”. This is why a feed-forward neural network cannot use earlier characters to predict the next one.

Now, this is where a recurrent neural network comes to the rescue. It is able to remember all the previous characters because it possesses a memory of its own. As the name suggests, the flow of information recurs in a loop in the case of a recurrent neural network.

At every time step, it receives two inputs: the current input, and the information gathered from the previous states. This kind of neural network therefore does well on tasks like predicting the next character, and on sequential data in general, like speech, audio, time series, etc.

Taking the above example of the word “layer”, suppose the neural network is trying to predict the fifth character. The hidden unit applies a recurrence formula at every time step, combining the current input with its previous state. So at the time step t where the current input is “y”, the previous hidden state summarizes the characters seen so far (“l” and “a”), and the formula is applied to both to produce the next state.

The formula is represented as:

h_t = f(h_(t-1), x_t)

where h_t is the new state, h_(t-1) is the previous state, and x_t is the current input. Each input corresponds to a time step, and the RNN applies the same weight matrix and the same function at every time step.


So, taking tanh as the activation function f, and assigning the weight matrices W_hh and W_xh to the previous state and the current input respectively, we get:

h_t = tanh(W_hh · h_(t-1) + W_xh · x_t)

The output at time step t is then:

y_t = W_hy · h_t
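The forward pass described above can be sketched in a few lines of NumPy. The dimensions, weight initialisation, and one-hot encoding below are illustrative choices, not prescriptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (hypothetical) sizes: 4-dim one-hot inputs, 3-dim hidden state.
input_size, hidden_size, output_size = 4, 3, 4
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))  # hidden -> hidden
W_hy = rng.normal(scale=0.1, size=(output_size, hidden_size))  # hidden -> output

def rnn_forward(xs):
    """Apply h_t = tanh(W_hh @ h_(t-1) + W_xh @ x_t) at every time step."""
    h = np.zeros(hidden_size)
    ys = []
    for x in xs:                       # the SAME weights are reused at each step
        h = np.tanh(W_hh @ h + W_xh @ x)
        ys.append(W_hy @ h)            # y_t = W_hy @ h_t
    return ys, h

# Feed a toy 3-step sequence of one-hot vectors (think characters "l", "a", "y").
xs = [np.eye(input_size)[i] for i in (0, 1, 2)]
ys, h_final = rnn_forward(xs)
print(len(ys), h_final.shape)  # 3 (3,)
```

Note that `h` is carried across iterations of the loop: that carried state is the network's memory of the characters it has already seen.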

A deeper recurrent neural network has more than one hidden layer. Each hidden layer gets its own weights and activation function, so that each layer behaves differently; within a layer, however, the weights are still shared across time steps. Giving every hidden layer the same weights and biases would defeat the purpose of stacking, since the layers would all compute the same thing.
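A minimal sketch of such a stacked RNN, with a separate (hypothetical) weight pair per layer, where each layer's hidden state becomes the next layer's input:

```python
import numpy as np

rng = np.random.default_rng(1)
hidden = 3  # illustrative size; input assumed the same size for brevity

# Two stacked layers, each with its OWN weight matrices
# (shared across time within a layer, but not across layers).
layers = [
    {"W_xh": rng.normal(scale=0.1, size=(hidden, hidden)),
     "W_hh": rng.normal(scale=0.1, size=(hidden, hidden))}
    for _ in range(2)
]

def deep_rnn_forward(xs):
    hs = [np.zeros(hidden) for _ in layers]   # one hidden state per layer
    for x in xs:
        inp = x
        for i, layer in enumerate(layers):
            hs[i] = np.tanh(layer["W_hh"] @ hs[i] + layer["W_xh"] @ inp)
            inp = hs[i]                        # layer i's state feeds layer i+1
    return hs

states = deep_rnn_forward([rng.normal(size=hidden) for _ in range(5)])
print(len(states))  # 2
```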


Two problems commonly occur while training recurrent neural networks: vanishing gradients and exploding gradients. A gradient is simply a measure of how much the output of a function changes with a change in its input. The higher the gradient, the faster the recurrent neural network learns, and vice versa.

Vanishing gradients take place when the gradient values become so small that the RNN learns extremely slowly or stops learning altogether. This is a difficult problem to tackle; however, it can be mitigated by using LSTMs, GRUs, or the ReLU activation function.

Exploding gradients take place when the gradient values grow uncontrollably large, typically because some weights take on extremely high values. This problem is easier to tackle than vanishing gradients: RMSprop can be used to adapt the learning rate, or backpropagation through time can be truncated at a suitable time step.
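A toy calculation shows why both problems arise: backpropagating through T time steps multiplies the gradient by a per-step factor of roughly w · tanh′(a), so the result shrinks or grows geometrically with T. The values of w and a below are arbitrary, chosen only to illustrate the two regimes:

```python
import numpy as np

def gradient_after(T, w, a=0.5):
    """Rough gradient magnitude after backpropagating through T steps.

    Each step contributes a factor of w * tanh'(a), where
    tanh'(a) = 1 - tanh(a)**2. |factor| < 1 vanishes; |factor| > 1 explodes.
    """
    factor = w * (1 - np.tanh(a) ** 2)
    return factor ** T

print(gradient_after(50, w=0.9))   # tiny: the gradient has vanished
print(gradient_after(50, w=2.5))   # huge: the gradient has exploded
```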



Recurrent neural networks by default tend to have only a short-term memory; LSTMs (Long Short-Term Memory networks) are the exception. An LSTM cell has a three-gate module:

Forget gate: This gate decides how much of the past information should be remembered and how much should be omitted. 

Input gate: This gate decides how much of the present input is to be added to the current state. 

Output gate: This gate decides how much of the current state will be passed on to the output.
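The three gates can be sketched as a single LSTM step in NumPy. The layout below (one weight matrix per gate acting on the concatenated hidden state and input, biases omitted for brevity) is a common simplification, not the only formulation:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(2)
n = 4  # hidden size; inputs assumed the same size for brevity (hypothetical)

# One weight matrix per gate, plus one for the candidate cell update.
W_f, W_i, W_o, W_c = (rng.normal(scale=0.1, size=(n, 2 * n)) for _ in range(4))

def lstm_step(x, h, c):
    z = np.concatenate([h, x])
    f = sigmoid(W_f @ z)               # forget gate: how much past to keep
    i = sigmoid(W_i @ z)               # input gate: how much new input to add
    o = sigmoid(W_o @ z)               # output gate: how much state to expose
    c = f * c + i * np.tanh(W_c @ z)   # new cell state (long-term memory)
    h = o * np.tanh(c)                 # new hidden state (what gets output)
    return h, c

h = c = np.zeros(n)
for x in rng.normal(size=(5, n)):      # run 5 toy time steps
    h, c = lstm_step(x, h, c)
print(h.shape, c.shape)  # (4,) (4,)
```

The additive update of the cell state `c` (rather than repeated multiplication through a squashing function) is what lets gradients survive over many time steps.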




This modified version of the RNN is thus able to remember information over the longer term without suffering from vanishing gradients. LSTMs are helpful for classifying or predicting time series when the duration of the time lags between important events is unknown. RNNs in general have long been used for modeling sequential data, with the added advantage that they can process inputs and outputs of varying lengths.

If you’re interested to learn more about machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.

Learn ML Course from the World’s top Universities. Earn Masters, Executive PGP, or Advanced Certificate Programs to fast-track your career.


Pavan Vadapalli

Blog Author
Director of Engineering @ upGrad. Motivated to leverage technology to solve problems. Seasoned leader for startups and fast moving orgs. Working on solving problems of scale and long term technology strategy.