6 Types of Activation Function in Neural Networks You Need to Know

With Deep Learning becoming a mainstream technology, lately, there’s been a lot of talk about ANNs or Artificial Neural Networks. Today, ANN is a core component in diverse emerging domains such as handwriting recognition, image compression, stock exchange prediction, and so much more. Read more about types of artificial neural networks in machine learning.

But what is an Artificial Neural Network?

An Artificial Neural Network is a Deep Learning model that draws inspiration from the neural structure of the human brain. ANNs are designed to mimic the way the human brain learns from experience and adapts to new situations. Just as the human brain has a multi-tiered structure containing billions of neurons arranged in a hierarchy, an ANN has a network of neurons that are interconnected with one another, much like biological neurons connected via axons.

These interconnected neurons pass signals to one another across connections called synapses. This imitation of the brain's structure allows the ANN to learn from experience without requiring human intervention.

Read: Artificial Neural Network in Data Mining

Thus, ANNs are complex structures containing interconnected adaptive elements known as artificial neurons that can perform large computations for knowledge representation. They possess all the fundamental qualities of the biological neuron system, including learning capability, robustness, non-linearity, high parallelism, fault and failure tolerance, ability to handle imprecise and fuzzy information, and generalizing ability. 

Core Characteristics of Artificial Neural Networks

  • Non-linearity imparts a better fit to the data. 
  • High parallelism promotes fast processing and hardware failure-tolerance. 
  • Generalization allows for the application of the model to unlearned data.
  • Noise insensitivity allows accurate prediction even for uncertain data and measurement errors.
  • Learning and adaptivity allow the model to update its internal architecture according to the changing environment. 

ANN-based computing primarily aims to design advanced mathematical algorithms that allow Artificial Neural Networks to learn by imitating the information processing and knowledge acquisition functions of the human brain.

Components of Artificial Neural Networks 

ANNs consist of three core layers – an input layer, one or more hidden layers, and an output layer.

  • Input Layer: The first layer is fed with the input, that is, raw data. It conveys the information from the outside world to the network. In this layer, no computation is performed – the nodes merely pass on the information to the hidden layer.
  • Hidden Layer: In this layer, the nodes lie hidden behind the input layer – they comprise the abstraction part of every neural network. All the computations on the features entered through the input layer occur in the hidden layer/s, which then transfer the result to the output layer.
  • Output Layer: This layer depicts the results of the computations performed by the network to the outer world.


Neural networks can be categorized into different types based on the activity of the hidden layer/s. For instance, in a simple neural network, the hidden units can construct their unique representation of the input. Here, the weights between the hidden and input units decide when each hidden unit is active.

Thus, by adjusting these weights, the hidden layer can choose what it should represent. Other architectures include single-layer and multilayer models. A single-layer model usually has only an input and an output layer – it lacks a hidden layer. In a multilayer model, there are one or more hidden layers.
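
To make these layers concrete, here is a minimal NumPy sketch of a forward pass through a network with one hidden layer; the layer sizes and random weights are made up purely for illustration.

```python
import numpy as np

def forward_pass(x, W1, b1, W2, b2):
    """Forward pass through a network with a single hidden layer.

    The hidden layer applies a non-linear activation (tanh here);
    the output layer is kept linear for simplicity.
    """
    hidden = np.tanh(W1 @ x + b1)   # hidden layer: weighted sum + activation
    output = W2 @ hidden + b2       # output layer: weighted sum only
    return output

# Toy dimensions: 3 inputs, 4 hidden units, 2 outputs
rng = np.random.default_rng(0)
x = rng.normal(size=3)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)
print(forward_pass(x, W1, b1, W2, b2))
```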

What are Activation Functions in a Neural Network?

As we mentioned earlier, ANNs are a crucial component of many structures that are helping revolutionize the world around us. But have you ever wondered, how do ANNs deliver state-of-the-art performance to find solutions to real-world problems?

The answer is – Activation Functions.

ANNs use activation functions (AFs) to perform complex computations in the hidden layers and then transfer the result to the output layer. The primary purpose of AFs is to introduce non-linear properties in the neural network.

They convert the linear input signals of a node into non-linear output signals, enabling deep networks to learn mappings of a higher order than a one-degree polynomial. A key property of AFs is that they are differentiable – this is what allows gradients to flow through them during the backpropagation of the neural network.

What is the need for non-linearity?

If activation functions are not applied, the output signal would be a linear function, which is a polynomial of one degree. While it is easy to solve linear equations, they have a limited complexity quotient and hence, have less power to learn complex functional mappings from data. Thus, without AFs, a neural network would be a linear regression model with limited abilities.

This is certainly not what we want from a neural network. The task of neural networks is to compute highly complicated calculations. Furthermore, without AFs, neural networks cannot learn and model other complicated data, including images, speech, videos, audio, etc.

AFs help neural networks make sense of complicated, high-dimensional, and non-linear Big Data sets by using an intricate architecture that contains multiple hidden layers between the input and output layers.
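
As a quick sanity check of this argument, the sketch below (with made-up random weights) shows that two stacked linear layers with no activation in between collapse into a single linear layer, while adding a non-linearity breaks that equivalence:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=3)
W1, W2 = rng.normal(size=(4, 3)), rng.normal(size=(2, 4))

# Two linear layers with no activation in between...
two_linear_layers = W2 @ (W1 @ x)
# ...are exactly equivalent to one linear layer with weights W2 @ W1
one_linear_layer = (W2 @ W1) @ x
print(np.allclose(two_linear_layers, one_linear_layer))  # True

# With a non-linear activation in between, the equivalence breaks
with_activation = W2 @ np.maximum(0, W1 @ x)   # ReLU in the hidden layer
print(np.allclose(with_activation, one_linear_layer))    # False (in general)
```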

Read: Deep Learning Vs Neural Network

Now, without further ado, let’s dive into the different types of activation functions used in ANNs.

Types of Activation Functions

1. Sigmoid Function

In an ANN, the sigmoid function is a non-linear AF used primarily in feedforward neural networks. It is a differentiable real function, defined for all real input values, with a positive derivative everywhere and a specific degree of smoothness. The sigmoid function appears in the output layer of deep learning models and is used for predicting probability-based outputs. The sigmoid function is represented as:

f(x) = 1 / (1 + e^(-x))

The derivative of the sigmoid function is what learning algorithms such as backpropagation rely on. The graph of the sigmoid function is ‘S’ shaped.

Some of the major drawbacks of the sigmoid function include gradient saturation, slow convergence, sharp damp gradients during backpropagation from within deeper hidden layers to the input layers, and non-zero centered output that causes the gradient updates to propagate in varying directions.
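
Here is a minimal NumPy sketch of the sigmoid and its derivative; the tiny derivative values for large positive or negative inputs illustrate the gradient saturation mentioned above.

```python
import numpy as np

def sigmoid(x):
    """Sigmoid activation: squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_derivative(x):
    """Derivative of the sigmoid, expressed through the function itself."""
    s = sigmoid(x)
    return s * (1.0 - s)

x = np.array([-5.0, -1.0, 0.0, 1.0, 5.0])
print(sigmoid(x))             # near 0 for large negative x, near 1 for large positive x
print(sigmoid_derivative(x))  # peaks at 0.25 when x = 0, tiny in the tails (gradient saturation)
```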

2. Hyperbolic Tangent Function (Tanh)

The hyperbolic tangent function, a.k.a. the tanh function, is another type of AF. It is a smoother, zero-centered function with a range of -1 to 1. The output of the tanh function is represented by:

f(x) = (e^x - e^(-x)) / (e^x + e^(-x))

The tanh function is much more extensively used than the sigmoid function since it delivers better training performance for multilayer neural networks. The biggest advantage of the tanh function is that it produces a zero-centered output, thereby supporting the backpropagation process. The tanh function has been mostly used in recurrent neural networks for natural language processing and speech recognition tasks.

However, the tanh function, too, has a limitation – just like the sigmoid function, it cannot solve the vanishing gradient problem. Also, the tanh function attains a gradient of 1 only when the input value is 0 (x = 0). As a result, the function can produce some dead neurons during the computation process.
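
A short sketch of tanh and its derivative makes this limitation visible – the gradient equals 1 only at x = 0 and shrinks rapidly away from it:

```python
import numpy as np

def tanh(x):
    """Hyperbolic tangent activation: zero-centered, output in (-1, 1)."""
    return np.tanh(x)

def tanh_derivative(x):
    """Derivative of tanh: 1 - tanh(x)^2, maximal (= 1) at x = 0."""
    return 1.0 - np.tanh(x) ** 2

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(tanh(x))             # zero-centered outputs between -1 and 1
print(tanh_derivative(x))  # 1 at x = 0, close to 0 for large |x| (vanishing gradient)
```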

3. Softmax Function 

The softmax function is another type of AF used in neural networks to compute a probability distribution from a vector of real numbers. This function generates outputs that range between 0 and 1, with the sum of the probabilities equal to 1. The softmax function is represented as follows:

f(x_i) = e^(x_i) / Σ_j e^(x_j)

This function is mainly used in multi-class models, where it returns the probability of each class, with the target class having the highest probability. It appears in the output layer of almost all DL architectures that perform multi-class classification. The primary difference between the sigmoid and softmax AF is that while the former is used in binary classification, the latter is used for multi-class classification.
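
Below is a minimal softmax sketch; subtracting the maximum score before exponentiating is a common numerical-stability trick that does not change the result.

```python
import numpy as np

def softmax(z):
    """Softmax: maps a vector of real scores to a probability distribution."""
    shifted = z - np.max(z)     # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / np.sum(exps)

logits = np.array([2.0, 1.0, 0.1])
probs = softmax(logits)
print(probs)        # roughly [0.659, 0.242, 0.099] – highest score gets highest probability
print(probs.sum())  # 1.0 – the outputs always sum to one
```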

4. Softsign Function

The softsign function is another AF used in neural network computing. Although it is primarily used in regression computation problems, nowadays it is also used in DL-based text-to-speech applications. It is represented by:

f(x) = x / (1 + |x|)

Here, |x| denotes the absolute value of the input.

 The main difference between the softsign function and the tanh function is that unlike the tanh function that converges exponentially, the softsign function converges in a polynomial form. 
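
A small sketch comparing softsign with tanh shows this slower, polynomial approach towards the -1/+1 asymptotes:

```python
import numpy as np

def softsign(x):
    """Softsign activation: x / (1 + |x|), output in (-1, 1)."""
    return x / (1.0 + np.abs(x))

x = np.array([-10.0, -1.0, 0.0, 1.0, 10.0])
print(softsign(x))  # approaches -1 / +1 slowly (polynomially)
print(np.tanh(x))   # approaches -1 / +1 much faster (exponentially)
```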

5. Rectified Linear Unit (ReLU) Function 

One of the most popular AFs in DL models, the rectified linear unit (ReLU) function is a fast-learning AF that promises to deliver state-of-the-art performance with stellar results. Compared to other AFs like the sigmoid and tanh functions, the ReLU function offers much better performance and generalization in deep learning. The function is nearly linear, retaining the properties of linear models, which makes it easy to optimize with gradient-descent methods.

The ReLU function performs a threshold operation on each input element where all values less than zero are set to zero. Thus, the ReLU is represented as:

f(x) = max(0, x)

By rectifying the values of the inputs less than zero and setting them to zero, this function mitigates the vanishing gradient problem observed in the earlier types of activation functions (sigmoid and tanh).

The most significant advantage of using the ReLU function in computation is that it guarantees faster computation – it does not compute exponentials and divisions, thereby boosting the overall computation speed. Another critical aspect of the ReLU function is that it introduces sparsity in the hidden units by setting all negative values to zero.
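
A minimal ReLU sketch; note how every negative input is mapped exactly to zero, which is where the sparsity mentioned above comes from.

```python
import numpy as np

def relu(x):
    """ReLU activation: max(0, x), applied element-wise."""
    return np.maximum(0.0, x)

def relu_derivative(x):
    """Sub-gradient of ReLU: 1 for positive inputs, 0 otherwise."""
    return (x > 0).astype(float)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))             # [0. 0. 0. 0.5 2.] – negative values are zeroed out (sparsity)
print(relu_derivative(x))  # gradient is 1 for positive inputs, so it does not vanish there
```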

6. Exponential Linear Units (ELUs) Function

The exponential linear units (ELUs) function is an AF that is also used to speed up the training of neural networks (just like ReLU function). The biggest advantage of the ELU function is that it can eliminate the vanishing gradient problem by using identity for positive values and by improving the learning characteristics of the model.

ELUs have negative values that push the mean unit activation closer to zero, thereby reducing computational complexity and improving the learning speed. The ELU is an excellent alternative to the ReLU – it decreases bias shifts by pushing mean activation towards zero during the training process. 

The exponential linear unit function is represented as:

f(x) = x for x > 0, and f(x) = α(e^x - 1) for x ≤ 0

The derivative or gradient of the ELU equation is presented as:

f'(x) = 1 for x > 0, and f'(x) = α·e^x for x ≤ 0

Here, α is the ELU hyperparameter that controls the saturation point for negative net inputs; it is usually set to 1.0. However, the ELU function has a limitation – it is not zero-centered.
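
Here is a small sketch of the ELU and its gradient with α set to the usual default of 1.0:

```python
import numpy as np

def elu(x, alpha=1.0):
    """ELU activation: identity for x > 0, alpha * (exp(x) - 1) for x <= 0."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def elu_derivative(x, alpha=1.0):
    """Gradient of ELU: 1 for x > 0, alpha * exp(x) for x <= 0."""
    return np.where(x > 0, 1.0, alpha * np.exp(x))

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(elu(x))             # negative inputs saturate smoothly towards -alpha
print(elu_derivative(x))  # gradient stays non-zero for negative inputs, unlike ReLU
```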

Conclusion

Today, AFs like ReLU and ELU have gained the most attention since they help eliminate the vanishing gradient problem, which causes major problems in the training process and degrades the accuracy and performance of neural network models.

If you would like to know more about Machine Learning and Artificial Intelligence, check out IIT Madras and upGrad’s Advanced Certification in Machine Learning and Cloud. 
