Deep learning, built on deep neural networks, includes some of the most popular and powerful algorithms in machine learning, and these algorithms are transforming the world as we know it. Much of their success comes from the design of the underlying network architecture. Let us now discuss some of the most famous neural network architectures.
Popular Neural Network Architectures
1. LeNet5
LeNet5 is a convolutional neural network architecture created by Yann LeCun in the 1990s (the definitive LeNet-5 paper was published in 1998). LeNet5 propelled the deep learning field forward: it can be considered the first convolutional neural network to play a leading role at the beginning of the deep learning era.
LeNet5 has a very simple architecture. It exploits the fact that image features are distributed across the entire image, so similar features can be extracted very effectively with convolutions that share learnable parameters. When LeNet5 was created, CPUs were very slow and no GPUs were available to help with training.
The main advantage of this architecture is that it saves computation and parameters. In a conventional multi-layer neural network, each pixel was used as a separate input; LeNet5 contrasts with this. Images have strong spatial correlations, and using individual pixels as independent input features in the first layer would throw those correlations away.
Features of LeNet5:
- The cost of large computations is avoided by keeping the connection matrix between layers sparse
- The final classifier is a multi-layer neural network
- Non-linearity comes in the form of sigmoid or tanh activations
- Subsampling uses the spatial average of feature maps
- Spatial features are extracted using convolution
- Convolution, pooling, and non-linearity form the three-layer sequence used in convolutional neural networks
In a few words, it can be said that the LeNet5 architecture has inspired many people and many architectures in the field of deep learning.
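The parameter savings described above can be sketched with a quick back-of-the-envelope count. The layer sizes below (a 32×32 input, 100 hidden units for the dense comparison, six 5×5 filters) are illustrative assumptions, not LeNet5's exact configuration:

```python
# Parameter-count comparison: treating every pixel as a separate input
# to a dense layer vs. sharing a small learnable filter across the image,
# as LeNet5-style convolutions do.
image_h, image_w = 32, 32          # small grayscale input image
hidden_units = 100                 # dense hidden layer width (assumed)

# Fully connected: one weight per (pixel, hidden unit) pair, plus biases
dense_params = image_h * image_w * hidden_units + hidden_units

# Convolutional: 6 feature maps, each a 5x5 filter shared across the image
num_filters, k = 6, 5
conv_params = num_filters * (k * k + 1)   # weights + one bias per filter

print(dense_params)   # 102500
print(conv_params)    # 156
```

Sharing each small filter across all spatial positions is what makes the convolutional layer several orders of magnitude cheaper here.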
The gap in the progress of neural network architectures:
Neural networks did not progress much from 1998 to 2010. Many researchers kept slowly improving them, but most people did not notice their increasing power. Data availability grew with the rise of cheap digital and cell-phone cameras, GPUs became general-purpose computing tools, and CPUs also became faster. In those years the rate of progress was slow, but people gradually started noticing the growing power of neural networks.
2. Dan Ciresan Net
The very first implementation of GPU neural nets was published by Jurgen Schmidhuber and Dan Claudiu Ciresan in 2010. The network had up to 9 layers. It was implemented on an NVIDIA GTX 280 graphics processor, with both the forward and the backward pass running on the GPU.
3. AlexNet
AlexNet, released by Alex Krizhevsky in 2012, won the challenging ImageNet competition by a considerable margin. It is a much wider and deeper version of LeNet.
Complex hierarchies of objects can be learned using this architecture. A much larger neural network was created by scaling up the insights of LeNet in the AlexNet architecture.
Its main contributions are as follows:
- Training time was reduced by using NVIDIA GTX 580 GPUs.
- Overlapped max pooling is used, avoiding the averaging effects of average pooling.
- Overfitting is avoided by selectively ignoring single neurons with the dropout technique.
- Rectified linear units (ReLU) are used as non-linearities.
Bigger images and more massive datasets could be used because GPUs offered a far greater number of cores than CPUs, making training around 10x faster. The success of AlexNet led to a revolution in neural network research: large neural networks, namely convolutional neural networks, were now solving useful tasks, and they have since become the workhorse of deep learning.
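The dropout technique mentioned in the contributions above can be sketched in a few lines of NumPy. This is a minimal illustration of (inverted) dropout, not AlexNet's actual implementation; the array sizes and drop probability are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p=0.5, training=True):
    """Zero each unit with probability p during training and rescale
    the survivors so the expected activation is unchanged."""
    if not training:
        return x                       # dropout is disabled at inference
    mask = rng.random(x.shape) >= p    # keep each unit with prob. 1 - p
    return x * mask / (1.0 - p)        # rescale surviving activations

activations = np.ones((4, 8))
dropped = dropout(activations, p=0.5)
# roughly half the entries become zero; the rest are scaled up to 2.0
```

Because each neuron is randomly ignored on every training step, no single neuron can be relied upon, which is what discourages overfitting.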
4. Overfeat
Overfeat is a derivative of AlexNet that appeared in December 2013, created by Yann LeCun's NYU lab. The paper proposed learning bounding boxes, and many later papers were published on that idea. However, object segments can also be discovered, rather than learning artificial bounding boxes.
5. VGG
The VGG networks from Oxford were the first to use small 3×3 filters in every convolutional layer, combining them in sequences of convolutions.
VGG contrasts with the principles of LeNet, where similar features across an image were captured using large convolutions. In VGG, small filters were used even in the first layers of the network, which the LeNet architecture avoided, and the large filters of AlexNet, such as 9×9 or 11×11, were not used at all. The key insight was that multiple 3×3 convolutions in sequence can emulate the effect of larger receptive fields such as 5×5 and 7×7. This was VGG's most significant advantage, and recent network architectures such as ResNet and Inception use this idea of multiple 3×3 convolutions in series.
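The receptive-field arithmetic behind this design can be checked directly. The channel count below is an assumed example:

```python
def stacked_receptive_field(k, n):
    """Receptive field of n stacked k x k convolutions with stride 1."""
    return n * (k - 1) + 1

assert stacked_receptive_field(3, 2) == 5   # two 3x3 convs see a 5x5 region
assert stacked_receptive_field(3, 3) == 7   # three 3x3 convs see a 7x7 region

# Parameter count (ignoring biases) for C input and C output channels:
C = 64                                # assumed channel count
one_5x5 = 5 * 5 * C * C               # a single 5x5 layer
two_3x3 = 2 * (3 * 3 * C * C)         # two 3x3 layers, same receptive field
assert two_3x3 < one_5x5              # the stacked version is ~28% cheaper
```

So a stack of 3×3 layers matches the receptive field of a larger filter with fewer parameters, while also inserting extra non-linearities between the layers.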
6. Network-in-network
Network-in-network is a neural network architecture built on a simple but great insight: 1×1 convolutions give the features of a convolutional layer a higher power of combination across channels.
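What a 1×1 convolution does can be shown with plain NumPy: it applies the same small channel-mixing linear map at every spatial position, without looking at neighbouring pixels. The shapes here are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 16))    # H x W x C_in feature map
w = rng.standard_normal((16, 4))       # C_in x C_out bank of 1x1 filters

y = x @ w                              # per-pixel channel mixing
assert y.shape == (8, 8, 4)            # spatial size unchanged, channels mixed

# Equivalent to flattening the spatial dims and applying one dense layer
# independently at each pixel:
y2 = (x.reshape(-1, 16) @ w).reshape(8, 8, 4)
assert np.allclose(y, y2)
```

This is why 1×1 convolutions are cheap: they recombine existing feature maps into new ones without any spatial computation.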
7. GoogLeNet and Inception
GoogLeNet is the first Inception architecture, and it aims at reducing the computational burden of deep neural networks. Deep learning models were being used to categorise image and video content, and efficient architectures that could be deployed at scale on server farms became the main interest of big internet giants such as Google. By 2014, many people agreed that neural networks and deep learning were here to stay.
8. Bottleneck Layer
The bottleneck layer of Inception keeps inference time low at each layer by reducing the number of features and operations. The number of features is reduced by about 4x before the data is passed to the expensive convolution modules. This is the success of the bottleneck layer architecture: it cuts the cost of computation dramatically.
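The saving can be sketched with a rough multiply-accumulate count. The feature-map size and channel counts below are illustrative assumptions, not GoogLeNet's exact numbers:

```python
# Operation count for an Inception-style bottleneck: reduce channels
# with a 1x1 convolution before the expensive 3x3 convolution.
H = W = 28
C = 256            # incoming feature channels (assumed)
R = C // 4         # bottleneck reduces features ~4x, as described above

# Multiply-accumulates for a k x k conv: H * W * k*k * C_in * C_out
direct_3x3 = H * W * 3 * 3 * C * C                  # no bottleneck
bottleneck = (H * W * 1 * 1 * C * R                 # 1x1 reduce
              + H * W * 3 * 3 * R * R               # cheap 3x3
              + H * W * 1 * 1 * R * C)              # 1x1 expand back
print(direct_3x3 / bottleneck)   # roughly an 8x reduction in operations
```

Even with the two extra 1×1 layers, the bottleneck version does far fewer operations because the expensive 3×3 convolution runs on the reduced channel count.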
9. ResNet
The idea of ResNet is straightforward: feed the output of two successive convolutional layers forward, but also bypass the input directly to the next layers. With ResNet, networks with over a hundred, and even over a thousand, layers were trained for the first time.
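The bypass idea can be sketched in a couple of lines. Here `f` is a stand-in toy transform, not an actual pair of convolutional layers:

```python
import numpy as np

def residual_block(x, f):
    """ResNet-style skip connection: the input bypasses the layers f
    and is added back to their output."""
    return f(x) + x

x = np.array([1.0, 2.0, 3.0])
out = residual_block(x, lambda v: 0.1 * v)   # toy stand-in for conv layers
assert np.allclose(out, [1.1, 2.2, 3.3])

# If f learns to output zeros, the block becomes an identity mapping,
# which is part of what makes very deep networks trainable.
assert np.allclose(residual_block(x, lambda v: 0 * v), x)
```

Because each block only has to learn a residual on top of the identity, gradients can flow through the skip connections even in very deep stacks.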
10. SqueezeNet
SqueezeNet is a recent release that re-hashes the concepts of Inception and ResNet. Better architectural design removes the need for complex compression algorithms and makes it possible to deliver networks with small sizes and few parameters.
Bonus: 11. ENet
Adam Paszke designed the neural network architecture called ENet. It is a very lightweight and efficient network that combines the features of modern architectures while using very few computations and parameters. It has been used for scene parsing and pixel-wise labelling.
These are the neural network architectures that are most commonly used. We hope this article was informative and helped you learn about neural networks.