Deep learning algorithms such as Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs) have made remarkable progress over the last few years on problems in many fields, including speech recognition, computer vision, and much more. Although the results are highly accurate, these methods mostly work on Euclidean data.
In areas such as network science, physics, biology, computer graphics, and recommender systems, however, we have to deal with non-Euclidean data, i.e. manifolds and graphs. Geometric deep learning applies deep learning techniques to this non-Euclidean, manifold- or graph-structured data.
What is Geometric Deep Learning?
In the past few years, we have seen significant advances in the fields of deep learning and machine learning. Computing power has grown rapidly, and the abundance of available data has given algorithms developed back in the 1980s and 1990s entirely new applications.
If there is one area that has benefited greatly from this development, it is representation learning, also called feature learning. In many applications, feature learning directly replaces feature engineering, the practice of hand-crafting descriptors and features for use in other machine learning tasks.
One of the best-known examples is the use of Convolutional Neural Networks (CNNs) for image classification and object detection. In the 2012 ImageNet competition, a CNN substantially outperformed the state of the art, which was based on feature engineering, setting a benchmark for conventional algorithms to chase.
Let us now turn to a field with a similar origin and a blossoming future: geometric deep learning.
The term geometric deep learning was coined by Bronstein et al. in their 2017 article, “Geometric Deep Learning: Going beyond Euclidean Data”.
It is a strong title, and it tells us that geometric deep learning is capable of applying deep learning even to non-Euclidean data, i.e. data that does not naturally live on a flat, grid-like Euclidean domain.
A typical example of non-Euclidean data is a mesh, a representation used extensively in the computer graphics field.
Consider a mesh of a person’s face with two landmarks on it. The geodesic distance between the landmarks is the shortest distance across the surface of the mesh, while the Euclidean distance is the length of the straight line connecting them.
The geodesic distance is the main advantage of representing a mesh in non-Euclidean form, as it is more meaningful for the tasks performed on it. It is not that we cannot cast non-Euclidean data into Euclidean data; rather, doing so carries a high cost in performance and efficiency.
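The contrast between the two distances can be sketched in code. The following is a minimal, invented example, not a real face mesh: the mesh is flattened to a tiny weighted graph, the Euclidean distance is the straight line between two landmarks, and the geodesic distance is computed with Dijkstra's algorithm along the edges.

```python
import heapq
import math

# Toy "mesh" reduced to a tiny graph: nodes carry 2D coordinates,
# edges are the surface connections. All data here is invented.
coords = {0: (0.0, 0.0), 1: (1.0, 1.0), 2: (2.0, 0.0)}
edges = [(0, 1), (1, 2)]   # no direct surface edge between 0 and 2

def euclidean(a, b):
    """Straight-line distance between two landmarks."""
    (ax, ay), (bx, by) = coords[a], coords[b]
    return math.hypot(ax - bx, ay - by)

# Weighted adjacency list: each edge's weight is its length.
adj = {n: [] for n in coords}
for u, v in edges:
    w = euclidean(u, v)
    adj[u].append((v, w))
    adj[v].append((u, w))

def geodesic(src, dst):
    """Dijkstra: shortest distance travelling only along edges."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf

print(euclidean(0, 2))  # 2.0   (straight line, cutting through space)
print(geodesic(0, 2))   # ~2.83 (the detour along the surface, via node 1)
```

The geodesic distance is always at least as large as the Euclidean one, and it is the geodesic that respects the geometry of the surface.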
A prime and important example of non-Euclidean data is the graph: a data structure consisting of entities (nodes) connected by relationships (edges). A graph can be used to model almost anything.
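In code, a graph can be as simple as an adjacency list. Here is a minimal sketch with made-up example data (the names and relationships are purely illustrative):

```python
# A minimal graph as an adjacency list: nodes are entities,
# edges are the relationships between them.
graph = {
    "Alice": ["Bob", "Carol"],   # Alice is connected to Bob and Carol
    "Bob":   ["Alice"],
    "Carol": ["Alice", "Bob"],
}

def neighbors(node):
    """Entities directly related to `node`."""
    return graph.get(node, [])

def degree(node):
    """Number of relationships `node` participates in."""
    return len(neighbors(node))

print(neighbors("Alice"))  # ['Bob', 'Carol']
print(degree("Carol"))     # 2
```

The same structure can model social networks, molecules, citation networks, road maps, and more; only the meaning of the nodes and edges changes.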
Well, you do not need a deep understanding of graph theory; reading a little about it is enough to use the software libraries required in the process. Still, a solid introduction to graphs and their fundamental theory gives you a crystal-clear foundation for geometric deep learning.
The best-case scenario for applying geometric deep learning is when you already know what can be achieved with the data you have at your disposal, or, conversely, what data you need in order to achieve a given goal.
What we want to understand is the difference between inductive reasoning and deductive reasoning. In deductive reasoning, general premises are used to reach a specific conclusion or make a particular claim. Let us combine two such premises to form an example.
“All the girls scored 10/10 on the test” and “Taylor is a girl” together imply that “Taylor scored 10/10 on the test”. Inductive reasoning works the other way around: a general conclusion is drawn from particular observations. To see it in action, answer this question:
Which cow yields only long-life (UHT) milk? If you said “none”, you are among the 21% of the interviewed youth who answered correctly. 5% of the interviewed youth answered “Milka cows”, 10% answered “all”, 2% went for “female cows” or “black & white cows”, and 50% had no answer at all.
There is a lot to be analyzed in this result, but let us focus on the “Milka cows” answer and reconstruct it as inductive reasoning from the youth’s point of view: “the Milka cow is a special breed” and “UHT milk is special”, which leads to the conclusion “UHT milk is yielded by Milka cows”.
What can we take away from this? An inductive bias is the set of assumptions a learner makes, which is sufficient to explain its inductive inference. One has to be very careful when designing an algorithm’s inductive bias: with the right one, inductive inference can achieve results equivalent to deductive inference.
Interesting fact: of all the mathematics taught in computer science, few subjects are as fabled for being tough as graph theory in discrete math.
Nevertheless, graph theory allows us to perform some exciting tasks and gain amazing insights when combined with deep learning.
Graph segmentation is the process of classifying each and every component of a graph: its nodes (entities) and its edges (relationships). Think of autonomous cars, which need to monitor their environment at regular intervals and predict what the pedestrians around them will do next.
Human pedestrians are usually represented either as large three-dimensional bounding boxes or as skeletons with more degrees of motion. With faster and better three-dimensional semantic segmentation, autonomous cars gain more and more algorithms that make this kind of perception feasible.
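One simple way to get a feel for classifying nodes in a graph is label propagation: unlabelled nodes repeatedly adopt the strict majority label of their already-labelled neighbours. This is only a toy sketch of the idea, not the method used in autonomous driving; the graph and the seed labels below are entirely invented.

```python
from collections import Counter

# Toy graph: six nodes, undirected edges as adjacency lists.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
seeds = {0: "pedestrian", 5: "vehicle"}  # the only known labels

def propagate(adj, seeds, rounds=10):
    labels = dict(seeds)
    for _ in range(rounds):
        updates = {}
        for node in adj:
            if node in labels:
                continue  # seed and already-assigned labels are kept
            counts = Counter(labels[n] for n in adj[node] if n in labels)
            if not counts:
                continue  # no labelled neighbour yet, try next round
            top = counts.most_common(2)
            if len(top) == 1 or top[0][1] > top[1][1]:
                updates[node] = top[0][0]  # strict majority only
        if not updates:
            break  # nothing changed: we have converged
        labels.update(updates)
    return labels

print(propagate(adj, seeds))
# nodes 1, 2 end up "pedestrian"; nodes 3, 4 end up "vehicle"
```

Real graph segmentation systems learn these labels from node and edge features with neural networks, but the goal is the same: a class for every component of the graph.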
In graph classification, the algorithm receives a graph or subgraph as input and outputs one of n specified classes, together with a certainty value attached to the prediction. It is analogous to image classification, where the employed network has two main parts.
The first important part is the feature extractor, which creates a vector representation of the input data. Fully connected layers are then used to constrain the output to the required dimensionality, and for multi-class classification a softmax layer is added on top.
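The classification head described above, fully connected layer plus softmax, can be sketched in a few lines of plain Python. The feature vector, weights, and biases below are made-up numbers standing in for the output of a real feature extractor:

```python
import math

def linear(features, weights, biases):
    """Fully connected layer: one score per output class."""
    return [sum(w * x for w, x in zip(row, features)) + b
            for row, b in zip(weights, biases)]

def softmax(scores):
    """Normalise raw scores into probabilities that sum to 1."""
    m = max(scores)                       # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

features = [0.5, -1.2, 3.0]                    # from the feature extractor
weights = [[0.1, 0.4, 0.2], [0.3, -0.2, 0.5]]  # 2 classes x 3 features
biases = [0.0, 0.1]

probs = softmax(linear(features, weights, biases))
print(probs)        # one probability per class
print(sum(probs))   # 1.0
```

The largest probability gives the predicted class, and its value serves as the certainty attached to the prediction.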
We have now looked at geometric deep learning in depth by putting it in the overall context of deep learning. We can conclude that geometric deep learning deals with irregular data as a whole, and we learnt about graphs and how promising their role in learning biases is.
If you’re interested in learning more about deep learning techniques and machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B alumni status, 5+ practical hands-on capstone projects, and job assistance with top firms.