
PCA in Machine Learning: Assumptions, Steps to Apply & Applications

Understanding Dimensionality Reduction in ML

Machine learning (ML) algorithms are developed and tested on a collection of input data known as a feature set. Developers often need to reduce the number of input variables in the feature set to improve the performance of a particular ML model or algorithm.


For example, suppose you have a dataset with numerous columns, or an array of points in 3-D space. In that case, you can reduce the dimensions of your dataset by applying dimensionality reduction techniques in ML. PCA (Principal Component Analysis) is one of the most widely used dimensionality reduction techniques among ML developers and testers. Let us dive deeper into understanding PCA in machine learning.

Let’s take a closer look at what we mean by principal component analysis in machine learning and why we use it.


Principal Component Analysis

PCA is an unsupervised statistical technique used to reduce the dimensions of a dataset. ML models with many input variables, i.e. higher dimensionality, tend to perform poorly on such data. PCA helps identify relationships among the different variables and then combine the correlated ones. PCA rests on certain assumptions that must hold for it to give reliable results.

PCA involves transforming the variables in the dataset into a new set of variables called PCs (Principal Components). The number of principal components is equal to the number of original variables in the given dataset.

PCA in machine learning is based on a few mathematical concepts: variance and covariance, and eigenvalues and eigenvectors.

What are some of the most common terms used in the PCA algorithm in machine learning?

As stated earlier, principal component analysis is an unsupervised learning algorithm used specifically for dimensionality reduction in machine learning. Here are some of the most commonly used terms in PCA:

  • Dimensionality – the number of features or variables in a dataset. Put simply, it is the number of columns in the dataset.
  • Correlation – a measure of how strongly two variables are related to each other.
  • Orthogonal – describes variables that are uncorrelated with each other, meaning the correlation between the two variables is zero (see the short sketch after this list).
  • Covariance Matrix – a matrix containing the covariance between each pair of variables.
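A minimal NumPy sketch of the correlation and orthogonality ideas above, using synthetic data (the variables x, y, and z are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 2 * x + rng.normal(scale=0.1, size=500)  # strongly related to x
z = rng.normal(size=500)                     # generated independently of x

# Pearson correlation coefficient: near 1 for (x, y), near 0 for (x, z)
print(np.corrcoef(x, y)[0, 1])  # ~0.999: highly correlated
print(np.corrcoef(x, z)[0, 1])  # ~0.0: uncorrelated ("orthogonal")
```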


The first principal component (PC1) captures the maximum variation present in the original variables, and the captured variation decreases with each subsequent component. The final PC captures the least variation, which is why the lowest-ranked components can be dropped to reduce the dimensions of your feature set.
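A minimal sketch of this decreasing pattern, assuming scikit-learn is available (the dataset here is synthetic):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 1] = 3 * X[:, 0] + rng.normal(scale=0.5, size=200)  # make two columns correlated

pca = PCA().fit(X)
# Fraction of total variance captured by PC1, PC2, ...; always non-increasing
print(pca.explained_variance_ratio_)
```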

Assumptions in PCA

There are some assumptions in PCA which are to be followed as they will lead to accurate functioning of this dimensionality reduction technique in ML. The assumptions in PCA are:

• There must be linearity in the dataset, i.e. the variables combine in a linear manner to form the dataset, and the variables exhibit relationships among themselves.

• PCA assumes that the principal components with high variance carry the signal and must be paid attention to, while the PCs with lower variance can be disregarded as noise. PCA originated in the framework of the Pearson correlation coefficient, where it was first assumed that only the axes with high variance would be turned into principal components.


• All variables should be measured at the same level, ideally the ratio level of measurement. A common rule of thumb is a sample of at least 150 observations, with a ratio of at least 5:1 observations to variables.

• Extreme values that deviate from the other data points in the dataset, also called outliers, should be few. A large number of outliers represents experimental errors and will degrade your ML model/algorithm.

• The feature set must be correlated; the reduced feature set obtained after applying PCA will then represent the original dataset more effectively, with fewer dimensions.


Steps for Applying PCA

The steps for applying PCA on any ML model/algorithm are as follows:

• Normalisation of the data is necessary before applying PCA. Unscaled data can cause problems in the relative comparison of variables. For example, if we have a list of numbers under a column in some 2-D dataset, the mean of those numbers is subtracted from all the numbers to normalise that column. The data in a 3-D dataset can be normalised in the same way.
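A minimal NumPy sketch of this step on a toy 2-D dataset (the numbers are illustrative only):

```python
import numpy as np

X = np.array([[2.5, 2.4],
              [0.5, 0.7],
              [2.2, 2.9],
              [1.9, 2.2],
              [3.1, 3.0]])

# Subtract each column's mean and divide by its standard deviation so that
# variables measured on different scales become directly comparable.
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)
print(X_scaled.mean(axis=0))  # ~[0, 0]: each column is now centred
```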

• Once you have normalised the dataset, find the covariance among different dimensions and put them in a covariance matrix. The off-diagonal elements in the covariance matrix will represent the covariance among each pair of variables and the diagonal elements will represent the variances of each variable/dimension.

A covariance matrix constructed for any dataset will always be symmetric. A covariance matrix will represent the relationship in data, and you can understand the amount of variance in each principal component easily.
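As a sketch (synthetic data), the covariance matrix of the scaled data can be computed with np.cov, and its symmetry checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

# rowvar=False tells NumPy that columns are variables and rows are observations
C = np.cov(X_scaled, rowvar=False)
print(C.shape)              # (3, 3): one row/column per variable
print(np.allclose(C, C.T))  # True: a covariance matrix is always symmetric
print(np.diag(C))           # diagonal holds the variance of each variable (~1)
```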

• You have to find the eigenvalues of the covariance matrix, which represent the amount of variability in the data along each orthogonal axis. You will also have to find the eigenvectors of the covariance matrix, which represent the directions in which the maximum variance in the data occurs.

Suppose ‘λ’ is an eigenvalue of your covariance matrix ‘C’. Then λ must satisfy the characteristic equation det(λI – C) = 0, where ‘I’ is an identity matrix of the same dimension as ‘C’. You should check that the covariance matrix is a square/symmetric matrix, because a symmetric matrix is guaranteed to have real eigenvalues and orthogonal eigenvectors.
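A minimal sketch of this eigen-decomposition with NumPy, using a hypothetical 2×2 covariance matrix:

```python
import numpy as np

C = np.array([[2.0, 0.8],
              [0.8, 0.6]])  # a hypothetical covariance matrix

# eigh is the appropriate routine for symmetric matrices: it returns real
# eigenvalues (in ascending order) and orthonormal eigenvectors.
eigenvalues, eigenvectors = np.linalg.eigh(C)
print(eigenvalues)          # variability along each principal axis
print(eigenvectors[:, -1])  # direction of maximum variance (largest eigenvalue)
```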

• Arrange the eigenvalues in descending order and select the largest ones. You can choose how many eigenvalues you want to proceed with. You will lose some information by ignoring the smaller eigenvalues, but those minute values will not have much impact on the final result.

The number of selected eigenvalues becomes the number of dimensions of your updated feature set. We also form a feature vector, which is a matrix whose columns are the eigenvectors corresponding to the chosen eigenvalues.
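A sketch of the sorting and selection step (the 3×3 matrix and the choice k = 2 are illustrative):

```python
import numpy as np

C = np.array([[2.0, 0.8, 0.1],
              [0.8, 0.6, 0.2],
              [0.1, 0.2, 0.3]])

eigenvalues, eigenvectors = np.linalg.eigh(C)

# Reorder from largest to smallest eigenvalue
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

k = 2                                 # number of dimensions to keep
feature_vector = eigenvectors[:, :k]  # columns = chosen eigenvectors
print(feature_vector.shape)           # (3, 2)
```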

• Using the feature vector, we find the principal components of the dataset under analysis. We multiply the transpose of the feature vector with the transpose of the scaled matrix (a scaled version of data after normalisation) to obtain a matrix containing principal components.

We will notice that the components with the highest eigenvalues capture most of the structure in the data, while the remaining ones provide little additional information about the dataset. This shows that very little information is lost when reducing the dimensions of the dataset; we are just representing it more compactly.
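Putting the steps together, a minimal end-to-end NumPy sketch of the projection described above (synthetic data; two components kept):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)  # step 1: normalise

C = np.cov(X_scaled, rowvar=False)               # step 2: covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(C)    # step 3: eigen-decomposition
order = np.argsort(eigenvalues)[::-1]            # step 4: sort, select top 2
feature_vector = eigenvectors[:, order[:2]]

# Step 5: transpose(feature vector) x transpose(scaled data), then transpose
# back so that rows are observations again.
principal_components = (feature_vector.T @ X_scaled.T).T
print(principal_components.shape)  # (100, 2)
```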

Together, these steps reduce the dimensions of any dataset using PCA.
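For comparison, the same pipeline can be expressed in a few scikit-learn calls (a sketch, assuming scikit-learn is installed):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))

X_scaled = StandardScaler().fit_transform(X)        # step 1: normalise
pca = PCA(n_components=2)                           # steps 2-4 happen inside fit()
principal_components = pca.fit_transform(X_scaled)  # step 5: project
print(principal_components.shape)                   # (100, 2)
```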

Applications of PCA

Data is generated in many sectors, and there is a need to analyse data for the growth of any firm/company. PCA will help in reducing the dimensions of the data, thus making it easier to analyse. The applications of PCA are:

• Neuroscience – Neuroscientists use PCA to identify any neuron or to map the brain structure during phase transitions.

• Finance – PCA is used in the finance sector for reducing the dimensionality of data to create fixed income portfolios. Many other facets of the finance sector involve PCA like forecasting returns, making asset allocation algorithms or equity algorithms, etc.

• Image Technology – PCA is also used for image compression or digital image processing. Each image can be represented via a matrix by plotting the intensity values of each pixel, and then we can apply PCA on it.

• Facial Recognition – PCA in facial recognition leads to the creation of eigenfaces, which make facial recognition more accurate.

• Medical – PCA is used on a lot of medical data to find the correlation among different variables. For example, doctors use PCA to show the correlation between cholesterol & low-density lipoprotein.

• Security – Anomalies can be found easily using PCA. It is used to identify cyber/computer attacks and to visualise them.

Other Applications of PCA in Machine Learning

Now that you have a detailed understanding of what principal component analysis in machine learning is, let’s take a look at some of the other applications of this tool.

  • PCA has been used to show the correlation between cholesterol and low-density lipoprotein.
  • PCA has also been used for the detection and visualisation of computer network attacks.
  • It has been used on HVSR (horizontal-to-vertical spectral ratio) data with the aim of finding the seismic characteristics of earthquake-prone areas.
  • PCA has been used for anomaly detection.
  • PCA is used to simplify traditionally complex business decisions.
  • In neuroscience, PCA has been used extensively to understand the convoluted, multidirectional factors in a stimulus that increase the probability of neural ensembles triggering action potentials.

Advantages of applying PCA

  • It is easy to compute – since PCA is based on linear algebra, it is computationally easy for computers to solve.
  • It speeds up other machine learning algorithms – ML algorithms tend to converge faster when trained on principal components than on the original dataset. This is one of the main reasons PCA is so widely used in machine learning.

Disadvantage of applying PCA

One major disadvantage of PCA is that most statistical software tools that compute it assume the feature set is complete, with no empty rows or missing values. One effective way to deal with this is to remove the rows or columns containing missing values, or to impute the missing values with a close approximation before applying PCA.
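A minimal imputation sketch, assuming scikit-learn is available (the matrix and its missing entries are hypothetical):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0, 2.0],
              [np.nan, 3.0],
              [4.0, np.nan]])

# Replace each missing value with its column mean before running PCA
X_filled = SimpleImputer(strategy="mean").fit_transform(X)
print(X_filled)
```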


Takeaway Points

PCA can also lead to low model performance if the original dataset has weak or no correlation among its variables; the variables need to be related to one another for PCA to work well. PCA gives us combinations of features, so the importance of the individual features in the original dataset is lost. The principal axes with the most variance make the ideal principal components.


Conclusion

PCA is a widely used technique for decreasing the dimensions of a feature set.

If you’re interested in learning more about machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.

Can PCA be used on all data?

Principal Component Analysis (PCA) is a data analysis technique that provides a way of looking at and understanding very high-dimensional data. In other words, PCA can be applied to data with a large number of variables. There is a common misconception that PCA can only be used on data in a certain form; in practice, the requirement is simply that the variables be represented numerically. Categorical or ordinal variables must first be encoded as numbers (or analysed with related techniques, such as multiple correspondence analysis) before PCA can be applied.

What are the limitations of Principal Component Analysis?

PCA is a great tool for analysing your data and extracting the two or three most important factors, and for spotting outliers and trends. But it has some limitations: it is not well suited to small datasets (as a rule of thumb, a dataset should have more than 30 rows); it ranks components purely by variance rather than by their importance to a particular problem, so the components it selects may not be the factors you actually care about; the components are linear combinations of the original variables, which makes them harder to interpret and to compare with the original data; and it cannot capture non-linear relationships.

What are the advantages of principal component analysis?

Principal component analysis (PCA) is a statistical method used to transform a large number of possibly correlated variables into a much smaller number of uncorrelated variables, referred to as principal components. PCA can be used as a data reduction technique, as it allows us to find the most important variables needed to describe a dataset. PCA can also be used to reduce the dimensionality of the data space in order to gain insight into the inner structure of the data, which is helpful when dealing with large datasets.
