PCA in Machine Learning: Assumptions, Steps to Apply & Applications

Understanding Dimensionality Reduction in ML

ML (Machine Learning) algorithms are developed and tested on input data known as a feature set. Reducing the number of input variables in the feature set often improves the performance of a particular ML model/algorithm.

For example, suppose you have a dataset with numerous columns, or an array of points in 3-D space. In such cases, you can reduce the dimensions of your dataset by applying dimensionality reduction techniques. PCA (Principal Component Analysis) is one of the most widely used dimensionality reduction techniques among ML developers/testers. Let us dive deeper into understanding PCA in machine learning.

Principal Component Analysis

PCA is an unsupervised statistical technique used to reduce the dimensions of a dataset. ML models with many input variables, i.e. higher dimensionality, tend to perform poorly on such data. PCA helps in identifying relationships among the different variables and then combining them. PCA rests on a set of assumptions that should be satisfied, which also helps developers maintain a consistent standard.

PCA transforms the variables in the dataset into a new set of variables called PCs (Principal Components). The number of principal components equals the number of original variables in the given dataset.

The first principal component (PC1) captures the maximum variation present in the original variables, and the captured variation decreases with each subsequent component. The final PC carries the least variation, so by discarding the low-variance components you can reduce the dimensions of your feature set.
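To see this in practice, here is a minimal sketch using scikit-learn on synthetic data (the dataset, seed, and shapes are made up purely for illustration): fitting PCA with no component limit yields as many PCs as original variables, with the explained variance ratio decreasing from PC1 onwards.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic dataset: 200 samples, 4 correlated variables (illustrative only)
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(200, 1)) for _ in range(4)])

pca = PCA()  # no component limit: one PC per original variable
pca.fit(X)
print(pca.explained_variance_ratio_)  # decreasing: PC1 explains the most variance
```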

Assumptions in PCA

There are some assumptions in PCA which should be satisfied, as they lead to the accurate functioning of this dimensionality reduction technique. The assumptions in PCA are:

• There must be linearity in the data set, i.e. the variables combine in a linear manner to form the dataset, and the variables exhibit relationships among themselves.

• PCA assumes that the principal components with high variance deserve attention, while the PCs with lower variance are disregarded as noise. PCA traces its origin to Karl Pearson's work on fitting lines and planes to data, where it was first assumed that only the axes with high variance would be turned into principal components.

• All variables should be measured at the same level, ideally the ratio level of measurement. A commonly preferred norm is at least 150 observations in the sample set, or a ratio of at least five observations per variable (5:1).

• Extreme values that deviate from the other data points in the dataset, also called outliers, should be few. A large number of outliers may represent experimental errors and will degrade your ML model/algorithm.

• The feature set must be correlated, so that the reduced feature set obtained after applying PCA represents the original dataset effectively with fewer dimensions.
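Before applying PCA, it can help to sanity-check the correlation and outlier assumptions. The sketch below is illustrative only (synthetic data, and the 3-standard-deviation outlier threshold is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 3))
X[:, 1] += 0.8 * X[:, 0]  # induce correlation between two variables

# Pairwise Pearson correlations: PCA is most useful when these are non-trivial
corr = np.corrcoef(X, rowvar=False)
print(corr.round(2))

# Simple outlier flag: entries more than 3 standard deviations from the mean
z = (X - X.mean(axis=0)) / X.std(axis=0)
print("rows with possible outliers:", np.unique(np.where(np.abs(z) > 3)[0]))
```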


Steps for Applying PCA

The steps for applying PCA on any ML model/algorithm are as follows:

• Normalisation of the data is necessary before applying PCA. Unscaled data can cause problems in the relative comparison of variables in the dataset. For example, if we have a list of numbers under a column of a 2-D dataset, we subtract the mean of those numbers from every entry in that column to normalise it. The same normalisation can be applied to a 3-D dataset, as shown in the sketch below.
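A minimal sketch of this centering step with NumPy (synthetic data; dividing by the standard deviation, shown as a second step, is a common addition when variables sit on very different scales):

```python
import numpy as np

# Synthetic 2-D dataset: two columns on very different scales (illustrative)
rng = np.random.default_rng(2)
X = rng.normal(loc=[5.0, 100.0], scale=[1.0, 20.0], size=(100, 2))

X_centered = X - X.mean(axis=0)        # subtract each column's mean
X_scaled = X_centered / X.std(axis=0)  # optionally divide by the std dev too
print(X_scaled.mean(axis=0).round(6))  # ~0 in every column
print(X_scaled.std(axis=0).round(6))   # 1 in every column
```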

• Once you have normalised the dataset, compute the covariance between each pair of dimensions and arrange the values in a covariance matrix. The off-diagonal elements of the covariance matrix represent the covariance between each pair of variables, and the diagonal elements represent the variance of each variable/dimension.

A covariance matrix constructed for any dataset is always symmetric. It captures the relationships in the data, and from it you can easily work out how much variance each principal component will carry.
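With NumPy, this step reduces to a single call; a minimal sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
X[:, 2] += X[:, 0]            # make two variables covary
Xc = X - X.mean(axis=0)       # centre the data first

C = np.cov(Xc, rowvar=False)  # 3x3 covariance matrix (columns = variables)
print(np.allclose(C, C.T))    # True: covariance matrices are symmetric
print(np.diag(C))             # diagonal entries are the per-variable variances
```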

• You then have to find the eigenvalues of the covariance matrix, which represent the variability of the data along orthogonal axes, and the corresponding eigenvectors, which represent the directions in which the maximum variance in the data occurs.

Each eigenvalue λ of the covariance matrix ‘C’ must satisfy the characteristic equation det(λI − C) = 0, where ‘I’ is an identity matrix of the same dimension as ‘C’. Note that the covariance matrix is always square and symmetric: squareness is required for eigenvalues to exist at all, and symmetry guarantees that they are real.
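Since the covariance matrix is symmetric, NumPy's eigh routine (intended for symmetric/Hermitian matrices) can be used; a minimal sketch continuing the same kind of synthetic example:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 3))
X[:, 2] += X[:, 0]
C = np.cov(X - X.mean(axis=0), rowvar=False)

# eigh is for symmetric matrices; eigenvalues come back in ascending order
eigenvalues, eigenvectors = np.linalg.eigh(C)
print(eigenvalues)          # variance along each orthogonal direction
print(eigenvectors[:, -1])  # columns are eigenvectors; the last one pairs
                            # with the largest eigenvalue
```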

• Arrange the eigenvalues in descending order and select the larger ones. You can choose how many eigenvalues to keep. You will lose some information by ignoring the smaller eigenvalues, but those minute values do not have a significant impact on the final result.

The number of selected eigenvalues becomes the dimensionality of your updated feature set. We also form a feature vector: a matrix whose columns are the eigenvectors corresponding to the chosen eigenvalues.
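A sketch of this selection step (keeping k = 2 components is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 3))
X[:, 2] += X[:, 0]
C = np.cov(X - X.mean(axis=0), rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(C)

order = np.argsort(eigenvalues)[::-1]        # descending order of variance
k = 2                                        # components to keep (arbitrary)
feature_vector = eigenvectors[:, order[:k]]  # columns = chosen eigenvectors
print(eigenvalues[order])                    # largest eigenvalue first
print(feature_vector.shape)                  # (3, 2)
```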

• Using the feature vector, we find the principal components of the dataset under analysis. We multiply the transpose of the feature vector by the transpose of the scaled matrix (the scaled version of the data after normalisation) to obtain a matrix containing the principal components.

We will notice that the components with the highest eigenvalues capture most of the structure in the data, while the remaining ones add little information about the dataset. This shows that very little is lost when reducing the dimensions of the dataset; we are simply representing it more effectively.
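The projection itself, following the same transpose convention as the text, looks like this (a sketch on synthetic data; the equivalent one-liner is noted in a comment):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(100, 3))
X[:, 2] += X[:, 0]
Xc = X - X.mean(axis=0)
eigenvalues, eigenvectors = np.linalg.eigh(np.cov(Xc, rowvar=False))
order = np.argsort(eigenvalues)[::-1]
feature_vector = eigenvectors[:, order[:2]]

# feature_vector.T @ Xc.T puts components in rows; transposing back gives
# one row per sample, one column per principal component
scores = (feature_vector.T @ Xc.T).T  # equivalently: Xc @ feature_vector
print(scores.shape)                   # (100, 2): dimensionality reduced
```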

Implementing these steps reduces the dimensions of any dataset via PCA.

Applications of PCA

Data is generated in many sectors, and analysing it is essential to the growth of any firm/company. PCA helps by reducing the dimensions of the data, thus making it easier to analyse. The applications of PCA include:

• Neuroscience – Neuroscientists use PCA to identify individual neurons and to map brain structure during phase transitions.

• Finance – PCA is used in the finance sector to reduce the dimensionality of data when creating fixed-income portfolios. Many other facets of finance involve PCA, such as forecasting returns and building asset allocation or equity algorithms.

• Image Technology – PCA is also used for image compression and digital image processing. Each image can be represented as a matrix of per-pixel intensity values, and PCA can then be applied to that matrix.

• Facial Recognition – PCA in facial recognition leads to the creation of eigenfaces, which makes facial recognition more accurate.

• Medical – PCA is applied to a lot of medical data to find correlations among different variables. For example, doctors use PCA to show the correlation between cholesterol and low-density lipoprotein.

• Security – Anomalies can be found easily using PCA, so it is used to identify cyber/computer attacks and to visualise them.

Takeaway Points

PCA can lead to low model performance if the original dataset has weak or no correlation among its features; the variables need to be related to one another for PCA to work well. PCA provides us with combinations of features, so the importance of individual features from the original dataset is lost. The principal axes with the most variance make the ideal principal components.


Conclusion

PCA is a widely used technique for decreasing the dimensions of a feature set while retaining as much of the original variation as possible.

If you’re interested in learning more about machine learning, check out IIIT-B & upGrad’s PG Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.
