Factor analysis in R is a statistical technique that simplifies data interpretation by reducing the initial variables to a smaller number of factors. You must be thinking – what are factors? Factors are representations of the ‘latent variables’ that underlie the original variables.
Thus, factor analysis represents dataset variables y1, y2,… yp as a linear combination of latent variables called factors, denoted by f1, f2,…fm where m<p. However, the factors cannot be observed or measured, and thus, their existence is hypothetical.
Introducing the factor analysis model
Before we discuss the details of factor analysis in R, let us get introduced to the basic idea of the factor analysis model. The model, as stated in the previous section, expresses the observed variables as a linear combination of random, hypothetical, latent variables called factors (f1, f2,…fm). Because the factors are unobservable, their existence is hypothetical. For the variables in the observation vectors of a sample, the factor analysis model is defined as:

yi = αi + γi1 f1 + γi2 f2 + … + γim fm + δi,   for i = 1, 2, …, p

In the above model, α denotes the mean vector, the γ coefficients are the factor loadings that represent the relationship between the ith observed variable and each latent factor, and δ indicates the random error term, acknowledging that the factors do not reproduce the observed variables exactly.
An example of factor analysis in R
In this section, we will look at an example to understand factor analysis in R. For the study, we will use two R packages – ‘psych’ and ‘GPArotation’. We will begin by describing the dataset and then move on to selecting the number of factors for analysis. Finally, we will perform factor analysis by using the fa() function of the ‘psych’ package.
So, here is a step-by-step example of factor analysis in R:
1. The dataset
To understand the factor analysis method succinctly, we shall use an example to elucidate the model. Let us consider a dataset consisting of 13 diverse variables that a prospective consumer considers while investing in a property, including:
- Rent/Cost/Lease amount
- Distance from landmarks
- Availability of essential services
- Maintenance cost
- Resale value
- Prospect as per requirement
- Reliability of the builder
The same method could equally be applied to, say, a dataset of 14 variables that customers usually consider while purchasing a car – exterior looks, space and comfort, fuel type, fuel efficiency, after-sales service, resale value, test drive, product reviews, and so on. Here, however, we continue with the property dataset.
2. Data importing
In this step, the dataset, saved in CSV format, is read into R and stored in a variable. A window opens for choosing the CSV file, and the ‘header’ option ensures that the first row of the file is treated as the header.
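A minimal sketch of this step (the variable name `dataset` is an assumption for illustration):

```r
# Open a file-chooser window and read the selected CSV file;
# header = TRUE treats the first row as column names
dataset <- read.csv(file.choose(), header = TRUE)

# Quick sanity checks on what was loaded
head(dataset)
str(dataset)
```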
3. Installing Packages
The install.packages() function is called to install the ‘psych’ and ‘GPArotation’ packages needed for the analysis.
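This step might look as follows; the packages only need to be installed once, but must be loaded in every session:

```r
# Install the two packages (one-time step)
install.packages(c("psych", "GPArotation"))

# Load them into the current session
library(psych)
library(GPArotation)
```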
4. Number of factors
In this step, the number of factors to retain is evaluated through methods like ‘Parallel Analysis’ and the ‘eigenvalue’ criterion, and a scree plot is generated. In this example, the ‘psych’ package’s fa.parallel() function performs Parallel Analysis. The data frame and the factoring method (‘minres’) are specified, and the function reports a suggested maximum number of factors along with a scree plot.
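Assuming the data frame is called `dataset`, the Parallel Analysis step can be sketched as below. fa.parallel() compares the eigenvalues of the observed data with those of random data, prints a suggested number of factors, and draws the scree plot:

```r
# Parallel Analysis with the 'minres' factoring method;
# fa = "fa" restricts the output to factor analysis (not PCA)
parallel <- fa.parallel(dataset, fm = "minres", fa = "fa")
```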
5. Factor Analysis
The fa() function of the ‘psych’ package runs the factor analysis with a supply of the following arguments:
- r – Covariance matrix or the raw data
- nfactors – Number of factors to be extracted
- rotate – Oblique rotation (rotate = “oblimin”) is used in this example
- fm – The factor extraction technique. In this example, ordinary least squares, also called minres (fm = “minres”), has been used. Three factors are considered first:
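A three-factor fit can be sketched as follows (the data frame name `dataset` and the object name `threefactor` are assumptions for illustration):

```r
# Fit a three-factor model with oblique (oblimin) rotation
# and the 'minres' extraction method
threefactor <- fa(dataset, nfactors = 3, rotate = "oblimin", fm = "minres")

# Show the loadings, hiding values below the 0.3 cut-off
print(threefactor$loadings, cutoff = 0.3)
```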
- The output lists the loadings of each variable on each factor. A loading indicates how strongly a variable is associated with a factor, and hence how plausibly that factor explains it. For instance, suppose the loadings were represented thus:
| Variable | Factor 1 | Factor 2 | Factor 3 |
| --- | --- | --- | --- |
| Distance from landmark | 0.666 | 0.783 | -0.142 |
| Availability of essential services | 0.787 | 0.132 | -0.231 |
| Prospect as per requirement | 0.424 | 0.327 | 0.675 |
| Reliability of the builder | 0.646 | 0.785 | -0.478 |
- Loadings above 0.3 are commonly taken as the cut-off, and the highest loading of each variable determines which factor it belongs to. Negative values also count if their absolute value is the highest loading.
- The model is then re-fitted with four factors to look for a simple structure, in which each variable loads strongly on only one factor, as displayed above.
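The four-factor re-fit can be sketched under the same assumptions as above:

```r
# Re-fit with four factors and compare the loading pattern
fourfactor <- fa(dataset, nfactors = 4, rotate = "oblimin", fm = "minres")
print(fourfactor$loadings, cutoff = 0.3)
```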
6. Adequacy Test
In this step, the model is validated by examining the output of factor analysis:
Here, depending on the final output, the model is judged on parameters like the root mean square of residuals (RMSR), the RMSEA (root mean square error of approximation), and the Tucker-Lewis Index (TLI). Broadly, an RMSR close to 0, an RMSEA below about 0.05, and a TLI above about 0.9 indicate an acceptable model.
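These indices can be read directly off the fitted object returned by fa() (the object name `threefactor` is an assumption carried over from the earlier step):

```r
# Fit indices of the three-factor model
threefactor$rms     # root mean square of residuals (RMSR)
threefactor$RMSEA   # RMSEA with its confidence interval
threefactor$TLI     # Tucker-Lewis Index
```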
The last step pertains to the theoretical aspect of the analysis: the factors are named and summarised according to the values of the loadings obtained. In the current context, factors could be characterised by dominant variables such as:
- Rent/Cost/Lease amount
- Availability of essential services
In this article, we discussed the basic idea of factor analysis in R through the factor analysis model. We further illustrated the concept with the help of a real-world example in which the variables a prospective consumer weighs while investing in a property were reduced to a few common factors. Now, go ahead and try it out!
What is the main reason for using factor analysis?
Factor analysis is a statistical analysis and data reduction technique. It is used to explain the correlations among different outcomes as the result of one or more latent factors. It counts as a data reduction technique because it attempts to represent the available set of variables with a smaller number of factors.
Factor analysis is a widely used technique in business research. The primary objective here is to capture certain psychological states of respondents that could not have been measured directly. Factor analysis is commonly used in finance, operations research, product management, marketing, psychometrics, biology, and personality theories.
The technique is used whenever a large set of observed variables is believed to be driven by a smaller number of latent variables.
What are the disadvantages of factor analysis?
Every technique has certain flaws, and factor analysis is no exception. Based on observations made by researchers using the technique, a few major disadvantages stand out.
1. Answers only reflect the questions asked - The data you receive is based only on the questions you have asked. For instance, if you have not asked anything about sleep habits, you will not find any factor related to sleep habits. Choosing the right set of questions is therefore important, and this complicates the process.
2. It is tricky to determine the number of factors to include - A major task of the factor analyst is deciding how many factors to retain. Different analysts use different methods for determining this number, but there is no single definitive method.
How many variables are necessary for factor analysis?
Every factor should have at least three variables with high loadings. To support your factor analysis, a sufficient number of observations is essential. For stable results, a common rule of thumb is at least 20 observations per variable; recommended ratios generally lie between 10 and 100 observations per variable. Only a few statisticians perform factor analysis with as few as 5 observations per variable.