Exploratory Data Analysis (EDA) is a common and essential practice among data scientists. It is the process of examining a dataset from different angles in order to understand it fully. A good understanding of the data helps us clean and summarise it, which in turn brings out insights and trends that would otherwise remain unclear.
EDA has no hard set of rules to follow, unlike 'data analysis', for example. People who are new to the field often confuse the two terms, which are related but differ in purpose. Unlike EDA, data analysis leans more towards applying probability and statistical methods to reveal facts and relationships among different variables.
Coming back to EDA: there is no single right or wrong way to perform it, and the approach varies from person to person. However, there are some major guidelines that are commonly followed, listed below.
- Handling missing values: Null values appear when some of the data was not available or not recorded during collection.
- Removing duplicate data: Repeated records must be removed to prevent overfitting or bias when training a machine learning algorithm.
- Handling outliers: Outliers are records that differ drastically from the rest of the data and don't follow the trend. They can arise from genuine exceptions or from inaccuracies during data collection.
- Scaling and normalizing: This is done only for numerical variables. Variables often differ greatly in range and scale, which makes it difficult to compare them and find correlations (a minimal sketch of this step follows this list).
- Univariate and Bivariate analysis: Univariate analysis looks at one variable at a time, often to see how it relates to the target variable. Bivariate analysis is carried out between any two variables, which can be numerical, categorical, or one of each.
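As an illustration of the scaling and normalizing step, here is a minimal sketch. The two column names resemble those in typical loan data, but the values are made up purely for the example; the point is the min-max and z-score formulas, not the numbers.
import pandas as pd
# Two numeric columns with very different ranges (values are made up for illustration)
df = pd.DataFrame({'AMT_INCOME_TOTAL': [202500.0, 270000.0, 67500.0],
                   'AMT_CREDIT': [406597.5, 1293502.5, 135000.0]})
# Min-max scaling squeezes every column into the [0, 1] range
minmax_scaled = (df - df.min()) / (df.max() - df.min())
# Standardisation (z-scores) gives zero mean and unit variance
standardised = (df - df.mean()) / df.std()
print(minmax_scaled)
print(standardised)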
We will look at how some of these steps are implemented using the well-known 'Home Credit Default Risk' dataset available on Kaggle. The data contains information about each applicant at the time of applying for the loan. It covers two types of scenarios:
- Clients with payment difficulties: those who had a late payment of more than X days on at least one of the first Y instalments of the loan in our sample.
- All other cases: the payment was made on time.
We'll be working only with the application data file for the purposes of this article.
Looking at the Data
import pandas as pd
app_data = pd.read_csv('application_data.csv')
app_data.info()
After reading the application data, we use the info() method to get a quick overview of the data we'll be dealing with. The output below tells us that we have 307,511 loan records with 122 variables. Of these, 16 are categorical and the rest are numerical.
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 307511 entries, 0 to 307510
Columns: 122 entries, SK_ID_CURR to AMT_REQ_CREDIT_BUREAU_YEAR
dtypes: float64(65), int64(41), object(16)
memory usage: 286.2+ MB
It is always a good practice to handle and analyse numerical and categorical data separately.
categorical = app_data.select_dtypes(include = object).columns
app_data[categorical].apply(pd.Series.nunique, axis = 0)
Looking only at the categorical features below, we see that most of them have just a few categories, which makes them easier to analyse with simple plots.
NAME_CONTRACT_TYPE             2
CODE_GENDER                    3
FLAG_OWN_CAR                   2
FLAG_OWN_REALTY                2
NAME_TYPE_SUITE                7
NAME_INCOME_TYPE               8
NAME_EDUCATION_TYPE            5
NAME_FAMILY_STATUS             6
NAME_HOUSING_TYPE              6
OCCUPATION_TYPE               18
WEEKDAY_APPR_PROCESS_START     7
ORGANIZATION_TYPE             58
FONDKAPREMONT_MODE             4
HOUSETYPE_MODE                 3
WALLSMATERIAL_MODE             7
EMERGENCYSTATE_MODE            2
dtype: int64
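Since these features have only a handful of categories each, a simple count plot is usually enough to see how the applicants are distributed. Here is a minimal sketch; the choice of NAME_CONTRACT_TYPE is arbitrary, and any of the low-cardinality columns above could be substituted.
import matplotlib.pyplot as plt
import seaborn as sns
# Number of applications per contract type (swap in any low-cardinality column)
sns.countplot(x='NAME_CONTRACT_TYPE', data=app_data)
plt.title('Applications per contract type')
plt.show()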
Now for the numerical features, the describe() method gives us the statistics of our data:
numer = app_data.describe()
numerical = numer.columns
numer
Looking at the entire table, it's evident that:
- DAYS_BIRTH is negative: it records the applicant's age in days relative to the day of application, counted backwards.
- DAYS_EMPLOYED has outliers: the maximum value (635243 days) is far longer than any realistic employment history.
- AMT_ANNUITY: the mean is much smaller than the maximum value, which points to a skewed distribution with possible outliers.
So now we know which features will have to be analysed further.
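As a quick follow-up sketch, the age column can be converted to years and the suspicious employment values inspected directly. The AGE_YEARS column name is just an illustrative choice, not part of the original dataset.
# DAYS_BIRTH counts days before the application, so it is negative; convert to age in years
app_data['AGE_YEARS'] = -app_data['DAYS_BIRTH'] / 365
# Describe both columns to confirm the age range and the DAYS_EMPLOYED outlier
print(app_data['AGE_YEARS'].describe())
print(app_data['DAYS_EMPLOYED'].describe())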
Missing Data
We can make a point plot of all the features having missing values, with the percentage of missing data along the Y-axis.
import matplotlib.pyplot as plt
import seaborn as sns
# Percentage of missing values per column
missing = pd.DataFrame(app_data.isnull().sum() * 100 / app_data.shape[0]).reset_index()
missing.columns = ['column', 'percent_missing']
plt.figure(figsize=(16, 5))
ax = sns.pointplot(x='column', y='percent_missing', data=missing)
plt.xticks(rotation=90, fontsize=7)
plt.title('Percentage of Missing values')
plt.ylabel('PERCENTAGE')
plt.show()
Many columns have a lot of missing data (30-70%), some have only a little (13-19%), and many columns have no missing data at all. It is not strictly necessary to modify the dataset when you only have to perform EDA. However, before moving on to data pre-processing, we should know how to handle missing values.
For features with few missing values, we can impute with the mean of the existing values, or predict the missing values with a regression, depending on the feature. For features with a very high proportion of missing values, it is better to drop those columns, as they add very little to the analysis.
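A minimal sketch of that strategy is shown below. The 50% drop threshold and the app_clean name are illustrative choices, not fixed rules.
# Percentage of missing values per column (same computation as the plot above)
missing_pct = app_data.isnull().sum() * 100 / app_data.shape[0]
# Drop columns where more than half of the values are missing (illustrative threshold)
app_clean = app_data.drop(columns=missing_pct[missing_pct > 50].index)
# Fill the remaining gaps in numeric columns with the column mean
numeric_cols = app_clean.select_dtypes(include='number').columns
app_clean[numeric_cols] = app_clean[numeric_cols].fillna(app_clean[numeric_cols].mean())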
Data Imbalance
In this dataset, loan defaulters are identified using the binary variable ‘TARGET’.
100 * app_data['TARGET'].value_counts() / len(app_data['TARGET'])
0    91.927118
1     8.072882
Name: TARGET, dtype: float64
We see that the data is highly imbalanced, with a ratio of roughly 92:8; most of the loans were paid back on time (TARGET = 0). Whenever there is such a large imbalance, it is better to take individual features and compare them with the target variable (targeted analysis) to determine which categories within those features tend to default on their loans more than others.
Below are just a few examples of graphs that can be made using Python's seaborn library and simple user-defined functions.
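For example, one such user-defined function could plot the share of defaulters in each category of a chosen feature. This is only a sketch; plot_default_rate is a hypothetical helper written for this article, not part of any library.
import matplotlib.pyplot as plt
import seaborn as sns
def plot_default_rate(df, feature):
    # Mean of the binary TARGET per category = share of defaulters in that category
    rates = df.groupby(feature)['TARGET'].mean().sort_values(ascending=False)
    plt.figure(figsize=(8, 4))
    sns.barplot(x=rates.index, y=rates.values)
    plt.xticks(rotation=45)
    plt.ylabel('Default rate')
    plt.title('Default rate by ' + feature)
    plt.show()
plot_default_rate(app_data, 'CODE_GENDER')
plot_default_rate(app_data, 'NAME_EDUCATION_TYPE')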
Gender
Males (M) have a higher chance of defaulting than females (F), even though the number of female applicants is almost twice as high. So females appear to be more reliable than males when it comes to paying back their loans.
Education Type
Even though most applicants fall under the Secondary or Higher education categories, it is the Lower secondary group whose loans are the riskiest for the company, followed by the Secondary group.
Conclusion
This kind of analysis is widely used in risk analytics in banking and financial services, where data archives can be used to minimise the risk of losing money while lending to customers. The scope of EDA in other sectors is just as broad, and it should be used extensively.
If you are curious to learn about data science, check out IIIT-B & upGrad’s Executive PG in Data Science which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 with industry mentors, 400+ hours of learning and job assistance with top firms.
Why is Exploratory Data Analysis (EDA) needed?
Exploratory Data Analysis is considered the initial stage when you start modelling your data. It is an insightful technique for working out how best to model your data: you extract visual plots, graphs, and summaries from the data to gain a complete understanding of it. EDA involves steps such as deriving statistical summaries, finding missing values, handling faulty data entries, and finally producing various plots and graphs. The primary aim of this analysis is to ensure that the dataset you are using is suitable for applying modelling algorithms, which is why it should be the first step you perform on your data before moving to the modelling stage.
What are outliers and how to handle them?
Outliers are anomalies, i.e. values that deviate strongly from the rest of your data. They can arise during data collection. There are four common ways to detect an outlier in a dataset:
1. Boxplot - detects outliers by segregating the data through its quartiles.
2. Scatterplot - displays the data of two variables as a collection of points on the Cartesian plane, with the value of one variable on the horizontal axis (x-axis) and the value of the other on the vertical axis (y-axis); points far from the main cloud stand out as outliers.
3. Z-score - points that lie far away from the centre, measured in standard deviations, are considered outliers.
4. InterQuartile Range (IQR) - the difference between the upper and lower quartiles (the 75th and 25th percentiles), often referred to as the statistical dispersion; values far outside this range are flagged as outliers (see the sketch at the end of this article).
What are the guidelines to perform EDA?
Unlike data analysis, there are no hard and fast rules to follow for EDA, and one cannot say that one way of performing it is right and another wrong. Beginners often confuse EDA with data analysis. However, there are some guidelines that are commonly practised:
1. Handling missing values
2. Removing duplicate data
3. Handling outliers
4. Scaling and normalizing
5. Univariate and Bivariate analysis
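To make the IQR method concrete, here is a minimal sketch on the application data. The choice of AMT_INCOME_TOTAL and the conventional 1.5 x IQR fences are illustrative assumptions, not fixed rules.
# IQR-based outlier detection on a single numeric column (illustrative column choice)
col = app_data['AMT_INCOME_TOTAL']
q1, q3 = col.quantile(0.25), col.quantile(0.75)
iqr = q3 - q1
# Conventional 1.5 * IQR fences
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = app_data[(col < lower) | (col > upper)]
print(len(outliers), 'rows flagged as outliers on AMT_INCOME_TOTAL')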