
What is Normalization in Data Mining and How to Do It?

Last updated:
21st Sep, 2022

Companies are increasingly relying on data to learn more about their customers. Thus, data analysts have a bigger responsibility to explore and analyze large blocks of raw data and glean meaningful customer trends and patterns out of it. This is known as data mining. Data analysts use data mining techniques, advanced statistical analysis, and data visualization technologies to gain new insights.

These insights can help a business develop effective marketing strategies, improve performance, scale up sales, and reduce overhead costs. Although there are tools and algorithms for data mining, it is not a cakewalk, as real-world data is heterogeneous. Thus, there are quite a few challenges when it comes to data mining. Learn data science if you want to gain expertise in data mining.

One common challenge is that databases usually contain attributes with different units, ranges, and scales. Applying algorithms to such drastically varying data may not deliver accurate results. This calls for data normalization in data mining.

Normalization is the process of bringing heterogeneous data into a smaller, common range, such as 0.0 to 1.0 or -1.0 to 1.0. In simple words, data normalization makes data easier to classify and understand.

Why is Normalization in Data Mining Needed?

Data normalization is mainly needed to minimize or eliminate duplicate data. Duplicate data is a critical issue because keeping identical data in more than one place makes relational databases increasingly problematic to maintain. Normalization in data mining is a beneficial procedure because it offers the following advantages:

  • It is a lot easier to apply data mining algorithms on a set of normalized data.
  • The results of data mining algorithms applied to a set of normalized data are more accurate and effective.
  • Once the data is normalized, the extraction of data from databases becomes a lot faster.
  • More specific data analysis methods can be applied to normalized data.


3 Popular Techniques for Data Normalization in Data Mining

There are three popular methods to carry out normalization in data mining. They include: 

Min Max Normalization

Which is easier to understand: the difference between 200 and 1,000,000, or the difference between 0.2 and 1? When the gap between the minimum and maximum values is smaller, the data becomes more readable. Min-max normalization works by converting a range of data into a scale that runs from 0 to 1.

Min-Max Normalization Formula

v' = ((v - min) / (max - min)) * (new_max - new_min) + new_min

where min and max are the smallest and largest values of the attribute, and new_min and new_max define the target range.

To understand the formula, here is an example. Suppose a company wants to decide on a promotion based on the years of work experience of its employees. So, it needs to analyze a database that looks like this: 

Employee Name | Years of Experience
ABC           | 8
XYZ           | 20
PQR           | 10
MNO           | 15

 

  • The minimum value is 8
  • The maximum value is 20

As this formula scales the data between 0 and 1, 

  • The new min is 0
  • The new max is 1

Here, v stands for the respective value of the attribute, i.e., 8, 10, 15, or 20.

After applying the min-max normalization formula, the following are the v' values for the attributes:

  • For 8 years of experience: v’= 0
  • For 10 years of experience: v’ = 0.16
  • For 15 years of experience: v’ = 0.58
  • For 20 years of experience: v’ = 1

So, the min-max normalization can reduce big numbers to much smaller values.  This makes it extremely easy to read the difference between the ranging numbers.


Decimal Scaling Normalization

Decimal scaling is another technique for normalization in data mining. It works by moving the decimal point of the attribute's values. How far the decimal point moves depends on the maximum value among all values of the attribute.

Decimal Scaling Formula

v' = v / 10^j

Here: 

  • v' is the new value after applying the decimal scaling
  • v is the respective value of the attribute

Now, the integer j defines the movement of the decimal point. How is it defined? It is equal to the number of digits in the maximum value in the data table. Here is an example:

Suppose a company wants to compare the salaries of the new joiners. Here are the data values:

Employee Name | Salary
ABC           | 10,000
XYZ           | 25,000
PQR           | 8,000
MNO           | 15,000

Now, look for the maximum value in the data. In this case, it is 25,000, which has 5 digits. So here j is equal to 5, and the divisor is 10^5 = 100,000. This means each value v needs to be divided by 100,000.


After applying the decimal scaling formula, here are the new values:

Name | Salary | Salary after Decimal Scaling
ABC  | 10,000 | 0.1
XYZ  | 25,000 | 0.25
PQR  | 8,000  | 0.08
MNO  | 15,000 | 0.15

Thus, decimal scaling can tone down big numbers into easy to understand smaller decimal values. Also, data attributed to different units becomes easy to read and understand once it is converted into smaller decimal values.
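The decimal scaling step can be sketched as follows (a minimal illustration using the salary figures above; the function name is just for demonstration):

```python
def decimal_scaling(values):
    """Divide each value by 10^j, where j is the number of digits
    in the largest absolute value of the attribute."""
    j = len(str(int(max(abs(v) for v in values))))
    return [v / (10 ** j) for v in values]

salaries = [10_000, 25_000, 8_000, 15_000]
print(decimal_scaling(salaries))
# [0.1, 0.25, 0.08, 0.15]
```

Here the maximum value 25,000 has five digits, so every salary is divided by 10^5 = 100,000, reproducing the table above.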


Z-Score Normalization

The Z-score tells us how far a data point is from the mean. Technically, it measures how many standard deviations a value lies below or above the mean; for most datasets, values fall within -3 to +3 standard deviations. Z-score normalization in data mining is useful for analyses that compare a value against a mean (average), such as results from tests or surveys. Thus, Z-score normalization is also popularly known as standardization.

The following formula is used in the case of z-score normalization on every single value of the dataset.

New value = (x – μ) / σ

Here: 

  • x: Original value
  •  μ: Mean of data
  •  σ: Standard deviation of data

Below is an example of how to perform z score normalization on a given dataset.

Suppose we have the following dataset: 

Data: 3, 5, 5, 8, 9, 12, 12, 13, 15, 16, 17, 19, 22, 24, 25, 134

The mean of this dataset is 21.2 and the (population) standard deviation is 29.8.

If we perform z-score normalization on the first value of the dataset (3), then according to the formula:

New value = (x – μ) / σ

New value = (3-21.2)/ 29.8

∴ New value = -0.61

Performing z-score normalization on each value of the dataset gives the following table.


Data | Z-score normalized value
3    | -0.61
5    | -0.54
5    | -0.54
8    | -0.44
9    | -0.41
12   | -0.31
12   | -0.31
13   | -0.28
15   | -0.21
16   | -0.17
17   | -0.14
19   | -0.07
22   | 0.03
24   | 0.09
25   | 0.13
134  | 3.79

The mean of this normalized dataset is 0 and the standard deviation is 1. 

For example, suppose a person's weight is 150 pounds. If that value needs to be compared with the average weight of a population listed in a vast table of data, z-score normalization is needed to study such values, especially if some weights are recorded in kilograms.
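The whole z-score procedure can be sketched in plain Python (a minimal illustration using the dataset above; the function name is just for demonstration). After normalization, the data has mean 0 and standard deviation 1:

```python
def z_score_normalize(values):
    """Standardize values to mean 0 and (population) standard deviation 1."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / std for v in values]

data = [3, 5, 5, 8, 9, 12, 12, 13, 15, 16, 17, 19, 22, 24, 25, 134]
z = z_score_normalize(data)
print(round(z[0], 2))
# -0.61  (matches the first row of the table above)
```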

Difference between Min-Max Normalization and Z-Score Normalization:

Min-Max Normalization | Z-Score Normalization
Uses the minimum and maximum values of the feature for scaling. | Uses the mean and standard deviation for scaling.
Applicable when the features are on different scales. | Useful when a zero mean and unit standard deviation are desired.
Scales values into the range [0, 1] or [-1, 1]. | Has no fixed output range.
Easily affected by outliers. | Much less affected by outliers.
Available in Scikit-Learn as the MinMaxScaler transformer. | Available in Scikit-Learn as the StandardScaler transformer.
Transforms n-dimensional data into an n-dimensional unit hypercube. | Translates data to the mean vector of the original data, then squeezes or expands it.
Best if the distribution is unknown. | Best if the distribution is normal (Gaussian).
Also known as scaling normalization. | Also known as standardization.
 

Drawbacks of Data Normalization:

Even though normalization offers quite a few benefits, it also has some downsides.

  1. Because it compartmentalizes the data, more tables need to be joined, which makes tasks longer, more mundane, and slower to run. The database also becomes harder to comprehend.
  2. The generated tables contain codes instead of real data, since repeated data is stored as references rather than actual values. Queries must therefore go through lookup tables, which again slows the process.
  3. Writing queries becomes more difficult once normalization is applied, because the SQL is built dynamically, typically by desktop-friendly query tools, and it is hard to design a database model without knowing the customer's needs.
  4. Analysis and design become more detailed and strenuous. Normalizing data is already complex; knowing the purpose of the database and adjusting everything accordingly makes it even harder. If an expert poorly normalizes a database, it will perform inadequately and might not be able to store the required data.

What is Denormalization?

In simple words, denormalization is quite literally the opposite of normalization and is also used in databases for varied reasons. As the name suggests, denormalization means reversing normalization, so the process is typically performed after normalization has been applied.

In denormalization, data is combined so that queries execute quickly. Redundancy is deliberately added, which plays a major part in making queries run much faster. The pros of denormalization include fast retrieval of data, as fewer joins are needed, and quicker query resolution with fewer bugs.

However, unlike normalization, data integrity is not maintained in this process, as a wide variety of data is clubbed together. The number of tables also reduces significantly, which is quite the opposite of normalization. Finally, updates and inserts become comparatively expensive and harder to write.


Conclusion

As data comes from different sources, it is very common for any batch of data to have attributes on different scales. Thus, normalization in data mining is a pre-processing step that prepares the data for analysis.

If you are curious to learn about data science, check out IIIT-B & upGrad’s Executive PG Programme in Data Science which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 with industry mentors, 400+ hours of learning and job assistance with top firms.


Rohit Sharma

Blog Author
Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

Frequently Asked Questions (FAQs)

1. What is meant by normalization in data mining?

Normalization is the process of scaling an attribute's data such that it falls within a narrower range, like -1.0 to 1.0 or 0.0 to 1.0. It is beneficial for classification algorithms in general. Normalization is typically necessary when dealing with characteristics on various scales; otherwise, it may dilute the efficacy of an equally significant attribute on a lower scale due to other attributes having values on a greater scale. In other words, when numerous characteristics exist but their values are on various scales, this might result in inadequate data models when doing data mining activities. As a result, they are normalized to put all of the characteristics on the same scale.

2. What are the different types of normalization?

Normalization is a procedure that should be followed for each database you create. Taking a database architecture and applying a set of formal criteria and rules to it is known as putting it into Normal Forms. The normalization process is classified as follows: First Normal Form (1NF), Second Normal Form (2NF), Third Normal Form (3NF), Boyce-Codd Normal Form (BCNF), Fourth Normal Form (4NF), Fifth Normal Form (5NF), and Sixth Normal Form (6NF).

3. What is min-max normalization?

Min-max normalization is one of the most prevalent methods for normalizing data. For each feature, the minimum value is converted to 0, the maximum value is converted to 1, and all other values are converted to a decimal between 0 and 1. For example, if the minimum value of a feature was 20 and the maximum value was 40, then 30 would be converted to about 0.5, since it is halfway between 20 and 40. One significant drawback of min-max normalization is that it does not handle outliers well. For example, if you have 99 values ranging from 0 to 40, and one value is 100, the 99 values will all be squeezed into the range 0 to 0.4.
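The outlier effect described above can be demonstrated with a short sketch (hypothetical data, plain Python):

```python
def min_max(values):
    """Scale each value into [0, 1] using min-max normalization."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# 99-style scenario: values 0..40 plus a single outlier of 100
data = list(range(0, 41)) + [100]
scaled = min_max(data)
print(max(scaled[:-1]))   # largest non-outlier value
# 0.4  -- everything that was in [0, 40] is compressed into [0, 0.4]
```

This is why the comparison table above recommends z-score normalization when outliers are a concern.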
