Data Cleaning Techniques: Learn Simple & Effective Ways To Clean Data

Data cleansing is an essential part of data science. Working with impure data leads to many difficulties, and that is what we’ll be discussing today. Poor or dirty data can have a negative effect on business, as it can do a lot of harm by distorting the decisions that depend on it.

You’ll find out why data cleaning is essential, what factors affect your data quality, and how you can clean the data you have with the help of data cleaning algorithms. It’s a detailed guide, so make sure you bookmark it for future reference. 

Let’s get started. 

Why Data Cleaning is Necessary

Data cleaning might seem dull and uninteresting, but it’s one of the most important tasks you would have to do as a data science professional. Having wrong or bad quality data can be detrimental to your processes and analysis. Poor data can cause a stellar algorithm to fail. 

On the other hand, high-quality data can make even a simple algorithm give you outstanding results. There are many data cleaning techniques, and you should get familiar with them to improve your data quality. Not all data is useful, and irrelevant data is another major factor that affects data quality. Poor-quality data can come from many sources.

Usually, they are a result of human error, but they can also arise if a lot of data is combined from different sources. Multichannel data is not only important, but it is also the norm. So as a data scientist, you can expect errors from this type of data. They can cause incorrect insights in your project and sidetrack your data analysis process. This is why data cleaning methods in data mining are so important. 

For example, suppose your company has a list of employees’ addresses. Now, if your data also includes a few addresses of your clients, wouldn’t it damage the list? And wouldn’t your efforts to analyze the list go in vain? In this data-backed market, taking up data science courses to improve your business decisions is vital.

There are many reasons why data cleaning is essential. Some of them are listed below:

Efficiency

Having clean data (free from wrong and inconsistent values) can help you in performing your analysis a lot faster. You’d save a considerable amount of time by doing this task beforehand. When you clean your data before using it, you’d be able to avoid multiple errors. If you use data containing false values, your results won’t be accurate. A data scientist has to spend significantly more time cleaning and purifying data than analyzing it. 

And the chances are, you would have to redo the entire task, which wastes a lot of time. If you clean your data before using it, you can generate results faster and avoid having to start over.

Error Margin

When you don’t use accurate data for analysis, you will surely make mistakes. Suppose you’ve put a lot of effort and time into analyzing a specific group of datasets. You are very eager to show the results to your superior, but in the meeting, your superior points out a few mistakes, and the situation gets embarrassing and painful.

Wouldn’t you want to avoid such mistakes from happening? Not only do they cause embarrassment, but they also waste resources. Data cleansing helps you in that regard. It is a widespread practice, and you should learn the methods used to clean data.

Using a simple algorithm with clean data is far better than using an advanced one with unclean data.

Determining Data Quality

Is The Data Valid? (Validity)

The validity of your data is the degree to which it follows the rules of your particular requirements. For example, suppose you had to import phone numbers of different customers, but in some places, email addresses were added instead. Because your requirement was explicitly for phone numbers, the email addresses would be invalid.

Validity errors take place when the input method isn’t properly inspected. You might be using spreadsheets for collecting your data. And you might enter the wrong information in the cells of the spreadsheet. 

There are multiple kinds of constraints your data has to conform to for being valid. Here they are:

Range: 

Some types of numbers have to fall within a specific range. For example, the number of products you can transport in a day has a minimum and a maximum value, so the data must lie between those two points.

Data-Type: 

Some data cells might require a specific kind of data, such as numeric, Boolean, etc. For example, in a Boolean section, you wouldn’t add a numerical value.

Compulsory constraints:

In every scenario, there are some mandatory constraints your data should follow, and these compulsory restrictions depend on your specific needs. Certain columns of your data simply shouldn’t be empty. For example, in the list of your clients’ names, the ‘name’ column can’t be empty.

Cross-field examination:

There are certain conditions that affect multiple fields of data in a particular form. Suppose a flight’s arrival time can’t be earlier than its departure time. Or, in a balance sheet, the sum of the client’s debits and credits must be the same; it can’t be different.

These values are related to each other, and that’s why you might need to perform cross-field examination. 

Unique Requirements:

Particular types of data have uniqueness restrictions. Two customers can’t have the same customer support ticket. Such data must be unique to a particular field and can’t be shared by multiple records.

Set-Membership Restrictions:

Some values are restricted to a particular set. For example, gender might be limited to Male, Female, or Unknown.

Regular Patterns:

Some pieces of data follow a specific format. For example, email addresses have the format ‘randomperson@randomemail.com’. Similarly, phone numbers in a given country usually follow a fixed pattern, such as ten digits.

If the data isn’t in the required format, it would also be invalid. 

If a person omits the ‘@’ while entering an email address, then the email address would be invalid, wouldn’t it? Checking the validity of your data is the first step in determining its quality. Most of the time, invalid entries are the result of human error.

Getting rid of it will help you in streamlining your process and avoiding useless data values beforehand. 
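
To make these constraints concrete, here is a minimal sketch of how a few of them might be checked with pandas. The dataset, column names, and thresholds below are illustrative assumptions rather than part of any real project:

```python
import pandas as pd

# Hypothetical dataset; the columns and values are assumptions for illustration.
df = pd.DataFrame({
    "age": [25, -3, 41, 130],
    "gender": ["Male", "Female", "Unknown", "M"],
    "email": ["a@example.com", "b.example.com", "c@example.com", None],
})

# Range constraint: age must lie between 0 and 120.
valid_age = df["age"].between(0, 120)

# Set-membership constraint: gender must come from a fixed set of values.
valid_gender = df["gender"].isin({"Male", "Female", "Unknown"})

# Compulsory + regular-pattern constraint: email must be present and well formed.
valid_email = df["email"].fillna("").str.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

# Rows violating any constraint are candidates for cleaning.
invalid_rows = df[~(valid_age & valid_gender & valid_email)]
print(invalid_rows)
```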

Accuracy

Now that you know that most of the data you have is valid, you’ll have to focus on establishing its accuracy. Valid data isn’t necessarily accurate: determining accuracy means checking whether the values you recorded reflect reality.

The address of a client could be in the right format, but it doesn’t need to be the right address. Maybe an email has an additional digit or character that makes it wrong. Another example is the phone number of a customer.

If the phone number has all the digits, it’s a valid value. But that doesn’t mean it’s true. When you have definitions for valid values, figuring out the invalid ones is easy. But that doesn’t help with checking the accuracy of the same. Checking the accuracy of your data values requires you to use third-party sources. 

This means you’ll have to rely on data sources different from the one you’re using currently. You’ll have to cross-check your data to figure out if it’s accurate or not. Data cleaning techniques don’t have many solutions for checking the accuracy of data values. 

However, depending on the kind of data you’re using, you might be able to find resources that could help you in this regard. You shouldn’t confuse accuracy with precision.
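
As a rough illustration of that cross-checking idea, the sketch below merges your records against a trusted reference table and flags disagreements. Both DataFrames, the key column, and the phone numbers are hypothetical:

```python
import pandas as pd

# Your own records (hypothetical).
crm = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "phone": ["5551234567", "5559876543", "5550000000"],
})

# A trusted third-party or reference source (also hypothetical).
reference = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "phone": ["5551234567", "5559876542", "5550000000"],
})

# Merge on the shared key and flag rows where the two sources disagree.
merged = crm.merge(reference, on="customer_id", suffixes=("_crm", "_ref"))
mismatches = merged[merged["phone_crm"] != merged["phone_ref"]]
print(mismatches)  # these values are valid in format but possibly inaccurate
```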

Accuracy vs Precision

While accuracy relies on establishing whether your entered data was correct or not, precision requires you to give more details about the same. A customer might enter a first name in your data field. But if there’s no last name, it’d be challenging to be more precise.

Another example can be of an address. Suppose you ask a person where he/she lives. They might say that they live in London. That could be true. However, that’s not a precise answer because you don’t know where they live in London.

A precise answer would be to give you a street address. 

Completeness

It’s nearly impossible to have all the info you need. Completeness is the degree to which you know all the required values. Completeness is a little more challenging to achieve than accuracy or validity, because you can’t assume a value; you can only enter known facts.

You can try to complete your data by redoing the data gathering activities (approaching the clients again, re-interviewing people, etc.). But that doesn’t mean you’d be able to complete your data thoroughly. 

Suppose you re-interview people for the data you needed earlier. Now, this scenario has the problem of recall: if you ask them the same questions again, chances are they might not remember what they answered before, which can lead to them giving you a wrong answer.

You might ask them what books they were reading five months ago, and they might not remember. Similarly, you might need to enter every customer’s contact information, but some of them may not have email addresses. In this case, you’d have to leave those columns empty.

If you have a system that requires you to fill all columns, you can try to enter ‘missing’ or ‘unknown’ there. But entering such values doesn’t mean the data is complete; it would still be referred to as incomplete.
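
One simple way to quantify completeness, sketched below under the assumption of a small hypothetical contact list, is to measure the share of non-missing values per column and flag records that lack a required field:

```python
import numpy as np
import pandas as pd

# Hypothetical contact list with gaps.
contacts = pd.DataFrame({
    "name": ["Asha", "Ben", "Chloe"],
    "email": ["asha@example.com", None, np.nan],
    "phone": ["5551234567", "5559876543", None],
})

# Completeness per column: the share of values that are actually known.
print(contacts.notna().mean())

# Flag records that are missing any required field.
required = ["name", "email"]
incomplete = contacts[contacts[required].isna().any(axis=1)]
print(incomplete)
```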

Consistency

Next to completeness comes consistency. You can measure consistency by comparing two similar systems, or by checking the data values within the same dataset to see whether they agree. Consistency can be relational. For example, a customer’s age might be recorded as 15, which is a valid value and could be accurate, but the same customer might also be listed as a senior citizen elsewhere in the same system.

In such cases, you’ll need to cross-check the data, similar to measuring accuracy, and see which value is true. Is the client 15 years old, or a senior citizen? Only one of these values can be true.

There are multiple ways to make your data consistent.

Check different systems:

You can take a look at another similar system to find whether the value you have is real or not. If two of your systems are contradicting each other, it might help to check the third one. 

In our previous example, suppose you check a third system and find that the customer’s age is 65. This suggests that the second system, which listed the customer as a senior citizen, is the one to trust.

Check the latest data:

Another way to improve the consistency of your data is to check the more recent value. It can be more beneficial to you in specific scenarios. You might have two different contact numbers for a customer in your record. The most recent one would probably be more reliable because it’s possible that the customer switched numbers. 

Check the source:

The most fool-proof way to check the reliability of the data is simply to contact the source. In our example of the customer’s age, you can opt to contact the customer directly and ask their age. However, this isn’t possible in every scenario, and contacting the source directly can be tricky. Maybe the customer doesn’t respond, or their contact information isn’t available.
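
The first two of these checks translate directly into code. Here is a minimal sketch, using pandas and entirely hypothetical tables, of comparing two systems for contradictions and preferring the most recent record when the same field was captured several times:

```python
import pandas as pd

# Two systems that should agree on a customer's age (hypothetical data).
system_a = pd.DataFrame({"customer_id": [101, 102], "age": [15, 34]})
system_b = pd.DataFrame({"customer_id": [101, 102], "age": [65, 34]})

# "Check different systems": join on the shared key and surface contradictions.
check = system_a.merge(system_b, on="customer_id", suffixes=("_a", "_b"))
print(check[check["age_a"] != check["age_b"]])

# "Check the latest data": keep only the most recent value per customer
# when the same field was recorded several times (hypothetical phone log).
records = pd.DataFrame({
    "customer_id": [101, 101, 102],
    "phone": ["5551111111", "5552222222", "5553333333"],
    "updated_at": pd.to_datetime(["2021-03-01", "2022-06-15", "2022-01-10"]),
})
latest = records.sort_values("updated_at").groupby("customer_id").tail(1)
print(latest)
```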

Uniformity

You should ensure that all the values you’ve entered in your dataset are in the same units. If you’re entering SI units for measurements, you can’t use the Imperial system in some places. On the other hand, if at one place you’ve entered the time in seconds, then you should enter it in this format all across the dataset.

This applies to formatting dates as well. Make sure to use the same date format for all your entries. If you are using the DD/MM/YYYY format, stick to it; switching to MM/DD/YYYY for some of the entries will contaminate the data and create problems.

Checking the uniformity of your records is quite easy. A simple inspection can reveal whether a particular value is in the required unit or not. The units you use for entering your data depend on your specific requirements. Checking for uniformity across datasets is one of the most important factors of data cleaning in data mining. 
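
A small sketch of what enforcing uniformity can look like in pandas, assuming a hypothetical table where one weight was entered in pounds and dates arrived as DD/MM/YYYY strings:

```python
import pandas as pd

# Hypothetical measurements in mixed units, plus DD/MM/YYYY date strings.
df = pd.DataFrame({
    "weight": [70.0, 154.0],                  # the second value was entered in pounds
    "unit": ["kg", "lb"],
    "signup_date": ["07/05/2022", "21/11/2022"],
})

# Convert every weight to a single unit (kilograms).
lb_rows = df["unit"] == "lb"
df.loc[lb_rows, "weight"] = df.loc[lb_rows, "weight"] * 0.453592
df["unit"] = "kg"

# Parse the day-first date strings so one date representation is used throughout.
df["signup_date"] = pd.to_datetime(df["signup_date"], dayfirst=True)
print(df)
```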

Data Cleansing Techniques

Your choice of data cleaning techniques depends on a lot of factors. First, what kind of data are you dealing with? Are they numeric values or strings? Unless you have very few values to handle, you shouldn’t expect to clean your data with just one technique.

You might need to use multiple techniques for a better result. The more data types you have to handle, the more cleansing techniques you’ll have to use. The methods we are going to discuss are some of the most common data cleaning methods in data mining. Through them, you will learn how to clean data before you start your analysis. Being familiar with all of these methods will help you rectify errors and get rid of useless data.

1. Remove Irrelevant Values

The most basic data cleaning method in data mining is the removal of irrelevant values. The first and foremost thing you should do is remove useless pieces of data from your system. Irrelevant data is any data you don’t need: it doesn’t fit the context of the problem you are trying to analyze.

For instance, if you only have to measure the average age of your sales staff, their email addresses aren’t required. Similarly, if you are checking how many customers you contacted in a month, you don’t need the data of people you reached in a prior month.

However, before you remove a particular piece of data, make sure that it is irrelevant because you might need it to check its correlated values later on (for checking the consistency). And if you can get a second opinion from a more experienced expert before removing data, feel free to do so. Make sure to only get rid of information that is irrelevant to your dataset when you are using data cleaning algorithms. 

You wouldn’t want to delete some values and regret the decision later on. But once you’re assured that the data is irrelevant, get rid of it. Getting rid of irrelevant data will make your dataset more manageable and more efficient. This is why data cleaning in data mining is so important. 
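
In pandas, removing irrelevant values usually comes down to dropping columns and filtering rows. The sketch below assumes a hypothetical contact log and the two examples above (average staff age, contacts made this month):

```python
import pandas as pd

# Hypothetical contact log; only this month's contacts matter for the analysis.
contacts = pd.DataFrame({
    "name": ["Asha", "Ben", "Chloe"],
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "age": [29, 41, 35],
    "contacted_on": pd.to_datetime(["2022-04-28", "2022-05-03", "2022-05-17"]),
})

# Drop a column that is irrelevant to the question (average age of staff).
relevant = contacts.drop(columns=["email"])

# Drop rows outside the period under study (contacts made before May 2022).
relevant = relevant[relevant["contacted_on"] >= "2022-05-01"]
print(relevant)
```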

2. Get Rid of Duplicate Values

Duplicates are similar to useless values: you don’t need them. They only increase the amount of data you have and waste your time. Duplicate values are among the most common problems found in a dataset, and you can usually get rid of them with simple searches. Duplicate values can be present in your system for several reasons.

Maybe you combined the data of multiple sources, perhaps the person submitting the data repeated a value mistakenly, or a user clicked ‘enter’ twice while filling in an online form. You should remove duplicates as soon as you find them. The process of getting rid of duplicate data is known as de-duplication, and it is one of the most important methods of data cleaning in data mining.
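
Here is a minimal de-duplication sketch with pandas, using a hypothetical customer table in which one person was entered twice:

```python
import pandas as pd

# Hypothetical records where one customer was entered twice.
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 3],
    "name": ["Asha", "Ben", "Ben", "Chloe"],
})

# Inspect the duplicates before removing them.
print(customers[customers.duplicated(keep=False)])

# De-duplication: keep the first occurrence of each fully identical row.
deduped = customers.drop_duplicates()

# Or de-duplicate on a key column when other fields may differ slightly.
deduped_by_id = customers.drop_duplicates(subset=["customer_id"], keep="first")
print(deduped_by_id)
```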

3. Avoid Typos (and similar errors)

Typos are a result of human error and can be present anywhere. You can fix typos through multiple algorithms and techniques. You can map the values and convert them into the correct spelling. Typos are essential to fix because models treat different values differently. Strings rely a lot on their spellings and cases.

‘George’ is different from ‘george’ even though they have the same spelling, because the case differs. Similarly, ‘Mike’ and ‘Mice’ are different from each other, even though they have the same number of characters. You’ll need to look for typos like these and fix them appropriately.

Another error similar to typos concerns string length. You might need to pad strings to keep them in the same format. For example, your dataset might require 5-digit numbers only. So if you have a value with only four digits, such as ‘3994’, you can add a zero at the beginning to bring it to the required length.

The padded value ‘03994’ means the same thing, but it keeps your data uniform. An additional string error is stray white space. Make sure you remove it from your strings to keep them consistent.
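
The string fixes described above (case, misspellings, padding, white space) can be sketched in a few lines of pandas; the names, codes, and correction table below are made up for the example:

```python
import pandas as pd

# Hypothetical string columns with case, spelling, whitespace, and padding issues.
df = pd.DataFrame({
    "name": ["George ", "george", "Mice"],
    "zip_code": ["3994", "02139", "501"],
})

# Normalise case and strip stray white space so 'George ' and 'george' match.
df["name"] = df["name"].str.strip().str.title()

# Map known misspellings to their correct form (a hypothetical lookup table).
df["name"] = df["name"].replace({"Mice": "Mike"})

# Pad numeric codes to a fixed width: '3994' becomes '03994'.
df["zip_code"] = df["zip_code"].str.zfill(5)
print(df)
```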

4. Convert Data Types

Data types should be uniform across your dataset. A string can’t be numeric, nor can a numeric be a Boolean. Numbers are the most common type of data that has to be converted, because a lot of the time they are written out as words but have to appear as digits when they are processed. This is especially true for dates: if a date is written as 7th May 2022, you have to convert it to 07/05/2022. There are several things you should keep in mind when it comes to converting data types:

  • Keep numeric values as numerics
  • Check whether a numeric is a string or not. If you entered it as a string, it would be incorrect. 
  • If you can’t convert a specific data value, you should enter ‘NA value’ or something of this sort. Make sure you add a warning as well to show that this particular value is wrong.
  • Keep the uniformity of the data. Make sure that all your strings and numerics follow a specific format so that there is no confusion later. 
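
A minimal sketch of these conversions in pandas, assuming hypothetical raw columns in which numbers and dates arrived as text; values that cannot be converted are coerced to NaN/NaT and flagged with a warning column:

```python
import pandas as pd

# Hypothetical raw columns where numbers and dates arrived as text.
df = pd.DataFrame({
    "quantity": ["12", "seven", "30"],
    "order_date": ["7 May 2022", "9 May 2022", "not recorded"],
})

# Convert to numeric; unparseable values become NaN instead of staying as strings.
df["quantity"] = pd.to_numeric(df["quantity"], errors="coerce")

# Convert to datetime the same way; unparseable values become NaT.
df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")

# Flag rows whose values could not be converted, as a warning for later review.
df["conversion_warning"] = df["quantity"].isna() | df["order_date"].isna()
print(df.dtypes)
print(df)
```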

5. Take Care of Missing Values

There would always be a piece of missing data. You can’t avoid it. So you should know how to handle them to keep your data clean and free from errors. A particular column in your dataset may have too many missing values. In that case, it would be wise to get rid of the entire column because it doesn’t have enough data to work with.

Point to note: You shouldn’t ignore missing values.

Ignoring missing values can be a significant mistake because they will contaminate your data, and you won’t get accurate results. There are multiple ways to deal with missing values. 

Imputing Missing Values:

You can impute missing values, which means filling them in with an approximate value. You can use linear regression or the median to calculate the missing value. However, this method has its drawbacks, because you can’t be sure the imputed value is the real one.

Another method of imputing missing values is to copy the data from a similar record. This method is called ‘hot-deck imputation’. You add a value to your current record while respecting constraints such as data type and range.

Highlighting Missing Values:

Imputation isn’t always the best measure to take care of missing values. Many experts argue that it only leads to more mixed results as they are not ‘real’. So, you can take another approach and inform the model that the data is missing. Telling the model (or the algorithm) that the specific value is unavailable can be a piece of information as well. 

If random reasons aren’t responsible for your missing values, it can be beneficial to highlight or flag them. For example, your records may not have many answers to a specific question of your survey because your customer didn’t want to answer it in the first place. 

If the missing value is numeric, you can use 0. Just make sure that you ignore these values during statistical analysis. On the other hand, if the missing value is a categorical value, you can fill ‘missing’. 
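
Both approaches, imputing and highlighting, are easy to sketch in pandas. The survey-style columns below are assumptions made for the example, and the median is just one simple imputation choice:

```python
import pandas as pd

# Hypothetical survey data with gaps in a numeric and a categorical column.
df = pd.DataFrame({
    "age": [25, None, 41, 38],
    "favourite_genre": ["fiction", None, "history", None],
})

# Imputation: fill the numeric gap with the median of the known values.
df["age_imputed"] = df["age"].fillna(df["age"].median())

# Highlighting: keep a missing-indicator flag instead of (or alongside) guessing.
df["age_missing"] = df["age"].isna()

# For categorical gaps, an explicit 'missing' label keeps the information visible.
df["favourite_genre"] = df["favourite_genre"].fillna("missing")
print(df)
```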

6. Uniformity of Language 

Another important factor to be mindful of while cleaning data is that every piece of text is written in the same language. Most of the time, the NLP models used to analyze data can only process one language; these monolingual systems cannot handle input in multiple languages. So you need to make sure that every bit of data is written in the expected language.
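
As a rough sketch of checking language uniformity, the example below uses the third-party langdetect package (one of several language-identification libraries; its use here is an assumption, not something prescribed by this article) to flag entries that are not in English:

```python
import pandas as pd
from langdetect import detect  # assumes: pip install langdetect

# Hypothetical free-text entries, one of which is not in English.
reviews = pd.Series([
    "The delivery was quick and the product works well.",
    "La livraison était rapide et le produit fonctionne bien.",
])

# Detect the language of each entry and flag anything that is not English.
languages = reviews.apply(detect)
print(languages)                    # likely 'en' and 'fr'
print(reviews[languages != "en"])   # entries to translate or exclude
```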

Summary

We hope you enjoyed going through our detailed walk-through of data cleaning techniques. There was undoubtedly a lot to learn. 

If you have any questions regarding data cleansing, feel free to ask our experts. 

If you are curious to learn about data science, check out IIIT-B & upGrad’s Executive PG Programme in Data Science which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 with industry mentors, 400+ hours of learning and job assistance with top firms.

Why is inconsistency in data a problem?

Inconsistent data contains values that contradict each other, such as a customer recorded as 15 years old in one place and listed as a senior citizen in another. Since only one of the conflicting values can be true, analyses built on such data produce unreliable results, and you have to spend extra effort cross-checking other systems, more recent records, or the original source to resolve the contradiction.

How often should your data be cleaned?

The frequency with which you should clean your data depends entirely on your business requirements. A large company acquires a lot of data quickly, so data cleansing may be required every three to six months. Smaller firms with less data are advised to clean their data at least once a year. It's also advisable to plan a data cleanse whenever you suspect that dirty data is costing you money or negatively impacting your productivity, efficiency, or insights.

Is Tableau suitable for data cleansing?

Tableau Prep comes with a number of cleaning procedures that you can use to clean and shape your data right away. Cleaning up dirty data makes it simpler to integrate and analyze your data, as well as for others to comprehend your data when you share it.
