
Cross Validation in Python: Everything You Need to Know

Last updated: 14th Feb, 2020

In Data Science, validation is one of the most important techniques used to evaluate the stability of an ML model and how well it would generalize to new data. Validation ensures that the model picks up the relevant patterns from the dataset while ignoring the noise. Essentially, the goal of validation techniques is to make sure ML models strike a good bias-variance trade-off.

Today, we’re going to discuss one such model validation technique at length – Cross-Validation.

What is Cross-Validation?

Cross-Validation is a validation technique designed to assess how the results of a statistical analysis (model) will generalize to an independent dataset. It is primarily used in scenarios where prediction is the main aim and the user wants to estimate how accurately a predictive model will perform in real-world situations.

Cross-Validation holds out part of the dataset to test the model during the training phase, which helps minimize problems like overfitting and underfitting. However, you must remember that both the validation and the training set must be drawn from the same distribution, or else problems will arise in the validation phase.


Benefits of Cross-Validation

  • It helps evaluate the quality of your model.
  • It helps to reduce/avoid problems of overfitting and underfitting.
  • It lets you select the model that will deliver the best performance on unseen data.


What are Overfitting and Underfitting?

Overfitting refers to the condition when a model becomes too sensitive to the training data and ends up capturing noise and random patterns that do not generalize to unseen data. While such a model usually performs well on the training set, its performance suffers on the test set.

Underfitting refers to the problem when the model fails to capture enough patterns in the dataset, delivering poor performance on both the training and the test set.

Between these two extremes, the ideal model is one that performs equally well on both the training and test sets.
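To make this concrete, here’s a minimal sketch of how a train/test accuracy gap flags overfitting (assuming scikit-learn; the digits dataset and decision tree are purely illustrative stand-ins):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained tree can memorize the training set
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("Train accuracy:", model.score(X_train, y_train))  # near-perfect
print("Test accuracy:", model.score(X_test, y_test))  # noticeably lower => overfitting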

 


Cross-Validation: Different Validation Strategies

Validation strategies are categorized based on the number of splits done in a dataset. Now, let’s look at the different Cross-Validation strategies in Python.

1. Validation set 

This validation approach divides the dataset into two equal parts – 50% of the dataset is reserved for validation and the remaining 50% for model training. Since the model is trained on only half of the given dataset, there is always a risk of missing relevant and meaningful information hidden in the other half. As a result, this approach generally produces a model with higher bias.


Python code:

from sklearn.model_selection import train_test_split

train, validation = train_test_split(data, test_size=0.50, random_state=5)  # data is the full dataset
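To see the approach end to end, here’s a minimal sketch (assuming scikit-learn; the iris dataset and logistic regression model are illustrative) that trains on one half and scores on the other:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)

# Hold out 50% of the data for validation
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.50, random_state=5)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Validation accuracy:", model.score(X_val, y_val))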

2. Train/Test split 

In this validation approach, the dataset is split into two parts – a training set and a test set. The two sets must not overlap (if they do, the evaluation is faulty), so it is crucial to ensure that the dataset contains no duplicated samples. The train/test split strategy also lets you retrain your model on the whole dataset afterwards without altering any of its hyperparameters.


However, this approach has one significant limitation – the measured performance depends heavily on how the data is split. For instance, if the split isn’t random, or if one subset holds only part of the complete information, the evaluation will be misleading. Moreover, you cannot control which data points land in the validation set, so different splits produce different results. Hence, the train/test split strategy should only be used when you have enough data at hand.

Python code:

>>> import numpy as np
>>> from sklearn.model_selection import train_test_split
>>> X, y = np.arange(10).reshape((5, 2)), range(5)
>>> X
array([[0, 1],
       [2, 3],
       [4, 5],
       [6, 7],
       [8, 9]])
>>> list(y)
[0, 1, 2, 3, 4]
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)


3. K-fold 

As seen in the previous two strategies, there is the possibility of missing out on important information in the dataset, which increases the probability of bias-induced error or overfitting. This calls for a method that reserves abundant data for model training while also leaving sufficient data for validation.

Enter the K-fold validation technique. In this strategy, the dataset is split into ‘k’ subsets or folds; in each round, k-1 folds are reserved for model training and the remaining fold is used for validation, so every fold takes exactly one turn as the validation set. The performance scores are then averaged across the individual folds, and once the model is finalized, you can test it on a held-out test set.


Here, each data point appears in the validation set exactly once while appearing in the training set k-1 times. Since most of the data is used for fitting, the risk of underfitting drops significantly. And because every data point is also used for validation once, the estimate is far less sensitive to a single unlucky split, reducing the risk of an overly optimistic (overfit) evaluation.


The K-fold strategy is best for instances where you have a limited amount of data, or where the quality of the folds – or the optimal model parameters across them – differs substantially between splits.

Python code:

import numpy as np
from sklearn.model_selection import KFold  # import KFold

X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]])  # create a feature array
y = np.array([1, 2, 3, 4])  # create a target array
kf = KFold(n_splits=2)  # define the split – into 2 folds
kf.get_n_splits(X)  # returns the number of splitting iterations in the cross-validator
print(kf)
# KFold(n_splits=2, random_state=None, shuffle=False)

for train_index, test_index in kf.split(X):  # iterate over the folds
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]
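To turn the folds into a single performance estimate, here’s a minimal sketch (the iris dataset and logistic regression model are illustrative assumptions) that trains on k-1 folds, scores on the held-out fold, and averages the k scores:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = load_iris(return_X_y=True)
kf = KFold(n_splits=5, shuffle=True, random_state=5)

scores = []
for train_index, test_index in kf.split(X):
    model = LogisticRegression(max_iter=1000).fit(X[train_index], y[train_index])
    scores.append(model.score(X[test_index], y[test_index]))  # score on the held-out fold

print("Mean accuracy over 5 folds:", np.mean(scores))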



4. Leave one out

Leave-one-out cross-validation (LOOCV) is a special case of K-fold in which k equals the number of samples in the dataset. Here, only one data point is reserved for the test set and the rest of the dataset forms the training set; using n-1 samples for training and 1 sample for testing, the procedure iterates through every sample in the dataset. It is most useful when very little data is available.


Since this approach uses all data points, the bias is typically low. However, as the validation process is repeated ‘n’ times (n = number of data points), execution time grows considerably. Another notable constraint of the method is that testing the model against a single data point at a time can lead to higher variance in the estimate of model effectiveness; if that data point is an outlier, the variance rises even further.

 Python code:  

>>> import numpy as np
>>> from sklearn.model_selection import LeaveOneOut
>>> X = np.array([[1, 2], [3, 4]])
>>> y = np.array([1, 2])
>>> loo = LeaveOneOut()
>>> loo.get_n_splits(X)
2
>>> print(loo)
LeaveOneOut()
>>> for train_index, test_index in loo.split(X):
...     print("TRAIN:", train_index, "TEST:", test_index)
...     X_train, X_test = X[train_index], X[test_index]
...     y_train, y_test = y[train_index], y[test_index]
...     print(X_train, X_test, y_train, y_test)
TRAIN: [1] TEST: [0]
[[3 4]] [[1 2]] [2] [1]
TRAIN: [0] TEST: [1]
[[1 2]] [[3 4]] [1] [2]

5. Stratification


Typically, for the train/test split and K-fold, the data is shuffled to create random training and validation splits. Because the shuffle is random, the target distribution can differ between folds. Stratification, by contrast, controls the target distribution while splitting the data, keeping it consistent across folds.

In this process, the data is rearranged across folds in a way that ensures each fold is representative of the whole dataset. So, if you are dealing with a binary classification problem where each class makes up 50% of the data, stratification arranges the data so that each fold contains roughly half of the instances of each class.


The stratification process is best suited for small and unbalanced datasets, and for multiclass classification problems.

Python code: 

from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=5)

# X is the feature set and y is the target
for train_index, val_index in skf.split(X, y):
    print("Train:", train_index, "Validation:", val_index)
    X_train, X_val = X[train_index], X[val_index]
    y_train, y_val = y[train_index], y[val_index]
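To verify that stratification preserves class proportions, here’s a small sketch (the imbalanced toy target below is a made-up example) that counts the classes in each validation fold:

import numpy as np
from sklearn.model_selection import StratifiedKFold

X = np.zeros((12, 2))  # dummy features
y = np.array([0] * 9 + [1] * 3)  # imbalanced target: 75% class 0, 25% class 1

skf = StratifiedKFold(n_splits=3)
for train_index, val_index in skf.split(X, y):
    # every validation fold keeps the dataset's 3:1 class ratio
    print("Fold class counts:", np.bincount(y[val_index]))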


When to Use Each of These Five Cross-Validation Strategies?

As we mentioned before, each Cross-Validation technique has its own use cases, and hence performs best when applied to the right scenario. For instance, if you have enough data, and the scores and optimal parameters (of the model) are likely to be similar across different splits, the train/test split approach will work excellently.

However, if the scores and optimal parameters vary for different splits, the K-fold technique will be best. For instances where you have too little data, the LOOCV approach works best, whereas, for small and unbalanced datasets, stratification is the way to go. 
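In practice, scikit-learn’s cross_val_score helper wraps most of these strategies in a single call; here’s a minimal sketch (the iris dataset and logistic regression estimator are illustrative):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# cv accepts an integer (stratified k-fold for classifiers) or any splitter object
print(cross_val_score(model, X, y, cv=5).mean())  # stratified 5-fold
print(cross_val_score(model, X, y, cv=KFold(n_splits=5)).mean())  # plain 5-fold
print(cross_val_score(model, X, y, cv=LeaveOneOut()).mean())  # leave-one-out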

We hope this detailed article helped you gain an in-depth idea of Cross-Validation in Python.

If you are curious to learn about data science, check out IIIT-B & upGrad’s Executive PG Program in Data Science, which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 sessions with industry mentors, 400+ hours of learning, and job assistance with top firms.


Rohit Sharma

Blog Author
Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

Frequently Asked Questions (FAQs)

1. What is the ‘permutation test’ in ML?

A permutation test assesses the statistical significance of a model by computing a test statistic on the original dataset and then on many random permutations of that data. If the model is significant, the original test statistic should fall into one of the tails of the null-hypothesis distribution. To find the p-value, simply count the permutation statistics that are as extreme as or more extreme than the original statistic, and divide that count by the total number of statistics computed. Given that the null hypothesis is true, the p-value is the probability of obtaining a result at least as extreme as the test statistic.
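scikit-learn packages this procedure as permutation_test_score; here’s a minimal sketch (the iris dataset and logistic regression estimator are illustrative):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score

X, y = load_iris(return_X_y=True)

# refits the model on many shuffled copies of y to build the null distribution
score, perm_scores, p_value = permutation_test_score(
    LogisticRegression(max_iter=1000), X, y, n_permutations=100, cv=5)
print("Score:", score, "p-value:", p_value)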

2. What are the disadvantages of cross validation in machine learning?

1. Cross Validation significantly lengthens the training period: instead of fitting your model on a single training set, you now fit it once per training split.

2. In most predictive-modelling problems, the structure you’re studying evolves over time, so training and validation sets drawn from different periods may differ.

3. Cross Validation requires a lot of computing power.

3. How can I detect overfitting in ML models?

Detecting overfitting is nearly impossible before you evaluate the model on unseen data, since poor generalization is the defining symptom of overfitting. The standard approach is therefore to divide the data into separate training and test subsets and compare the accuracy achieved on each: if the model performs noticeably better on the training set than on the test set, it is likely overfitting.

Another suggestion is to begin with a very basic ML model to act as a baseline. Later, when you test more complex algorithms, you’ll have a benchmark against which to judge whether the added complexity is worthwhile.

Validation measures such as accuracy and loss can also be used to detect overfitting: when a model starts to overfit, its validation metrics typically improve only up to a point, then plateau or begin to worsen even as the training metrics keep improving.
