Top Heart Disease Prediction Project in 2021

Welcome to this step-by-step tutorial of our heart disease prediction project. Here, you’ll create a machine learning model that predicts whether a patient can be diagnosed with heart disease or not. 

You should be familiar with the basics of machine learning and data analysis to work on this project. This project requires you to be familiar with multiple ML algorithms, including Random Forest, K-NN (K-nearest neighbour), and many others. 

We’ll perform data wrangling and filtering, and test six different ML algorithms to find which one offers the best results for our dataset. Let’s begin: 

The Objective of the Heart Disease Prediction Project

The goal of our heart disease prediction project is to determine if a patient should be diagnosed with heart disease or not, which is a binary outcome, so:

Positive result = 1, the patient will be diagnosed with heart disease.

Negative result = 0, the patient will not be diagnosed with heart disease. 

We have to find which classification model has the greatest accuracy and identify correlations in our data. Finally, we also have to determine which features are the most influential in our heart disease diagnosis. 


We use the following 13 features (X) to determine our predictor (Y):

  1. Age.
  2. Sex: 1 = Male, 0 = Female.
  3. (cp) chest pain type (4 values, ordinal): typical angina, atypical angina, non-anginal pain, asymptomatic.
  4. (trestbps) resting blood pressure.
  5. (chol) serum cholesterol.
  6. (fbs) – fasting blood sugar > 120 mg/dl. 
  7. (restecg) – resting electrocardiography results.
  8. (thalach) – maximum heart rate achieved. 
  9. (exang) – exercise-induced angina.
  10. (oldpeak) – ST depression caused by exercise relative to rest.
  11. (slope) – the slope of the peak exercise ST segment.
  12. (ca) – the number of major vessels colored by fluoroscopy.
  13. (thal) – a blood disorder called thalassemia (Ordinal), 3 = normal, 6 = fixed defect, 7 = reversible defect.

Step #1: Data Wrangling

We’ll first look at the dataset we are working with by converting it into a simpler, more understandable format, which will help us use the data more effectively. 

import numpy as np

import pandas as pd

import seaborn as sns

import matplotlib.pyplot as plt

filePath = '/Users/upgrad/Downloads/datasets-33180-43520-heart.csv'

data = pd.read_csv(filePath)

data.head()


age sex cp trestbps chol fbs restecg thalach exang oldpeak slope ca thal target
0 63 1 3 145 233 1 0 150 0 2.3 0 0 1 1
1 37 1 2 130 250 0 1 187 0 3.5 0 0 2 1
2 41 0 1 130 204 0 0 172 0 1.4 2 0 2 1
3 56 1 1 120 236 0 1 178 0 0.8 2 0 2 1
4 57 0 0 120 354 0 1 163 1 0.6 2 0 2 1

The code above displayed our data in tabular form; next, we’ll use the following code for further data wrangling:

print("(Rows, columns): " + str(data.shape))


The above code will show the total number of rows and columns in our dataset, which are 303 and 14 respectively. Now we will find the number of unique values for every variable:
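The snippet for this step is missing from the article; a plausible candidate, assuming pandas, is `DataFrame.nunique`. The frame below is a tiny hypothetical stand-in — on the real dataset you would simply call `data.nunique(axis=0)`:

```python
import pandas as pd

# Small stand-in frame (hypothetical values) just to demonstrate the call.
data = pd.DataFrame({
    "sex": [1, 0, 1, 1],
    "cp": [3, 2, 1, 3],
    "target": [1, 1, 0, 1],
})

# Count the distinct values in each column.
unique_counts = data.nunique(axis=0)
print(unique_counts)
```

Binary columns such as sex and target should report exactly two unique values, which is a quick way to spot mislabeled categories.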


Similarly, the following function summarizes the mean, count, standard deviation, minimum and maximum for the numeric variables:
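Here too the snippet is missing; the standard pandas call for this summary is `DataFrame.describe`. The frame below uses hypothetical values so the sketch is self-contained — on the real dataset you would call `data.describe()`:

```python
import pandas as pd

# Hypothetical stand-in values for two numeric columns.
data = pd.DataFrame({"age": [63, 37, 41, 56], "chol": [233, 250, 204, 236]})

# describe() reports count, mean, std, min, quartiles, and max per column.
summary = data.describe()
print(summary)
```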


Step #2: Conducting EDA 

Now that we have completed data wrangling, we can perform exploratory data analysis. Here are the primary tasks we will perform in this stage of our heart disease prediction project: 

Finding Correlations

We’ll create a correlation matrix that helps us see the correlations between different variables:

corr = data.corr()


sns.heatmap(corr, xticklabels=corr.columns, yticklabels=corr.columns, annot=True, cmap=sns.diverging_palette(220, 20, as_cmap=True))
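To read a single entry of the matrix numerically rather than off the heatmap, you can index the correlation frame directly. The toy frame below uses hypothetical values to keep the example self-contained:

```python
import pandas as pd

# Hypothetical stand-in values for two of the dataset's features.
df = pd.DataFrame({
    "thalach": [150, 187, 172, 178, 163],
    "age":     [63, 37, 41, 56, 57],
})

# corr() returns pairwise Pearson correlations as a DataFrame.
corr = df.corr()
print(corr.loc["thalach", "age"])  # correlation between the two features
```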

To find immediate correlations between features, we can also create pairplots. We’ll use small pairplots with only the continuous variables to look deeper into the relationships:

subData = data[['age','trestbps','chol','thalach','oldpeak']]

sns.pairplot(subData)


Using Violin and Box Plots

Violin and box plots let us see the basic statistics and distribution of our data. We can use them to compare the distribution of a specific variable across different categories, and they help us identify outliers as well. Use the following code:


sns.violinplot(x='target', y='oldpeak', hue='sex', inner='quartile', data=data)

plt.title("ST Depression Level vs. Heart Disease", fontsize=20)

plt.xlabel("Heart Disease Target", fontsize=16)

plt.ylabel("ST Depression Induced by Exercise Relative to Rest", fontsize=16)

In this first violin plot, we find that the positive patients have a lower median ST depression than the negative patients. Next, we’ll use a box plot to compare thalach (maximum heart rate achieved) and heart disease. 


sns.boxplot(x='target', y='thalach', hue='sex', data=data)

plt.title("Thalach Level vs. Heart Disease", fontsize=20)

plt.xlabel("Heart Disease Target", fontsize=16)

plt.ylabel("Thalach Level", fontsize=16)

Here, the positive patients had a higher median thalach level in comparison to negative patients. 

Filtering Data

Now we’ll filter the data according to positive and negative heart disease patients. We’ll start with filtering data by Positive heart disease patients:

pos_data = data[data['target']==1]


Similarly, we’ll filter the data according to negative heart disease patients:

neg_data = data[data['target']==0]
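The same split can also be expressed group-wise with `groupby`, which is handy for comparing statistics across the two classes at once. The frame below uses hypothetical values to keep the sketch self-contained:

```python
import pandas as pd

# Tiny stand-in frame (hypothetical values); the real project uses the full CSV.
data = pd.DataFrame({
    "oldpeak": [2.3, 3.5, 1.4, 0.8, 0.6, 0.2],
    "target":  [0, 0, 1, 1, 1, 1],
})

pos_data = data[data["target"] == 1]  # positive heart disease patients
neg_data = data[data["target"] == 0]  # negative heart disease patients

# Equivalent group-wise view: mean ST depression per target class.
print(data.groupby("target")["oldpeak"].mean())
```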


Step #3: Using Machine Learning Algorithms


Here, we’ll prepare the data for training by assigning the features to X and the last column to the predictor Y:

X = data.iloc[:, :-1].values

Y = data.iloc[:, -1].values

Then, we’ll split the data into two sets, training set and test set: 

from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size = 0.2, random_state = 1)

Finally, we’ll standardize the data so each feature has a mean of 0 and a standard deviation of 1:

from sklearn.preprocessing import StandardScaler

sc = StandardScaler()

x_train = sc.fit_transform(x_train)

x_test = sc.transform(x_test)
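As a sanity check on the scaling step, the sketch below (synthetic data, hypothetical column count) confirms that `fit_transform` centres each training column at mean 0 with standard deviation 1, while the test set reuses the training statistics via `transform`:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the real feature matrices.
rng = np.random.default_rng(1)
x_train = rng.normal(loc=50.0, scale=10.0, size=(100, 3))
x_test = rng.normal(loc=50.0, scale=10.0, size=(20, 3))

sc = StandardScaler()
x_train_s = sc.fit_transform(x_train)  # learn mean/std from training data only
x_test_s = sc.transform(x_test)        # reuse the training statistics

print(x_train_s.mean(axis=0).round(6))  # each column centred at ~0
print(x_train_s.std(axis=0).round(6))   # each column scaled to ~1
```

Fitting the scaler on the training set only avoids leaking information from the test set into the model.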

Training the Model

In this section, we’ll use multiple machine learning algorithms and find the one that offers the highest accuracy:

1st Model: Logistic Regression

from sklearn.metrics import classification_report

from sklearn.linear_model import LogisticRegression

model1 = LogisticRegression(random_state=1) # get instance of model

model1.fit(x_train, y_train) # Train/Fit model

y_pred1 = model1.predict(x_test) # get y predictions

print(classification_report(y_test, y_pred1)) # output accuracy

The accuracy of this model was 74%.

2nd Model: K-NN (K-Nearest Neighbours)

from sklearn.metrics import classification_report

from sklearn.neighbors import KNeighborsClassifier

model2 = KNeighborsClassifier() # get instance of model

model2.fit(x_train, y_train) # Train/Fit model

y_pred2 = model2.predict(x_test) # get y predictions

print(classification_report(y_test, y_pred2)) # output accuracy

The accuracy of this model was 75%. 

3rd Model: Support Vector Machine (SVM)

from sklearn.metrics import classification_report

from sklearn.svm import SVC

model3 = SVC(random_state=1) # get instance of model

model3.fit(x_train, y_train) # Train/Fit model

y_pred3 = model3.predict(x_test) # get y predictions

print(classification_report(y_test, y_pred3)) # output accuracy

The accuracy of this model was 75%. 

4th Model: Naive Bayes Classifier

from sklearn.metrics import classification_report

from sklearn.naive_bayes import GaussianNB

model4 = GaussianNB() # get instance of model

model4.fit(x_train, y_train) # Train/Fit model

y_pred4 = model4.predict(x_test) # get y predictions

print(classification_report(y_test, y_pred4)) # output accuracy

The accuracy of this model was 77%. 

5th Model: Random Forest

from sklearn.metrics import classification_report

from sklearn.ensemble import RandomForestClassifier

model6 = RandomForestClassifier(random_state=1) # get instance of model

model6.fit(x_train, y_train) # Train/Fit model

y_pred6 = model6.predict(x_test) # get y predictions

print(classification_report(y_test, y_pred6)) # output accuracy

This model had the highest accuracy of 80%. 

6th Model: XGBoost

from sklearn.metrics import classification_report

from xgboost import XGBClassifier

model7 = XGBClassifier(random_state=1) # get instance of model

model7.fit(x_train, y_train) # Train/Fit model

y_pred7 = model7.predict(x_test)

print(classification_report(y_test, y_pred7))

The accuracy of this model was 69%. 

After testing different ML algorithms, we found that the best one was Random Forest as it gave us the optimal accuracy of 80%. 
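The per-model boilerplate above can also be condensed into a single loop. The sketch below is a minimal, self-contained version: `make_classification` stands in for the real heart dataset, and only two of the six models are shown:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in data shaped like the heart dataset (13 features).
X, Y = make_classification(n_samples=300, n_features=13, random_state=1)
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=1)

sc = StandardScaler()
x_train = sc.fit_transform(x_train)
x_test = sc.transform(x_test)

models = {
    "Logistic Regression": LogisticRegression(random_state=1),
    "Random Forest": RandomForestClassifier(random_state=1),
}

# Fit each model and record its test-set accuracy.
scores = {}
for name, model in models.items():
    model.fit(x_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(x_test))

for name, acc in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {acc:.2f}")
```

Keeping the models in a dictionary makes it easy to add or swap candidates without duplicating the fit/predict/report pattern.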

Keep in mind that this dataset is small (303 rows), so an accuracy much higher than this could be a sign of overfitting rather than genuine predictive power. Treat a result around 80% as a reasonable target here. 

Step #4: Finding Feature Score

Here, we’ll find the Feature Score, which helps us make important decisions by telling us which feature was the most useful for our model:

# get importance

importance = model6.feature_importances_

# summarize feature importance

for i,v in enumerate(importance):

   print('Feature: %0d, Score: %.5f' % (i,v))

We found that the top four features were chest pain type (cp), maximum heart rate achieved (thalach), number of major vessels (ca) and ST depression caused by exercise relative to rest (oldpeak). 
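The raw importance scores are indexed by position, so mapping them back to column names makes the ranking easier to read. The sketch below trains a stand-in RandomForestClassifier on synthetic data; in the real project you would use model6 and the actual DataFrame columns:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Column names follow the heart dataset; the data itself is synthetic.
cols = ["age", "sex", "cp", "trestbps", "chol", "fbs", "restecg",
        "thalach", "exang", "oldpeak", "slope", "ca", "thal"]
X, Y = make_classification(n_samples=300, n_features=13, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, Y)

# Pair each importance score with its column name and sort descending.
ranked = pd.Series(model.feature_importances_, index=cols).sort_values(ascending=False)
print(ranked)
```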


Congratulations, you have now successfully completed the heart disease prediction project. We had 13 features, out of which we found that the most important ones were chest pain type and maximum heart rate achieved. 

We tested six different ML algorithms and found that the most accurate was Random Forest. As a next step, you could tune its hyperparameters or use cross-validation to see whether the accuracy holds up. 

On the other hand, if you want to learn more about machine learning and AI, we recommend checking out our AI courses. You will study directly from industry experts and work on industry projects that let you test your knowledge. Do check them out if you’re interested in a career in machine learning and AI. 

If you’re interested to learn more about machine learning, check out IIIT-B & upGrad’s Executive PG Program in Machine Learning & AI which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects & job assistance with top firms.
