Welcome to our credit card fraud detection project. Today, we’ll use Python and machine learning to detect fraud in a dataset of credit card transactions. Although we share the code for every step, it’s best to understand how each step works before implementing it yourself.
Let’s begin!
Credit Card Fraud Detection Project With Steps
In our credit card fraud detection project, we’ll use Python, one of the most popular programming languages available. Our solution will detect whether someone bypasses our system’s security and makes an illegitimate transaction.
The dataset contains credit card transactions. Most of its features (V1 through V28) are the result of a PCA transformation; the remaining ones are ‘Amount’, ‘Time’, and ‘Class’, where ‘Amount’ is the monetary value of each transaction, ‘Time’ is the seconds elapsed between the first transaction and the respective one, and ‘Class’ shows whether a transaction is legitimate or not.
In ‘Class’, value 1 represents a fraudulent transaction, and value 0 represents a valid one.
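If you’d like a quick look at the data before diving in, a short optional snippet like this (assuming you’ve downloaded the dataset to the path used later in this tutorial) shows its shape and first few rows:
import pandas as pd
df = pd.read_csv('creditcardfraud/creditcard.csv')
print(df.shape)   # (284807, 31) for the standard Kaggle release
print(df.head())  # V1-V28 are PCA components, plus 'Time', 'Amount', 'Class'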
You can get the dataset and the entire source code here.
Step 1: Import Packages
We’ll start our credit card fraud detection project by importing the required packages. Create a ‘main.py’ file and add these imports:
import numpy as np
import pandas as pd
import sklearn
from scipy.stats import norm
from scipy.stats import multivariate_normal
from sklearn.preprocessing import MinMaxScaler
import matplotlib.pyplot as plt
import seaborn as sns
Step 2: Look for Errors
Before we use the dataset, we should look for errors and missing values in it. Missing values can cause the model to produce faulty results, rendering it unreliable. Hence, we’ll read the dataset and check for missing values:
df = pd.read_csv('creditcardfraud/creditcard.csv')

# check for missing values
print("missing values:", df.isnull().values.any())
We found no missing values in this dataset, so we can proceed to the next step.
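If this check ever prints True on your copy of the data, a per-column breakdown (a small optional sketch) will show where the gaps are:
# count missing values per column (all zeros for this dataset)
print(df.isnull().sum())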
Step 3: Visualization
In this step of our credit card fraud detection project, we’ll visualize our data. Visualization helps in understanding what our data shows and reveals any patterns which we might have missed. Let’s create a plot of our dataset:
# plot counts of normal and fraud transactions
# (df['Class'].value_counts replaces the now-removed pd.value_counts)
count_classes = df['Class'].value_counts(sort=True)
count_classes.plot(kind='bar', rot=0)
plt.title("Distributed Transactions")
plt.xticks(range(2), ['Normal', 'Fraud'])
plt.xlabel("Class")
plt.ylabel("Frequency")
plt.show()
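To put exact numbers on the imbalance visible in the bar chart, you can print the class counts and the fraud ratio (an optional sanity check; on the standard Kaggle dataset, fraud makes up roughly 0.17% of all transactions):
counts = df['Class'].value_counts()
print(counts)
print('fraud ratio: {:.4%}'.format(counts[1] / counts.sum()))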
In our plot, we found that the data is highly imbalanced: valid transactions vastly outnumber fraudulent ones. This means we can’t naively use supervised learning algorithms, as the resulting model would overfit to the majority class. Furthermore, we haven’t yet figured out the best method to solve our problem, so we’ll perform more visualization. Use the following to plot the heatmap:
# heatmap
sns.heatmap(df.corr(), vmin=-1)
plt.show()
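Since V1–V28 come from PCA, they are nearly uncorrelated with one another, which is why the heatmap is mostly flat off the diagonal. If you’re curious which features relate most strongly to the label, this optional snippet ranks the absolute correlations with ‘Class’:
corr_with_class = df.corr()['Class'].abs().sort_values(ascending=False)
print(corr_with_class.head(10))  # 'Class' itself will rank first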
Now, we’ll create data distribution graphs to help us understand how each feature is distributed:
fig, axs = plt.subplots(6, 5, squeeze=False)
for i, ax in enumerate(axs.flatten()):
    ax.set_facecolor('xkcd:charcoal')
    ax.set_title(df.columns[i])
    # note: distplot is deprecated in recent seaborn releases,
    # but it is kept here because it supports fitting a normal curve
    sns.distplot(df.iloc[:, i], ax=ax, fit=norm,
                 color="#DC143C", fit_kws={"color": "#4e8ef5"})
    ax.set_xlabel('')
fig.tight_layout(h_pad=-1.5, w_pad=-1.5)
plt.show()
With the data distribution graphs, we found that nearly every feature follows a Gaussian distribution, except ‘Time’.
So we’ll use a multivariate Gaussian distribution to detect fraud. Since the ‘Time’ feature comes from a bimodal (not Gaussian) distribution, we’ll discard it. Moreover, our visualization revealed that ‘Time’ doesn’t have extreme values the way the other features do, which is another reason to discard it.
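For reference, the multivariate Gaussian density we’ll rely on is p(x) = exp(−½ (x − μ)ᵀ Σ⁻¹ (x − μ)) / √((2π)ᵏ |Σ|), where μ is the mean vector, Σ the covariance matrix, and k the number of features. Here’s a minimal NumPy sketch of that formula, purely to make the math concrete; in the project itself we’ll call scipy’s multivariate_normal instead:
def gaussian_pdf(x, mu, sigma):
    # density of a k-dimensional Gaussian evaluated at the point x
    k = mu.shape[0]
    diff = x - mu
    norm_const = 1.0 / np.sqrt(((2 * np.pi) ** k) * np.linalg.det(sigma))
    return norm_const * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)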
Add the following code to set the ‘Class’ labels aside, drop the ‘Time’ and ‘Amount’ columns, and scale the remaining features:
classes = df['Class']
df.drop(['Time', 'Class', 'Amount'], axis=1, inplace=True)
cols = df.columns  # preserve the original column order for relabelling
scaler = MinMaxScaler()
df = scaler.fit_transform(df)
df = pd.DataFrame(data=df, columns=cols)
df = pd.concat([df, classes], axis=1)
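As a quick optional check, you can confirm that every scaled feature now sits in the [0, 1] range:
print(df.drop('Class', axis=1).describe().loc[['min', 'max']])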
Step 4: Splitting the Dataset
Create a ‘functions.py’ file. Here, we’ll add functions to implement the different stages of our algorithm. However, before we add those functions, let’s split our dataset into three sets: a training set (normal transactions only), a validation set, and a test set.
import pandas as pd
import numpy as np

def train_validation_splits(df):
    # fraud transactions
    fraud = df[df['Class'] == 1]
    # normal transactions
    normal = df[df['Class'] == 0]
    print('normal:', normal.shape[0])
    print('fraud:', fraud.shape[0])

    # first 20% of normals go to validation, the next 20% to test,
    # and the last 60% to training; fraud is split evenly between
    # validation and test
    normal_test_start = int(normal.shape[0] * .2)
    fraud_test_start = int(fraud.shape[0] * .5)
    normal_train_start = normal_test_start * 2

    val_normal = normal[:normal_test_start]
    val_fraud = fraud[:fraud_test_start]
    validation_set = pd.concat([val_normal, val_fraud], axis=0)

    test_normal = normal[normal_test_start:normal_train_start]
    test_fraud = fraud[fraud_test_start:fraud.shape[0]]
    test_set = pd.concat([test_normal, test_fraud], axis=0)

    Xval = validation_set.iloc[:, :-1]
    Yval = validation_set.iloc[:, -1]
    Xtest = test_set.iloc[:, :-1]
    Ytest = test_set.iloc[:, -1]

    train_set = normal[normal_train_start:normal.shape[0]]
    Xtrain = train_set.iloc[:, :-1]

    return Xtrain.to_numpy(), Xtest.to_numpy(), Xval.to_numpy(), Ytest.to_numpy(), Yval.to_numpy()
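To verify the split behaves as intended, a short optional check in ‘main.py’ prints the resulting shapes (you can drop these lines once you’ve confirmed them; the exact numbers depend on your copy of the dataset):
from functions import train_validation_splits

Xtrain, Xtest, Xval, Ytest, Yval = train_validation_splits(df)
print('train:', Xtrain.shape)
print('validation:', Xval.shape, Yval.shape)
print('test:', Xtest.shape, Ytest.shape)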
Step 5: Calculate Mean and Covariance Matrix
The following function will help us calculate the mean vector and the covariance matrix:
def estimate_gaussian_params(X):
    """
    Calculates the mean vector and the covariance matrix for X.
    Arguments:
    X: dataset (samples in rows, features in columns)
    """
    mu = np.mean(X, axis=0)
    sigma = np.cov(X.T)
    return mu, sigma
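Note that np.cov expects variables in rows, which is why we pass X.T; the result is a (features × features) covariance matrix. A tiny optional check with dummy data makes the shapes visible:
X_dummy = np.random.rand(100, 5)  # 100 samples, 5 features
mu, sigma = estimate_gaussian_params(X_dummy)
print(mu.shape, sigma.shape)  # (5,) (5, 5)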
Step 6: Add the Final Touches
In our ‘main.py’ file, we’ll import and call the functions we implemented in the previous steps for every set:
from functions import train_validation_splits, estimate_gaussian_params

(Xtrain, Xtest, Xval, Ytest, Yval) = train_validation_splits(df)
(mu, sigma) = estimate_gaussian_params(Xtrain)

# calculate the Gaussian pdf for each set
p = multivariate_normal.pdf(Xtrain, mu, sigma)
pval = multivariate_normal.pdf(Xval, mu, sigma)
ptest = multivariate_normal.pdf(Xtest, mu, sigma)
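One caveat worth knowing: with 28 features, the joint density of a typical point can be astronomically small, and such values can underflow toward zero (the tiny best epsilon reported later hints at this). If you hit underflow on your own data, scipy also exposes a log-density; an optional, numerically safer variant would compare log-probabilities against a log-threshold instead:
# optional, numerically safer alternative to .pdf
logp = multivariate_normal.logpdf(Xtrain, mu, sigma)
logpval = multivariate_normal.logpdf(Xval, mu, sigma)
logptest = multivariate_normal.logpdf(Xtest, mu, sigma)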
Now we have to select the epsilon (the threshold). Usually, it’s best to start the threshold at the pdf’s minimum value and increase it step by step until you reach the maximum pdf, saving every candidate epsilon in a vector.
After we create the vector of candidates, we iterate over it with a ‘for’ loop. In every iteration, we compare the current threshold against the pdf values to generate predictions.
We also calculate the F1 score from our ground-truth values and those predictions. If the resulting F1 score is higher than the previous best, we update a ‘best threshold’ variable.
Keep in mind that we can’t use ‘accuracy’ as a metric in our credit card fraud detection project. A model that simply labels every transaction as normal would still reach over 99% accuracy on such an imbalanced dataset, rendering the metric useless.
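To see why accuracy misleads here, consider a ‘model’ that labels every transaction as normal; a two-line sketch shows it would still look excellent on paper:
# a baseline that predicts 'normal' for everything still scores ~99.8%
baseline_accuracy = (df['Class'] == 0).mean()
print('all-normal baseline accuracy:', baseline_accuracy)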
We’ll implement all of the processes we discussed above in our ‘functions.py’ file:
def metrics(y, predictions):
    fp = np.sum(np.all([predictions == 1, y == 0], axis=0))
    tp = np.sum(np.all([predictions == 1, y == 1], axis=0))
    fn = np.sum(np.all([predictions == 0, y == 1], axis=0))
    precision = (tp / (tp + fp)) if (tp + fp) > 0 else 0
    recall = (tp / (tp + fn)) if (tp + fn) > 0 else 0
    F1 = (2 * precision * recall) / (precision + recall) if (precision + recall) > 0 else 0
    return precision, recall, F1

def selectThreshold(yval, pval):
    e_values = pval
    bestF1 = 0
    bestEpsilon = 0
    for epsilon in e_values:
        predictions = pval < epsilon
        (precision, recall, F1) = metrics(yval, predictions)
        if F1 > bestF1:
            bestF1 = F1
            bestEpsilon = epsilon
    return bestEpsilon, bestF1
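A note on this design: the loop tries every value in pval as a candidate threshold, which gets slow as the validation set grows. One optional refinement (our own suggestion, not part of the original walkthrough) is to iterate over sorted unique values only:
def select_threshold_fast(yval, pval):
    # same search as selectThreshold, but skips duplicate candidates
    bestF1, bestEpsilon = 0, 0
    for epsilon in np.unique(pval):  # unique values in ascending order
        predictions = pval < epsilon
        precision, recall, F1 = metrics(yval, predictions)
        if F1 > bestF1:
            bestF1, bestEpsilon = F1, epsilon
    return bestEpsilon, bestF1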
In the end, we’ll import these functions into the ‘main.py’ file and call them to return the F1 score and the threshold, which lets us evaluate our model on the test set:
from functions import metrics, selectThreshold

(epsilon, F1) = selectThreshold(Yval, pval)

print("Best epsilon found:", epsilon)
print("Best F1 on cross validation set:", F1)

(test_precision, test_recall, test_F1) = metrics(Ytest, ptest < epsilon)

print("Outliers found:", np.sum(ptest < epsilon))
print("Test set Precision:", test_precision)
print("Test set Recall:", test_recall)
print("Test set F1 score:", test_F1)
Here are the results of all this effort:
Best epsilon found: 5e-324
Best F1 on cross validation set: 0.7852998065764023
Outliers found: 210
Test set Precision: 0.9095238095238095
Test set Recall: 0.7764227642276422
Test set F1 score: 0.837719298245614
Conclusion
There you have it – a fully functional credit card fraud detection project!
If you have any questions or suggestions regarding this project, let us know by dropping a comment below. We’d love to hear from you.
With all these newly learnt skills, you can get active on other competitive platforms to test yourself and get even more hands-on practice.