Driver Drowsiness Detection Using Pretrained Model

By Rohit Sharma

Updated on Jul 30, 2025 | 12 min read | 1.36K+ views

Drowsy driving is one of the major causes of road accidents around the world. Even a momentary lapse in driver attention can lead to fatal consequences. That’s why detecting driver fatigue early can be life-saving. In this project, we will build a driver drowsiness detection system using deep learning techniques.

We will use the Driver Drowsiness Dataset (DDD) sourced from Kaggle. It contains 41,000+ preprocessed facial images of drivers categorized into two classes:

  • Drowsy
  • Non Drowsy

Instead of training a CNN from scratch, we will apply transfer learning using a pretrained MobileNetV2 model. This approach helps us leverage powerful image features learned from large-scale datasets and adapt them to our drowsiness detection task.

Check out our Top 25+ Essential Data Science Projects GitHub to Explore in 2025 blog for more project ideas like this one.

What Should You Know Beforehand?

It is better to have at least some background in:

  • Python programming
  • The basics of CNNs and image classification
  • TensorFlow/Keras

Technologies and Libraries Used

For this project, the following tools and libraries will be used:

  • Python: Programming language to implement the project
  • Google Colab: Cloud-based environment to run the notebook
  • Kaggle Hub: To download the dataset directly from Kaggle
  • NumPy: For numerical operations and array handling
  • Matplotlib: To visualize training performance and sample images
  • TensorFlow / Keras: To build, train, and evaluate the pretrained CNN (MobileNetV2)
  • OS / Glob / Pathlib: To navigate dataset directories and manage image file paths
  • OpenCV (cv2): For image preprocessing and visualization
  • Scikit-learn: For classification metrics and data preprocessing

Time Taken and Difficulty Level

On average, the project takes about 4 to 5 hours to complete. The duration may vary depending on your familiarity with Python, CNNs, image classification, and TensorFlow/Keras. It is best suited for beginner to intermediate learners.

How to Build a Driver Drowsiness Detection Model

Let’s build the project from scratch. We will start by:

  • Downloading the Driver Drowsiness Dataset (DDD) using kagglehub
  • Exploring and preprocessing the dataset, including:
    • Resizing images to 224×224
    • Normalizing pixel values
    • Splitting the data into training and validation sets
  • Loading the MobileNetV2 pretrained model and modifying it for binary classification
  • Training the model on labeled facial images (Drowsy vs Non-Drowsy)
  • Evaluating its performance using accuracy, loss curves, and a confusion matrix
  • Visualizing predictions to test the model on unseen data
  • Saving the model for future use 

Without any further delay, let’s start!

Step 1: Download the Dataset Using kagglehub

First, we will be downloading the Driver Drowsiness Dataset (DDD) from Kaggle. Use the code given below to do so:

# Install kagglehub
!pip install kagglehub --quiet

# Import kagglehub
import kagglehub

# Download the dataset
path = kagglehub.dataset_download("ismailnasri20/driver-drowsiness-dataset-ddd")

print("Path to dataset files:", path)

Output:

Downloading from https://www.kaggle.com/api/v1/datasets/download/ismailnasri20/driver-drowsiness-dataset-ddd?dataset_version_number=1...

100%|██████████| 2.58G/2.58G [01:06<00:00, 41.5MB/s]
Extracting files...

Path to dataset files: /root/.cache/kagglehub/datasets/ismailnasri20/driver-drowsiness-dataset-ddd/versions/1

The output shows you the path where the dataset has been downloaded and extracted.
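
Before moving on, you can quickly confirm the download by listing the extracted directory (a small sanity check; the exact folder layout is assumed from the dataset page):

import os

# List the contents of the downloaded dataset folder
print(os.listdir(path))
# Expected: the "Driver Drowsiness Dataset (DDD)" folder containing
# the "Drowsy" and "Non Drowsy" class subfolders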

Step 2: Explore and Preprocess the Dataset

In this step, we will:

  • Load all images directly from the folder structure (Drowsy and Non Drowsy)
  • Resize them to 224×224, the required input size for MobileNetV2
  • Normalize pixel values by scaling them between 0 and 1
  • Automatically split the dataset into training (80%) and validation (20%) sets
  • Automatically assign labels based on folder names. flow_from_directory sorts class folders alphabetically, so:
    • Drowsy: 0
    • Non Drowsy: 1

Use the code given below to accomplish all this:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os

# Set the dataset path (points to the folder downloaded in Step 1)
dataset_path = os.path.join(path, "Driver Drowsiness Dataset (DDD)")

# Image data generators with validation split
train_datagen = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2  # 80% training, 20% validation
)

# Load training images
train_generator = train_datagen.flow_from_directory(
    dataset_path,
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary',
    subset='training',
    shuffle=True
)

# Load validation images
val_generator = train_datagen.flow_from_directory(
    dataset_path,
    target_size=(224, 224),
    batch_size=32,
    class_mode='binary',
    subset='validation',
    shuffle=False
)

Output:

Found 33435 images belonging to 2 classes.
Found 8358 images belonging to 2 classes.
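
Since flow_from_directory assigns numeric labels to class folders in alphabetical order, it is worth printing the mapping once to confirm it (a quick sanity check):

# Confirm which numeric label each class folder received
print(train_generator.class_indices)
# Expected: {'Drowsy': 0, 'Non Drowsy': 1}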

Step 3: Build the Model Using Transfer Learning

In this step, we will build our driver drowsiness detection model using MobileNetV2. Instead of training a CNN from scratch, we use transfer learning. Here's what we'll do:

  • Load the MobileNetV2 model with ImageNet weights, excluding its top classifier layers.
  • Freeze the base (convolutional) layers so they don't update during training.
  • Add a GlobalAveragePooling2D layer to flatten the output feature maps.
  • Add Dropout layers to reduce overfitting.
  • Add a Dense layer with ReLU activation, followed by a final Dense layer with sigmoid activation for binary classification:
    • 0: Drowsy
    • 1: Non Drowsy
  • Compile the model using binary crossentropy loss and the Adam optimizer.

Use the code below to achieve all this:

from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.models import Model
from tensorflow.keras.layers import GlobalAveragePooling2D, Dropout, Dense
from tensorflow.keras.optimizers import Adam

# Load MobileNetV2 base model (excluding top classifier layers)
base_model = MobileNetV2(
    weights='imagenet',
    include_top=False,
    input_shape=(224, 224, 3)
)

# Freeze the base model (we don't train its layers)
base_model.trainable = False

# Add custom layers on top
x = base_model.output
x = GlobalAveragePooling2D()(x)
x = Dropout(0.3)(x)
x = Dense(128, activation='relu')(x)
x = Dropout(0.3)(x)
output = Dense(1, activation='sigmoid')(x)

# Final model
model = Model(inputs=base_model.input, outputs=output)

# Compile the model
model.compile(
    optimizer=Adam(learning_rate=0.0001),
    loss='binary_crossentropy',
    metrics=['accuracy']
)

# Summary of the model
model.summary()

Output:

Model: "functional_1"

Layer (Type) Output Shape Param # Connected to
input_layer_3 (InputLayer) (None, 224, 224, 3) 0 -
Conv1 (Conv2D) (None, 112, 112, 32) 864 input_layer_3[0]…
bn_Conv1 (BatchNormalizatio) (None, 112, 112, 32) 128 Conv1[0][0]
Conv1_relu (ReLU) (None, 112, 112, 32) 0 bn_Conv1[0][0]
expanded_conv_dept… (DepthwiseConv2D) (None, 112, 112, 32) 288 Conv1_relu[0][0]
expanded_conv_dept… (BatchNormalizatio) (None, 112, 112, 32) 128 expanded_conv_de…
expanded_conv_dept… (ReLU) (None, 112, 112, 32) 0 expanded_conv_de…
expanded_conv_proj… (Conv2D) (None, 112, 112, 16) 512 expanded_conv_de…
expanded_conv_proj… (BatchNormalizatio) (None, 112, 112, 16) 64 expanded_conv_pr…
block_1_expand (Conv2D) (None, 112, 112, 96) 1,536 expanded_conv_pr…
block_1_expand_BN (BatchNormalizatio) (None, 112, 112, 96) 384 block_1_expand[0…
block_1_expand_relu (ReLU) (None, 112, 112, 96) 0 block_1_expand_B…
block_1_pad (ZeroPadding2D) (None, 113, 113, 96) 0 block_1_expand_r…
block_1_depthwise (DepthwiseConv2D) (None, 56, 56, 96) 864 block_1_pad[0][0]
block_1_depthwise_… (BatchNormalizatio) (None, 56, 56, 96) 384 block_1_depthwis…
block_1_depthwise_… (ReLU) (None, 56, 56, 96) 0 block_1_depthwis…
block_1_project (Conv2D) (None, 56, 56, 24) 2,304 block_1_depthwis…
block_1_project_BN (BatchNormalizatio) (None, 56, 56, 24) 96 block_1_project[…
block_2_expand (Conv2D) (None, 56, 56, 144) 3,456 block_1_project_…
block_2_expand_BN (BatchNormalizatio) (None, 56, 56, 144) 576 block_2_expand[0…
block_2_expand_relu (ReLU) (None, 56, 56, 144) 0 block_2_expand_B…
block_2_depthwise (DepthwiseConv2D) (None, 56, 56, 144) 1,296 block_2_expand_r…
block_2_depthwise_… (BatchNormalizatio) (None, 56, 56, 144) 576 block_2_depthwis…
block_2_depthwise_… (ReLU) (None, 56, 56, 144) 0 block_2_depthwis…
block_2_project (Conv2D) (None, 56, 56, 24) 3,456 block_2_depthwis…
block_2_project_BN (BatchNormalizatio) (None, 56, 56, 24) 96 block_2_project[…
block_2_add (Add) (None, 56, 56, 24) 0 block_1_project_…,block_2_project_…
block_3_expand (Conv2D) (None, 56, 56, 144) 3,456 block_2_add[0][0]
block_3_expand_BN (BatchNormalizatio) (None, 56, 56, 144) 576 block_3_expand[0…
block_3_expand_relu (ReLU) (None, 56, 56, 144) 0 block_3_expand_B…
block_3_pad (ZeroPadding2D) (None, 57, 57, 144) 0 block_3_expand_r…
block_3_depthwise (DepthwiseConv2D) (None, 28, 28, 144) 1,296 block_3_pad[0][0]
block_3_depthwise_… (BatchNormalizatio) (None, 28, 28, 144) 576 block_3_depthwis…
block_3_depthwise_… (ReLU) (None, 28, 28, 144) 0 block_3_depthwis…
block_3_project (Conv2D) (None, 28, 28, 32) 4,608 block_3_depthwis…
block_3_project_BN (BatchNormalizatio) (None, 28, 28, 32) 128 block_3_project[…
block_4_expand (Conv2D) (None, 28, 28, 192) 6,144 block_3_project_…
block_4_expand_BN (BatchNormalizatio) (None, 28, 28, 192) 768 block_4_expand[0…
block_4_expand_relu (ReLU) (None, 28, 28, 192) 0 block_4_expand_B…
block_4_depthwise (DepthwiseConv2D) (None, 28, 28, 192) 1,728 block_4_expand_r…
block_4_depthwise_… (BatchNormalizatio) (None, 28, 28, 192) 768 block_4_depthwis…
block_4_depthwise_… (ReLU) (None, 28, 28, 192) 0 block_4_depthwis…
block_4_project (Conv2D) (None, 28, 28, 32) 6,144 block_4_depthwis…
block_4_project_BN (BatchNormalizatio) (None, 28, 28, 32) 128 block_4_project[…
block_4_add (Add) (None, 28, 28, 32) 0 block_3_project_…,block_4_project_…
block_5_expand (Conv2D) (None, 28, 28, 192) 6,144 block_4_add[0][0]
block_5_expand_BN (BatchNormalizatio) (None, 28, 28, 192) 768 block_5_expand[0…
block_5_expand_relu (ReLU) (None, 28, 28, 192) 0 block_5_expand_B…
block_5_depthwise (DepthwiseConv2D) (None, 28, 28, 192) 1,728 block_5_expand_r…
block_5_depthwise_… (BatchNormalizatio) (None, 28, 28, 192) 768 block_5_depthwis…
block_5_depthwise_… (ReLU) (None, 28, 28, 192) 0 block_5_depthwis…
block_5_project (Conv2D) (None, 28, 28, 32) 6,144 block_5_depthwis…
block_5_project_BN (BatchNormalizatio) (None, 28, 28, 32) 128 block_5_project[…
block_5_add (Add) (None, 28, 28, 32) 0 block_4_add[0][0…,block_5_project_…
block_6_expand (Conv2D) (None, 28, 28, 192) 6,144 block_5_add[0][0]
block_6_expand_BN (BatchNormalizatio) (None, 28, 28, 192) 768 block_6_expand[0…
block_6_expand_relu (ReLU) (None, 28, 28, 192) 0 block_6_expand_B…
block_6_pad (ZeroPadding2D) (None, 29, 29, 192) 0 block_6_expand_r…
block_6_depthwise (DepthwiseConv2D) (None, 14, 14, 192) 1,728 block_6_pad[0][0]
block_6_depthwise_… (BatchNormalizatio) (None, 14, 14, 192) 768 block_6_depthwis…
block_6_depthwise_… (ReLU) (None, 14, 14, 192) 0 block_6_depthwis…
block_6_project (Conv2D) (None, 14, 14, 64) 12,288 block_6_depthwis…
block_6_project_BN (BatchNormalizatio) (None, 14, 14, 64) 256 block_6_project[…
block_7_expand (Conv2D) (None, 14, 14, 384) 24,576 block_6_project_…
block_7_expand_BN (BatchNormalizatio) (None, 14, 14, 384) 1,536 block_7_expand[0…
block_7_expand_relu (ReLU) (None, 14, 14, 384) 0 block_7_expand_B…
block_7_depthwise (DepthwiseConv2D) (None, 14, 14, 384) 3,456 block_7_expand_r…
block_7_depthwise_… (BatchNormalizatio) (None, 14, 14, 384) 1,536 block_7_depthwis…
block_7_depthwise_… (ReLU) (None, 14, 14, 384) 0 block_7_depthwis…
block_7_project (Conv2D) (None, 14, 14, 64) 24,576 block_7_depthwis…
block_7_project_BN (BatchNormalizatio) (None, 14, 14, 64) 256 block_7_project[…
block_7_add (Add) (None, 14, 14, 64) 0 block_6_project_…,block_7_project_…
block_8_expand (Conv2D) (None, 14, 14, 384) 24,576 block_7_add[0][0]
block_8_expand_BN (BatchNormalizatio) (None, 14, 14, 384) 1,536 block_8_expand[0…
block_8_expand_relu (ReLU) (None, 14, 14, 384) 0 block_8_expand_B…
block_8_depthwise (DepthwiseConv2D) (None, 14, 14, 384) 3,456 block_8_expand_r…
block_8_depthwise_… (BatchNormalizatio) (None, 14, 14, 384) 1,536 block_8_depthwis…
block_8_depthwise_… (ReLU) (None, 14, 14, 384) 0 block_8_depthwis…
block_8_project (Conv2D) (None, 14, 14, 64) 24,576 block_8_depthwis…
block_8_project_BN (BatchNormalizatio) (None, 14, 14, 64) 256 block_8_project[…
block_8_add (Add) (None, 14, 14, 64) 0 block_7_add[0][0…,block_8_project_…
block_9_expand (Conv2D) (None, 14, 14, 384) 24,576 block_8_add[0][0]
block_9_expand_BN (BatchNormalizatio) (None, 14, 14, 384) 1,536 block_9_expand[0…
block_9_expand_relu (ReLU) (None, 14, 14, 384) 0 block_9_expand_B…
block_9_depthwise (DepthwiseConv2D) (None, 14, 14, 384) 3,456 block_9_expand_r…
block_9_depthwise_… (BatchNormalizatio) (None, 14, 14, 384) 1,536 block_9_depthwis…
block_9_depthwise_… (ReLU) (None, 14, 14, 384) 0 block_9_depthwis…
block_9_project (Conv2D) (None, 14, 14, 64) 24,576 block_9_depthwis…
block_9_project_BN (BatchNormalizatio) (None, 14, 14, 64) 256 block_9_project[…
block_9_add (Add) (None, 14, 14, 64) 0 block_8_add[0][0…,block_9_project_…
block_10_expand (Conv2D) (None, 14, 14, 384) 24,576 block_9_add[0][0]
block_10_expand_BN (BatchNormalizatio) (None, 14, 14, 384) 1,536 block_10_expand[…
block_10_expand_re… (ReLU) (None, 14, 14, 384) 0 block_10_expand_…
block_10_depthwise (DepthwiseConv2D) (None, 14, 14, 384) 3,456 block_10_expand_…
block_10_depthwise… (BatchNormalizatio) (None, 14, 14, 384) 1,536 block_10_depthwi…
block_10_depthwise… (ReLU) (None, 14, 14, 384) 0 block_10_depthwi…
block_10_project (Conv2D) (None, 14, 14, 96) 36,864 block_10_depthwi…
block_10_project_BN (BatchNormalizatio) (None, 14, 14, 96) 384 block_10_project…
block_11_expand (Conv2D) (None, 14, 14, 576) 55,296 block_10_project…
block_11_expand_BN (BatchNormalizatio) (None, 14, 14, 576) 2,304 block_11_expand[…
block_11_expand_re… (ReLU) (None, 14, 14, 576) 0 block_11_expand_…
block_11_depthwise (DepthwiseConv2D) (None, 14, 14, 576) 5,184 block_11_expand_…
block_11_depthwise… (BatchNormalizatio) (None, 14, 14, 576) 2,304 block_11_depthwi…
block_11_depthwise… (ReLU) (None, 14, 14, 576) 0 block_11_depthwi…
block_11_project (Conv2D) (None, 14, 14, 96) 55,296 block_11_depthwi…
block_11_project_BN (BatchNormalizatio) (None, 14, 14, 96) 384 block_11_project…
block_11_add (Add) (None, 14, 14, 96) 0 block_10_project…,block_11_project_…
block_12_expand (Conv2D) (None, 14, 14, 576) 55,296 block_11_add[0][…
block_12_expand_BN (BatchNormalizatio) (None, 14, 14, 576) 2,304 block_12_expand[…
block_12_expand_re… (ReLU) (None, 14, 14, 576) 0 block_12_expand_…
block_12_depthwise (DepthwiseConv2D) (None, 14, 14, 576) 5,184 block_12_expand_…
block_12_depthwise… (BatchNormalizatio) (None, 14, 14, 576) 2,304 block_12_depthwi…
block_12_depthwise… (ReLU) (None, 14, 14, 576) 0 block_12_depthwi…
block_12_project (Conv2D) (None, 14, 14, 96) 55,296 block_12_depthwi…
block_12_project_BN (BatchNormalizatio) (None, 14, 14, 96) 384 block_12_project…
block_12_add (Add) (None, 14, 14, 96) 0 block_11_add[0][…,block_12_project_…
block_13_expand (Conv2D) (None, 14, 14, 576) 55,296 block_12_add[0][…
block_13_expand_BN (BatchNormalizatio) (None, 14, 14, 576) 2,304 block_13_expand[…
block_13_expand_re… (ReLU) (None, 14, 14, 576) 0 block_13_expand_…
block_13_pad (ZeroPadding2D) (None, 15, 15, 576) 0 block_13_expand_…
block_13_depthwise (DepthwiseConv2D) (None, 7, 7, 576) 5,184 block_13_pad[0][…
block_13_depthwise… (BatchNormalizatio) (None, 7, 7, 576) 2,304 block_13_depthwi…
block_13_depthwise… (ReLU) (None, 7, 7, 576) 0 block_13_depthwi…
block_13_project (Conv2D) (None, 7, 7, 160) 92,160 block_13_depthwi…
block_13_project_BN (BatchNormalizatio) (None, 7, 7, 160) 640 block_13_project…
block_14_expand (Conv2D) (None, 7, 7, 960) 153,600 block_13_project…
block_14_expand_BN (BatchNormalizatio) (None, 7, 7, 960) 3,840 block_14_expand[…
block_14_expand_re… (ReLU) (None, 7, 7, 960) 0 block_14_expand_…
block_14_depthwise (DepthwiseConv2D) (None, 7, 7, 960) 8,640 block_14_expand_…
block_14_depthwise… (BatchNormalizatio) (None, 7, 7, 960) 3,840 block_14_depthwi…
block_14_depthwise… (ReLU) (None, 7, 7, 960) 0 block_14_depthwi…
block_14_project (Conv2D) (None, 7, 7, 160) 153,600 block_14_depthwi…
block_14_project_BN (BatchNormalizatio) (None, 7, 7, 160) 640 block_14_project…
block_14_add (Add) (None, 7, 7, 160) 0 block_13_project…,block_14_project_…
block_15_expand (Conv2D) (None, 7, 7, 960) 153,600 block_14_add[0][…
block_15_expand_BN (BatchNormalizatio) (None, 7, 7, 960) 3,840 block_15_expand[…
block_15_expand_re… (ReLU) (None, 7, 7, 960) 0 block_15_expand_…
block_15_depthwise (DepthwiseConv2D) (None, 7, 7, 960) 8,640 block_15_expand_…
block_15_depthwise… (BatchNormalizatio) (None, 7, 7, 960) 3,840 block_15_depthwi…
block_15_depthwise… (ReLU) (None, 7, 7, 960) 0 block_15_depthwi…
block_15_project (Conv2D) (None, 7, 7, 160) 153,600 block_15_depthwi…
block_15_project_BN (BatchNormalizatio) (None, 7, 7, 160) 640 block_15_project…
block_15_add (Add) (None, 7, 7, 160) 0 block_14_add[0][…,block_15_project_…
block_16_expand (Conv2D) (None, 7, 7, 960) 153,600 block_15_add[0][…
block_16_expand_BN (BatchNormalizatio) (None, 7, 7, 960) 3,840 block_16_expand[…
block_16_expand_re… (ReLU) (None, 7, 7, 960) 0 block_16_expand_…
block_16_depthwise (DepthwiseConv2D) (None, 7, 7, 960) 8,640 block_16_expand_…
block_16_depthwise… (BatchNormalizatio) (None, 7, 7, 960) 3,840 block_16_depthwi…
block_16_depthwise… (ReLU) (None, 7, 7, 960) 0 block_16_depthwi…
block_16_project (Conv2D) (None, 7, 7, 320) 307,200 block_16_depthwi…
block_16_project_BN (BatchNormalizatio) (None, 7, 7, 320) 1,280 block_16_project…
Conv_1 (Conv2D) (None, 7, 7, 1280) 409,600 block_16_project…
Conv_1_bn (BatchNormalizatio) (None, 7, 7, 1280) 5,120 Conv_1[0][0]
out_relu (ReLU) (None, 7, 7, 1280) 0 Conv_1_bn[0][0]
global_average_poo… (GlobalAveragePool…) (None, 1280) 0 out_relu[0][0]
dropout_4 (Dropout) (None, 1280) 0 global_average_p…
dense_4 (Dense) (None, 128) 163,968 dropout_4[0][0]
dropout_5 (Dropout) (None, 128) 0 dense_4[0][0]
dense_5 (Dense) (None, 1) 129 dropout_5[0][0]

 

Total params: 2,422,081 (9.24 MB)
Trainable params: 164,097 (641.00 KB)
Non-trainable params: 2,257,984 (8.61 MB)
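
If you want to verify these counts yourself, you can sum the sizes of the trainable and frozen weights (a small sanity check, not required for training):

# Sum parameter counts over trainable and frozen weights
trainable = sum(w.shape.num_elements() for w in model.trainable_weights)
frozen = sum(w.shape.num_elements() for w in model.non_trainable_weights)
print(f"Trainable: {trainable:,} | Frozen: {frozen:,}")
# Should match the totals in the summary above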

Step 4: Train the Driver Drowsiness Detection Model

The MobileNetV2 model is ready. Now we will: 

  • Feed the images into the model in batches.
  • Use 10 epochs (adjustable).
  • Monitor both training and validation accuracy.
  • Save training history for later evaluation and plotting.

Here is the code to do so:

# Train the model
history = model.fit(
    train_generator,
    epochs=10,
    validation_data=val_generator
)

Output:

Epoch 1/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1883s 2s/step - accuracy: 0.8021 - loss: 0.4217 - val_accuracy: 0.5983 - val_loss: 0.9446

Epoch 2/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1878s 2s/step - accuracy: 0.9733 - loss: 0.0939 - val_accuracy: 0.6145 - val_loss: 1.0695

Epoch 3/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1886s 2s/step - accuracy: 0.9879 - loss: 0.0482 - val_accuracy: 0.6338 - val_loss: 1.2522

Epoch 4/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1951s 2s/step - accuracy: 0.9922 - loss: 0.0315 - val_accuracy: 0.6071 - val_loss: 1.3467

Epoch 5/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1868s 2s/step - accuracy: 0.9940 - loss: 0.0223 - val_accuracy: 0.6102 - val_loss: 1.5587

Epoch 6/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1870s 2s/step - accuracy: 0.9953 - loss: 0.0173 - val_accuracy: 0.5946 - val_loss: 1.6377

Epoch 7/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1882s 2s/step - accuracy: 0.9966 - loss: 0.0131 - val_accuracy: 0.5836 - val_loss: 1.6422

Epoch 8/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1868s 2s/step - accuracy: 0.9977 - loss: 0.0095 - val_accuracy: 0.5844 - val_loss: 1.5191

Epoch 9/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1859s 2s/step - accuracy: 0.9974 - loss: 0.0088 - val_accuracy: 0.5873 - val_loss: 1.4839

Epoch 10/10

1045/1045 ━━━━━━━━━━━━━━━━━━━━ 1868s 2s/step - accuracy: 0.9987 - loss: 0.0059 - val_accuracy: 0.5518 - val_loss: 1.5908

What does the output mean?

The output tells us that:

  • Training accuracy consistently increased, reaching ~99.9%.
  • Validation accuracy peaked at ~63.4% (epoch 3) and declined afterwards.
  • Validation loss kept increasing, which indicates overfitting: the model is memorizing the training data rather than generalizing to unseen data. One remedy is sketched below.
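
A common remedy (a sketch on top of the pipeline above, not part of the original run) is to add data augmentation to the training generator and stop training once validation loss stops improving:

from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.callbacks import EarlyStopping

# Augmented training generator: same rescaling and split as before,
# plus small random transformations to improve generalization
aug_datagen = ImageDataGenerator(
    rescale=1./255,
    validation_split=0.2,
    rotation_range=15,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=True
)

# Stop when validation loss hasn't improved for 3 epochs,
# and roll back to the best weights seen so far
early_stop = EarlyStopping(monitor='val_loss', patience=3, restore_best_weights=True)

You would then rebuild the generators from aug_datagen (keeping a plain rescale-only generator for the validation subset, since augmentation should apply only to training images) and pass callbacks=[early_stop] to model.fit().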

Here’s the code to visualize accuracy and loss:

import matplotlib.pyplot as plt

# Plot accuracy
plt.plot(history.history['accuracy'], label='Train Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.title('Model Accuracy over Epochs')
plt.show()

# Plot loss
plt.plot(history.history['loss'], label='Train Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.title('Model Loss over Epochs')
plt.show()

Output:

[Two plots: training vs. validation accuracy and training vs. validation loss over the 10 epochs, with the training curves improving steadily while the validation curves plateau and then worsen]

Step 5: Save the Trained Model

Now, let’s save the trained model so that we can reuse it later, either for real-time predictions or deployment, without retraining.

Here’s the code to do so:

# Save the model to a .h5 file
model.save('drowsiness_detection_model.h5')

print("Model saved successfully!")

Output:

Model saved successfully!
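
Later, you can reload the saved model and classify a single image. Here is a minimal sketch, assuming a hypothetical image file sample.png (replace it with any driver face image):

from tensorflow.keras.models import load_model
from tensorflow.keras.utils import load_img, img_to_array
import numpy as np

# Reload the saved model
model = load_model('drowsiness_detection_model.h5')

# 'sample.png' is a hypothetical path; use any face image you have
img = load_img('sample.png', target_size=(224, 224))
x = img_to_array(img) / 255.0      # same rescaling as during training
x = np.expand_dims(x, axis=0)      # add the batch dimension

prob = float(model.predict(x)[0][0])
# Labels follow flow_from_directory's alphabetical mapping: Drowsy=0, Non Drowsy=1
print("Non Drowsy" if prob > 0.5 else "Drowsy", f"(p={prob:.2f})")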

Conclusion

Training Accuracy (Last Epoch): 99.87%
Validation Accuracy (Last Epoch): 55.18%

The training accuracy is very high, but the validation accuracy is much lower and declined after peaking at epoch 3. This suggests overfitting: the model performs well on the training data but struggles to generalize to unseen validation data. Data augmentation, early stopping, or carefully fine-tuning part of the frozen base model are natural next steps.


Colab Link:
https://colab.research.google.com/drive/1dqhd8ICCtQ_x_COhV1xwMLBVC_fdAHjC?usp=sharing


