
Top 10 Data Science Projects on Github You Should Get Your Hands-on [2024]

Last updated: 9th Jan, 2021 · Read Time: 8 Mins
With Data Science taking the industry by storm, there’s a massive demand for skilled and qualified Data Science experts. Naturally, the competition in the present market is fierce. In such a scenario, employers not only look for formal education and professional certifications, but they also demand practical experience. And what better than Data Science projects to prove your worth and showcase your real-world Data Science skills to potential employers!

If you aspire to enter the Data Science domain, the best way to build your portfolio from the ground up is to work on Data Science projects. We've created this post to inspire you to develop your own Data Science projects.

Since GitHub is an excellent repository of Data Science project ideas, here is a list of Data Science projects on GitHub that you should check out! To gain more knowledge and practical applications, check out our data science courses from top universities. 

10 Best Data Science Projects on GitHub

1. Face Recognition

The face recognition project makes use of Deep Learning and the HOG (Histogram of Oriented Gradients) algorithm. The pipeline finds faces in an image (the HOG algorithm), aligns them with affine transformations (using an ensemble of regression trees), encodes each face (FaceNet), and makes predictions (a linear SVM).

Using the HOG algorithm, you compute weighted-vote orientation gradients over 16×16 pixel squares instead of computing gradients for every pixel of the image. This generates a HOG image that represents the fundamental structure of a face. In the next step, you use the dlib Python library to create and view HOG representations and to find which part of the image bears the closest resemblance to the trained HOG pattern.
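The per-cell histogram step can be sketched in plain NumPy. This is a simplified illustration, not dlib's optimized implementation; the cell size and bin count below are common defaults, chosen here for demonstration:

```python
import numpy as np

def hog_cell_histograms(image, cell_size=16, n_bins=9):
    """Compute a histogram of oriented gradients for each
    cell_size x cell_size cell of a grayscale image (a simplified
    sketch of the HOG descriptor, not dlib's implementation)."""
    # Per-pixel gradients via central differences.
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as in classic HOG.
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180

    h, w = image.shape
    cells_y, cells_x = h // cell_size, w // cell_size
    hist = np.zeros((cells_y, cells_x, n_bins))
    bin_width = 180 / n_bins
    for cy in range(cells_y):
        for cx in range(cells_x):
            sl = (slice(cy * cell_size, (cy + 1) * cell_size),
                  slice(cx * cell_size, (cx + 1) * cell_size))
            bins = (orientation[sl] / bin_width).astype(int) % n_bins
            # Each pixel casts a vote weighted by its gradient magnitude.
            np.add.at(hist[cy, cx], bins.ravel(), magnitude[sl].ravel())
    return hist

# A 32x32 image with a vertical edge produces strong horizontal gradients.
img = np.zeros((32, 32))
img[:, 16:] = 255.0
print(hog_cell_histograms(img).shape)  # (2, 2, 9)
```

Each cell's 9-bin histogram summarizes local edge directions, which is what makes the descriptor robust to small shifts in position.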

2. Kaggle Bike Sharing

Bike-sharing systems let you book, rent, and return bicycles/motorbikes through an automated system. This project is structured like a Kaggle competition: you combine historical usage patterns with weather data to predict the demand for bike rental services in the Capital Bikeshare program in Washington, D.C.

The primary aim of this Kaggle competition is to build an ML model, based explicitly on contextual features, that predicts the number of bikes rented. The challenge has two parts: in the first, you focus on understanding, analyzing, and processing the datasets; the second is about designing the model using an ML library.
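As a sketch of part two, here is a toy regressor trained on synthetic stand-ins for the contextual features (hour, temperature, humidity, working day). The data and the relationship below are invented for illustration; the real competition data differs:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the Kaggle data: hour of day, temperature,
# humidity, and a "working day" flag as contextual features.
n = 500
X = np.column_stack([
    rng.integers(0, 24, n),   # hour of day
    rng.uniform(0, 35, n),    # temperature (C)
    rng.uniform(20, 100, n),  # humidity (%)
    rng.integers(0, 2, n),    # working day flag
])
# Toy relationship: demand varies with time of day and warm weather.
y = (50 + 10 * np.cos((X[:, 0] - 8) / 24 * 2 * np.pi)
     + 2 * X[:, 1] + rng.normal(0, 5, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X_train, y_train)
print(round(model.score(X_test, y_test), 2))  # R^2 on held-out data
```

Swapping in the actual Kaggle CSVs (and encoding season, holiday, and weather columns) follows the same fit/score pattern.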

3. Text Analysis of the Mexican Government Report

This project is an excellent application of NLP. On September 1, 2019, the Mexican government released an annual report in the form of a PDF. So, your aim in this project will be to extract text from the PDF, clean it, run it via an NLP pipeline, and visualize the results using graphical representations. 

For this project, you will have to use multiple Python libraries, including: 

  • PyPDF2 to extract text from PDF files.
  • SpaCy to pass the extracted text into an NLP pipeline.
  • Pandas to extract and analyze insights from datasets.
  • NumPy for speedy matrix operations.
  • Matplotlib for designing plot and graphs.
  • Seaborn to improve the style of plots/graphs.
  • Geopandas to plot maps. 
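Since the full pipeline needs the report PDF and a spaCy language model, here is a stdlib-only stand-in for the cleaning and term-counting steps, run on a made-up Spanish snippet:

```python
import re
from collections import Counter

def top_terms(text, stopwords, n=3):
    """Lowercase, strip punctuation, drop stopwords, and count
    term frequencies -- a stdlib stand-in for the spaCy steps."""
    tokens = re.findall(r"[a-záéíóúñü]+", text.lower())
    return Counter(t for t in tokens if t not in stopwords).most_common(n)

# Invented sample sentence, not text from the actual report.
sample = "El gobierno informa: el programa social creció y el programa avanza."
print(top_terms(sample, stopwords={"el", "y"}))
# [('programa', 2), ('gobierno', 1), ('informa', 1)]
```

In the real project, PyPDF2 supplies `text`, spaCy handles tokenization and lemmatization, and the resulting counts feed the Matplotlib/Seaborn plots.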


4. ALBERT

ALBERT is based on BERT, a Google project that brought about a radical change in the field of NLP. It is an enhanced implementation of BERT, designed for self-supervised learning of language representations using TensorFlow.

BERT's pre-trained models are enormous, which makes them challenging to unpack, plug into a model, and run on local machines. This is where ALBERT comes in: it achieves state-of-the-art performance on the main benchmarks with 30% fewer parameters. Although albert_base_zh has merely 10% of BERT's parameters, it still retains the original accuracy of BERT.
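One of ALBERT's main savings comes from cross-layer parameter sharing: one set of transformer-layer weights is reused across all layers instead of each layer owning its own. A back-of-the-envelope comparison (the layer-size figures below are simplified assumptions, not the exact model sizes):

```python
# Cross-layer parameter sharing, the main trick behind ALBERT's
# smaller footprint, illustrated with rough weight counts.
hidden, layers = 768, 12
params_per_layer = 12 * hidden * hidden  # rough transformer-layer weight count

bert_style = layers * params_per_layer   # every layer has its own weights
albert_style = params_per_layer          # one set of weights reused 12 times

print(f"shared weights use {albert_style / bert_style:.1%} of the unshared count")
# shared weights use 8.3% of the unshared count
```

ALBERT also factorizes the embedding matrix, which shrinks the model further; the sharing trick alone already cuts the layer weights by a factor of the layer count.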


5. StringSifter

If cybersecurity interests you, you will love to work on this project! Launched by FireEye, StringSifter is an ML tool that can automatically rank strings based on their malware analysis relevance. 

Standard malware programs usually include strings for performing specific operations, such as creating registry keys or copying files from one location to another. StringSifter is a fantastic solution for mitigating cyber threats. Note that you need Python version 3.6 or above to install and run StringSifter.
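Before anything can be ranked, printable strings must be pulled out of the binary; StringSifter ships the `flarestrings` and `rank_strings` commands for this. Here is a minimal stand-in for the extraction step, on an invented byte blob:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Pull runs of printable ASCII out of a binary blob -- the kind
    of raw string list that rank_strings then orders by malware relevance."""
    return [m.group().decode("ascii")
            for m in re.finditer(rb"[ -~]{%d,}" % min_len, data)]

# Toy "binary": junk bytes around an API name and a suspicious URL.
blob = b"\x00\x01RegSetValueExA\xff\x02http://example.com/payload\x00ab"
print(extract_strings(blob))  # ['RegSetValueExA', 'http://example.com/payload']
```

StringSifter's ML model then scores each such string, floating registry, networking, and file-manipulation strings to the top of an analyst's list.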

6. Tiler

Given that the Web and online platforms are flooded with images, there is vast scope for working with image data in the modern industry. If you can create an image-oriented project, it will be a highly valued asset for many.

Tiler is an image tool that allows you to create unique images by combining many different kinds of smaller pictures, or "tiles." According to Tiler's GitHub description, you can build an image out of "circles, lines, waves, cross stitches, Legos, Minecraft blocks, paper clips, letters," and much more. With Tiler, you have endless possibilities to make innovative image creations.
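The core idea can be reduced to a toy: match each image patch to the tile whose mean intensity is closest. The real tool matches full-colour tile images with configurable rotations and scales; this grayscale sketch only shows the matching loop:

```python
import numpy as np

def mosaic(image, tiles, block=4):
    """Replace each block x block patch of a grayscale image with the
    tile whose mean intensity is closest -- a toy version of Tiler's
    tile-matching idea."""
    out = image.astype(float).copy()
    tile_means = np.array([t.mean() for t in tiles])
    for y in range(0, image.shape[0], block):
        for x in range(0, image.shape[1], block):
            patch_mean = image[y:y+block, x:x+block].mean()
            best = np.abs(tile_means - patch_mean).argmin()  # nearest tile
            out[y:y+block, x:x+block] = tiles[best]
    return out

tiles = [np.zeros((4, 4)), np.full((4, 4), 255.0)]  # a "black" and a "white" tile
img = np.zeros((8, 8))
img[:, 4:] = 200.0  # bright right half
result = mosaic(img, tiles)
print(result[0, 0], result[0, 7])  # 0.0 255.0
```

The dark half is rebuilt from black tiles and the bright half from white ones; with a large, varied tile set the same loop produces the mosaics Tiler is known for.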


7. DeepCTR

DeepCTR is an “easy-to-use, modular, and extendible package of Deep Learning-based CTR models.” It also includes numerous other vital elements and layers that can be very handy for building customized models. 

Originally, the DeepCTR project was designed on TensorFlow. While TensorFlow is a commendable tool, it is not everyone's cup of tea. Hence, the DeepCTR-Torch repository was created, which includes the complete DeepCTR code in PyTorch. You can install DeepCTR-Torch via pip with the following command:

pip install -U deepctr-torch

With DeepCTR, it becomes easy to use any complex model with the model.fit() and model.predict() functions.


8. TubeMQ

Ever wondered how tech giants and industry leaders store, extract, and manage their data? It is with the help of tools like TubeMQ, Tencent’s open-source, distributed messaging queue (MQ) system. 

TubeMQ has been in service since 2013, delivering high-performance storage and transmission of massive volumes of big data. With over seven years of production experience behind it, TubeMQ has the upper hand over other MQ tools, promising excellent performance and stability at a relatively low cost. The TubeMQ user guide provides detailed documentation on everything you need to know about the tool.
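To see the pattern TubeMQ implements at datacentre scale, here is a message queue in miniature, with Python's thread-safe queue.Queue standing in for the distributed broker. This only illustrates the produce/consume contract, none of TubeMQ's persistence or replication:

```python
import queue
import threading

# queue.Queue plays the broker role: producers publish, consumers subscribe.
broker = queue.Queue()

def producer():
    for i in range(3):
        broker.put(f"event-{i}")  # publish messages to the "topic"
    broker.put(None)              # sentinel: no more messages

def consumer(received):
    # Block on get() until the producer signals completion.
    while (msg := broker.get()) is not None:
        received.append(msg)      # process (here: record) each message

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start(); t1.join(); t2.join()
print(received)  # ['event-0', 'event-1', 'event-2']
```

A real MQ system adds durable storage, partitioned topics, consumer groups, and delivery guarantees on top of this basic decoupling of producers from consumers.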

9. DeepPrivacy

While each one of us loves to indulge in the digital and social media world from time to time, one thing we can all agree is lacking from the digital world is privacy. Once you upload a selfie or a video online, you can be watched, analyzed, and even criticized. In worst-case scenarios, your videos and images may end up being manipulated.

This is why we need tools like DeepPrivacy. It is a fully automatic anonymization technique for images that leverages a GAN (generative adversarial network). DeepPrivacy's GAN model never sees any private or sensitive information, yet it can generate a fully anonymized image by studying the original pose of the individual(s) and the background. DeepPrivacy uses bounding-box annotation to identify the privacy-sensitive area of an image, Mask R-CNN to generate sparse pose information for faces, and DSFD to detect faces in the image.
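As a crude stand-in for the anonymization step, the sketch below simply flattens the detected bounding box. DeepPrivacy instead *generates* a realistic replacement face with its GAN, conditioned on pose and background, so the result still looks natural:

```python
import numpy as np

def anonymize_box(image, box):
    """Flatten a bounding box to its mean value -- a naive stand-in
    for DeepPrivacy, which generates a new face with a GAN instead."""
    y0, x0, y1, x1 = box
    out = image.astype(float).copy()
    out[y0:y1, x0:x1] = image[y0:y1, x0:x1].mean()  # remove all detail
    return out

# Toy grayscale "image" with a detected face box at rows/cols 2..5.
img = np.arange(64, dtype=float).reshape(8, 8)
anon = anonymize_box(img, (2, 2, 5, 5))
print(np.unique(anon[2:5, 2:5]).size)  # 1 -- the region carries no detail
```

Both approaches guarantee the sensitive pixels are gone; the GAN approach additionally keeps the image usable for downstream tasks like detection or analytics.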

10. IMDb Movie Rating Prediction System

This Data Science project aims to rate a movie even before it is released. The project is divided into three parts. The first part parses the data accumulated from the IMDb website. This data includes fields like directors, producers, casting, production, movie description, awards, genres, budget, gross, and imdb_rating. You can create the movie_contents.json file by running the following line:

python3 nb_elements

The project's second part analyzes the data frames and observes the correlations between variables; for instance, whether or not the IMDb score correlates with the number of awards and the worldwide gross. The final part uses Machine Learning (Random Forest) to predict the IMDb rating from the most relevant variables.
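The correlation check in part two can be sketched with synthetic stand-ins for the parsed fields. The numbers below are invented purely for illustration, not real IMDb data:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-ins for parsed IMDb fields (invented, not real data).
awards = rng.integers(0, 20, 200)
gross = awards * 5e6 + rng.normal(0, 2e7, 200)        # gross loosely tracks awards
rating = 5 + 0.15 * awards + rng.normal(0, 0.5, 200)  # rating loosely tracks awards

# Inspect pairwise correlations before committing to model features.
print(round(np.corrcoef(awards, gross)[0, 1], 2),
      round(np.corrcoef(awards, rating)[0, 1], 2))
```

Variables that show strong correlations with imdb_rating are the natural candidates to feed into the Random Forest in part three.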

Wrapping up

These are some of the most useful Data Science projects on GitHub that you can recreate to sharpen your real-world Data Science skills. The more time and effort you invest in building Data Science projects, the better you will get at model building. 

If you are curious to learn about data science, check out IIIT-B & upGrad’s Executive PG Programme in Data Science which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 with industry mentors, 400+ hours of learning and job assistance with top firms.


Rohit Sharma

Blog Author
Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

Frequently Asked Questions (FAQs)

1. How does contributing to open-source projects benefit us?

Open-source projects are those projects whose source code is open to all and anyone can access it to make modifications to it. Contributing to open-source projects is highly beneficial as it not only sharpens your skills but also gives you some big projects to put on your resume. As many big companies are shifting to open-source software, it will be profitable for you if you start contributing early. Some of the big names like Microsoft, Google, IBM, and Cisco have embraced open source one way or another. There is a large community of proficient open-source developers out there who are constantly contributing to make the software better and updated. The community is highly beginner-friendly and always ready to step up and welcome new contributors. There is good documentation that can guide your way to contributing to open source.

2. What is the HOG algorithm?

Histogram of Oriented Gradients (HOG) is an object-detection technique used in computer vision. If you are familiar with edge orientation histograms, you can relate to HOG. The method measures the occurrences of gradient orientations in a localized portion of an image: it computes weighted-vote orientation gradients over 16×16 pixel squares instead of computing gradients for every pixel. The implementation is divided into five steps: gradient computation, orientation binning, descriptor blocks, block normalization, and object recognition.

3. What are the steps required to build an ML model?

The following steps must be followed to develop an ML model. First, gather the dataset for your model: 80% of this data will be used for training, and the remaining 20% for testing and model validation. Then, select a suitable algorithm; the choice depends entirely on the problem type and the dataset. Next comes training the model, which involves running it against various inputs and re-adjusting it according to the results. This process is repeated until the most accurate results are achieved.
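The 80/20 split described above can be sketched in NumPy (library train/test utilities do the same thing with more options):

```python
import numpy as np

def train_test_split_80_20(X, y, seed=0):
    """Shuffle the indices, then hold out the last 20% for testing --
    the split described in the answer above."""
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = int(0.8 * len(X))
    train, test = idx[:cut], idx[cut:]
    return X[train], X[test], y[train], y[test]

X, y = np.arange(20).reshape(10, 2), np.arange(10)
X_tr, X_te, y_tr, y_te = train_test_split_80_20(X, y)
print(len(X_tr), len(X_te))  # 8 2
```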
