With Data Science taking the industry by storm, there’s a massive demand for skilled and qualified Data Science experts. Naturally, the competition in the present market is fierce. In such a scenario, employers not only look for formal education and professional certifications, but they also demand practical experience. And what better than Data Science projects to prove your worth and showcase your real-world Data Science skills to potential employers!
If you aspire to enter the Data Science domain, the best way to build your portfolio from the ground up is to work on Data Science projects. We’ve created this post to inspire you to develop your own Data Science projects.
Since GitHub is an excellent repository of Data Science project ideas, here is a list of Data Science projects on GitHub that you should check out! To gain more knowledge and practical applications, check out our data science courses from top universities.
10 Best Data Science Projects on GitHub
The face recognition project makes use of Deep Learning and the HOG (Histogram of Oriented Gradients) algorithm. This face recognition system is designed to find faces in an image (using the HOG algorithm), align them with affine transformations (using an ensemble of regression trees), encode faces (FaceNet), and make predictions (linear SVM).
Using the HOG algorithm, instead of computing gradients for every individual pixel of an image, you compute them over 16×16-pixel squares, each of which casts a weighted vote for its dominant gradient orientation. This generates a HOG image that represents the fundamental structure of a face. In the next step, you use the dlib Python library to create and view HOG representations and find which part of the image bears the closest resemblance to the trained HOG pattern.
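The per-square voting idea can be sketched in NumPy. This is a simplified illustration of HOG's gradient-histogram step, not dlib's actual implementation (which adds overlapping blocks and normalization); the toy image and cell size here are made up for the example.

```python
import numpy as np

def hog_cell_histograms(image, cell=16, bins=9):
    """Split an image into cell x cell squares and, for each square,
    build a histogram of gradient orientations weighted by magnitude."""
    gy, gx = np.gradient(image.astype(float))
    magnitude = np.hypot(gx, gy)
    # Unsigned orientation in [0, 180) degrees, as classic HOG uses
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180

    h, w = image.shape
    histograms = np.zeros((h // cell, w // cell, bins))
    for i in range(h // cell):
        for j in range(w // cell):
            mag = magnitude[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            ori = orientation[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            # Each pixel votes for an orientation bin, weighted by magnitude
            hist, _ = np.histogram(ori, bins=bins, range=(0, 180), weights=mag)
            histograms[i, j] = hist
    return histograms

# A toy 32x32 "image" with a vertical edge down the middle
img = np.zeros((32, 32))
img[:, 16:] = 255.0
hists = hog_cell_histograms(img)
print(hists.shape)  # (2, 2, 9): one 9-bin histogram per 16x16 cell
```

The 2×2×9 result is the kind of compact structural summary the face detector matches against its trained HOG pattern.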
Bike-sharing systems let you rent bicycles and return them, all through an automated system. This project is more like a Kaggle competition wherein you will have to combine historical usage patterns with weather data to predict the demand for bike rental services for the Capital Bikeshare program in Washington, D.C.
The primary aim of this Kaggle competition is to create an ML model, based explicitly on contextual features, that can predict the number of bikes rented. The challenge has two parts: in the first, you focus on understanding, analyzing, and processing the datasets; the second is all about designing the model using an ML library.
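The second part might look something like the following sketch, which uses scikit-learn's RandomForestRegressor on made-up contextual features (temperature, humidity, a working-day flag). The real competition data has many more fields; this only illustrates the fit-and-predict workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the competition's contextual features:
# temperature (deg C), humidity (%), working day (0/1)
n = 500
temp = rng.uniform(0, 35, n)
humidity = rng.uniform(20, 100, n)
workingday = rng.integers(0, 2, n)
X = np.column_stack([temp, humidity, workingday])

# Fake demand: warmer, drier working days see more rentals, plus noise
y = 3 * temp - 0.5 * humidity + 40 * workingday + rng.normal(0, 10, n)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X[:400], y[:400])              # train on the first 400 rows
predictions = model.predict(X[400:])     # predict demand for the rest
print(model.score(X[400:], y[400:]))     # R^2 on held-out rows
```

Swapping in the actual Capital Bikeshare dataset (and richer features like hour of day and season) is where the real work of the project lies.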
This project is an excellent application of NLP. On September 1, 2019, the Mexican government released an annual report in the form of a PDF. So, your aim in this project will be to extract text from the PDF, clean it, run it via an NLP pipeline, and visualize the results using graphical representations.
For this project, you will have to use multiple Python libraries, including:
- PyPDF2 to extract text from PDF files.
- spaCy to pass the extracted text through an NLP pipeline.
- Pandas to extract and analyze insights from datasets.
- NumPy for speedy matrix operations.
- Matplotlib for designing plots and graphs.
- Seaborn to improve the style of plots/graphs.
- Geopandas to plot maps.
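The clean-and-analyze stage of such a pipeline can be sketched with just the standard library (the real project uses spaCy for linguistic processing; the raw text below is an invented stand-in for what PyPDF2 would extract from the report).

```python
import re
from collections import Counter

def clean_and_count(raw_text, stopwords=frozenset({"the", "of", "and", "a", "in"})):
    """Lowercase, strip non-letters, drop stopwords, and count word frequency."""
    words = re.findall(r"[a-z]+", raw_text.lower())
    return Counter(w for w in words if w not in stopwords)

# Stand-in for text pulled out of the PDF with PyPDF2
raw = "The government report covers health, health spending, and education."
freq = clean_and_count(raw)
print(freq.most_common(2))  # [('health', 2), ...]
```

Frequencies like these are what you would then feed to Matplotlib or Seaborn for the visualization step.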
ALBERT is based on BERT, a Google project that brought about a radical change in the field of NLP. It is an enhanced implementation of BERT, designed for self-supervised learning of language representations using TensorFlow.
BERT’s pre-trained models are enormous, which makes it challenging to unpack them, plug them into a model, and run them on local machines. This is where ALBERT comes in: it helps you achieve state-of-the-art performance on the main benchmarks with 30% fewer parameters. Although albert_base_zh has merely 10% of BERT’s parameters, it still retains the original accuracy of BERT.
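One of the tricks behind ALBERT's smaller footprint is cross-layer parameter sharing: instead of giving every transformer layer its own weights, all layers reuse one set. A back-of-the-envelope comparison (toy counting of a single weight matrix per layer; real transformer layers contain several matrices, and ALBERT also factorizes its embedding matrix):

```python
# Toy parameter count: 12 layers, each modeled as one d x d weight matrix
d = 768          # hidden size (BERT-base uses 768)
layers = 12

bert_style = layers * d * d      # every layer owns its weights
albert_style = 1 * d * d         # one weight set shared across all layers

print(bert_style // albert_style)  # 12x fewer parameters for this component
```

This is why ALBERT checkpoints are far easier to download, store, and run on a local machine.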
If cybersecurity interests you, you will love to work on this project! Launched by FireEye, StringSifter is an ML tool that can automatically rank strings based on their malware analysis relevance.
Usually, standard malware programs include strings for performing specific operations, such as creating a registry key or copying files from one location to another. StringSifter is a fantastic solution for mitigating cyber threats. However, you must have Python 3.6 or above to install and run StringSifter.
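The input StringSifter ranks is the kind of output the classic `strings` utility produces. Here is a minimal Python sketch of that upstream extraction step (the byte blob is an invented example; StringSifter's own ranking model then scores such strings by malware-analysis relevance):

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Pull runs of printable ASCII out of a binary blob,
    the way the classic `strings` utility does."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Toy "binary" containing two Windows API names a malware sample might use
blob = b"\x00\x01RegOpenKeyExA\x00\xff\x90CopyFileA\x02\x03ab\x00"
print(extract_strings(blob))  # ['RegOpenKeyExA', 'CopyFileA']
```

Strings like `RegOpenKeyExA` (registry access) and `CopyFileA` (file copying) are exactly the kind of high-relevance candidates a ranker should surface above random junk.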
Given that the Web and online platforms are flooded with images today, there’s vast scope for working with image data in the modern industry. So, if you can create an image-oriented project, it will be a highly valued asset for many.
Tiler is such an image tool that allows you to create unique images by combining many different kinds of smaller pictures or “tiles.” According to Tiler’s GitHub description, you can build an image out of “circles, lines, waves, cross stitches, Minecraft blocks, Legos, letters, paper clips,” and much more. With Tiler, you will have endless possibilities to make innovative image creations.
DeepCTR is an “easy-to-use, modular, and extendible package of Deep Learning-based CTR models.” It also includes numerous other vital elements and layers that can be very handy for building customized models.
Originally, the DeepCTR project was designed on TensorFlow. While TensorFlow is a commendable tool, it is not everyone’s cup of tea. Hence, the DeepCTR-Torch repository was created. The new version includes the complete DeepCTR code in PyTorch. You can install DeepCTR via pip using the following statement:
pip install -U deepctr-torch
With DeepCTR, it becomes easy to use any complex model with model.fit() and model.predict() functions.
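That fit/predict workflow can be illustrated with a tiny logistic-regression click predictor in NumPy. This is a stand-in for illustration only, not DeepCTR-Torch's actual classes; the point is that even a complex CTR model reduces to the same two calls.

```python
import numpy as np

class TinyCTRModel:
    """Minimal logistic-regression click predictor exposing the same
    fit()/predict() surface that DeepCTR-Torch models expose."""
    def __init__(self, lr=0.1, epochs=200):
        self.lr, self.epochs = lr, epochs

    def fit(self, X, y):
        X = np.column_stack([np.ones(len(X)), X])   # add a bias column
        self.w = np.zeros(X.shape[1])
        for _ in range(self.epochs):                # batch gradient descent
            p = 1 / (1 + np.exp(-X @ self.w))
            self.w -= self.lr * X.T @ (p - y) / len(y)
        return self

    def predict(self, X):
        X = np.column_stack([np.ones(len(X)), X])
        return 1 / (1 + np.exp(-X @ self.w))        # click probabilities

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))              # 3 fake ad/user features
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # fake click labels
model = TinyCTRModel().fit(X, y)
print(model.predict(X[:5]).round(2))
```

With DeepCTR-Torch itself, you would instead construct one of its provided models (e.g. DeepFM) and call the same two methods on your feature columns.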
Ever wondered how tech giants and industry leaders store, extract, and manage their data? It is with the help of tools like TubeMQ, Tencent’s open-source, distributed messaging queue (MQ) system.
TubeMQ has been in service since 2013, delivering high-performance storage and transmission of massive volumes of big data. Having amassed over seven years of production experience in data storage and transmission, TubeMQ has the upper hand over other MQ tools. It promises excellent performance and stability in production practice, and it comes at a relatively low cost. The TubeMQ user guide provides detailed documentation of everything you need to know about the tool.
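The basic contract of any message queue (producers publish, consumers receive, the broker buffers in between) can be sketched with Python's standard library. This is only the pattern in miniature; TubeMQ implements it at datacenter scale with persistence, partitioning, and replication.

```python
import queue
import threading

q = queue.Queue()  # the broker's buffer, in miniature

def producer():
    for i in range(5):
        q.put(f"event-{i}")   # publish messages to the "topic"
    q.put(None)               # sentinel: no more messages

received = []

def consumer():
    while True:
        msg = q.get()
        if msg is None:
            break
        received.append(msg)  # process the message

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(received)  # ['event-0', ..., 'event-4']
```

The queue decouples the two threads: the producer never waits for the consumer to finish processing, which is exactly the property MQ systems provide between services.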
While each one of us loves to indulge in the digital and social media world from time to time, one thing we can all agree is lacking from the digital world is privacy. Once you upload a selfie or a video online, you will be watched, analyzed, and even criticized. In worst-case scenarios, your videos and images may end up being manipulated.
This is why we need tools like DeepPrivacy. It is a fully automatic anonymization technique for images that leverages a GAN (generative adversarial network). DeepPrivacy’s GAN model never sees any private or sensitive information, yet it can generate a fully anonymized image. It does so by studying and analyzing the original pose of the individual(s) and the background of the image. DeepPrivacy uses bounding-box annotations to identify the privacy-sensitive areas of an image, Mask R-CNN to generate sparse pose information for faces, and DSFD to detect the faces in the image.
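The bounding-box step can be illustrated in NumPy: locate a privacy-sensitive region and replace its pixels. Here the region is simply blanked out; DeepPrivacy instead fills it with a GAN-generated face conditioned on pose and background.

```python
import numpy as np

def mask_region(image, box):
    """Blank out a bounding box (top, left, bottom, right) in an image array.
    DeepPrivacy replaces this region with a synthesized face instead."""
    out = image.copy()
    top, left, bottom, right = box
    out[top:bottom, left:right] = 0
    return out

img = np.full((8, 8), 255)        # toy 8x8 grayscale image
anon = mask_region(img, (2, 2, 6, 6))
print(anon[4, 4], anon[0, 0])     # 0 255: inside the box masked, outside intact
```

Everything outside the box is untouched, which is why the anonymized image still looks natural in context.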
This Data Science project aims to rate a movie even before it is released. The project is divided into three parts. The first part seeks to parse the data accumulated from the IMDb website. This data will include information like directors, producers, cast, movie description, awards, genres, budget, gross, and imdb_rating. You can create the movie_contents.json file by writing the following line:
python3 parser.py nb_elements
In the project’s second part, the aim is to analyze the data frames and observe the correlations between variables — for instance, whether or not the IMDb score is correlated with the number of awards and the worldwide gross. The final part will involve using Machine Learning (Random Forest) to predict the IMDb rating based on the most relevant variables.
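The final part might be sketched as follows with scikit-learn's RandomForestRegressor. The features here (awards, budget, gross) are invented stand-ins for illustration; the real project trains on the variables parsed into movie_contents.json.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)

# Invented stand-ins for parsed IMDb features
n = 300
awards = rng.integers(0, 20, n)
budget = rng.uniform(1, 200, n)           # millions of dollars
gross = budget * rng.uniform(0.5, 4, n)   # millions of dollars
X = np.column_stack([awards, budget, gross])

# Fake ratings loosely driven by awards and the gross-to-budget ratio
y = 5 + 0.15 * awards + 0.5 * (gross / budget) + rng.normal(0, 0.3, n)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X[:250], y[:250])              # train on 250 movies
print(forest.score(X[250:], y[250:]))     # R^2 on 50 held-out movies
```

Inspecting `forest.feature_importances_` afterward is one way to check which of the parsed variables actually drive the predicted rating.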
These are some of the most useful Data Science projects on GitHub that you can recreate to sharpen your real-world Data Science skills. The more time and effort you invest in building Data Science projects, the better you will get at model building.
If you are curious to learn about data science, check out IIIT-B & upGrad’s Executive PG Programme in Data Science, which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship from industry experts, 1-on-1 sessions with industry mentors, 400+ hours of learning, and job assistance with top firms.
How does contributing to open-source projects benefit us?
Open-source projects are projects whose source code is open to all: anyone can access it and make modifications to it. Contributing to open-source projects is highly beneficial, as it not only sharpens your skills but also gives you some big projects to put on your resume. As many big companies are shifting to open-source software, it will pay off if you start contributing early. Some of the big names, like Microsoft, Google, IBM, and Cisco, have embraced open source in one way or another. There is a large community of proficient open-source developers out there who are constantly contributing to keep software better and up to date. The community is highly beginner-friendly and always ready to step up and welcome new contributors, and there is good documentation to guide your way into contributing to open source.
What is the HOG algorithm?
Histogram of Oriented Gradients, or HOG, is an object detector used in computer vision. If you are familiar with edge orientation histograms, you can relate to HOG. This method measures the occurrences of gradient orientations in localized portions of an image. Instead of computing gradients for every individual pixel, the HOG algorithm computes them over 16×16-pixel squares, each of which casts a weighted vote for its dominant orientation. The implementation of this algorithm is divided into five steps: gradient computation, orientation binning, descriptor blocks, block normalization, and object recognition.
What are the steps required to build an ML model?
The following steps must be followed to develop an ML model. First, gather the dataset for your model; 80% of this data will be used for training and the remaining 20% for testing and validating the model. Then, select a suitable algorithm for your model; the choice depends entirely on the problem type and the dataset. Next comes training the model, which involves running it against various inputs and readjusting it according to the results. This process is repeated until the most accurate results are achieved.
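Condensed into code, the steps above look like this with scikit-learn (a toy synthetic dataset stands in for the one you would gather):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Step 1: gather a dataset (synthetic here, for illustration)
rng = np.random.default_rng(7)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)

# Step 2: hold out 20% for testing, train on the remaining 80%
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=7)

# Step 3: pick an algorithm suited to the problem (binary classification)
model = LogisticRegression()

# Step 4: train, then validate on the held-out 20%
model.fit(X_train, y_train)
print(model.score(X_test, y_test))  # accuracy on unseen data
```

If the held-out score is unsatisfactory, you loop back: adjust features, hyperparameters, or the algorithm itself, and retrain.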