
Apache Spark Tutorial For Beginners: Learn Apache Spark With Examples

Last updated: 26th Mar, 2020

Introduction

Data is everywhere – from a small startup’s customer logs to a huge multinational company’s financial sheets. Companies use this data to understand how their business is performing and where they can improve. Peter Sondergaard, Senior Vice President of Gartner Research, put it well: information is the oil of the 21st century, and analytics is the combustion engine.

But as companies grow, so do their customers, stakeholders, business partners and products. The amount of data they have to handle becomes huge.

All this data has to be analyzed to create better products for customers. But terabytes of data produced per second cannot be handled with Excel sheets and logbooks. Huge datasets call for tools such as Apache Spark.

We will get into the details of the software through an introduction to Apache Spark.


What is Apache Spark?

Apache Spark is an open-source cluster computing framework. It is essentially a data processing system for handling huge workloads and data sets. It can process large data sets quickly and distribute tasks across multiple systems to ease the workload. Its simple API takes the burden off developers who would otherwise be overwhelmed by two intimidating terms: big data processing and distributed computing!

Apache Spark started off as a research project at UC Berkeley’s AMPLab, led by Matei Zaharia, who is considered the founder of Spark. The project was open-sourced in 2010 under a BSD license. It became an incubated project under the Apache Software Foundation in 2013 and one of the foundation’s top-level projects in 2014.

By 2015, Spark had more than 1,000 contributors, making it one of the most active projects in the Apache Software Foundation and in the world of big data. Developers from over 200 companies have contributed to the project since 2009.

But why all this craziness over Spark?

This is because Spark can handle tons of data and process it in parallel, distributed over thousands of connected virtual or physical servers. It has a huge set of APIs and libraries that work with several programming languages such as Python, R, Scala and Java. It supports streaming data and complicated tasks such as graph processing and machine learning. These game-changing features keep the demand for Apache Spark sky high.

It supports a wide range of data stores, such as Hadoop’s HDFS and Amazon S3, and NoSQL databases such as MongoDB, Apache HBase, MapR Database and Apache Cassandra. It also supports Apache Kafka and MapR Event Store.


Apache Spark Architecture

After exploring this introduction to Apache Spark, we will now look at its structure.

Its architecture is well-defined and has two primary components:

Resilient Distributed Datasets (RDD)

This is a collection of data items stored on the worker nodes of a Spark cluster. A cluster is a distributed collection of machines on which Spark is installed. RDDs are called resilient because they can recover lost data after a failure, and distributed because they are spread across multiple nodes of the cluster.

Two types of RDDs are supported by Spark:

  • Hadoop datasets created from files on the HDFS (Hadoop Distributed File System)
  • Parallelized collections based on Scala collections

RDDs can be used for two types of operations:

  • Transformations – These operations create a new RDD from an existing one, for example map and filter (see the sketch after this list)
  • Actions – These instruct Spark to perform a computation and return the result to the driver, for example collect and count. We will learn more about drivers in the upcoming sections
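
To make the distinction concrete, here is a minimal PySpark sketch (the input values are hypothetical). The map and filter calls only describe new RDDs; nothing runs until an action such as collect or count is called:

    # Transformations vs. actions on an RDD
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "rdd-demo")

    numbers = sc.parallelize([1, 2, 3, 4, 5])     # a parallelized collection
    squares = numbers.map(lambda x: x * x)        # transformation: lazy, defines a new RDD
    evens = squares.filter(lambda x: x % 2 == 0)  # transformation: still lazy

    print(evens.collect())  # action: triggers computation, prints [4, 16]
    print(squares.count())  # action: prints 5

    sc.stop()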

DAG (Directed Acyclic Graph)

This can be considered a sequence of operations on data. A DAG is a combination of vertices and edges: each vertex represents an RDD and each edge represents the computation to be performed on that RDD. The graph therefore contains all the operations applied to the RDD.

The graph is directed because each node is connected to the next. It is acyclic because there is no loop or cycle within it: once a transformation is performed, execution cannot return to an earlier state. A transformation in Apache Spark is an operation that moves a data partition from state A to state B.
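
You can actually inspect this graph: every RDD carries its lineage, and PySpark's toDebugString method prints the DAG of transformations that will run when an action is triggered. A minimal sketch:

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "dag-demo")

    rdd = sc.parallelize(range(10))             # vertex: the base RDD
    doubled = rdd.map(lambda x: x * 2)          # edge: map computation, new vertex
    filtered = doubled.filter(lambda x: x > 5)  # another edge and vertex

    # Print the lineage (the DAG) Spark will execute for this RDD
    print(filtered.toDebugString().decode("utf-8"))
    print(filtered.collect())  # [6, 8, 10, 12, 14, 16, 18]

    sc.stop()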

So, how does this architecture work? Let us see.

The Apache Spark architecture has two primary daemons and a cluster manager. These are the master and worker daemons. A daemon is a program that runs as a background process. A cluster in Spark can have many worker daemons but only a single master daemon.

Inside the master node, there is a driver program that executes the Spark application. The interactive shell you might use to run the code acts as the driver program. Inside the driver program, the Spark Context is created. The Spark Context and the driver program execute a job with the help of a cluster manager.

The job is split into many tasks and distributed to the worker nodes, which run the tasks on the RDDs. The results are returned to the Spark Context. When you increase the number of workers, jobs can be divided into more partitions and run in parallel over many systems. This decreases the workload on each machine and reduces the job’s completion time.
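
Here is a minimal sketch of that flow. The driver program below creates the Spark Context; "local[4]" is a stand-in for a real cluster manager URL (such as a spark:// or yarn master) and simply runs four worker threads on one machine:

    from pyspark import SparkConf, SparkContext

    # The driver program configures and creates the Spark Context
    conf = SparkConf().setAppName("driver-demo").setMaster("local[4]")
    sc = SparkContext(conf=conf)

    # This job is split into one task per partition; the tasks run on
    # the workers and the result comes back to the driver.
    data = sc.parallelize(range(1, 1001), numSlices=4)  # 4 partitions
    print(data.sum())  # 500500, computed in parallel

    sc.stop()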

Apache Spark: Benefits

These are the advantages of using Apache Spark:

Speed

While executing jobs, data is first stored in RDDs. As this data sits in memory, it can be accessed quickly and the job executes faster. Along with in-memory caching, Spark has optimized query execution, so analytic queries run faster too. For processing large-scale data, Spark can be up to 100 times faster than Hadoop MapReduce.
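
A minimal sketch of the effect of in-memory caching (the timings are illustrative and will vary by machine):

    import time
    from pyspark import SparkContext

    sc = SparkContext("local[*]", "cache-demo")

    rdd = sc.parallelize(range(5_000_000)).map(lambda x: x * x)
    rdd.cache()  # keep the computed RDD in memory after the first use

    start = time.time()
    rdd.count()  # first action: computes the RDD and caches it
    print("cold run:", time.time() - start)

    start = time.time()
    rdd.count()  # second action: served from memory, noticeably faster
    print("warm run:", time.time() - start)

    sc.stop()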

Handling multiple workloads

Apache Spark can handle multiple workloads at a time: interactive queries, graph processing, machine learning and real-time analytics. A single Spark application can incorporate many workloads easily.

Ease of use

Apache Spark has easy-to-use APIs for handling large datasets, including more than 100 operators for building parallel applications. These operators transform data, and semi-structured data can be manipulated through the DataFrame APIs.
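
As a minimal sketch of those high-level operators, the DataFrame code below (with hypothetical column names and records) aggregates semi-structured data without any hand-written parallel code:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("df-demo").getOrCreate()

    # Hypothetical semi-structured order records
    df = spark.createDataFrame(
        [("alice", "books", 12.0), ("bob", "books", 5.0), ("alice", "music", 7.5)],
        ["user", "category", "amount"],
    )

    # groupBy/agg are two of the many built-in operators
    (df.groupBy("category")
       .agg(F.sum("amount").alias("total"), F.count("*").alias("orders"))
       .show())

    spark.stop()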

Language support

Spark is a developer favourite because it supports multiple programming languages such as Java, Python, Scala and R, giving you several options for developing your applications. The APIs are also developer-friendly: they hide the complexity of distributed processing behind high-level operators, which reduces the amount of code needed.

Efficiency

Spark uses lazy evaluation: all transformations on RDDs are lazy in nature. Their results are not produced straight away; instead, a new RDD is simply defined from an existing one, and nothing is computed until an action is called. This lets the user organize a Spark program into several smaller operations, which makes programs more manageable.

Lazy evaluation increases the speed of the system and its efficiency.
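
A minimal sketch of this laziness: the traced function below only runs when an action forces the computation (in local mode its output appears in the same console):

    from pyspark import SparkContext

    sc = SparkContext("local[*]", "lazy-demo")

    def traced(x):
        print("processing", x)  # runs only when an action is triggered
        return x + 1

    rdd = sc.parallelize([1, 2, 3]).map(traced)   # lazy: nothing printed yet
    print("transformation defined, no work done")

    print(rdd.collect())  # action: traced() runs now; prints [2, 3, 4]

    sc.stop()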


Community support

Being one of the largest open-source big data projects, Spark has developers from more than 200 companies working on it. The community started in 2009 and has been growing ever since. So, if you face a technical error, you are likely to find a solution online, posted by other developers.

You might also find many freelance or full-time developers ready to assist you in your Spark project.

Real-time streaming

Spark is famous for streaming real-time data. This is made possible through Spark Streaming, which is an extension of the core Spark API. This allows data scientists to handle real-time data from various sources such as Amazon Kinesis and Kafka. The processed data can then be transferred to databases, file systems and dashboards.

The process is efficient in the sense that Spark Streaming recovers quickly from failures, balances load well and uses resources efficiently.
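
A minimal Spark Streaming sketch: it counts words arriving on a local socket in five-second micro-batches. The host and port are hypothetical (feed it by running nc -lk 9999 in another terminal); Kafka and Kinesis sources follow the same pattern through their own connectors:

    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext("local[2]", "streaming-demo")  # 2+ threads: receiver + processing
    ssc = StreamingContext(sc, batchDuration=5)      # 5-second micro-batches

    lines = ssc.socketTextStream("localhost", 9999)  # hypothetical live source
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.pprint()  # print each batch's word counts

    ssc.start()
    ssc.awaitTermination()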

Applications of Apache Spark

After this introduction to Apache Spark and its benefits, let us look at its different applications:

Machine learning

Apache Spark’s ability to store data in memory and execute queries on it repeatedly makes it a good option for training ML algorithms. Training is iterative, and running similar queries against cached data reduces the time required to determine the best possible solution.

Spark’s Machine Learning Library (MLlib) can do advanced analytics operations such as predictive analysis, classification, sentiment analysis, clustering and dimensionality reduction.
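
A minimal MLlib sketch, clustering a hypothetical two-feature dataset with k-means:

    from pyspark.sql import SparkSession
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.clustering import KMeans

    spark = SparkSession.builder.appName("mllib-demo").getOrCreate()

    # Hypothetical points forming two obvious groups
    df = spark.createDataFrame(
        [(0.0, 0.1), (0.2, 0.0), (9.0, 9.1), (9.2, 8.9)],
        ["x", "y"],
    )
    features = VectorAssembler(inputCols=["x", "y"],
                               outputCol="features").transform(df)

    model = KMeans(k=2, seed=42).fit(features)  # train the clustering model
    model.transform(features).show()            # adds a 'prediction' column

    spark.stop()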

Data integration

Data produced across the different systems of an organization is not always clean and organized. Spark is a very efficient tool for performing ETL (extract, transform, load) operations on this data: it pulls data from different sources, cleans and organizes it, and loads it into another system for analysis.
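
A minimal ETL sketch in PySpark (the file paths and column names are hypothetical):

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-demo").getOrCreate()

    # Extract: read raw data from one source
    raw = spark.read.csv("/data/raw/orders.csv", header=True, inferSchema=True)

    # Transform: clean and organize it
    clean = (raw.dropDuplicates()
                .na.drop(subset=["order_id"])
                .withColumn("amount", F.col("amount").cast("double")))

    # Load: write it to another system for analysis
    clean.write.mode("overwrite").parquet("/data/curated/orders")

    spark.stop()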


Interactive analysis

This is a process through which users can perform analytics on live data. With the Structured Streaming feature in Spark, users can run interactive queries against live data, for example against a live web session to boost web analytics. Machine learning algorithms can also be applied to these live data streams.
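
A minimal Structured Streaming sketch: a continuously updated word count over a live text stream, printed to the console. The socket source and port are hypothetical stand-ins for a real live feed:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("structured-demo").getOrCreate()

    # Hypothetical live source: text lines from a local socket
    lines = (spark.readStream.format("socket")
                  .option("host", "localhost")
                  .option("port", 9999)
                  .load())

    word_counts = (lines
        .select(F.explode(F.split(lines.value, " ")).alias("word"))
        .groupBy("word")
        .count())

    # The result table updates as new data arrives
    query = (word_counts.writeStream
                        .outputMode("complete")
                        .format("console")
                        .start())
    query.awaitTermination()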

Fog computing

IoT (Internet of Things) deals with large amounts of data arising from devices fitted with sensors, creating a network of interconnected devices and users. As an IoT network expands, there is a growing need for a distributed parallel processing system.

So, data processing and storage are decentralized through fog computing, with Spark doing the heavy lifting. For this, Spark offers powerful components such as Spark Streaming, GraphX and MLlib.

Conclusion


We have learnt that Apache Spark is fast, effective and feature-rich. That is why companies such as Huawei, Baidu, IBM, JP Morgan Chase, Lockheed Martin and Microsoft use it to accelerate their business. It is now popular in fields such as retail, financial services, healthcare management and manufacturing.

As the world becomes more dependent on data, Apache Spark will continue to be an important data processing tool in the future.

If you are interested in knowing more about Big Data, check out our Advanced Certificate Programme in Big Data from IIIT Bangalore.

Learn Software Development Courses online from the World’s top Universities. Earn Executive PG Programs, Advanced Certificate Programs or Masters Programs to fast-track your career.


Utkarsh Singh

Blog Author

Frequently Asked Questions (FAQs)

1. How does Apache Spark work?

Apache Spark integrates with existing architecture very easily. There are four different installation modes for Apache Spark: Local, Standalone, YARN client and YARN cluster, and each has its own way of dealing with tasks. For Big Data operations, work is divided into Spark batch jobs or streaming jobs. In batch jobs, data is collected in data stores and the batch jobs analyse it, pulling data from repositories for further analysis. Spark Streaming jobs, by contrast, process data in real time, using both streaming and historical data for effective data management. Both kinds of jobs are very efficient at their tasks.

2. What are Apache Spark’s benefits over MapReduce?

Apache Spark has numerous benefits over MapReduce. To begin with, Spark’s in-memory processing lets it operate up to 100x faster than MapReduce, which relies on persistent storage: Spark caches operational data in memory, whereas MapReduce keeps its operational data on disk. MapReduce only supports batch processing, while Spark has built-in libraries that handle batch processing, SQL queries, streaming and machine learning at the same time. Finally, MapReduce has no support for iterative computing, whereas Spark handles repeated computations flexibly.

3. What do companies think about Hadoop and Apache Spark?

Companies today compete head-to-head, and to stay ahead they must work with the latest tools and technology. Many are already using Spark for their data processing operations. However dominant a platform it may become, it still has certain limitations and must keep evolving to realise its full potential. Apache Spark is widely seen as part of the future of Big Data and is not going anywhere. So, whichever tool suits their data processing requirements, companies will be happy to implement it.
