5 Spark Optimization Techniques Every Data Scientist Should Know About

Last updated: 5th Oct, 2022
Be it a small startup or a large corporation, data is everywhere. This data is collected from a variety of sources, such as customer logs, office bills, cost sheets, and employee databases. Companies collect and analyze these data chunks to determine patterns and trends. These patterns help them in making important decisions for the enhancement of the business.

But this kind of data analysis and number crunching is not possible with Excel sheets alone. This is where data processing software technologies come in. One of the fastest and most widely used data processing frameworks is Apache Spark, and Spark optimization techniques are used to tune its performance and make the most of it.

We will learn about the techniques in a bit. Let us wrap our heads around the basics of this software framework.

What is Apache Spark?

Apache Spark is a world-famous open-source cluster computing framework that is used for processing huge data sets in companies. Processing these huge data sets and distributing these among multiple systems is easy with Apache Spark. It offers simple APIs that make the lives of programmers and developers easy.

Spark provides native bindings for programming languages such as Python, R, Scala, and Java. It supports machine learning, graph processing, and SQL workloads. Due to these benefits, Spark is used in banks, tech firms, financial organizations, telecommunication departments, and government agencies. To learn more about Apache Spark, check out our data science courses from recognized universities.

Spark is developed to encompass a broad range of workloads like iterative algorithms, batch applications, interactive queries, and streaming. Furthermore, it decreases the burden of maintaining distinct tools.

Let’s go through the features of Apache Spark that help in spark optimization:

Features of Apache Spark:

  1. Speed:

Spark streamlines running applications in a Hadoop cluster. It can run up to 100 times faster in memory and up to 10 times faster on disk. It achieves these speeds by reducing the number of read/write operations to disk and storing intermediate processing data in memory.

  2. Supports multiple languages:

Spark offers built-in APIs in Python, Java, and Scala, so you can write applications in different languages. It also provides over 80 high-level operators for interactive querying.

  3. Advanced Analytics:

Besides supporting ‘Map’ and ‘Reduce’, Spark also supports streaming data, SQL queries, graph algorithms, and machine learning (ML).

  4. Generality:

Spark powers a stack of libraries, including MLlib for machine learning, Spark SQL and DataFrames, Spark Streaming, and GraphX. You can combine these libraries in the same application.

Before going into the details of spark optimization techniques, let’s go through the Apache Spark architecture:

Architecture of Apache Spark

The run-time architecture of Apache Spark consists of the following components:

Spark driver or master process

This converts programs into tasks and then schedules them for executors (slave processes). The task scheduler distributes these tasks to executors.


Cluster manager

The Spark cluster manager is responsible for launching executors and drivers. It schedules and allocates resources across several host machines for a cluster.

Executors

Executors, also called slave processes, are entities where tasks of a job are executed. After they are launched, they run until the lifecycle of the Spark application ends. The execution of a Spark job does not stop if an executor fails.

Resilient Distributed Datasets (RDD)

This is a collection of immutable datasets distributed over the nodes of a Spark cluster. Notably, a cluster is a collection of distributed systems where Spark can be installed. RDDs are divided into multiple partitions, and they are called resilient because they can recompute lost data in case of a failure.



DAG (Directed Acyclic Graph)

Spark creates a graph as soon as code is entered into the Spark console. When an action (an instruction for executing an operation) is triggered, this graph is submitted to the DAGScheduler.

This graph can be considered a sequence of operations on data. A DAG consists of vertices and edges: vertices represent RDDs, and edges represent the computations to be performed on those RDDs. The graph is directed because each edge points from one RDD to the next, and acyclic because it contains no loops or cycles.
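
To make this concrete, here is a minimal PySpark sketch (the data and functions are illustrative, not from the article): transformations such as map and filter only extend the DAG, and nothing executes until an action such as count() submits the graph to the DAGScheduler.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dag-example").getOrCreate()

# Transformations are lazy: each call only adds vertices/edges to the DAG.
numbers = spark.sparkContext.parallelize(range(1_000_000))
squared = numbers.map(lambda x: x * x)           # transformation
evens = squared.filter(lambda x: x % 2 == 0)     # transformation

# An action triggers the DAGScheduler: the graph is split into stages,
# stages into tasks, and the tasks are shipped to executors.
print(evens.count())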


Why choose Spark compared to a SQL-only engine?

Apache Spark is a fast, general-purpose cluster computing engine. It can be installed in stand-alone mode or on a Hadoop cluster. Using Spark, programmers can quickly write applications in Scala, Java, R, Python, and SQL, which makes it accessible to data scientists, developers, and business professionals with a statistics background. Moreover, Spark lets users connect to any data source and expose it as tables that SQL clients can query. Spark also allows the implementation of interactive machine learning algorithms.
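
As a hedged illustration of exposing a data source as a table and querying it with SQL, here is a small PySpark sketch; the file path and column names are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-view-example").getOrCreate()

# Load a data source (hypothetical path) and register it as a temporary view.
customers = spark.read.json("/data/customers.json")
customers.createOrReplaceTempView("customers")

# The view can now be queried with plain SQL from this session.
spark.sql(
    "SELECT country, COUNT(*) AS customer_count FROM customers GROUP BY country"
).show()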

When using a SQL-only engine such as Apache Impala, Apache Hive, or Apache Drill, users can query data stored across multiple databases only with SQL or SQL-like languages, which makes those frameworks much narrower in scope than Spark. Now let’s go through different techniques for optimization in Spark:

 

Spark Optimization Techniques

Spark optimization techniques are used to modify the settings and properties of Spark to ensure that the resources are utilized properly and the jobs are executed quickly. All this ultimately helps in processing data efficiently.

The most popular Spark optimization techniques are listed below:

1. Data Serialization

Here, an in-memory object is converted into another format that can be stored in a file or sent over a network. This improves the performance of distributed applications.

It is one of the most effective Spark optimization techniques. Serialization improves any distributed application’s performance. By default, Spark uses the Java serializer on the JVM. Spark can also use the Kryo serializer instead, which provides better performance than the Java serializer.

The two ways to serialize data are:

  • Java serialization – Spark uses the ObjectOutputStream framework to serialize objects. Implementing java.io.Externalizable lets you control the serialization performance. This process offers lightweight persistence.
  • Kryo serialization – Spark can instead use the Kryo serialization library (v4), which is faster than Java serialization and produces more compact output. To get the best performance, classes have to be registered using the registerKryoClasses method. The Kryo serializer uses a compact binary format and offers roughly 10 times the speed of the Java serializer. To enable it for a Spark job, set the spark.serializer configuration property to org.apache.spark.serializer.KryoSerializer, as in the sketch below.
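
A minimal sketch of enabling Kryo in PySpark, assuming a fresh SparkSession; spark.serializer is the standard configuration key, and the buffer size shown is only an illustrative value.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("kryo-serialization-example")
    # Switch the JVM-side serializer from Java serialization to Kryo.
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    # Optional: raise the Kryo buffer limit for large objects (illustrative value).
    .config("spark.kryoserializer.buffer.max", "128m")
    .getOrCreate()
)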

2. Caching

This is an efficient technique used when the same data is required more than once. The cache() and persist() methods store the computed results of an RDD, Dataset, or DataFrame. cache() stores the data at the default storage level in memory, while persist() stores it at a user-defined storage level.

These methods can reduce cost and save time because repeated computations are reused instead of being recomputed.

The cache and persist methods keep the data set in memory when the need arises. They are useful when you want to store a small data set that is used frequently in your program. rdd.cache() always stores the data in memory, whereas rdd.persist() allows part of the data to be stored in memory and the rest on disk. The caching technique offers efficient optimization in Spark through the persist and cache methods, as in the sketch below.
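
Below is a small sketch of cache() and persist() on a DataFrame, assuming a hypothetical Parquet input path and column name; StorageLevel.MEMORY_AND_DISK illustrates a user-defined storage level.

from pyspark.sql import SparkSession
from pyspark import StorageLevel

spark = SparkSession.builder.appName("caching-example").getOrCreate()

events = spark.read.parquet("/data/events.parquet")   # hypothetical path

# cache() keeps the result at the default storage level; the first action
# below materializes it, and later actions reuse the cached data.
events.cache()
events.count()

# persist() lets you pick the storage level explicitly, e.g. part in memory
# and the rest spilled to disk. "user_id" is a hypothetical column.
frequent_users = (
    events.groupBy("user_id").count().persist(StorageLevel.MEMORY_AND_DISK)
)
frequent_users.show()

# Free the cached blocks once they are no longer needed.
events.unpersist()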

Read: Dataframe in Apache PySpark: Comprehensive Tutorial


3. Data Structure Tuning

We can reduce memory consumption in Spark by tweaking certain Java features that add overhead. This is possible in the following ways:

  • Use enumerated objects or numeric IDs in place of strings for keys.
  • Avoid using a lot of objects and complicated nested structures.
  • Set the JVM flag -XX:+UseCompressedOops if the heap size is less than 32 GB (see the sketch after this list).
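
A hedged sketch of passing the compressed-OOPs flag to executors via spark.executor.extraJavaOptions; it assumes executor heaps under 32 GB.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("data-structure-tuning-example")
    # Use compressed object pointers on executors whose heap is below 32 GB.
    # (The equivalent driver-side flag is usually set via spark-submit or
    # spark-defaults.conf, since the driver JVM is already running here.)
    .config("spark.executor.extraJavaOptions", "-XX:+UseCompressedOops")
    .getOrCreate()
)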

4. Garbage collection optimization

For garbage collection optimization, the G1 garbage collector (G1GC) should be used when running Spark applications. The G1 collector manages growing heaps well. GC tuning based on the generated logs is essential to control unexpected application behavior. But before this, you need to modify and optimize the program’s logic and code.

G1GC helps to decrease the execution time of the jobs by optimizing the pause times between the processes.

It is one of the best optimization techniques in spark when there is a huge garbage collection.

JVM garbage collection can be an issue when you have a large collection of unused objects. The first step in GC tuning is to collect statistics by enabling verbose GC logging when submitting Spark jobs. Ideally, GC overhead should stay below 10% of heap memory.

Since a Spark job’s backend runs on the JVM, these verbose GC statistics are the starting point for any further tuning, as in the sketch below.
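
A hedged sketch of enabling G1GC and verbose GC logging on executors; the exact logging flags differ between Java 8 and newer JVMs, so treat them as illustrative.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gc-tuning-example")
    # Enable the G1 collector and verbose GC statistics on executors
    # (flags shown are for a Java 8 style JVM; adjust for newer JVMs).
    .config(
        "spark.executor.extraJavaOptions",
        "-XX:+UseG1GC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps",
    )
    .getOrCreate()
)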


5. Memory Management

The memory used for storing computations, such as joins, shuffles, sorting, and aggregations, is called execution memory. The storage memory is used for caching and handling data stored in clusters. Both memories use a unified region M.

When the execution memory is not in use, the storage memory can use the space. Similarly, when storage memory is idle, execution memory can utilize the space. This is one of the most efficient Spark optimization techniques.

The executor owns a certain amount of total memory that is split into two parts, the storage block and the execution block. This split is controlled by two configuration options:

  1. spark.executor.memory: the total memory available to each executor. By default, it is 1 gigabyte.
  2. spark.memory.fraction: the fraction of that memory set aside for storage and execution (the unified region); the default is 0.6.

In Spark’s early versions, these two regions were fixed in size. If a job needed more execution space than was available, Spark had to spill data to disk, which decreases the application’s performance.

Conversely, if your application relies heavily on caching and the job fills all the storage space, Spark must evict cached data. It does so with a least recently used (LRU) strategy, freeing the blocks with the earliest access time. If your application doesn’t cache much, the executor has free storage memory sitting idle, yet the execution part is still fixed, so job performance suffers either way.

Spark introduced unified memory management in version 1.6. With it, the storage and execution regions are no longer fixed, so an executor can use the maximum available memory for whichever side needs it.

It is one of those optimization techniques in Spark that manage memory efficiently without compromising performance. Unified memory management rests on two premises.

The first premise is: evict storage, not execution. If execution data were removed, it would have to be read back almost immediately, whereas evicted cached data may never be needed again.

The second premise is that unified memory management lets the user specify a minimum amount of storage that cannot be evicted, for applications that depend heavily on caching.

This minimum unremovable amount is set through the spark.memory.storageFraction configuration option. By default, it is half of the unified memory region.

You may find memory management one of the easier PySpark optimization techniques to apply after going through the following summary:

  • Spill to disk when the execution memory is full.
  • Discard LRU blocks when the storage memory gets full.
  • If your app depends on caching, tune spark.memory.storageFraction (see the sketch after this list).
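
Putting the three memory settings together, here is a hedged configuration sketch for a cache-heavy job; the values are illustrative, not recommendations.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("memory-management-example")
    .config("spark.executor.memory", "4g")           # total heap per executor
    .config("spark.memory.fraction", "0.6")          # share of heap for the unified region M
    .config("spark.memory.storageFraction", "0.5")   # part of M protected for cached (storage) data
    .getOrCreate()
)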

Also Read: 6 Game Changing Features of Apache Spark

Conclusion

From the various Spark optimization techniques above, we can see how they help cut down processing time and process data faster. Developers and professionals apply these techniques according to the application and the amount of data in question.

If you are curious to learn about Spark optimization and data science, check out IIIT-B & upGrad’s Executive PG Programme in Data Science, which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 sessions with industry mentors, 400+ hours of learning, and job assistance with top firms.

Rohit Sharma

Rohit Sharma is the Program Director for the upGrad-IIIT Bangalore PG Diploma in Data Analytics Program.

Frequently Asked Questions (FAQs)

1. What are Spark optimization techniques?

Apache Spark makes it easy for enterprises to process data quickly and solve complex data problems. During the development of any program, it is important to take care of its performance. Spark optimization techniques help with in-memory data computations, which can be hindered only by constraints on memory, CPU, or other resources.

Every Spark optimization technique is used for a different purpose and performs specific actions. Some of the widely used Spark optimization techniques are:

1. Serialization
2. API selection
3. Advanced variables
4. Cache and persist
5. ByKey operation
6. File format selection
7. Garbage collection tuning
8. Level of parallelism

2. When should you not consider using Spark?

Apache Spark has plenty of use cases, but certain specialized needs are better served by other big data engines. In such cases, it is recommended to use another technology instead of Spark. Below are the use cases where you should not consider using Spark:

1. Low computing capacity – The default processing on Apache Spark takes place in the cluster memory. If your virtual machines or cluster has little computing capacity, you should go for other alternatives like Apache Hadoop.
2. Data ingestion in a publish-subscribe model – In this case, there are multiple sources and multiple destinations, with millions of records moving in a short time. Here, you shouldn't use Spark; use Apache Kafka instead.

3. Is Pandas faster than Apache Spark?

When you compare the computational speed of both Pandas DataFrame and the Spark DataFrame, you’ll notice that the performance of Pandas DataFrame is marginally better for small datasets. On the other hand, if the size of data increases, then it is found that the Spark DataFrame is capable enough to outperform the Pandas DataFrame. Thus, it will depend a lot on the amount of data.
