Apache Pig Architecture in Hadoop: Features, Applications, Execution Flow

Why Apache Pig is so Popular

To analyze and process big data, Hadoop uses MapReduce, a processing framework whose jobs are typically written in Java. Developers find it challenging to write and maintain these lengthy Java programs. With Apache Pig, developers can quickly analyze and process large datasets without writing complex Java code. Apache Pig, developed by Yahoo researchers, executes MapReduce jobs on extensive datasets and provides an easy interface for developers to process the data efficiently.

Apache Pig emerged as a boon for those who do not know Java programming. Today, Apache Pig is very popular among developers because it offers flexibility, reduces code complexity, and requires less effort.

MapReduce vs. Apache Pig

The following table summarizes the differences between Apache Pig and MapReduce:

| Apache Pig | MapReduce |
| --- | --- |
| Scripting language | Compiled language |
| Provides a higher level of abstraction | Provides a low level of abstraction |
| Requires few lines of code (10 lines of Pig Latin can summarize 200 lines of MapReduce code) | Requires more extensive code (more lines of code) |
| Requires less development time and effort | Requires more development time and effort |
| Lower code efficiency | Higher code efficiency in comparison to Apache Pig |

Apache Pig Features

Apache Pig offers the following features:

  • Allows programmers to write far fewer lines of code: roughly 200 lines of Java can be expressed in about ten lines of Pig Latin (see the short script after this list).
  • Its multi-query approach reduces development time.
  • Provides a rich set of operators for performing operations like join, filter, sort, load, and group.
  • The Pig Latin language is very similar to SQL, so programmers with good SQL knowledge find it easy to write Pig scripts.
  • Handles the analysis of both structured and unstructured data.
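As an illustration, here is a minimal Pig Latin sketch that chains several of these operators; the input file users.txt and its schema are hypothetical:

    -- load a hypothetical tab-separated file of names and ages
    users  = LOAD 'users.txt' AS (name:chararray, age:int);
    adults = FILTER users BY age >= 18;       -- keep only adult records
    by_age = GROUP adults BY age;             -- group tuples by age
    counts = FOREACH by_age GENERATE group AS age, COUNT(adults) AS n;
    DUMP counts;                              -- print the result to the screen

Each statement defines a new relation, and Pig only triggers actual processing when a DUMP or STORE statement is reached.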

Apache Pig Applications

A few of the common Apache Pig applications are:

  • Processing large volumes of data
  • Supporting quick prototyping and ad-hoc queries across large datasets
  • Performing data processing in search platforms
  • Processing time-sensitive data loads
  • De-identifying user call data, for example at telecom companies

What is Apache Pig?

MapReduce requires programs to be translated into map and reduce stages. Since not all data analysts were familiar with MapReduce, Yahoo researchers introduced Apache Pig to bridge the gap. Pig was built on top of Hadoop; it provides a high level of abstraction and lets programmers spend less time writing complex MapReduce programs. Pig is not an acronym; it was named after the domestic animal: just as a pig eats almost anything, Apache Pig can work on any kind of data.

Apache Pig Architecture in Hadoop

The Apache Pig architecture consists of a Pig Latin interpreter that uses Pig Latin scripts to process and analyze massive datasets. Programmers use the Pig Latin language to analyze large datasets in the Hadoop environment, drawing on its rich set of operators for data operations like join, filter, sort, load, and group.

Programmers write a Pig script in the Pig Latin language to perform a specific task. Pig converts these scripts into a series of MapReduce jobs, easing the programmers' work. Pig Latin programs can be executed through several mechanisms: the Grunt shell, script files, and embedded scripts.

The Apache Pig architecture consists of the following major components:

  • Parser
  • Optimizer
  • Compiler
  • Execution Engine
  • Execution Mode

Let us study all these Pig components in detail.

Pig Latin Scripts

Pig scripts are submitted to the Pig execution environment to produce the desired results. 

You can execute Pig scripts using one of the following methods (illustrated after the list):

  • Grunt Shell 
  • Script file
  • Embedded script 
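For example, the same statements can be typed interactively at the Grunt shell prompt or saved to a script file; the file and relation names here are hypothetical (an embedded script would wrap the same Pig Latin inside a Java program):

    -- interactively, in the Grunt shell:
    grunt> users = LOAD 'users.txt' AS (name:chararray, age:int);
    grunt> DUMP users;

    -- or saved in a file such as myscript.pig and submitted with: pig myscript.pig
    users = LOAD 'users.txt' AS (name:chararray, age:int);
    DUMP users;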

Parser

The parser handles all the Pig Latin statements and commands. It performs several checks on the Pig statements, such as syntax and type checks, and generates a DAG (Directed Acyclic Graph) as output. The DAG represents the script's logical operators as nodes and the data flow between them as edges.

Optimizer

Once parsing is completed and the DAG output is generated, the output is passed to the optimizer. The optimizer then performs optimization activities on it, such as split, merge, projection, pushdown, transform, and reorder. By performing pushdown and projection, the optimizer omits unnecessary data or columns and improves query performance.
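As a rough sketch of what these optimizations achieve, the early FILTER and the column projection below (over hypothetical files and schemas) mean that far less data has to flow into the JOIN:

    logs   = LOAD 'logs.txt'  AS (user_id:int, url:chararray, ts:long);
    users  = LOAD 'users.txt' AS (user_id:int, country:chararray);
    recent = FILTER logs BY ts > 1609459200L;       -- filter applied early (pushdown)
    slim   = FOREACH recent GENERATE user_id, url;  -- project only the needed columns
    joined = JOIN slim BY user_id, users BY user_id;

Pig's optimizer applies this kind of rewriting automatically where it can, but placing filters and projections early remains good practice.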

Compiler

The compiler compiles the output generated by the optimizer into a series of MapReduce jobs. It converts Pig jobs into MapReduce jobs automatically and can optimize performance by rearranging the execution order.

Execution Engine

After all the above operations, the MapReduce jobs are submitted to the execution engine and executed on the Hadoop platform to produce the desired results. You can then use the DUMP statement to display the results on screen or the STORE statement to save them in HDFS (Hadoop Distributed File System).
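For instance, in this minimal sketch (hypothetical file, schema, and output path), the last two statements differ only in where the results end up:

    users  = LOAD 'users.txt' AS (name:chararray, age:int);
    adults = FILTER users BY age >= 18;
    DUMP adults;                      -- print the relation to the screen
    STORE adults INTO 'adults_out';   -- write the relation into an HDFS directory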

Execution Mode

Apache Pig runs in one of two execution modes: local mode and MapReduce mode. The choice of execution mode depends on where the data is stored and where you want to run the Pig script: you can store your data locally (on a single machine) or in a distributed Hadoop cluster environment.

  • Local Mode – Use local mode if your dataset is small. In local mode, Pig runs in a single JVM using the local host and file system. Parallel mapper execution is not possible in this mode, as everything is installed and run on the local host. You can use the pig -x local command to specify local mode.
  • MapReduce Mode – Apache Pig uses MapReduce mode by default. In MapReduce mode, a programmer executes Pig Latin statements on data that is already stored in HDFS (Hadoop Distributed File System). You can use the pig -x mapreduce command to specify MapReduce mode. Both invocations are shown below.
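Concretely, the two invocations from the article look like this (the script name is hypothetical):

    # run a script in local mode, against the local file system
    pig -x local myscript.pig

    # run the same script in MapReduce mode (the default), against HDFS
    pig -x mapreduce myscript.pig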


Pig Latin Data Model

The Pig Latin data model is what allows Pig to handle any kind of data. It is fully nested and can represent both atomic types, such as integers and floats, and complex non-atomic types, such as maps and tuples.

Let us understand the data model in depth:

  • Atom – An atom is a single value, stored in string form, that can be used as a number or a string. Pig's atomic values are integer, float, double, char array, and byte array. A single atomic value is also called a field.

For example, “Kiara” or 27

  • Tuple – A tuple is a record that contains an ordered set of fields (any type). A tuple is very similar to a row in an RDBMS (Relational Database Management System).

For example, (Kiara, 27)

  • Bag – A bag is an unordered collection of tuples, enclosed in curly braces. A bag may contain duplicate tuples, and the tuples in a bag need not have the same number or type of fields.

For example, {(Kiara, 27), (Keshav, 45)}

  • Map – A key-value pair set is known as a map. The key must be unique and should be of a char array type. However, the value can be of any kind.

For example, [name#Kiara, age#27]

  • Relation – A bag of tuples is called a relation. A short schema combining these types appears below.
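As a concrete, hypothetical sketch, a LOAD statement can declare atoms, a bag, and a map together in one relation's schema:

    -- hypothetical input file; the AS clause uses Pig Latin schema syntax
    people = LOAD 'people.txt'
             AS (name:chararray,                    -- atom
                 age:int,                           -- atom
                 friends:bag{t:(fname:chararray)},  -- bag of one-field tuples
                 props:map[]);                      -- map with chararray keys
    DESCRIBE people;  -- prints the declared schema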

Execution Flow of a Pig Job

The following steps explain the execution flow of a Pig job (a complete end-to-end example follows the list):

  • The developer writes a Pig script in the Pig Latin language and stores it in the local file system.
  • After the script is submitted, Apache Pig passes it to the compiler, which generates a series of MapReduce jobs as output.
  • The MapReduce jobs read the raw data from HDFS, perform the operations, and store the results back into HDFS once they finish.
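For example, the classic word count fits this flow. This is a minimal sketch with hypothetical input and output paths; saved locally (say, as wordcount.pig) and submitted with pig wordcount.pig, it compiles into MapReduce jobs that read from and write to HDFS:

    lines  = LOAD 'input.txt' AS (line:chararray);
    words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;  -- split each line into words
    grpd   = GROUP words BY word;                                     -- group identical words
    counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS n;  -- count each group
    STORE counts INTO 'wordcount_out';                                -- results written to HDFS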

Conclusion

In this blog, we have covered the Apache Pig architecture, its components, the differences between MapReduce and Apache Pig, the Pig Latin data model, and the execution flow of a Pig job.

Apache Pig is a boon to programmers, as it provides a platform with an easy interface, reduces code complexity, and helps them achieve results efficiently. Yahoo, eBay, LinkedIn, and Twitter are some of the companies that use Pig to process their large volumes of data.

What are the features of Apache Pig?

Apache Pig is a high-level tool, or platform, used for processing large data sets. Data analysis code is written in a high-level scripting language called Pig Latin. Programmers write Pig Latin scripts, which Pig translates into map and reduce tasks. Apache Pig has plenty of features that make it a very useful tool:

1. It provides a rich set of operators to perform different operations, such as sort, join, and filter.
2. Apache Pig is considered a boon for SQL programmers, as it is easy to learn, read, and write.
3. Making user-defined functions and processes is easy.
4. Fewer lines of code are required for any process or function.
5. It allows users to analyze both unstructured and structured data.
6. Join and split operations are easy to perform, as in the sketch below.
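As a brief illustration of point 6 (file names and schemas are hypothetical):

    users  = LOAD 'users.txt'  AS (user_id:int, name:chararray, age:int);
    orders = LOAD 'orders.txt' AS (user_id:int, amount:double);
    joined = JOIN users BY user_id, orders BY user_id;            -- relational-style join
    SPLIT users INTO minors IF age < 18, grownups IF age >= 18;   -- route tuples by condition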

What are the available execution modes in Apache Pig?

Apache Pig can be executed in two different modes:

1. Local Mode - In this mode, all files are installed and run from your local file system and local host. This mode is usually used for testing; you do not need HDFS or Hadoop here.

2. MapReduce Mode - In MapReduce mode, Apache Pig loads and processes data that already exists in the Hadoop Distributed File System (HDFS). Whenever a Pig Latin statement is executed to process data, a MapReduce job is invoked in the back-end to perform that operation on the data in HDFS.

What are some of the key applications of Apache Pig?

Apache Pig turned out to be a boon for all those programmers who were not proficient in Java. The best thing about Apache Pig is that it offers flexibility, requires less effort than comparable approaches, and reduces code complexity.

Some of the key applications of Apache Pig are:

1. Processing large volumes of data from datasets
2. Useful wherever analytical insights are required with the use of sampling
3. Collecting large amounts of data in the form of web crawls and search logs
4. Prototyping the processing algorithms for large datasets
