Apache Pig Architecture in Hadoop: Features, Applications, Execution Flow

Why Apache Pig is so Popular

To analyze and process big data, Hadoop uses MapReduce. MapReduce is a programming model whose jobs are typically written in Java, and developers find it challenging to write and maintain such lengthy Java code. With Apache Pig, developers can quickly analyze and process large datasets without writing complex Java programs. Apache Pig, developed by Yahoo researchers, executes MapReduce jobs on extensive datasets and provides an easy interface for developers to process data efficiently.

Apache Pig emerged as a boon for those who do not understand Java programming. Today, Apache Pig has become very popular among developers as it offers flexibility, reduces code complexity, and requires less effort.

MapReduce vs. Apache Pig

The following table summarizes the differences between MapReduce and Apache Pig:

Apache Pig | MapReduce
Scripting language | Compiled language
Provides a high level of abstraction | Provides a low level of abstraction
Requires few lines of code (10 lines of Pig Latin can summarize 200 lines of MapReduce code) | Requires more extensive code
Requires less development time and effort | Requires more development time and effort
Lower code efficiency | Higher code efficiency than Apache Pig
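
As a hedged illustration of the difference in verbosity, here is a minimal word-count sketch in Pig Latin; the input path and relation names are hypothetical. The equivalent MapReduce program in Java typically needs a separate mapper class, reducer class, and driver code:

    lines   = LOAD '/data/input.txt' AS (line:chararray);
    words   = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
    grouped = GROUP words BY word;
    counts  = FOREACH grouped GENERATE group, COUNT(words) AS total;
    STORE counts INTO '/data/wordcount';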

Apache Pig Features

Apache Pig offers the following features:

  • Allows programmers to write fewer lines of code: roughly 200 lines of Java code can be expressed in about ten lines of Pig Latin.
  • Apache Pig's multi-query approach reduces development time.
  • Apache Pig has a rich set of operators for performing operations like join, filter, sort, load, and group (see the short sketch after this list).
  • The Pig Latin language is very similar to SQL, so programmers with good SQL knowledge find it easy to write Pig scripts.
  • Apache Pig handles both structured and unstructured data analysis.
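
A hedged sketch of those operators in action; the file names, delimiters, and schemas below are hypothetical:

    -- Load two hypothetical comma-separated files with explicit schemas
    users  = LOAD 'users.txt'  USING PigStorage(',') AS (id:int, name:chararray, age:int);
    orders = LOAD 'orders.txt' USING PigStorage(',') AS (order_id:int, user_id:int, amount:double);

    adults  = FILTER users BY age >= 18;                 -- filter
    joined  = JOIN adults BY id, orders BY user_id;      -- join
    by_user = GROUP joined BY adults::name;              -- group
    totals  = FOREACH by_user GENERATE group AS name, SUM(joined.orders::amount) AS total;
    sorted  = ORDER totals BY total DESC;                -- sort
    DUMP sorted;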

Apache Pig Applications

A few of the Apache Pig applications are:

  • Processes large volumes of data
  • Supports quick prototyping and ad-hoc queries across large datasets
  • Performs data processing in search platforms
  • Processes time-sensitive data loads
  • Used by telecom companies to de-identify user call data

What is Apache Pig?

MapReduce requires programs to be translated into map and reduce stages. Since not all data analysts were familiar with MapReduce, Yahoo researchers introduced Apache Pig to bridge the gap. Pig was built on top of Hadoop; it provides a high level of abstraction and enables programmers to spend less time writing complex MapReduce programs. Pig is not an acronym; it was named after the domestic animal. Just as a pig eats almost anything, Apache Pig can work on any kind of data.


Apache Pig Architecture in Hadoop

Apache Pig architecture consists of a Pig Latin interpreter that uses Pig Latin scripts to process and analyze massive datasets. Programmers use the Pig Latin language to analyze large datasets in the Hadoop environment. Apache Pig provides a rich set of operators for performing different data operations like join, filter, sort, load, and group.

Programmers write Pig scripts in the Pig Latin language to perform specific tasks. Pig converts these scripts into a series of MapReduce jobs, which eases programmers' work. Pig Latin programs can be executed via several mechanisms, such as the Grunt shell, script files, and embedded scripts.

Apache Pig architecture consists of the following major components:

  • Parser
  • Optimizer
  • Compiler
  • Execution Engine
  • Execution Mode

Let us study all these Pig components in detail.

Pig Latin Scripts

Pig scripts are submitted to the Pig execution environment to produce the desired results. 

You can execute Pig scripts using one of the following methods (a short sketch follows the list):

  • Grunt Shell 
  • Script file
  • Embedded script 
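
A hedged sketch of the first two methods; the paths and script name are hypothetical. Embedded execution typically goes through an API such as Pig's Java PigServer class:

    $ pig                                # start the interactive Grunt shell (MapReduce mode by default)
    grunt> lines = LOAD '/data/input.txt' AS (line:chararray);
    grunt> DUMP lines;
    grunt> quit

    $ pig myscript.pig                   # run a saved Pig Latin script file non-interactively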

Parser

The parser handles all the Pig Latin statements and commands. It performs several checks on the Pig statements, such as syntax checking and type checking, and generates a DAG (Directed Acyclic Graph) as output. The DAG represents the logical operators of the script as nodes and the data flows between them as edges.
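
You can inspect the plans Pig builds from a script with the EXPLAIN statement, which prints the logical plan (the parser's DAG) along with the physical and MapReduce plans. A minimal sketch, assuming a hypothetical input file:

    data    = LOAD '/data/input.txt' AS (line:chararray);
    grouped = GROUP data BY line;
    counts  = FOREACH grouped GENERATE group, COUNT(data);
    EXPLAIN counts;   -- prints the logical, physical, and MapReduce plans for 'counts'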

Optimizer

Once parsing is complete and a DAG has been generated, the output is passed to the optimizer. The optimizer performs optimization activities on it, such as split, merge, projection, pushdown, transform, and reorder. By performing pushdown and projection, the optimizer omits unnecessary data and columns early, which improves query performance.
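
As a hedged illustration of pushdown and projection (the files and schemas are hypothetical): even if a script filters and projects late, the optimizer can move the filter closer to the load and drop unused columns early, so less data flows through the job:

    users  = LOAD 'users.txt'  USING PigStorage(',') AS (id:int, name:chararray, age:int, city:chararray);
    orders = LOAD 'orders.txt' USING PigStorage(',') AS (order_id:int, user_id:int, amount:double);
    joined = JOIN users BY id, orders BY user_id;
    -- Filter pushdown: Pig can apply this condition before the join.
    adults = FILTER joined BY users::age >= 18;
    -- Projection pushdown: 'city' and 'order_id' are never used, so they can be pruned early.
    result = FOREACH adults GENERATE users::name, orders::amount;
    STORE result INTO 'join_output';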

Compiler

The compiler compiles the optimized output generated by the optimizer into a series of MapReduce jobs. It converts Pig jobs into MapReduce jobs automatically and improves performance by rearranging the execution order where possible.

Execution Engine

After all the above operations are performed, the MapReduce jobs are submitted to the execution engine, which runs them on the Hadoop platform to produce the desired results. You can then use the DUMP statement to display the results on the screen, or the STORE statement to save them in HDFS (Hadoop Distributed File System), as sketched below.
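
A hedged sketch of the two output statements; the paths and schema are hypothetical:

    counts = LOAD '/data/counts' AS (word:chararray, total:long);
    DUMP counts;                        -- display the relation on the screen
    STORE counts INTO '/data/final';    -- write the relation to HDFS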

Execution Mode

Apache Pig runs in two execution modes: local and MapReduce. The choice of execution mode depends on where the data is stored and where you want to run the Pig script: you can store your data locally (on a single machine) or in a distributed Hadoop cluster environment.

  • Local Mode – You can use local mode if your dataset is small. In local mode, Pig runs in a single JVM using the local host and local file system. Parallel mapper execution is not possible in this mode because all files reside and run on the localhost. You can use the pig -x local command to specify local mode.
  • MapReduce Mode – Apache Pig uses MapReduce mode by default. In MapReduce mode, a programmer executes Pig Latin statements on data that is already stored in HDFS (Hadoop Distributed File System). You can use the pig -x mapreduce command to specify MapReduce mode (see the commands below).
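
The corresponding commands, for reference; myscript.pig is a hypothetical script name:

    $ pig -x local myscript.pig         # run in local mode (single JVM, local file system)
    $ pig -x mapreduce myscript.pig     # run in MapReduce mode against HDFS
    $ pig myscript.pig                  # with no -x flag, Pig defaults to MapReduce mode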


Pig Latin Data Model

The Pig Latin data model allows Pig to handle any kind of data. The model is fully nested: it supports both atomic types, such as int and float, and complex (non-atomic) types, such as map and tuple.

Let us understand the data model in depth:

  • Atom – An atom is a single value, stored in string form, that can be used as either a number or a string. Pig's atomic types are int, float, double, bytearray, and chararray. A single atomic value is also called a field.

For example, “Kiara” or 27

  • Tuple – A tuple is a record formed by an ordered set of fields, each of which can be of any type. A tuple is very similar to a row in an RDBMS (Relational Database Management System).

For example, (Kiara, 27)

  • Bag – A bag is an unordered collection of tuples, enclosed in curly braces. Each tuple can have any number of fields, and duplicate tuples are allowed.

For example, {(Kiara, 27), (Keshav, 45)}

  • Map – A set of key-value pairs is known as a map. The key must be unique and must be of chararray type; the value can be of any type.

For example, [name#Kiara, age#27]

  • Relation – A bag of tuples is called a relation. 
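
The following hedged sketch shows how these types appear in a Pig Latin schema; the file name, delimiter, and fields are hypothetical:

    -- A line of people.txt might look like: Kiara|27|{(reading),(music)}|[city#Delhi]
    people = LOAD 'people.txt' USING PigStorage('|')
             AS (name:chararray,                        -- atom (a single field)
                 age:int,                               -- atom (a single field)
                 hobbies:bag{t:tuple(h:chararray)},     -- bag of tuples
                 info:map[chararray]);                  -- map of key-value pairs
    DUMP people;   -- 'people' itself is a relation: a bag of tuples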

Execution Flow of a Pig Job

The following steps explain the execution flow of a Pig job (a minimal end-to-end sketch follows the list):

  • The developer writes a Pig script in the Pig Latin language and stores it in the local file system.
  • When the script is submitted, Apache Pig parses, optimizes, and compiles it into a series of MapReduce jobs.
  • The execution engine reads the input data from HDFS, runs the MapReduce jobs, and stores the results back into HDFS when they finish.
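
A minimal end-to-end sketch, assuming hypothetical paths and a script named summarize.pig:

    $ cat summarize.pig
    data    = LOAD '/user/hadoop/input' AS (line:chararray);
    grouped = GROUP data BY line;
    counts  = FOREACH grouped GENERATE group, COUNT(data);
    STORE counts INTO '/user/hadoop/output';

    $ pig -x mapreduce summarize.pig        # Pig compiles the script into MapReduce jobs and runs them
    $ hdfs dfs -cat /user/hadoop/output/*   # read the results back from HDFS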

Also Read: Apache Pig Tutorial

Conclusion

In this blog, we have covered the Apache Pig architecture, the Pig components, the differences between MapReduce and Apache Pig, the Pig Latin data model, and the execution flow of a Pig job.

Apache Pig is a boon to programmers as it provides a platform with an easy interface, reduces code complexity, and helps them achieve results efficiently. Yahoo, eBay, LinkedIn, and Twitter are some of the companies that use Pig to process their large volumes of data.

If you are curious to learn about Apache Pig and data science, check out IIIT-B & upGrad's PG Diploma in Data Science, which is created for working professionals and offers 10+ case studies & projects, practical hands-on workshops, mentorship with industry experts, 1-on-1 sessions with industry mentors, 400+ hours of learning, and job assistance with top firms.
