
What is Hive in Hadoop? History and Its Components

Last updated: 7th Oct, 2021
Apache Hive is an open-source data warehousing system built on top of Hadoop. Hive is used for querying and analyzing massive datasets stored within Hadoop, and it can process both structured and semi-structured data.

Through this article, let’s talk in detail about Hive in Hadoop, its history, its importance, Hive architecture, some key features, a few limitations, and more! 

What is Hive?

Apache Hive is simply data warehouse software built with Hadoop as its base. Before Apache Hive, Big Data engineers had to write complex map-reduce jobs to perform querying tasks. With Hive, on the other hand, the effort is drastically reduced, as engineers now only need to know SQL.

Hive works on a language known as HiveQL (similar to SQL), making it easier for engineers who have a working knowledge of SQL. HiveQL automatically translates your SQL queries into map-reduce jobs that Hadoop can execute.
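As a sketch of what this looks like in practice, the following HiveQL aggregation (the table and column names here are hypothetical) compiles down to one or more map-reduce jobs without the engineer ever writing Java:

```sql
-- Hypothetical table of page visits; Hive translates this GROUP BY
-- into map-reduce jobs behind the scenes.
SELECT country, COUNT(*) AS visits
FROM page_views
WHERE view_date = '2021-10-07'
GROUP BY country
ORDER BY visits DESC;
```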

In doing so, Apache Hive adds a layer of abstraction over the workings of Hadoop, allowing data experts to deal with complex datasets without having to learn the Java programming language. Apache Hive runs on your workstation and converts SQL queries into map-reduce jobs to be executed on the Hadoop cluster. Hive organizes all of your data into tables, thereby providing a structure to the data present in HDFS.

History of Apache Hive

The Data Infrastructure Team at Facebook introduced Apache Hive. It is one of the technologies that was proactively used at Facebook for numerous internal purposes. Over the years, Apache Hive has run thousands of jobs on the cluster with hundreds of users for a range of applications.

The Hive-Hadoop cluster at Facebook stores more than 3PB of raw data and can load around 15TB of data daily. From there, Apache Hive grew into many more use cases, and today it is used by giants like IBM, Yahoo, Amazon, FINRA, Netflix, and more.

Get your data science certification online from the World’s top Universities. Earn Executive PG Programs, Advanced Certificate Programs, or Masters Programs to fast-track your career.

Why the Need for Apache Hive?

Before coming up with Apache Hive, Facebook struggled with challenges like the ever-increasing volume of data to analyze and the inconsistency within this large dataset. These challenges made it difficult for Facebook to handle its data-intensive tasks seamlessly. Traditional RDBMS-based structures were not enough to cope with the ever-increasing pressure.

Facebook first introduced map-reduce to overcome these challenges but then simplified it further by offering Apache Hive, which works on HiveQL. 

Eventually, Apache Hive emerged as the much-needed saviour and helped Facebook overcome the various challenges. Now, using Apache Hive, Facebook was able to achieve the following: 

  • Evolution and flexibility of schema.
  • Partitioning and bucketing of tables. 
  • Defining Hive tables directly in HDFS.
  • Availability of ODBC/JDBC drivers. 
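The capabilities above can be sketched in HiveQL. This is a minimal, hypothetical example: an external table defined directly over files in HDFS, partitioned and bucketed, followed by a simple schema change:

```sql
-- External table: Hive reads files already sitting in HDFS
-- without copying them into its own warehouse directory.
CREATE EXTERNAL TABLE page_views (
  user_id BIGINT,
  url     STRING
)
PARTITIONED BY (view_date STRING)          -- one HDFS subdirectory per date
CLUSTERED BY (user_id) INTO 32 BUCKETS     -- bucketing within each partition
STORED AS ORC
LOCATION '/data/page_views';

-- Schema evolution: add a column without rewriting existing data.
ALTER TABLE page_views ADD COLUMNS (referrer STRING);
```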

All in all, Apache Hive helped developers save a lot of time that would otherwise go into writing complex map-reduce jobs. Hive brings simplicity to summarization, analysis, querying, and exploring of data. 

Because it relies only on SQL, Apache Hive is a fast, scalable, and highly extensible framework. If you understand basic querying using SQL, you’ll be able to work with Apache Hive in no time! It also offers file access on different data stores like HBase and HDFS.

The Architecture of Apache Hive 

Now that you understand the importance and emergence of Apache Hive, let’s look at the major components of Apache Hive. The architecture of Apache Hive includes: 

1. Metastore 

This is used for storing metadata for each of the tables. The metadata generally consists of the location and schema. Metastore also consists of the partition metadata, which helps engineers track the progress of different datasets that have been distributed over the clusters. The data that is stored here is in the traditional RDBMS format. 
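Although the Metastore itself is a backing RDBMS, its contents are normally inspected through HiveQL commands such as these (the table name is hypothetical):

```sql
-- Table schema, location, and storage details, as recorded in the Metastore
DESCRIBE FORMATTED page_views;

-- Partition metadata tracked by the Metastore
SHOW PARTITIONS page_views;
```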

2. Driver 

The driver in Apache Hive acts like a controller responsible for receiving HiveQL statements. It starts the execution of these statements by creating sessions, and it monitors and manages the life cycle of the execution and its progress along the way. The driver holds the important metadata generated while a HiveQL statement is executed, and it also acts as a collection point for the data obtained after the map-reduce operation.

3. Compiler 

The compiler compiles the HiveQL queries. It converts user-written queries into an execution plan that contains all the tasks to be performed, including the map-reduce steps and procedures required to produce the desired output. The Hive compiler first converts the query into an AST (Abstract Syntax Tree) to check for compile-time errors and compatibility issues; if no issues are found, the AST is transformed into a Directed Acyclic Graph (DAG).
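You can ask Hive to show the plan the compiler produced with the EXPLAIN statement, which prints the DAG of stages for a query (a sketch, with a hypothetical table name):

```sql
-- Prints the stage plan: e.g. a map-reduce stage for the aggregation
-- followed by a fetch stage that returns the results.
EXPLAIN
SELECT country, COUNT(*)
FROM page_views
GROUP BY country;
```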

4. Optimizer 

The optimizer performs the transformations on the execution plan required to arrive at an optimized DAG. It can aggregate transformations together, such as converting a chain of individual joins into a single join, to enhance performance. In addition, the optimizer can split tasks, for example by applying a transformation on data before the reduce operation is performed, again to improve the overall performance.
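For instance, when several joins share the same join key, the optimizer can evaluate them in a single map-reduce job instead of one job per join (the tables here are hypothetical):

```sql
-- Both joins use user_id, so the optimizer can merge them
-- into a single map-reduce job rather than two.
SELECT v.url, u.name, p.plan
FROM page_views v
JOIN users u ON v.user_id = u.user_id
JOIN plans p ON v.user_id = p.user_id;
```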

5. Executor 

Once Apache Hive has performed the compilation and optimization tasks, the executor performs the final executions. It takes care of pipelining the tasks and bringing them up to completion. 

6. CLI, UI, and Thrift Server 

The command-line interface (CLI) provides the external user with an interface to interact with the different features of Apache Hive; it makes up the UI of Hive for end-users. The Thrift server, on the other hand, allows external clients to interact with Hive over a network, similar to the ODBC and JDBC protocols.

Core Features of Apache Hive

As mentioned earlier, Apache Hive brought about a much-needed change in the way engineers work on data jobs. Java is no longer the go-to language, as developers can work just by using SQL. Apart from that, there are several other essential features of Hive as well, such as:

  • Apache Hive offers data summarization, analysis, and querying in a much more simplified manner. 
  • Hive supports internal and external tables, making it possible to work with external data without bringing it into HDFS. 
  • Apache Hive works perfectly well for the low-level interface requirement of Hadoop. 
  • By supporting data partitioning at the level of the tables, Apache Hive helps improve the overall performance. 
  • It has a rule-based optimizer for optimizing different logical plans. 
  • It works on HiveQL, a language similar to SQL, which means developers don’t need to master another language to work with large datasets. 
  • Querying in Hive is extremely simple, similar to SQL.
  • We can also run Ad-hoc queries for the data analysis using Hive.

Limitation of Apache Hive

Since the world of Data Science is relatively new and ever-evolving, even the best tools available in the market have some limitations. Resolving those limitations is what will give us the next best tools. Here are a few limitations of working with Apache Hive for you to keep in mind: 

  • Hive does not offer row-level updates and real-time querying. 
  • Apache Hive does not provide acceptable latency for interactive querying. 
  • It is not the best for working with online transactions. 
  • Latency in Hive queries is generally higher than average. 

In Conclusion

Apache Hive brought about drastic improvements in the way data engineers work on large datasets. Moreover, by eliminating the need to write Java, Apache Hive brought a familiar comfort to data engineers. Today, you can work smoothly with Apache Hive if you have fundamental knowledge of SQL querying.

As we mentioned earlier, Data Science is a dynamic and ever-evolving field. We’re sure that the coming years will bring forth new tools and frameworks to simplify things even further. If you are a data enthusiast looking to learn all the tools of the trade of Data Science, now is the best time to get hands-on with Big Data tools like Hive. 

At upGrad, we have mentored and guided students from all over the world and helped people from different backgrounds establish a firm foothold in the Data Science industry. Our expert teachers, industry partnerships, placement assistance, and robust alumni network ensure that you’re never alone in this journey. So check out our Executive PG Program in Data Science, and get yourself enrolled in the one that’s right for you – we’ll take care of the rest! 

Rohit Sharma

Rohit Sharma is the Program Director for the UpGrad-IIIT Bangalore, PG Diploma Data Analytics Program.

Frequently Asked Questions (FAQs)

1. What is Apache Hive in Hadoop?

Apache Hive is a framework or system used for warehousing, querying, and analyzing large sets of data. Apache Hive was introduced by Facebook to enhance its internal operations and has since then been an integral part of the Data Science spectrum.

2. Do I need to learn any particular language to work with Apache Hive in Hadoop?

No! Just the working knowledge of SQL will be enough for you to get started with Apache Hive!

3. What is Apache Hive NOT used for?

Apache Hive is generally used for OLAP (batch processing) and is generally not used for OLTP, since it does not support the real-time, row-level operations that an online transactional database requires.
