Both structured and unstructured data are growing exponentially, and Apache Hadoop has proven adept at handling such vast volumes of data. Hadoop has therefore gained much traction in the big data world. However, its HDFS architecture has certain limitations.
HDFS operations have high latency, and HDFS cannot handle a large volume of simultaneous read and write requests. Another limitation is that HDFS follows a write-once, read-many-times model, meaning a file must be rewritten completely to alter a data set. These limitations of the HDFS architecture gave rise to the HBase architecture.
What is HBase?
HBase is a column-oriented data store built on top of HDFS to overcome these limitations. It leverages the basic features of HDFS and builds upon them to provide scalability, handling a large volume of read and write requests in real time. Although HBase is a NoSQL database, it eases the process of maintaining data by distributing it evenly across the cluster, which makes accessing and altering data in the HBase data model quick.
What are the Components of the HBase Data Model?
Since the HBase data model is a NoSQL database, developers can read and write data as and when required, making it faster than working with HDFS directly. It consists of the following components:
1. HBase Tables: Since the HBase architecture is column-oriented, the data is stored in tables.
2. RowKey: A RowKey is assigned to every set of data that is recorded. This makes it easy to search for specific data in HBase tables.
3. Columns: Columns are the different attributes of a dataset. Each RowKey can have unlimited columns.
4. Column Family: Column families are a combination of several columns. A single request to read a column family gives access to all the columns in that family, making it quicker and easier to read data.
5. Column Qualifiers: Column qualifiers are like column titles or attribute names in a normal table.
6. Cell: It is a row-column tuple that is identified using RowKey and column qualifiers.
7. Timestamp: Whenever data is stored in the HBase data model, it is stored with a timestamp, which makes it possible to keep multiple versions of a cell.
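Taken together, the components above can be sketched as a tiny in-memory structure. The following Python is a toy illustration only; the class and method names are hypothetical, not the real HBase client API:

```python
import time

# Toy sketch of the HBase data model described above:
# a table maps RowKey -> {"family:qualifier": [(timestamp, value), ...]}.
class SketchTable:
    def __init__(self, column_families):
        self.column_families = set(column_families)
        self.rows = {}  # RowKey -> {"cf:qualifier": [(ts, value), ...]}

    def put(self, rowkey, family, qualifier, value, timestamp=None):
        if family not in self.column_families:
            raise KeyError(f"unknown column family: {family}")
        ts = timestamp if timestamp is not None else time.time()
        cell = self.rows.setdefault(rowkey, {}).setdefault(f"{family}:{qualifier}", [])
        cell.append((ts, value))  # every write is stored with a timestamp

    def get(self, rowkey, family, qualifier):
        # A cell is addressed by (RowKey, family:qualifier); return the newest version.
        versions = self.rows.get(rowkey, {}).get(f"{family}:{qualifier}", [])
        return max(versions)[1] if versions else None

table = SketchTable(column_families=["info", "stats"])
table.put("user#42", "info", "name", "Ada")
table.put("user#42", "info", "name", "Ada L.")  # newer version of the same cell
print(table.get("user#42", "info", "name"))     # the newest timestamp wins
```

Note how the timestamp list per cell is what lets HBase keep several versions of the same value side by side.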
What are the Components of HBase Architecture?
The HBase architecture comprises three major components: HMaster, Region Server, and ZooKeeper.
1. HMaster
As its name suggests, HMaster is the master that assigns regions to the Region Servers (slaves). The HBase architecture uses an auto-sharding process to maintain data: whenever an HBase table becomes too large, the system splits and distributes it with the help of HMaster. Some typical responsibilities of HMaster include:
- Control the failover
- Manage the Region Server and Hadoop cluster
- Handle the DDL operations such as creating and deleting tables
- Manage changes in metadata operations
- Manage and assign regions to Region Servers
- Accept requests and send them to the relevant Region Server
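The auto-sharding idea mentioned above can be illustrated with a toy split-and-assign routine. Everything here (the row threshold, the function names, the round-robin assignment) is a simplified assumption, not actual HMaster internals:

```python
# Toy sketch of auto-sharding: when a region holds too many rows, a
# master-like routine splits it at the midpoint RowKey and reassigns halves.
MAX_ROWS_PER_REGION = 4  # illustrative threshold, not a real HBase setting

def split_region(sorted_rowkeys):
    """Split a region's sorted RowKeys into two halves at the midpoint key."""
    mid = len(sorted_rowkeys) // 2
    return sorted_rowkeys[:mid], sorted_rowkeys[mid:]

def assign_regions(rowkeys, servers):
    """Master-like assignment: split oversized regions, round-robin to servers."""
    regions = [sorted(rowkeys)]
    # Keep splitting until every region is under the threshold.
    while any(len(r) > MAX_ROWS_PER_REGION for r in regions):
        regions = [half for r in regions
                   for half in (split_region(r) if len(r) > MAX_ROWS_PER_REGION else (r,))]
    return {f"region-{i}": (servers[i % len(servers)], r) for i, r in enumerate(regions)}

assignment = assign_regions([f"row{i:02d}" for i in range(10)], ["rs1", "rs2"])
for name, (server, rows) in assignment.items():
    print(name, server, rows[0], "->", rows[-1])
```

Real HBase splits on region size in bytes, not row count, but the shape of the process is the same: oversized regions are divided and the pieces are spread across Region Servers.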
2. Region Server
Region Servers are the end nodes that handle all client requests. Several regions are combined within a single Region Server, and each region contains all the rows between its specified start and end keys. Handling requests is a complex task, so each Region Server is further divided into four components that make managing requests seamless.
- Write-Ahead Log (WAL): A WAL is attached to every Region Server and stores new data that has not yet been committed to permanent storage.
- Block Cache: It is the read cache; all recently read data is stored in the block cache. When the cache is full, data that is not used often is automatically evicted.
- MemStore: It is a write cache responsible for storing data not yet written to the disk.
- HFile: The HFile stores all the actual data on disk after it has been committed (flushed from the MemStore).
3. ZooKeeper
ZooKeeper acts as the coordination bridge for communication across the HBase architecture. It is responsible for keeping track of all the Region Servers and the regions within them. Monitoring which Region Servers and HMaster are active and which have failed is also part of ZooKeeper's duties. When it finds that a Region Server has failed, it triggers the HMaster to take the necessary actions. If the HMaster itself fails, ZooKeeper triggers the inactive standby HMaster, which becomes active after the alert. Every client, and even the HMaster, has to go through ZooKeeper to reach the Region Servers and the data within them. ZooKeeper also stores the location of the .META table, which contains a list of all the regions and the Region Servers that hold them. ZooKeeper's responsibilities include:
- Establishing communication across the Hadoop cluster
- Maintaining configuration information
- Tracking Region Server and HMaster failure
- Maintaining Region Server information
How are Requests Handled in HBase architecture?
Now that we know the major components of the HBase architecture and their functions, let's delve into how requests are handled throughout the architecture.
1. Commence the Search in HBase Architecture
The steps to initialize the search are:
- The client retrieves the location of the META table from ZooKeeper and then queries META for the Region Server that holds the relevant region.
- The client then requests the exact data from that Region Server with the help of the RowKey.
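These lookup steps can be sketched with toy stand-ins for ZooKeeper and the META table (all names and data here are illustrative):

```python
import bisect

# Toy stand-ins: ZooKeeper knows where META lives; META maps each region's
# start RowKey to the Region Server that holds it.
zookeeper = {"meta_location": "rs-meta"}
meta_table = [("a", "rs1"), ("m", "rs2")]  # (start RowKey, Region Server)

def locate_region_server(rowkey):
    """Find the Region Server whose region's key range contains rowkey."""
    starts = [start for start, _ in meta_table]
    idx = bisect.bisect_right(starts, rowkey) - 1  # last region starting <= rowkey
    return meta_table[idx][1]

print(locate_region_server("apple"))  # falls in the region starting at "a"
print(locate_region_server("zebra"))  # falls in the region starting at "m"
```

The binary search over start keys is why sorted RowKeys matter: a region is simply a contiguous slice of the key space, so one lookup pins down exactly one server.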
2. Write Mechanism in HBase Architecture
The steps to write in the HBase architecture are:
- The client first locates the Region Server and then the data's location. (This step applies only when altering existing data, not when writing fresh information.)
- The actual write request begins at the WAL, where the client writes the data.
- The data then moves from the WAL to the MemStore, and an acknowledgment is sent to the client.
- When the MemStore is full, its data is committed (flushed) to an HFile, where it is stored.
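The write path above can be sketched as follows. The MemStore limit and all the structures here are simplified assumptions, not real Region Server internals:

```python
# Toy sketch of the write path: log to the WAL first, then apply to the
# MemStore, then flush a full MemStore to an (immutable) HFile.
MEMSTORE_LIMIT = 3  # illustrative; real HBase flushes on memory size

wal, memstore, hfiles = [], {}, []

def write(rowkey, value):
    wal.append((rowkey, value))          # 1. durably log the edit first
    memstore[rowkey] = value             # 2. then apply it to the write cache
    if len(memstore) >= MEMSTORE_LIMIT:  # 3. flush a full MemStore to a new HFile
        hfiles.append(dict(sorted(memstore.items())))
        memstore.clear()
    return "ack"                         # client is acknowledged after the MemStore write

for i in range(4):
    write(f"row{i}", i)
print(len(hfiles), len(memstore))  # one flushed HFile of 3 rows, 1 row still in MemStore
```

Writing the WAL before the MemStore is the crucial ordering: if the server crashes before the flush, the logged edits can still be replayed.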
3. Read Mechanism in HBase Architecture
To read any data, the client first has to reach the relevant Region Server. Once the Region Server is known, the rest of the process is:
- The first scan is made at the read cache, which is the Block cache.
- The next scan location is MemStore, which is the write cache.
- If the data is not found in the block cache or the MemStore, the scanner retrieves it from the HFiles on disk.
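A minimal sketch of this read order, using toy stand-ins for the three locations (illustrative only, not real HBase internals):

```python
# Read order described above: block cache -> MemStore -> HFiles.
block_cache = {}                       # read cache
memstore = {"row2": "in-memstore"}     # write cache, not yet flushed
hfiles = [{"row1": "on-disk"}]         # flushed, immutable files

def read(rowkey):
    if rowkey in block_cache:          # 1. recently read data
        return block_cache[rowkey]
    if rowkey in memstore:             # 2. recently written, unflushed data
        return memstore[rowkey]
    for hfile in reversed(hfiles):     # 3. fall back to the HFiles on disk
        if rowkey in hfile:
            block_cache[rowkey] = hfile[rowkey]  # warm the read cache
            return hfile[rowkey]
    return None

print(read("row2"))            # found in the MemStore
print(read("row1"))            # read from an HFile, now also cached
print("row1" in block_cache)   # subsequent reads hit the block cache
```

Checking the caches before disk is what keeps HBase reads fast; only a miss in both caches pays the cost of an HFile lookup.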
How Does Data Recovery Operate in HBase Architecture?
The HBase architecture reduces the data load in the cluster through compaction and region splits. However, if a server crashes and recovery is needed, it proceeds as follows:
- The ZooKeeper triggers HMaster when a server failure occurs.
- HMaster distributes the crashed server's regions and WAL to active Region Servers.
- These Region Servers re-execute the WAL and rebuild the MemStore for those regions.
- When all the Region Servers have replayed the WAL, all the data, along with the column families, is recovered.
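The replay step can be sketched as a simple re-execution of logged edits (illustrative only, not actual recovery code):

```python
# Toy sketch of WAL replay: a surviving Region Server re-executes the crashed
# server's logged edits, in order, to rebuild the lost MemStore.
crashed_wal = [("row1", "a"), ("row2", "b"), ("row1", "a2")]  # edits in log order

def replay_wal(wal):
    """Re-execute logged edits in order; later edits overwrite earlier ones."""
    memstore = {}
    for rowkey, value in wal:
        memstore[rowkey] = value
    return memstore

recovered = replay_wal(crashed_wal)
print(recovered)
```

Because every write went through the WAL before being acknowledged, replaying the log in order reconstructs exactly the state the MemStore held at the moment of the crash.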
Data has become the new oil across various industries, and there are therefore multiple career opportunities in Hadoop and big data.