Impala is an open-source, native analytic database designed for clustered platforms like Apache Hadoop. It is an interactive, SQL-like query engine that runs on top of the Hadoop Distributed File System (HDFS) to process massive volumes of data at high speed. Impala is also one of the top Hadoop tools for working with big data. Today, we’re going to talk about all things Impala, and hence, we’ve designed this Impala tutorial for you!
This Impala Hadoop tutorial is specially intended for those who wish to learn Impala. However, to reap the maximum benefits of this Impala tutorial, it would help if you have an in-depth understanding of the fundamentals of SQL along with Hadoop and HDFS commands.
What is Impala?
Impala is an MPP (Massively Parallel Processing) SQL query engine written in C++ and Java. Its primary purpose is to process vast volumes of data stored in Hadoop clusters. Impala promises high performance and low latency, and it remains one of the top-performing SQL engines (offering an RDBMS-like experience) for accessing and processing data stored in HDFS.
Another beneficial aspect of Impala is that it integrates with the Hive metastore to allow sharing of the table information between both components. It leverages the existing Apache Hive to perform batch-oriented, long-running jobs in SQL query format. The Impala-Hive integration allows you to use either of the two components – Hive or Impala for data processing or to create tables under a single shared file system (HDFS) without altering the table definition.
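As a minimal sketch of this interoperability (the table name `web_logs` below is hypothetical), a table defined through Impala lands in the shared Hive metastore and can be queried from Hive without redefining it:

```sql
-- Issued from impala-shell: the definition is stored in the shared Hive metastore.
CREATE TABLE web_logs (ip STRING, url STRING, ts TIMESTAMP);

-- The same table is now visible from the Hive shell with no extra definition:
-- hive> SELECT COUNT(*) FROM web_logs;
```

The reverse also holds: a table created through Hive becomes queryable from Impala under the same shared file system.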
Impala combines the multi-user performance of a traditional analytic database and SQL support with the scalability and flexibility of Apache Hadoop. It does so by using standard Hadoop components like HDFS, HBase, YARN, Sentry, and Metastore. Since Impala uses the same metadata, user interface (Hue Beeswax), SQL syntax (Hive SQL), and ODBC (Open Database Connectivity) driver as Apache Hive, it creates a unified and familiar platform for batch-oriented and real-time queries.
Impala can read almost all the file formats used by Hadoop, including Parquet, Avro, and RCFile. Also, Impala is not built on MapReduce – it implements a distributed architecture based on daemon processes that run on the same machines as the data and handle everything related to query execution. As a result, it avoids the startup and scheduling latency of MapReduce jobs. This is precisely what makes Impala much faster than Hive.
Impala – Features
The main features of Impala are:
- It is available as an open-source SQL query engine under the Apache license.
- It lets you access data by using SQL-like queries.
- It supports in-memory data processing – it accesses and analyzes data stored on Hadoop data nodes.
- It allows you to store data in storage systems like HDFS, Apache HBase, and Amazon S3.
- It integrates easily with BI tools like Tableau, Pentaho, and MicroStrategy.
- It supports various file formats, including SequenceFile, Avro, LZO, RCFile, and Parquet.
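To illustrate two of these features together, here is a minimal sketch of a Parquet-backed table whose data lives in Amazon S3 (the bucket path and table name are hypothetical):

```sql
-- Hypothetical external table: STORED AS PARQUET selects the columnar file
-- format, and LOCATION points Impala at an existing S3 path (s3a scheme).
CREATE EXTERNAL TABLE clicks (
  user_id BIGINT,
  page    STRING,
  ts      TIMESTAMP
)
STORED AS PARQUET
LOCATION 's3a://my-bucket/warehouse/clicks/';
```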
Impala – Key Advantages
Using Impala offers some significant advantages to the users, like:
- Since Impala supports in-memory data processing (processing occurs where the data resides – on the Hadoop cluster), there’s no need for data transformation and data movement.
- To access data stored in HDFS, HBase, or Amazon S3 with Impala, you do not need any prior knowledge of Java (MapReduce jobs) – you can access it easily using basic SQL queries.
- Generally, data has to undergo a complicated extract-transform-load (ETL) cycle before it can be queried from business tools. With Impala, there’s no need for this: the time-consuming loading and re-organizing stages are replaced by techniques like exploratory data analysis and data discovery, which speeds up the process.
- Impala pioneered the use of the Parquet file format, a columnar storage layout optimized for the large-scale scan queries typical of data warehouses.
Impala – Drawbacks
Although Impala offers numerous benefits, it has certain limitations as well:
- It has no support for custom serialization and deserialization (SerDes).
- It cannot read custom binary files; it can only read text files.
- Every time new records or files are added to a table’s data directory in HDFS, the table needs to be refreshed.
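The third limitation above is handled with the REFRESH statement, which reloads the file and block metadata for one table after new files land in its HDFS directory. A sketch (the table name is hypothetical):

```sql
-- New files were copied into the table's HDFS directory outside of Impala,
-- e.g. via `hdfs dfs -put`; Impala's cached file listing is now stale.
REFRESH sales_data;

-- Subsequent queries see the newly added files:
SELECT COUNT(*) FROM sales_data;
```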
Impala – Architecture
Impala is decoupled from its storage engine (contrary to traditional storage systems). It includes three principal components – Impala Daemon (Impalad), Impala StateStore, and Impala Metadata & MetaStore.
Impala Daemon (Impalad)

The Impala Daemon, a.k.a. Impalad, runs on every node where Impala is installed. It accepts queries from multiple interfaces (Impala shell, Hue browser, etc.) and processes them. Each time a query is submitted to the Impalad on a particular node, that node becomes the “coordinator node” for the query. In this way, multiple queries can be served concurrently by the Impalads running on different nodes.

Once a query is accepted, the Impalad reads and writes data files and parallelizes the query by distributing the work to the other Impala nodes in the cluster. Users can submit queries either to a dedicated Impalad or, in a load-balanced manner, to any Impalad in the cluster, depending on their requirements. The query fragments are then processed on the different Impalad instances, which return their results to the coordinator node.
Impala StateStore

The Impala StateStore monitors the health of each Impalad and relays the health report of each daemon to all the other daemons. It can run on the same node as the Impala server or on another node in the cluster. If a node fails for any reason, the Impala StateStore notifies all the other nodes about the failure, and the remaining Impala daemons stop assigning further queries to the failed node.
Impala Metadata & MetaStore
In Impala, all the crucial information – table definitions, table and column information, etc. – is stored within a centralized database known as the MetaStore. When dealing with substantial volumes of data spread across multiple partitions, obtaining table-specific metadata can become challenging. This is where Impala’s design helps: since individual Impala nodes cache all the metadata locally, specific information can be obtained instantly.
Each time you update the table definition/table data, all Impala Daemons must also update their metadata cache by retrieving the latest metadata before they can issue a new query against a particular table.
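Impala distinguishes between two statements for keeping this cache current: REFRESH reloads the file metadata for a table Impala already knows about, while INVALIDATE METADATA discards the cached metadata entirely and is needed after schema changes or after tables are created outside Impala (e.g. through Hive). A sketch, reusing the hypothetical table from earlier in this tutorial:

```sql
-- After a table was created or its schema was altered through Hive:
INVALIDATE METADATA my_db.customers;

-- Cheaper alternative when only the data files changed:
REFRESH my_db.customers;
```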
Impala – Installation
Just as you install Hadoop and its ecosystem on a Linux OS, you can do the same with Impala. Since Cloudera was the first to ship Impala, the easiest way to try it is via the Cloudera QuickStart VM.
Read: Hadoop Tutorial
How to download the Cloudera QuickStart VM
To download the Cloudera QuickStart VM, you must follow the steps outlined below.
Open the Cloudera homepage (http://www.cloudera.com/).

To register on Cloudera, click the “Register Now” option, which opens the Account Registration page. If you are already registered on Cloudera, click the “Sign In” option instead, and it will redirect you to the sign-in page.

Once you’ve signed in, open the download page of the website by clicking the “Downloads” option at the top of the page.

Next, download the Cloudera QuickStart VM by clicking the “Download Now” option, which redirects you to the QuickStart VM download page.

Then select the GET ONE NOW option, accept the license agreement, and submit it.

After the download is complete, you will find three Cloudera VM-compatible options – VMware, KVM, and VIRTUALBOX. Choose your preferred option.
Impala – Query Processing Interfaces
Impala offers three interfaces for processing queries:
Impala-shell – Once you’ve installed and set up Impala using the Cloudera VM, you can start the Impala shell by typing the command “impala-shell” in the terminal.
Hue interface – The Hue browser allows you to process Impala queries. It has an Impala query editor where you can type and execute different Impala queries. However, to use the editor, first, you will need to log in to the Hue browser.
ODBC/JDBC drivers – Like every database, Impala offers ODBC/JDBC drivers. These drivers let you connect to Impala from programming languages that support ODBC/JDBC and build applications that issue Impala queries from those languages.
Query Execution Procedure
Whenever you pass a query using any Impala interfaces, an Impalad in the cluster usually accepts your query. This Impalad then becomes the coordinator node for that particular query. After receiving the query, the coordinator verifies whether or not the query is appropriate by using the Table Schema from the Hive Metastore.
After this, it gathers the locations of the data needed for the query from the HDFS NameNode and forwards this information to the other Impalads to facilitate query execution. Once the Impalads have read the specified data blocks, they process the query. When all the Impalads in the cluster have processed their portions of the query, the coordinator node collects the results and delivers them to you.
Impala Shell Commands
If you are familiar with Hive Shell, you can easily figure out Impala Shell since both share a pretty similar structure – they allow you to create databases and tables, insert data, and issue queries. Impala Shell commands fall into three broad categories: general commands, query-specific options, and table- and database-specific options.
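A typical impala-shell session exercising these categories might look like the following sketch (the database and table names are hypothetical):

```sql
-- Create a database and a table, insert data, and query it:
CREATE DATABASE IF NOT EXISTS my_db;
USE my_db;
CREATE TABLE sample (id INT, name STRING);
INSERT INTO sample VALUES (1, 'alpha'), (2, 'beta');
SELECT * FROM sample;
```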
The help command offers a list of useful commands available in Impala.
[quickstart.cloudera:21000] > help;
Documented commands (type help <topic>):
alter     compute   connect   create    desc      describe  drop
explain   exit      help      history   insert    load      profile
quit      select    set       shell     show      summary   tip
unset     use       values    version   with
This command provides you with the current version of Impala.
[quickstart.cloudera:21000] > version;
Shell version: Impala Shell v2.3.0-cdh5.5.0 (0c891d7) built on Mon Nov 9
12:18:12 PST 2015
Server version: impalad version 2.3.0-cdh5.5.0 RELEASE (build
This command displays the last ten commands executed in Impala Shell.
[quickstart.cloudera:21000] > history;
This command helps connect to a given instance of Impala. If you do not specify any instance, it connects to the default port 21000.
[quickstart.cloudera:21000] > connect;
Connected to quickstart.cloudera:21000
Server version: impalad version 2.3.0-cdh5.5.0 RELEASE (build
As the name suggests, the exit/quit command lets you exit the Impala Shell.
[quickstart.cloudera:21000] > exit;
Query Specific Options
This command returns the execution plan for a particular query.
[quickstart.cloudera:21000] > explain select * from sample;
Query: explain select * from sample
+------------------------------------------------------------------------------------+
| Explain String                                                                     |
+------------------------------------------------------------------------------------+
| Estimated Per-Host Requirements: Memory=48.00MB VCores=1                           |
| WARNING: The following tables are missing relevant table and/or column statistics. |
| my_db.customers                                                                    |
|                                                                                    |
| 01:EXCHANGE [UNPARTITIONED]                                                        |
| 00:SCAN HDFS [my_db.customers]                                                     |
|    partitions=1/1 files=6 size=148B                                                |
+------------------------------------------------------------------------------------+
Fetched 7 row(s) in 0.17s
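The WARNING in the plan above points at missing table statistics; Impala’s planner relies on them for decisions such as join ordering and memory estimation. Statistics are gathered with COMPUTE STATS, as in this sketch:

```sql
-- Collect table and column statistics so the planner can estimate row
-- counts and pick better join strategies:
COMPUTE STATS my_db.customers;

-- Verify that the statistics are now populated:
SHOW TABLE STATS my_db.customers;
```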
This command displays low-level information about the most recent query. It is used for diagnosis and performance tuning of a query.
[quickstart.cloudera:21000] > profile;
Query Runtime Profile:
Session ID: e74927207cd752b5:65ca61e630ad3ad
Session Type: BEESWAX
Start Time: 2016-04-17 23:49:26.08148000
End Time: 2016-04-17 23:49:26.2404000
Query Type: EXPLAIN
Query State: FINISHED
Query Status: OK
Impala Version: impalad version 2.3.0-cdh5.5.0 RELEASE (build 0c891d77280e2129b)
Connected User: cloudera
Default Db: my_db
Sql Statement: explain select * from sample
Query Timeline: 167.304ms
- Start execution: 41.292us (41.292us)
- Planning finished: 56.42ms (56.386ms)
- Rows available: 58.247ms (1.819ms)
- First row fetched: 160.72ms (101.824ms)
- Unregister query: 166.325ms (6.253ms)
- ClientFetchWaitTimer: 107.969ms
- RowMaterializationTimer: 0ns
Table and Database Specific Options
The alter command changes the structure or name of a table.
The describe command provides the metadata of a table, such as its columns and their data types.
The drop command removes a construct, which can be a table, a view, or a database function.
The insert command appends data (rows) to a table or overwrites the data of an existing table.
The select command performs an operation on a particular dataset; it specifies the dataset on which the operation is to be performed.
The show command displays the metastore contents for various constructs, such as tables and databases.
The use command changes the current database context.
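One-line sketches of each of these options (all table, column, and database names are hypothetical):

```sql
ALTER TABLE sample RENAME TO sample_v2;      -- alter: change name/structure
DESCRIBE sample_v2;                          -- describe: column metadata
DROP TABLE IF EXISTS old_table;              -- drop: remove a construct
INSERT INTO sample_v2 VALUES (3, 'gamma');   -- insert: append rows
SELECT name FROM sample_v2 WHERE id = 3;     -- select: query a dataset
SHOW TABLES;                                 -- show: list constructs
USE my_db;                                   -- use: switch database context
```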
Impala – Comments
In Impala, the comments are similar to those in the SQL language. Typically, there are two types of comments:
Any text that follows “--” on a line becomes a single-line comment in Impala.
-- Hello, welcome to upGrad.
All the lines contained between /* and */ form a multiline comment in Impala.
/* Hi, this is an example
of multiline comments in Impala */
We hope that this detailed Impala tutorial helped you understand its intricacies and how it functions.
If you are interested in learning more about Big Data, check out our PG Diploma in Software Development Specialization in Big Data program, which is designed for working professionals and provides 7+ case studies & projects, covers 14 programming languages & tools, and includes practical hands-on workshops, more than 400 hours of rigorous learning, and job placement assistance with top firms.
Learn Software Development Courses online from the World’s top Universities. Earn Executive PG Programs, Advanced Certificate Programs or Masters Programs to fast-track your career.
What is Impala used for?
Impala is a massively parallel processing (MPP) SQL query engine developed to work with clustered platforms such as the Hadoop Distributed File System (HDFS). It is an open-source engine built using the programming languages C++ and Java. Impala is used to process massive volumes of data through interactive, SQL-like queries that retrieve and operate on data as required. Its in-memory computational capability makes SQL operations such as data access significantly faster. Impala connects easily to data visualization applications such as Tableau and offers strong security through Kerberos authentication.
What are the differences between Apache Hive and Impala?
Hive is a data warehouse application used to analyze vast sets of organized, i.e. structured, data in a distributed storage system. Impala, on the other hand, is a query processing engine used for massively parallel processing of large volumes of data. Both Impala and Hive are built on Hadoop, but Hive does not offer the same low-latency parallel query execution. Hive supports fault tolerance, whereas Impala does not; Impala also does not support complex types, while Hive does. Impala is written in C++, while Hive is written in Java. Besides, Impala offers tremendous query-processing speed compared to the slower performance of Hive.
Is Hadoop a type of database system?
Hadoop is not essentially a database but an open-source software platform built mainly to handle huge volumes of data that can either be structured, unstructured or even semi-structured. It is not like a relational database, i.e. it does not use tables to store data or support ACID transactions. Instead, it uses the HDFS (Hadoop Distributed File System) and can even store unstructured data, which is not the case with relational databases. Hadoop supports parallel processing on a massive scale and offers excellent throughput and speed of performance when the volume of data increases.