Apache Kafka Architecture: Comprehensive Guide For Beginners [2024]

Last updated: 15th Jun, 2023 | Read Time: 10 Mins

Before we delve into the details of the Apache Kafka architecture, it is worth shedding some light on why Kafka makes headlines in the first place. Apache Kafka is mainly used in real-time streaming data architectures to provide real-time analytics. Durable, fast, scalable, and fault-tolerant, Kafka’s publish-subscribe messaging system suits use cases such as tracking IoT sensor data or tracking service calls.

Companies like LinkedIn, Netflix, Microsoft, Uber, Spotify, Goldman Sachs, Cisco, PayPal, and many others employ Apache Kafka for processing real-time streaming data. For example, LinkedIn, where Kafka originated, uses it to track operational metrics and activity data. Likewise, for Netflix, Apache Kafka is the de-facto standard for its messaging, eventing, and stream processing needs. 

The utility of Apache Kafka is better appreciated with an understanding of the Apache Kafka architecture and its underlying components. So, let’s explore the details of Kafka’s architecture.

Fundamental Kafka Architecture Concepts

The following concepts are basic to understanding the Apache Kafka architecture:

1. Topics

Kafka topics define the channels through which data is streamed. Producers publish messages to topics, and consumers read messages from the topics they subscribe to. There is no limit on the number of topics that can be created within a Kafka cluster, and a unique name identifies each topic.
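
As a minimal sketch, creating a topic with Kafka’s Java AdminClient might look like the following. The broker address and the topic name ("iot-sensor-readings") are illustrative assumptions, not details from this article:

```java
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Properties;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker

        try (Admin admin = Admin.create(props)) {
            // Topic names must be unique within the cluster; here: 3 partitions, replication factor 1.
            NewTopic topic = new NewTopic("iot-sensor-readings", 3, (short) 1);
            admin.createTopics(List.of(topic)).all().get(); // blocks until the topic is created
        }
    }
}
```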

2. Brokers

Brokers are servers in a Kafka cluster that work as containers and hold multiple topics with distinct partitions. A unique integer ID identifies brokers in a Kafka cluster, and a connection with any one of these brokers means connecting with the entire cluster. 

3. Partitions

Kafka topics are divided into several parts known as partitions. Each partition is an ordered sequence of records, and partitions allow multiple consumers to read data from a particular topic in parallel. The partitions of a topic are distributed across several servers in the Kafka cluster, and each server manages the data and requests for its share of partitions. Messages reach the broker with a key, and the key determines the partition to which the particular message will go. Hence, messages with the same key go to the same partition. If the key is unspecified, the partition is decided following a round-robin approach.
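
The keying behaviour is easy to demonstrate with the Java producer client. In this sketch, the broker address, topic, and keys are illustrative assumptions; the point is that records sharing a key always land in the same partition:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class KeyedSend {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key ("sensor-42"), so both records go to the same partition and stay ordered.
            producer.send(new ProducerRecord<>("iot-sensor-readings", "sensor-42", "21.5"));
            producer.send(new ProducerRecord<>("iot-sensor-readings", "sensor-42", "21.7"));
            // No key: the default partitioner spreads records across partitions
            // (round-robin in older clients, sticky batching in newer ones).
            producer.send(new ProducerRecord<>("iot-sensor-readings", null, "unkeyed reading"));
        }
    }
}
```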

4. Replicas 

In Kafka, replicas are like partition backups to ensure no data loss in case of a planned shutdown or failure. In other words, replicas are copies of partitions.

5. Partition Offsets

Since messages or records in Kafka are assigned to partitions, each record is given an offset that specifies its position within the partition. Thus, the offset value associated with a record helps identify it within the partition. A partition offset holds meaning within that particular partition only, and since records are appended to the end of a partition, older records have lower offset values. (The consumer sketch further below prints these offsets.)

6. Producers

Kafka producers publish messages to one or more topics and send data to the Kafka cluster. As soon as a producer publishes a message to a Kafka topic, the broker receives it and appends it to a specific partition. Producers can also choose the particular partition to which they want to publish a message.

7. Consumers and Consumer Groups

Consumers read messages from the Kafka cluster. When a consumer is ready to receive a message, the data is pulled from the broker. Consumers belong to a consumer group, and each consumer within a particular group is responsible for reading a subset of the partitions of every topic it is subscribed to.
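
As a rough illustration (the group id and topic name are assumed), a consumer joins a group via the group.id setting, subscribes to a topic, and polls for records. Note how each record carries the partition it came from and its offset within that partition:

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("group.id", "sensor-dashboard");        // consumers sharing this id split the partitions
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("iot-sensor-readings"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // The offset identifies the record's position within its partition.
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```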

8. Leader and Follower

Every Kafka partition has one server playing the role of leader. The leader performs all the read and write tasks for that particular partition. The job of a follower, on the other hand, is to replicate the leader’s data. When the leader of a particular partition fails, one of the follower nodes assumes the role of leader. A partition can have zero or more followers.

9. Kafka Cluster

A Kafka cluster consists of one or more servers that are called brokers. A broker is a container that can hold multiple topics with different partitions. A unique integer ID is used to identify brokers in the Kafka cluster. The main goal of a Kafka cluster is to spread workloads evenly across replicas and partitions. Kafka clusters can scale without interruption by adding or removing brokers.

The following diagram is a simplified presentation of the interrelationships between the Apache Kafka architecture components discussed above.

Apache Kafka Cluster Architecture

Here’s a detailed look at the main Kafka architectural components:

1. Kafka Brokers

Kafka clusters typically contain multiple nodes known as brokers, which maintain the load balance. Each Kafka broker can handle hundreds of thousands of reads and writes every second. A broker serves as the leader for one particular partition. The leader has one or several followers, with the data on the leader replicated across the followers of that particular partition.

Followers need to stay updated with the leader’s data. The leader, in turn, keeps track of the followers that are in sync with it. If a follower does not catch up with the leader or is no longer alive, it is removed from the in-sync replica list associated with that leader. A new leader is elected from among the followers upon the leader’s failure, with ZooKeeper supervising the election. Since the brokers are stateless, ZooKeeper maintains the cluster state. The nodes in a cluster send heartbeat messages to ZooKeeper to inform it that they are alive.

2. Kafka Producers

Kafka producers send data directly to the brokers that play the role of leader for a particular partition. The brokers or nodes of the Kafka cluster help producers send messages directly by answering requests for metadata on which servers are alive and where the partition leaders of a topic currently reside, enabling the producer to direct its requests accordingly. The producer decides to which partition it wants to publish messages. Messages in Kafka are sent in batches, called record batches. Producers collect messages in memory and send them in a batch either after a fixed period has elapsed or after a certain number of messages have accumulated.
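
Batching is governed by two producer settings, batch.size and linger.ms. A minimal sketch with illustrative values, added to the producer configuration from the earlier example:

```java
// A batch for a partition is sent once it reaches batch.size bytes,
// or once linger.ms has elapsed, whichever comes first.
props.put("batch.size", "32768"); // ~32 KB per batch
props.put("linger.ms", "20");     // wait up to 20 ms for more records
```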

3. Kafka Consumers

Kafka consumers issue requests to brokers denoting the partitions they want to consume. A consumer specifies the partition offset in its request and receives a chunk of the log (starting from the offset position) from the broker. A log contains the records for a configurable period known as the retention period.

Consumers may also re-consume data as long as the log still contains it. Kafka consumers work on a pull-based approach, which means that the brokers do not immediately push data onto the consumers. Instead, consumers first send requests to brokers signalling that they are ready to consume data. Hence, the pull-based system ensures that consumers are not overwhelmed with messages and can catch up if they fall behind.
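
In the Java client, re-consuming comes down to seeking to an earlier offset. This fragment assumes a consumer configured like the one in the earlier sketch, but using manual assignment instead of subscribe() (it also needs import org.apache.kafka.common.TopicPartition):

```java
// Replay partition 0 of the topic from the earliest offset still retained in the log.
TopicPartition first = new TopicPartition("iot-sensor-readings", 0);
consumer.assign(List.of(first));          // manual assignment instead of subscribe()
consumer.seekToBeginning(List.of(first)); // or consumer.seek(first, someOffset)
ConsumerRecords<String, String> replayed = consumer.poll(Duration.ofSeconds(1));
```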

Following is a simplified Apache Kafka architecture diagram:

Apache Kafka API Architecture

Apache Kafka has four key APIs – the Streams API, Connector API, Producer API, and Consumer API. Let’s see what role each has to play in enhancing the capabilities of Apache Kafka:

1. Streams API

The Streams API of Kafka allows an application to process data using stream processing logic. Using the Streams API, applications can consume input streams from one or several topics, process them with stream operations, produce output streams, and eventually send them to one or more topics. Thus, the Streams API facilitates the transformation of input streams into output streams.
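
A minimal Streams sketch under assumed names (the topics and application id are illustrative): it reads Celsius readings from one topic, converts each value to Fahrenheit, and writes the result to another topic:

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class CelsiusToFahrenheit {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "temperature-converter"); // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");     // assumed local broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Consume an input stream, transform each value, and produce an output stream.
        KStream<String, String> celsius = builder.stream("iot-sensor-readings");
        celsius.mapValues(v -> String.valueOf(Double.parseDouble(v) * 9 / 5 + 32))
               .to("iot-sensor-readings-fahrenheit");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}
```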

2. Connector API

The Connector API of Kafka is helpful for building, running, and managing reusable producers and consumers that connect Kafka topics to existing data systems or applications. For instance, a connector to a relational database could capture all updates and make sure the changes are available within a Kafka topic.
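
For a flavour of what a connector configuration looks like, here is a sketch using the FileStreamSource demo connector that ships with Kafka (the file path and names are assumptions; in recent Kafka versions this connector must be added to plugin.path):

```properties
# file-source.properties (standalone mode)
name=local-file-source
connector.class=org.apache.kafka.connect.file.FileStreamSourceConnector
tasks.max=1
file=/tmp/input.txt
topic=file-lines
```

It can be run with bin/connect-standalone.sh config/connect-standalone.properties file-source.properties, after which every line appended to the file appears as a record on the topic.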

3. Producer API

The Producer API of Kafka allows applications to publish a stream of records to Kafka topics.

4. Consumer API

The Consumer API of Kafka allows applications to subscribe to Kafka topics. It also enables applications to process streams of records that are produced to those Kafka topics.

Role of ZooKeeper in Kafka

ZooKeeper is responsible for managing the configuration, coordination, and synchronization of Kafka brokers within a Kafka cluster architecture:

  • Cluster Coordination

ZooKeeper maintains information about the Kafka cluster architecture, including the metadata about topics, partitions, brokers, and consumers. It serves as a centralized registry where brokers register themselves and consumers discover the current state of the Kafka cluster. ZooKeeper enables coordination and synchronization among the various components in the cluster.

  • Leader Election

Kafka relies on ZooKeeper for leader election within each partition. Each partition in Kafka has one broker designated as the leader, responsible for handling all read and write operations for that partition. If the leader fails, ZooKeeper is used to elect a new leader from the available replicas. By providing a consistent view of the leader for each partition, ZooKeeper ensures high availability and fault tolerance.

  • Broker Registration and Health Monitoring

When a Kafka broker starts up, it registers itself with ZooKeeper, providing details such as its hostname and port. ZooKeeper maintains a live list of available brokers, which is crucial for metadata management and load balancing. ZooKeeper also monitors the health of brokers by regularly checking their status. If a broker becomes unresponsive or fails, ZooKeeper updates the broker list accordingly.

How is the heartbeat managed for Kafka Brokers?

The heartbeat between Kafka clients and brokers is managed by the following parameters (a configuration sketch follows the list):

  • heartbeat.interval.ms

The expected time between heartbeats to the consumer coordinator when using Kafka’s group management facilities. Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group. 

  • session.timeout.ms

The timeout used to detect client failures when using Kafka’s group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, the broker removes the client from the group and initiates a rebalance.

  • bootstrap.servers

bootstrap.servers is a configuration parameter that specifies a list of host and port pairs, the addresses of the Kafka brokers that a Kafka client connects to initially in order to bootstrap itself. A host and port pair uses : as the separator, for example, localhost:9092. The bootstrap.servers parameter is used only for the initial connection to the Kafka cluster; after that, the brokers return the advertised.listeners information, a list of addresses that clients can use to connect to the Kafka brokers.
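
A minimal sketch of these settings on a consumer, with values close to recent client defaults (the broker addresses are assumptions):

```java
Properties props = new Properties();
// Initial contact points; the client discovers the rest of the cluster from these brokers.
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
// Send a heartbeat every 3 seconds; the group coordinator evicts the consumer
// after 45 seconds without one and triggers a rebalance.
props.put("heartbeat.interval.ms", "3000");
props.put("session.timeout.ms", "45000");
```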

Way Forward

If you are interested in knowing more about Big Data, check out our Advanced Certificate Programme in Big Data from IIIT Bangalore.

Here’s an overview of the program with some key highlights:

  • Executive PGP from IIIT Bangalore with certifications in Data Science and Cloud Infrastructure
  • Online sessions and live lectures with 400+ hours of content
  • 7+ case studies and projects
  • 14+ programming languages and tools
  • 360-degree career support
  • Peer and industry networking

Sign up for more details about the course!

Frequently Asked Questions (FAQs)

1. What is Kafka used for?

Apache Kafka is mainly used for building real-time streaming data pipelines and applications that adapt to those data streams. It allows both storage and analysis of real-time and historical data through a combination of messaging, storage, and stream processing.

2. Is Kafka a framework?

Apache Kafka is open-source software that provides a framework for storing, reading, and analysing streaming data. Since it is open-source, Kafka is free to use, with many developers and users contributing new features, updates, and support for new users.

3. Why do we need Kafka Streams?

Kafka Streams is a client library for building microservices and streaming applications where the input data and output data are stored in the Apache Kafka cluster. On the one hand, it offers the benefits of Apache Kafka’s server-side cluster technology. On the other, it simplifies writing and deploying standard Scala and Java applications on the client side.
