Before we delve into the details of the Apache Kafka architecture, it is worth shedding some light on why Kafka makes headlines in the first place. Apache Kafka is mainly used in real-time streaming data architectures to provide real-time analytics. Durable, fast, scalable, and fault-tolerant, Kafka's publish-subscribe messaging system suits use cases such as tracking IoT sensor data or tracking service calls.
Companies like LinkedIn, Netflix, Microsoft, Uber, Spotify, Goldman Sachs, Cisco, PayPal, and many others employ Apache Kafka for processing real-time streaming data. For example, LinkedIn, where Kafka originated, uses it to track operational metrics and activity data. Likewise, for Netflix, Apache Kafka is the de facto standard for its messaging, eventing, and stream processing needs.
The utility of Apache Kafka is better appreciated with an understanding of the Apache Kafka architecture and its underlying components. So, let’s explore the details of Kafka’s architecture.
Fundamental Kafka Architecture Concepts
The following concepts are basic to understanding the Apache Kafka architecture:
1. Topics
Kafka topics define the channels through which data is streamed. Producers publish messages to topics, and consumers read messages from the topics they subscribe to. There is no limit on the number of topics that can be created within a Kafka cluster, and each topic is identified by a unique name.
2. Brokers
Brokers are servers in a Kafka cluster that act as containers holding multiple topics with distinct partitions. A unique integer ID identifies each broker in a Kafka cluster, and connecting to any one broker means connecting to the entire cluster.
Explore our Popular Software Engineering Courses
3. Partitions
Kafka topics are divided into several parts known as partitions. Each partition is an ordered sequence of messages, and partitions allow multiple consumers to read data from a particular topic in parallel. The partitions of a topic are distributed across several servers in the Kafka cluster, and each server manages the data and requests for its share of the partitions. Messages reach the broker along with a key, and the key determines the partition to which the particular message will go; hence, messages with the same key go to the same partition (a sketch of this mapping follows). If the key is unspecified, the partition is chosen in a round-robin fashion.
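To make the key-to-partition mapping concrete, here is a minimal sketch that mirrors the murmur2-based hashing used by Kafka's default partitioner for keyed messages. It assumes the kafka-clients library is on the classpath; the key and partition count are illustrative.

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class KeyToPartition {
    // Illustrative only: mirrors the hashing applied by Kafka's default
    // partitioner for keyed messages (murmur2 hash modulo partition count).
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always maps to the same partition.
        System.out.println(partitionFor("user-42", 6));
        System.out.println(partitionFor("user-42", 6)); // identical result
    }
}
```

Because the hash depends only on the key bytes and the partition count, a given key always lands in the same partition, provided the number of partitions does not change.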
4. Replicas
In Kafka, replicas are like partition backups to ensure no data loss in case of a planned shutdown or failure. In other words, replicas are copies of partitions.
5. Partition Offsets
Since messages or records in Kafka are assigned to partitions, each record is provided with an offset to specify its position within the partition. Thus, the offset value associated with a record allows its easy identification within the partition. A partition offset holds meaning within that particular partition only, and since records are appended to the end of a partition, older records have lower offset values.
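As a hedged illustration of offsets, the sketch below manually assigns a single partition and seeks to a specific offset before polling. The topic name, partition number, and offset are assumptions for the example.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SeekToOffset {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        props.put("enable.auto.commit", "false"); // no consumer group, so no offset commits

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("sensor-readings", 0);
            consumer.assign(List.of(partition)); // manual assignment, no group management
            consumer.seek(partition, 42L);       // jump to offset 42 within partition 0
            consumer.poll(Duration.ofMillis(500))
                    .forEach(r -> System.out.println(r.offset() + ": " + r.value()));
        }
    }
}
```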
6. Producers
Kafka producers publish messages to one or more topics, sending data to the Kafka cluster. When a producer publishes a message to a Kafka topic, the broker receives the message and appends it to a specific partition. Producers can also choose the partition to which they want to publish a message, as the sketch below shows.
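Here is a minimal producer sketch, assuming a local broker at localhost:9092 and a hypothetical sensor-readings topic. It shows both a keyed record, where the key decides the partition, and an explicitly chosen partition.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed record: every message with key "sensor-1" lands in the same partition.
            producer.send(new ProducerRecord<>("sensor-readings", "sensor-1", "23.7"));
            // Explicit partition: the (topic, partition, key, value) form pins partition 0.
            producer.send(new ProducerRecord<>("sensor-readings", 0, "sensor-2", "19.2"));
        }
    }
}
```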
7. Consumers and Consumer Groups
Consumers read messages from the Kafka cluster. When a consumer is ready to receive a message, the data is pulled from the broker. Consumers belong to a consumer group, and each consumer within a particular group is responsible for reading a subset of the partitions of every topic it is subscribed to (see the sketch below).
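A minimal consumer-group sketch, again assuming a local broker and the hypothetical sensor-readings topic. Running a second copy of this program with the same group.id would split the topic's partitions between the two consumers.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "sensor-dashboard"); // consumers sharing this id split the partitions
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("sensor-readings"));
            while (true) {
                // Pull-based: the consumer asks the broker for records.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}
```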
8. Leader and Follower
Every Kafka partition has one server playing the role of leader. The leader performs all the read and write tasks for that particular partition, while the job of a follower is to replicate the leader's data. When the leader of a partition fails, one of the follower nodes assumes the leader role. A partition can have zero or more followers.
9. Kafka Cluster
A Kafka cluster consists of one or more servers that are called brokers. A broker is a container that can hold multiple topics with different partitions. A unique integer ID is used to identify brokers in the Kafka cluster. The main goal of a Kafka cluster is to spread workloads evenly across replicas and partitions. Kafka clusters can scale without interruption by adding or removing brokers.
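As a sketch of how partitions and replicas are declared when a topic is created, the snippet below uses the AdminClient from kafka-clients. The topic name is hypothetical, and a replication factor of 3 assumes the cluster has at least three brokers.

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateReplicatedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Connecting to any one broker reaches the whole cluster.
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 6 partitions spread the topic across brokers; replication factor 3
            // keeps two follower copies of each partition in addition to the leader.
            NewTopic topic = new NewTopic("sensor-readings", 6, (short) 3);
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```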
Apache Kafka Cluster Architecture
Here’s a detailed look at the main Kafka architectural components:
1. Kafka Brokers
Kafka clusters typically contain multiple nodes known as brokers, which maintain the load balance. Each Kafka broker can handle hundreds of thousands of reads and writes every second. For each partition, one broker serves as the leader. The leader has one or more followers, with the data on the leader replicated across the followers of that particular partition.
Followers need to stay up to date with the leader's data, and the leader, in turn, keeps track of which followers are in sync with it. If a follower falls behind the leader or is no longer alive, it is removed from the in-sync replica list associated with that leader. When a leader dies, a new leader is elected from among the followers, with ZooKeeper supervising the election. Since brokers are stateless, ZooKeeper maintains the cluster state. The nodes in a cluster send heartbeat messages to ZooKeeper to inform it that they are alive.
2. Kafka Producers
Kafka producers send data directly to the brokers that play the role of leader for a particular partition. The brokers or nodes of the Kafka cluster enable this direct delivery by answering requests for metadata about which servers are alive and where the partition leaders of a topic reside, so the producer can direct its requests accordingly. The producer decides to which partition it wants to publish messages. Messages in Kafka are sent in batches, called record batches: producers collect messages in memory and send them as a batch either after a fixed period has elapsed or after a certain amount of data has accumulated. The configuration sketch below shows the settings involved.
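The batching behaviour described above maps to two real producer settings: linger.ms (the fixed wait period) and batch.size (the per-partition batch limit, measured in bytes rather than a message count). The values below are illustrative, not recommendations.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducerConfig {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        // Wait up to 20 ms for more records so they can be batched together...
        props.put("linger.ms", "20");
        // ...but send a partition's batch earlier once it reaches 32 KB.
        props.put("batch.size", "32768");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}
```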
3. Kafka Consumers
Kafka consumers issue requests to brokers denoting the partitions they want to consume. A consumer specifies the partition offset in its request and receives a chunk of the log (starting from the offset position) from the broker. A log contains the records for a configurable period known as the retention period.
Consumers may also re-consume data as long as the log contains it. Kafka consumers work on a pull-based approach, which means that brokers do not immediately push data to consumers; instead, consumers send requests to brokers signalling that they are ready to consume data. This pull-based system ensures that consumers are not overwhelmed with messages and can catch up if they fall behind.
Apache Kafka API Architecture
Apache Kafka has four key APIs – the Streams API, Connector API, Producer API, and Consumer API. Let’s see what role each has to play in enhancing the capabilities of Apache Kafka:
1. Streams API
The Streams API of Kafka allows an application to act as a stream processor. Using the Streams API, applications can consume input streams from one or several topics, process them with stream operations, produce output streams, and eventually send them to one or more topics. Thus, the Streams API facilitates the transformation of input streams into output streams.
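A minimal Kafka Streams sketch, assuming hypothetical input-topic and output-topic names: it consumes an input stream, applies a simple transformation, and produces the result to another topic.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class UppercaseStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "uppercase-app"); // illustrative app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase()) // transform each record's value
             .to("output-topic");                     // write the result stream

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```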
2. Connector API
The Connector API of Kafka is helpful for building, running, and managing reusable producers and consumers that connect Kafka topics to existing data systems or applications. For instance, a connector to a relational database could capture all updates and make sure the changes are available within a Kafka topic.
3. Producer API
The Producer API of Kafka allows applications to publish a stream of records to Kafka topics.
4. Consumer API
The Consumer API of Kafka allows applications to subscribe to Kafka topics. It also enables applications to process the streams of records produced to those topics.
Role of ZooKeeper in Kafka
ZooKeeper is responsible for managing the configuration, coordination, and synchronisation of Kafka brokers within a Kafka cluster architecture:
- Cluster Coordination
ZooKeeper maintains information about the Kafka cluster architecture, including the metadata about topics, partitions, brokers, and consumers. It serves as a centralized registry where brokers register themselves and consumers discover the current state of the Kafka cluster. ZooKeeper enables coordination and synchronization among the various components in the cluster.
- Leader Election
Kafka relies on ZooKeeper for leader election within each partition. Each partition in Kafka has one broker designated as the leader, responsible for handling all read and write operations for that partition. If the leader fails, ZooKeeper is used to elect a new leader from the available replicas. By providing a consistent view of the leader for each partition, ZooKeeper ensures high availability and fault tolerance.
- Broker Registration and Health Monitoring
When a Kafka broker starts up, it registers itself with ZooKeeper, providing details such as its hostname and port. ZooKeeper maintains a live list of available brokers, which is crucial for metadata management and load balancing. ZooKeeper also monitors the health of brokers by regularly checking their status. If a broker becomes unresponsive or fails, ZooKeeper updates the broker list accordingly.
How is the heartbeat managed for Kafka Brokers?
The heartbeat for Kafka brokers is managed by the following parameters (a configuration sketch follows the list):
- heartbeat.interval.ms
The expected time between heartbeats to the consumer coordinator when using Kafka’s group management facilities. Heartbeats are used to ensure that the consumer’s session stays active and to facilitate rebalancing when new consumers join or leave the group.
- session.timeout.ms
The timeout used to detect client failures when using Kafka's group management facility. The client sends periodic heartbeats to indicate its liveness to the broker. If no heartbeats are received by the broker before the expiration of this session timeout, the broker removes the client from the group and initiates a rebalance.
- bootstrap.servers
bootstrap.servers is a configuration parameter that specifies a list of host and port pairs, the addresses of the Kafka brokers in a "bootstrap" Kafka cluster that a Kafka client connects to initially to bootstrap itself. A host and port pair uses a colon as the separator, for example, localhost:9092. The bootstrap.servers parameter is used only for the initial connection to the Kafka cluster; after that, Kafka returns the advertised.listeners info, a list of addresses that can be used to connect to the Kafka brokers.
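Putting the three parameters together, here is a hedged consumer configuration sketch; the broker addresses, group id, and timeout values are illustrative, not recommendations.

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HeartbeatTunedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Initial contact points; Kafka returns the full broker list afterwards.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("group.id", "sensor-dashboard"); // hypothetical group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        // Send a heartbeat to the group coordinator every 3 seconds...
        props.put("heartbeat.interval.ms", "3000");
        // ...and drop the consumer from the group if none arrives for 30 seconds.
        props.put("session.timeout.ms", "30000");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.close();
    }
}
```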
Way Forward
If you are interested in learning more about Big Data, check out our Advanced Certificate Programme in Big Data from IIIT Bangalore.
Here’s an overview of the program with some key highlights:
- Executive PGP from IIIT Bangalore with certifications in Data Science and Cloud Infrastructure
- Online sessions and live lectures with 400+ hours of content
- 7+ case studies and projects
- 14+ programming languages and tools
- 360-degree career support
- Peer and industry networking
Sign up for more details about the course!