Kubernetes Cheat Sheet: Architecture, Components, Command Sheet
Updated on Aug 20, 2025 | 6 min read | 14.18K+ views
In the world of modern technology, Kubernetes is the undisputed king of container orchestration. It’s the essential tool for deploying and scaling complex applications, but its powerful features come with a steep learning curve. How can you keep all the critical commands and concepts at your fingertips?
That's where this Kubernetes Cheat Sheet comes in. It’s a quick and handy reference designed for both beginners who are just getting started and experienced professionals who need a fast way to recall specific commands. Packed with essentials, this Kubernetes commands Cheat Sheet will help you work faster, smarter, and with more confidence.
Ready to master cutting-edge technologies like Kubernetes and AI? Enroll in our Artificial Intelligence & Machine Learning Courses today and take your skills to the next level!
What is Kubernetes?
Kubernetes (also known as “Kube” or K8s) is an open-source platform for automatically deploying and scaling containers across clusters of hosts, providing a container-centric infrastructure. By clustering the hosts that run Linux containers, it makes managing them easy and efficient.
Kubernetes is designed to manage the complete life cycle of containerized applications and services. A Kubernetes user can define how an application should run and how it should interact with other applications.
With Kubernetes, users can shift traffic between different versions of an application, roll out updates, and scale services up and down. It offers a high degree of flexibility, reliability, and power in managing applications.
Some of the major features of Kubernetes are:
- Automated rollouts and rollbacks
- Self-healing (failed containers are restarted and rescheduled automatically)
- Service discovery and load balancing
- Horizontal scaling of applications
- Storage orchestration
- Secret and configuration management
Kubernetes Architecture
The Kubernetes architecture is layered: the lower layers contain the complexity that the higher layers abstract away. Individual physical or virtual machines are brought together into a cluster, and the servers communicate with each other over a shared network. So, like other distributed platforms, Kubernetes has at least one master and multiple compute nodes.
Let’s have a look at the purpose and components of master and nodes in the Kubernetes architecture.
Master
The master maintains the desired state of the cluster. Since it manages the whole cluster, it is called the master. It contains:
- API Server: the entry point for all commands and the only component that talks to etcd
- etcd: a reliable key-value store that holds the cluster's configuration and state
- Scheduler: assigns newly created pods to nodes
- Controller Manager: runs the controllers that keep the cluster in its desired state
Nodes
A node runs the services required to host pods and is managed by the master. Nodes are also called minions. Each node contains:
- kubelet: the agent that ensures the containers described in pod specs are running and healthy
- kube-proxy: maintains the network rules that implement Kubernetes Services
- A container runtime (such as Docker or containerd): pulls images and runs the containers
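As a quick sanity check, the commands below list the nodes and the control-plane pods that implement the components described above. This is a minimal sketch and assumes kubectl is already configured against a running cluster.

```bash
# List every node in the cluster along with its role and status
kubectl get nodes -o wide

# Control-plane components (API server, etcd, scheduler, controller manager)
# typically run as pods in the kube-system namespace
kubectl get pods -n kube-system

# Inspect the capacity, conditions, and pods reported by a single node
kubectl describe node <node-name>
```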
Now, let’s understand the important commands of Kubernetes.
Kubectl Commands
Kubectl is the command-line tool for Kubernetes. The basic kubectl commands can be divided into the following groups:
Pods and Container Introspection
| Functionality | Command |
| --- | --- |
| For describing a pod | kubectl describe pod <name> |
| For listing all current pods | kubectl get pods |
| For listing all replication controllers | kubectl get rc |
| For describing a replication controller | kubectl describe rc <name> |
| For listing replication controllers in a namespace | kubectl get rc --namespace=<namespace> |
| For describing a service | kubectl describe svc <name> |
| For listing services | kubectl get svc |
| For watching nodes continuously | kubectl get nodes -w |
| For deleting a pod | kubectl delete pod <name> |
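For example, a typical inspection pass might look like the sketch below; the pod name web-75c9 is hypothetical, so substitute one from your own cluster.

```bash
# List pods, then drill into one that is misbehaving
kubectl get pods
kubectl describe pod web-75c9   # the Events section at the bottom often explains failures

# Keep an eye on node status while the pod is rescheduled
kubectl get nodes -w
```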
Cluster Introspection
| Functionality | Command |
| --- | --- |
| For getting version-related information | kubectl version |
| For getting configuration details | kubectl config view |
| For getting cluster-related information | kubectl cluster-info |
| For getting information about a node | kubectl describe node <node> |
Debugging Commands
| Functionality | Command |
| --- | --- |
| For displaying metrics for pods | kubectl top pod |
| For displaying metrics for nodes | kubectl top node |
| For watching kubelet logs on a node | watch -n 2 cat /var/log/kubelet.log |
| For streaming logs from a container in a pod | kubectl logs -f <name> [-c <container>] |
| For executing a command in a pod, optionally selecting a container | kubectl exec <pod> [-c <container>] -- <command> |
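Putting a few of these together, a quick debugging pass on a pod might look like the sketch below. The pod name my-app-0 and container name app are hypothetical, and kubectl top requires the metrics-server add-on.

```bash
# Resource usage across pods and nodes (needs metrics-server installed)
kubectl top pod
kubectl top node

# Stream logs from a specific container in the pod
kubectl logs -f my-app-0 -c app

# Run a one-off command inside the container
kubectl exec my-app-0 -c app -- env
```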
Quick Commands
The quick commands below are used frequently and are therefore very useful to keep handy.
| Functionality | Command |
| --- | --- |
| For launching a pod with a name and an image | kubectl run <name> --image=<image-name> |
| For creating the resources described in <manifest.yaml> | kubectl create -f <manifest.yaml> |
| For scaling a replication controller to <count> instances | kubectl scale --replicas=<count> rc <name> |
| For mapping an external port to the internal replication controller port | kubectl expose rc <name> --port=<external> --target-port=<internal> |
| For draining a node <node> by evicting all of its pods | kubectl drain <node> --delete-local-data --force --ignore-daemonsets |
| For creating a namespace | kubectl create namespace <namespace> |
| For allowing the master node to run pods | kubectl taint nodes --all node-role.kubernetes.io/master- |
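As a worked example, the sketch below strings several quick commands together to launch, expose, and scale a throwaway nginx workload. It uses the Deployment-based equivalents that current kubectl versions favour, and the names and ports are illustrative.

```bash
# Create a deployment from an image (the modern equivalent of "kubectl run" for long-lived workloads)
kubectl create deployment hello --image=nginx

# Expose it: external port 80 forwards to container port 80
kubectl expose deployment hello --port=80 --target-port=80

# Scale to 3 replicas, check the result, then clean up
kubectl scale deployment hello --replicas=3
kubectl get pods
kubectl delete deployment hello
```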
Objects
Some of the familiar objects used in Kubernetes are as follows:
| List of Common Objects (short name = resource type) | |
| --- | --- |
| all | controllerrevisions |
| cm = configmaps | clusterrolebindings |
| cronjobs | cs = componentstatuses |
| deploy = deployments | limits = limitranges |
| ev = events | hpa = horizontalpodautoscalers |
| jobs | ds = daemonsets |
| no = nodes | ns = namespaces |
| po = pods | podpreset |
| psp = podsecuritypolicies | pv = persistentvolumes |
| quota = resourcequotas | rs = replicasets |
| roles | rc = replicationcontrollers |
| sc = storageclasses | pdb = poddisruptionbudgets |
| clusterroles | secrets |
| crd = customresourcedefinitions | podtemplates |
| csr = certificatesigningrequests | sa = serviceaccounts |
| netpol = networkpolicies | rolebindings |
| ing = ingresses | pvc = persistentvolumeclaims |
| ep = endpoints | sts = statefulsets |
Also Read: Regularization in Deep Learning
That covers the basic information about Kubernetes, its architecture, and its most important commands in this Kubernetes cheat sheet.
In conclusion, Kubernetes is a powerful but complex system, and having a quick reference is essential for any developer or DevOps professional. This Kubernetes Cheat Sheet is designed to be your go-to guide, helping you save time and work more efficiently.
Bookmark this page and use it as your trusted Kubernetes commands Cheat Sheet to streamline your daily workflow, from launching your first pod to managing a large-scale cluster. Keep it handy, and happy orchestrating!
If you’re interested in learning more about machine learning and deep learning techniques, check out IIIT-B & upGrad’s Executive Diploma in Machine Learning & AI, which is designed for working professionals and offers 450+ hours of rigorous training, 30+ case studies & assignments, IIIT-B Alumni status, 5+ practical hands-on capstone projects, and job assistance with top firms.
Frequently Asked Questions (FAQs)
Kubernetes, often abbreviated as K8s, is a powerful open-source platform designed for automating the deployment, scaling, and management of containerized applications. A container is a lightweight, ready-to-run software package that includes everything needed to run an application: code, runtime, and system libraries. Kubernetes groups these containers into logical units called Pods, making it easy to manage and discover them. By deploying and scaling these containers across a cluster of host machines, it provides a robust, container-centric infrastructure and manages the entire lifecycle of your applications.
The Kubernetes architecture consists of a Control Plane (formerly master nodes) and one or more Worker Nodes. The Control Plane acts as the brain of the cluster, making global decisions and managing the overall state. Its key components include the API Server (the entry point for all commands), etcd (a reliable key-value store for cluster data), the Scheduler (which assigns Pods to Nodes), and the Controller Manager (which runs controllers to maintain the desired state). The Worker Nodes are the machines where your applications actually run. Each Worker Node contains a kubelet (to manage Pods), a kube-proxy (for networking), and a container runtime like Docker.
A Pod is the smallest and simplest deployable unit in the Kubernetes object model. It represents a single instance of a running process in your cluster and can contain one or more tightly coupled containers. These containers share the same network namespace and storage volumes, allowing them to communicate with each other as if they were on the same machine. It's the basic building block because all other higher-level objects in Kubernetes, like Deployments and Stateful Sets, are ultimately responsible for creating and managing Pods.
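A minimal single-container Pod can be declared in YAML and applied with kubectl. The sketch below is illustrative; the pod name hello-pod and the nginx image are just placeholders.

```bash
# Apply a minimal Pod manifest straight from stdin
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
EOF
```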
A container is a single, isolated package of software. A Pod is a higher-level abstraction that can hold a group of one or more containers. While it's common to run a single container within a Pod, you can also run multiple containers together in a "sidecar" pattern. In this case, the containers inside the Pod are scheduled on the same Worker Node and can share resources like storage and networking, which is useful when they need to work together closely.
A Kubernetes Service is an abstraction that defines a logical set of Pods and a policy by which to access them. Since Pods are ephemeral and can be created or destroyed, their IP addresses are not stable. A Service provides a single, stable IP address and DNS name that can be used to direct traffic to the correct group of Pods, even as they are scaled or replaced. This is essential for enabling communication between different parts of your application and for exposing your application to the outside world.
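For instance, the Service sketched below gives every Pod labelled app: web a single stable ClusterIP and DNS name; the selector, name, and ports are illustrative.

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web-svc
spec:
  selector:
    app: web          # routes traffic to all Pods carrying this label
  ports:
  - port: 80          # port exposed by the Service
    targetPort: 8080  # port the containers actually listen on
EOF
```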
A Deployment is a higher-level Kubernetes object that manages the lifecycle of your application's Pods. Its primary job is to ensure that a specified number of replica Pods are running at all times. If a Pod crashes, the Deployment's controller will automatically replace it. Deployments also provide a declarative way to manage updates to your application, allowing you to perform rolling updates with zero downtime, which is a core feature for modern application management.
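A minimal Deployment and the commands for a rolling update might look like the sketch below; the names are illustrative and nginx is used as a stand-in image.

```bash
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
EOF

# Trigger a rolling update by changing the image, watch it progress, and roll back if needed
kubectl set image deployment/web web=nginx:1.27
kubectl rollout status deployment/web
kubectl rollout undo deployment/web
```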
A ReplicaSet is a lower-level object whose sole purpose is to maintain a stable set of replica Pods running at any given time. A Deployment is a higher-level controller that manages ReplicaSets and provides more advanced features, most importantly, the ability to perform declarative updates and rollbacks. In modern Kubernetes usage, you almost always interact with a Deployment and let it manage the underlying ReplicaSets for you, rather than creating ReplicaSets directly.
A Namespace is a way to create a virtual cluster inside your physical Kubernetes cluster. It provides a scope for names, meaning that resource names (like for a Pod or Service) only need to be unique within a Namespace, not across the entire cluster. Namespaces are used to organize clusters into isolated environments, for example, creating separate development, testing, and production namespaces, or for separating the resources of different teams or projects.
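In practice, you create a namespace once and then scope individual commands to it with -n; the namespace name staging below is illustrative.

```bash
kubectl create namespace staging

# Run queries and apply manifests inside that namespace
kubectl get pods -n staging
kubectl apply -f app.yaml -n staging   # assumes an app.yaml manifest exists locally

# Make it the default namespace for the current kubectl context
kubectl config set-context --current --namespace=staging
```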
ConfigMaps are used to store non-confidential configuration data, such as application settings or command-line arguments, as key-value pairs. This allows you to decouple your configuration from your application code, making your application more portable. Secrets are similar but are specifically designed for storing sensitive information, such as passwords, API keys, or TLS certificates. The data in a Secret is stored in a base64 encoded format and can be mounted into Pods as files or environment variables.
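Both objects can be created directly from literal key-value pairs, as in the sketch below; the names, keys, and values are made up for illustration.

```bash
# Non-sensitive settings
kubectl create configmap app-config --from-literal=LOG_LEVEL=info

# Sensitive values; Kubernetes stores them base64 encoded in etcd
kubectl create secret generic db-credentials \
  --from-literal=username=admin \
  --from-literal=password='s3cr3t'

# Inspect the results (secret values are shown base64 encoded)
kubectl get configmap app-config -o yaml
kubectl get secret db-credentials -o yaml
```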
etcd is the consistent and highly-available key-value store that serves as the single source of truth for the entire Kubernetes cluster. It stores all of the cluster's configuration data, state, and metadata. The API Server is the only component that communicates directly with etcd. Because it holds the complete state of the cluster, backing up etcd is a critical administrative task for disaster recovery.
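On clusters you administer yourself, that backup is typically taken with etcdctl. The sketch below assumes v3 of the etcd API and the certificate paths used by a default kubeadm installation; both will differ between environments.

```bash
# Take a point-in-time snapshot of the etcd keyspace
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot that was written
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db
```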
The kubelet is an agent that runs on each Worker Node in the cluster. Its primary job is to ensure that the containers described in the PodSpecs (which it receives from the API Server) are running and healthy. The kube-proxy is a network proxy that also runs on each node and is responsible for implementing the Kubernetes Service concept. It maintains network rules on the node that allow for network communication to your Pods from both inside and outside the cluster.
For stateful applications that require persistent data, Kubernetes provides a powerful storage subsystem built around Volumes, PersistentVolumes (PVs), and PersistentVolumeClaims (PVCs). A Volume provides storage that lives as long as the Pod itself. For data that needs to persist beyond the life of a Pod, a developer creates a PersistentVolumeClaim to request storage, and an administrator provides that storage by creating a PersistentVolume. This decouples the storage from the Pod lifecycle.
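The claim side of that workflow looks like the sketch below; the storage class name, size, and claim name are illustrative, and the bound PVC can then be mounted into a Pod like any other volume.

```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard   # must match a StorageClass available in the cluster
EOF

# Check whether the claim has been bound to a PersistentVolume
kubectl get pvc data-pvc
```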
Helm is often referred to as "the package manager for Kubernetes." It is a tool that simplifies the process of deploying and managing complex applications on a Kubernetes cluster. Helm uses a packaging format called charts, which are collections of pre-configured Kubernetes resource files. Using Helm, you can install, upgrade, and manage even the most complex applications with a single command, which is much simpler than managing dozens of individual YAML files.
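A typical Helm workflow looks like the sketch below, using the public Bitnami chart repository as an example; the release name is illustrative and the replicaCount value depends on the chart, so check its documented values.

```bash
# Register a chart repository and refresh the local index
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install a chart as a named release, then inspect and upgrade it
helm install my-nginx bitnami/nginx
helm list
helm upgrade my-nginx bitnami/nginx --set replicaCount=3

# Remove the release and everything it created
helm uninstall my-nginx
```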
While a Service of type LoadBalancer can expose an application to the internet, it typically requires a dedicated public IP for each service. An Ingress is a more powerful and flexible way to manage external access. It acts as an API object that manages external access to the services in a cluster, typically HTTP. Ingress can provide load balancing, SSL termination, and name-based virtual hosting, allowing you to route traffic to different services based on the request's hostname or path.
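The Ingress sketched below routes traffic for one hostname to a backing Service; the host and service names are illustrative, and it assumes an ingress controller (such as NGINX) is installed in the cluster.

```bash
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-svc   # an existing Service in the same namespace
            port:
              number: 80
EOF
```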
The Horizontal Pod Autoscaler (HPA) automatically scales the number of Pods in a Deployment or ReplicaSet based on observed CPU utilization or other custom metrics. You define a target metric (e.g., "keep the average CPU usage across all Pods at 50%"), and the HPA controller will periodically check the metric and automatically increase or decrease the number of replicas to match the target. This is a key feature for building applications that can handle variable traffic loads.
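The simplest way to attach an HPA to a Deployment is with kubectl autoscale, as sketched below; it assumes the metrics-server add-on is running and that the Deployment (here called web) sets CPU requests on its containers.

```bash
# Keep average CPU at 50%, scaling the "web" Deployment between 2 and 10 replicas
kubectl autoscale deployment web --cpu-percent=50 --min=2 --max=10

# Watch the autoscaler's current and target metrics
kubectl get hpa web
```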
A container runtime is the underlying software that is responsible for running containers. While Kubernetes is responsible for orchestrating (managing) the containers, it needs a container runtime on each Worker Node to actually perform tasks like pulling container images from a registry, starting the containers, and stopping them. Docker is the most well-known container runtime, but Kubernetes also supports other runtimes like containerd and CRI-O.
The main benefits of Kubernetes are scalability, as it can automatically scale your applications up or down based on demand; high availability, as it can automatically restart failed containers and reschedule them on healthy nodes; and portability, as it provides a consistent abstraction layer that allows your applications to run on any cloud provider or on-premise data center that supports Kubernetes.
The most effective way to learn is through a combination of structured learning and hands-on practice. A comprehensive program, like the DevOps courses offered by upGrad, can provide a strong foundation by teaching you the core concepts and guiding you through real-world projects. Supplement this with hands-on practice using tools like Minikube or Docker Desktop to run a local Kubernetes cluster. A good Kubernetes Cheat Sheet can also be an invaluable resource for remembering key commands.
kubectl is the primary command-line tool for interacting with a Kubernetes cluster. As a cluster administrator or application developer, you use kubectl to deploy applications, inspect and manage cluster resources, and view logs. Almost all operations that can be performed on a Kubernetes cluster can be done through the kubectl CLI. Having a good Kubernetes commands Cheat Sheet is very helpful for mastering its many options.
Kubernetes is used across a wide range of industries for many different use cases. Some of the most common include deploying and managing microservices architectures, migrating traditional on-premise applications to the cloud to make them more scalable and resilient, and creating robust environments for CI/CD (Continuous Integration/Continuous Deployment) pipelines to automate the software delivery process. It is also increasingly being used to manage large-scale data processing and machine learning workloads.