Kubernetes Explained: A 2026 Hands-On Guide
April 2, 2026
TL;DR
Kubernetes is the industry-standard container orchestration platform that automates deployment, scaling, and management of containerized applications. This comprehensive guide covers everything from core concepts to advanced production practices, including:
- Core architecture and components of Kubernetes
- Networking fundamentals (Services, Ingress, Network Policies)
- Security best practices with RBAC and Pod Security
- Detailed comparison with Docker Swarm and HashiCorp Nomad
- Step-by-step AWS EKS deployment tutorial
- Advanced features like Horizontal Pod Autoscaling and StatefulSets
- Real-world use cases across industries
Complete with working examples, visual diagrams, and production-ready configurations, this guide provides everything you need to master Kubernetes in practice.
What is Kubernetes and Why Do You Need It?
Modern applications are increasingly built as distributed systems composed of containerized microservices. While containers (like Docker) package applications and their dependencies, managing those containers across multiple servers presents significant challenges. This is where Kubernetes (K8s) comes in: an open-source platform designed to automate the deployment, scaling, and management of containerized applications.
The fundamental problem Kubernetes solves is orchestrating containers across a cluster of machines. Without an orchestrator, you'd need to manually manage:
- Which servers run which containers
- How containers communicate
- Scaling containers based on load
- Handling container failures
- Rolling out updates without downtime
Kubernetes provides a declarative approach to container management. Instead of scripting imperative commands, you declare the desired state of your application, and Kubernetes works continuously to maintain that state. For example, if you specify that you want three replicas of your web application running, Kubernetes will automatically create and maintain exactly three instances, restarting failed containers or moving them to healthy nodes as needed.
Core concepts in Kubernetes include:
- Pods: The smallest deployable units that can be created and managed in Kubernetes. A Pod represents a single instance of a running process and can contain one or more containers that share storage and network resources.
- Deployments: A higher-level abstraction that manages the desired state for Pods and ReplicaSets. Deployments enable declarative updates for Pods and ReplicaSets.
- Services: An abstraction that defines a logical set of Pods and a policy to access them. Services enable network access to a set of Pods.
- Nodes: The worker machines that run containerized applications. Each node runs the kubelet, which communicates with the control plane.
- Control Plane: The container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers.
Kubernetes has become the de facto standard for container orchestration, with adoption by 96% of organizations using containers in production[^1]. Its rich ecosystem, vendor neutrality, and strong community support make it an essential tool for modern application deployment.
Kubernetes Core Components
Understanding Kubernetes architecture is crucial for effective deployment and troubleshooting. A Kubernetes cluster consists of two main parts: the control plane and worker nodes.
Control Plane Components
The control plane makes global decisions about the cluster and responds to cluster events. Its components include:
- API Server: The front end for the Kubernetes control plane, exposing the Kubernetes API. All communication between components goes through the API server.
- etcd: A consistent and highly available key-value store used as Kubernetes' backing store for all cluster data. It stores the entire configuration and state of the cluster.
- Scheduler: Watches for newly created Pods with no assigned node and selects a node for them to run on based on resource requirements, policies, and other constraints.
- Controller Manager: Runs controller processes that regulate the state of the cluster. Examples include the Node Controller, Replication Controller, and Endpoints Controller.
- Cloud Controller Manager: Links your cluster into your cloud provider's API, separating the components that interact with the cloud platform from those that only interact with your cluster.
Node Components
Worker nodes run the actual applications. Each node runs:
- Kubelet: An agent that ensures containers are running in a Pod. It takes a set of PodSpecs and ensures that the described containers are running and healthy.
- Kube-proxy: Maintains network rules on nodes, enabling network communication to your Pods from network sessions inside or outside your cluster.
- Container Runtime: The software responsible for running containers (e.g., Docker, containerd, CRI-O).
Here's a visual representation of the Kubernetes architecture:
```
+-------------------------------------------------+
|                  Control Plane                  |
|  +------------+  +------+  +----------------+   |
|  | API Server |  | etcd |  |   Controller   |   |
|  |            |  |      |  |    Manager     |   |
|  +------------+  +------+  +----------------+   |
|                +-----------+                    |
|                | Scheduler |                    |
|                +-----------+                    |
+------------------------+------------------------+
                         |
                         v
+-------------------------------------------------+
|                  Worker Nodes                   |
|  +-------------------------------------------+  |
|  | Node                                      |  |
|  |  +-----------+  +------------+            |  |
|  |  |  Kubelet  |  | Kube-proxy |            |  |
|  |  +-----------+  +------------+            |  |
|  |  +-------------------------------------+  |  |
|  |  | Container Runtime (e.g., containerd)|  |  |
|  |  +-------------------------------------+  |  |
|  +-------------------------------------------+  |
|  +-------------------------------------------+  |
|  | Node                                      |  |
|  |  +-----------+  +------------+            |  |
|  |  |  Kubelet  |  | Kube-proxy |            |  |
|  |  +-----------+  +------------+            |  |
|  |  +-------------------------------------+  |  |
|  |  | Container Runtime (e.g., containerd)|  |  |
|  |  +-------------------------------------+  |  |
|  +-------------------------------------------+  |
+-------------------------------------------------+
```
Declarative vs. Imperative Management
Kubernetes uses a declarative model where you describe the desired state in YAML or JSON configuration files. For example, here's a simple Deployment configuration:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.10
          ports:
            - containerPort: 80
```
This configuration declares that we want three replicas of an nginx container running at all times. The Kubernetes control plane will continuously work to maintain this desired state.
Kubernetes Networking
Kubernetes networking enables communication between containers, Pods, Services, and external clients. Understanding its networking model is crucial for building and operating distributed applications.
Core Networking Concepts
- Pod Networking: Every Pod gets its own IP address. Containers within a Pod share the same network namespace and can communicate via localhost.
- Services: A Service is an abstraction that defines a logical set of Pods and a policy to access them. Services enable loose coupling between dependent Pods.
- Ingress: Manages external access to services in a cluster, typically HTTP/HTTPS. Ingress provides load balancing, SSL termination, and name-based virtual hosting.
- Network Policies: Specify how groups of Pods are allowed to communicate with each other and other network endpoints.
Service Types
Kubernetes offers several Service types:
- ClusterIP: Exposes the Service on a cluster-internal IP. This is the default ServiceType.
- NodePort: Exposes the Service on each Node's IP at a static port.
- LoadBalancer: Exposes the Service externally using a cloud provider's load balancer.
- ExternalName: Maps the Service to the contents of the externalName field by returning a CNAME record.
Here's an example Service definition:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  type: LoadBalancer
```
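For comparison, a sketch of the same selector exposed on every node's IP via a NodePort Service; the service name and explicit nodePort value here are illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service-nodeport   # illustrative name
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
      nodePort: 30080   # optional; must fall within the default 30000-32767 range
```

Omitting nodePort lets Kubernetes pick a free port from that range automatically.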
Ingress Controllers
An Ingress resource has no effect on its own; an Ingress controller is responsible for fulfilling it, typically with a load balancer. Popular options include:
- Nginx Ingress Controller
- Traefik
- HAProxy
- AWS Load Balancer Controller (formerly AWS ALB Ingress Controller)
Example Ingress resource:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80
```
Network Policies
Network Policies allow you to control traffic flow at the IP address or port level. For example, to restrict traffic between frontend and backend services:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      role: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: frontend
      ports:
        - protocol: TCP
          port: 6379
```
Kubernetes Security Best Practices
Securing a Kubernetes cluster requires a defense-in-depth approach. Here are key security practices every cluster administrator should implement:
1. Role-Based Access Control (RBAC)
RBAC regulates access to Kubernetes resources based on roles assigned to users. Always follow the principle of least privilege.
Example Role and RoleBinding:
```yaml
# Role definition
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "watch", "list"]
---
# Role binding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```
2. Pod Security Contexts
Define security contexts to restrict container capabilities and privileges:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  containers:
    - name: sec-ctx-demo
      image: busybox
      command: ["sh", "-c", "sleep 1h"]
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        readOnlyRootFilesystem: true
```
3. Network Policies
Implement network segmentation using Network Policies to control traffic between pods:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
4. Image Security
- Use trusted container registries
- Scan images for vulnerabilities
- Use image pull secrets for private registries
- Implement image signing and verification
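For private registries, a Pod references a pull secret via imagePullSecrets. A minimal sketch; the secret name `regcred` and the image path are assumptions, with the secret created beforehand using `kubectl create secret docker-registry`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: private-image-demo    # illustrative name
spec:
  imagePullSecrets:
    - name: regcred           # assumed to already exist in this namespace
  containers:
    - name: app
      image: registry.example.com/team/app:1.0   # hypothetical private image
```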
5. Secrets Management
Never store sensitive data in container images or configuration files. Use Kubernetes Secrets instead (note that Secret data is only base64-encoded, not encrypted; for stronger protection, enable encryption at rest or integrate an external secrets manager):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  username: YWRtaW4=     # base64-encoded "admin"
  password: cGFzc3dvcmQ= # base64-encoded "password"
```
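A Pod can then consume this Secret as environment variables. A minimal sketch, assuming the db-secret above exists in the same namespace; the Pod name and variable names are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-client             # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo connecting as $DB_USER; sleep 3600"]
      env:
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: username
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: db-secret
              key: password
```

Secrets can also be mounted as files via a volume, which avoids exposing values in the process environment.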
6. Regular Updates and Patching
- Keep Kubernetes components updated
- Regularly patch worker node operating systems
- Monitor for CVEs affecting your cluster components
Kubernetes vs. Alternatives: Docker Swarm & Nomad
While Kubernetes dominates the container orchestration landscape, it's not the only option. Let's compare Kubernetes with Docker Swarm and HashiCorp Nomad.
Kubernetes
Strengths:
- Rich feature set for complex deployments
- Extensive ecosystem and community support
- Mature and battle-tested in production
- Strong support for stateful applications
- Advanced networking and storage options
Weaknesses:
- Steeper learning curve
- Higher operational complexity
- More resource-intensive
Best For: Large-scale, complex deployments requiring advanced features and high availability.
Docker Swarm
Strengths:
- Simple to set up and use
- Tight integration with Docker ecosystem
- Lower resource overhead
- Familiar syntax for Docker users
Weaknesses:
- Limited scalability compared to Kubernetes
- Fewer features for complex deployments
- Less active development since Docker's shift to Kubernetes
Best For: Small to medium deployments where simplicity is prioritized over advanced features.
HashiCorp Nomad
Strengths:
- Lightweight and simple architecture
- Supports non-containerized workloads
- Flexible scheduling
- Easy to operate and maintain
Weaknesses:
- Smaller ecosystem and community
- Fewer built-in features than Kubernetes
- Less mature for complex microservices
Best For: Mixed workloads (containers and non-containers) in smaller environments.
Feature Comparison
| Feature | Kubernetes | Docker Swarm | Nomad |
|---|---|---|---|
| Service Discovery | Yes | Yes | Yes |
| Load Balancing | Yes | Yes | Yes |
| Auto-scaling | Yes | Limited | Yes |
| Rolling Updates | Yes | Yes | Yes |
| Self-healing | Yes | Yes | Yes |
| Storage Orchestration | Yes | Limited | Yes |
| Secret Management | Yes | Yes | Yes |
| Multi-cloud Support | Excellent | Good | Good |
| Learning Curve | Steep | Gentle | Moderate |
| Community Support | Excellent | Good | Good |
| Production Readiness | Excellent | Good | Good |
Hands-On Kubernetes Tutorial: Deploying an Application on AWS EKS
This step-by-step tutorial will guide you through deploying a sample application on Amazon Elastic Kubernetes Service (EKS).
Prerequisites
- AWS account with appropriate permissions
- AWS CLI installed and configured
- eksctl installed
- kubectl installed
- Docker installed
Step 1: Create an EKS Cluster
```bash
# Create cluster configuration file (cluster.yaml)
cat <<EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: nerdlevel-cluster
  region: us-west-2
  version: "1.27"
managedNodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
EOF

# Create the cluster
eksctl create cluster -f cluster.yaml
```
This will take 10-15 minutes to complete. Verify the cluster:
```bash
kubectl get nodes
```
Step 2: Deploy a Sample Application
Create a deployment and service:
```yaml
# app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nerdlevel-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nerdlevel-app
  template:
    metadata:
      labels:
        app: nerdlevel-app
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "200m"
              memory: "256Mi"
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 2
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: nerdlevel-service
spec:
  selector:
    app: nerdlevel-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```
Apply the configuration:
```bash
kubectl apply -f app-deployment.yaml
```
Step 3: Verify the Deployment
Check the status of your deployment:
```bash
kubectl get deployments
kubectl get pods
kubectl get services
```
Get the external IP of your LoadBalancer:
```bash
kubectl get svc nerdlevel-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```
Open the provided URL in your browser to see the NGINX welcome page.
Step 4: Set Up Horizontal Pod Autoscaling
Create an HPA for your deployment:
```yaml
# hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nerdlevel-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nerdlevel-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```
Apply the HPA (CPU-based scaling requires the Kubernetes Metrics Server to be running in the cluster):

```bash
kubectl apply -f hpa.yaml
```
Step 5: Clean Up
When you're done, delete the cluster to avoid unnecessary charges:
```bash
eksctl delete cluster --name nerdlevel-cluster --region us-west-2
```
Advanced Kubernetes Concepts
Horizontal Pod Autoscaler (HPA)
HPA automatically scales the number of Pods in a deployment based on observed CPU utilization or other custom metrics. The previous tutorial included an HPA example.
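The core rule the HPA controller applies when computing a new replica count is:

```
desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
```

For example, with 3 replicas averaging 90% CPU against a 50% target, the HPA scales to ceil(3 * 90 / 50) = 6 replicas.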
StatefulSets
StatefulSets are used for stateful applications that require stable network identifiers and persistent storage. Example:
```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "gp2"
        resources:
          requests:
            storage: 1Gi
```
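The serviceName field above refers to a headless Service that must exist to give each Pod a stable DNS identity (web-0.nginx, web-1.nginx, and so on). A minimal sketch of that Service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  clusterIP: None   # headless: no virtual IP; DNS resolves directly to Pod IPs
  selector:
    app: nginx
  ports:
    - port: 80
      name: web
```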
DaemonSets
DaemonSets ensure that all (or some) nodes run a copy of a Pod. Common use cases include log collection and node monitoring.
```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-elasticsearch8-1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch-logging"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: varlibdockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: varlibdockercontainers
          hostPath:
            path: /var/lib/docker/containers
```
Custom Resource Definitions (CRDs)
CRDs extend the Kubernetes API to create custom resources. Here's an example of creating a CRD for a custom resource:
```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: websites.example.com
spec:
  group: example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                url:
                  type: string
                content:
                  type: string
  scope: Namespaced
  names:
    plural: websites
    singular: website
    kind: Website
    shortNames:
      - ws
```
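Once the CRD is registered, instances of the new kind can be created like any other Kubernetes object. An illustrative Website resource (the name and field values are made up for this example):

```yaml
apiVersion: example.com/v1
kind: Website
metadata:
  name: my-site          # illustrative name
spec:
  url: "https://example.com"
  content: "Hello from a custom resource"
```

On its own a custom resource is just stored data; a controller (often packaged as an operator) watches these objects and acts on them.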
Kubernetes Use Cases: Real-World Applications
Kubernetes is used across various industries to solve different challenges. Here are some real-world examples:
1. E-commerce: Handling Traffic Spikes
Company: Major online retailer
Challenge: Handle 10x traffic spikes during holiday seasons
Solution: Kubernetes auto-scaling handles traffic spikes by automatically adding more instances of their application during peak times.
Results: 99.99% uptime during peak seasons, reduced infrastructure costs by 40% during off-peak times.
2. Financial Services: Microservices Architecture
Company: Global bank
Challenge: Modernize legacy systems while maintaining security and compliance
Solution: Kubernetes enables a gradual transition to microservices with strong network policies and RBAC.
Results: Reduced deployment times from weeks to hours, improved system resilience, and better compliance tracking.
3. Healthcare: Processing Medical Images
Company: Medical imaging provider
Challenge: Process large medical images with varying computational requirements
Solution: Kubernetes manages GPU-accelerated nodes for image processing and standard nodes for other services.
Results: Reduced processing time by 70%, improved resource utilization, and better patient outcomes through faster diagnosis.
4. Media Streaming: Global Content Delivery
Company: Video streaming platform
Challenge: Deliver content globally with low latency
Solution: Kubernetes clusters deployed in multiple regions with intelligent traffic routing.
Results: 50% reduction in latency, improved viewer experience, and 30% reduction in bandwidth costs.
5. IoT: Edge Computing
Company: Industrial IoT provider
Challenge: Process data at the edge with limited connectivity
Solution: Lightweight Kubernetes distributions (like K3s) deployed on edge devices.
Results: Reduced data transfer costs by 80%, improved real-time processing capabilities, and better reliability in remote locations.
Kubernetes Glossary
- Cluster: A set of nodes that run containerized applications managed by Kubernetes.
- Container: A lightweight, standalone, executable package that includes everything needed to run a piece of software.
- Deployment: A Kubernetes object that manages a replicated application.
- Ingress: An API object that manages external access to services in a cluster.
- Kubelet: An agent that runs on each node and ensures containers are running in a Pod.
- Namespace: A virtual cluster within a physical cluster, used for dividing cluster resources between multiple users.
- Node: A worker machine in Kubernetes, which may be a VM or physical machine.
- Pod: The smallest deployable unit in Kubernetes, which can contain one or more containers.
- ReplicaSet: Ensures that a specified number of pod replicas are running at any given time.
- Service: An abstraction that defines a logical set of Pods and a policy to access them.
- Volume: A directory containing data, accessible to containers in a Pod.
Footnotes
[^1]: CNCF Annual Survey 2022