Kubernetes & Container Orchestration

Docker Internals: Namespaces, Cgroups, and Images

Understanding container internals separates senior from junior candidates. Let's explore what makes containers work.

Linux Building Blocks

Containers aren't magic—they're built on Linux kernel features:

Feature      | Purpose          | What It Isolates
Namespaces   | Isolation        | Process IDs, network, mounts, users
Cgroups      | Resource limits  | CPU, memory, I/O, network bandwidth
Union FS     | Layered storage  | Filesystem layers

Namespaces Deep Dive

Linux namespaces provide isolation:

# View namespaces for a process
ls -la /proc/<pid>/ns/

# Types of namespaces:
# PID    - Process isolation (container sees PID 1)
# NET    - Network stack (own interfaces, routing)
# MNT    - Mount points (own filesystem view)
# UTS    - Hostname (own hostname)
# IPC    - Inter-process communication
# USER   - User/group IDs (root in container ≠ root on host)
# CGROUP - Cgroup root (new in kernel 4.6)
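
You can poke at namespaces directly with the unshare tool from util-linux, no Docker required. A minimal sketch (needs root; exact output will vary):

# Create new PID and mount namespaces and run a shell inside them
sudo unshare --pid --fork --mount-proc /bin/bash

# Inside the new namespace, the shell believes it is PID 1
ps aux    # shows only the shell and ps itself

# Back on the host, list all namespaces and the processes in them
lsns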

Interview question: "How does a container have PID 1 while the host sees a different PID?"

Answer: PID namespaces. The container has its own PID namespace where the main process is PID 1. The host sees the actual PID in the global namespace. Use docker top <container> to see host PIDs.
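
A quick way to see both views side by side, using a throwaway alpine container (the name pid-demo is just for this sketch):

# Start a long-running container (alpine's busybox includes ps)
docker run -d --name pid-demo alpine sleep 3600

# Inside the container's PID namespace, sleep is PID 1
docker exec pid-demo ps

# On the host, docker top shows the real (global) PID of the same process
docker top pid-demo

# Clean up
docker rm -f pid-demo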

Cgroups (Control Groups)

Cgroups limit and account for resource usage:

# View cgroup limits for a container
docker inspect <container> | jq '.[0].HostConfig.Memory'

# On the host, the cgroups v2 hierarchy is mounted at
ls /sys/fs/cgroup/

# Container's cgroup with the cgroupfs driver
cat /sys/fs/cgroup/docker/<container-id>/memory.max

# With the systemd cgroup driver (the default on most current distros)
cat /sys/fs/cgroup/system.slice/docker-<container-id>.scope/memory.max
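
To watch a limit land in the cgroup filesystem, start a capped container and read memory.max from inside it. A sketch, assuming the host runs cgroups v2 (the container name is hypothetical):

# Start a container with a 256 MB memory limit
docker run -d --name cg-demo --memory=256m nginx

# The container sees its own cgroup subtree at /sys/fs/cgroup
docker exec cg-demo cat /sys/fs/cgroup/memory.max   # prints 268435456

# Clean up
docker rm -f cg-demo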

Resource Limits in Docker

# CPU limit (1.5 cores)
docker run --cpus="1.5" nginx

# Memory limit (512MB)
docker run --memory="512m" nginx

# Memory + swap
docker run --memory="512m" --memory-swap="1g" nginx

# I/O limits
docker run --device-read-bps /dev/sda:10mb nginx
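
To confirm the limits were actually applied, inspect HostConfig and compare with live usage (jq is assumed to be installed; the container name is a placeholder):

# Start a limited container
docker run -d --name limits-demo --cpus="1.5" --memory="512m" nginx

# CPU limit is stored as NanoCpus (1.5 cores = 1500000000)
docker inspect limits-demo | jq '.[0].HostConfig.NanoCpus'

# Memory limit in bytes (512m = 536870912)
docker inspect limits-demo | jq '.[0].HostConfig.Memory'

# Live usage against the limit
docker stats --no-stream limits-demo

docker rm -f limits-demo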

Docker Image Layers

Images are built from layers using Union FS:

FROM python:3.11-slim          # Layer 1: Base image
WORKDIR /app                   # Layer 2: Metadata
COPY requirements.txt .        # Layer 3: Small file
RUN pip install -r requirements.txt  # Layer 4: Dependencies
COPY . .                       # Layer 5: Application code

Image layers are read-only; anything the container writes at runtime goes into a thin writable layer stacked on top:

+----------------------------+
| Writable container layer   |  (created at docker run, discarded with the container)
+----------------------------+
| Layer 5: COPY . .          |  (read-only, like all image layers)
+----------------------------+
| Layer 4: RUN pip install   |  (cached if requirements.txt unchanged)
+----------------------------+
| Layer 3: COPY req.txt      |
+----------------------------+
| Layer 2: WORKDIR /app      |
+----------------------------+
| Layer 1: python:3.11-slim  |
+----------------------------+
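
You can inspect the layers of any image and see the writable layer in action (myimage is a placeholder; jq assumed installed):

# One row per Dockerfile instruction, newest first, with sizes
docker history myimage

# Content-addressable digests of the read-only layers
docker image inspect myimage | jq '.[0].RootFS.Layers'

# Runtime writes land in the container's own writable layer
docker run -d --name diff-demo nginx
docker exec diff-demo touch /tmp/scratch
docker diff diff-demo        # includes "A /tmp/scratch" among nginx's own runtime writes
docker rm -f diff-demo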

Layer Optimization

# BAD: Invalidates cache on any code change
COPY . .
RUN pip install -r requirements.txt

# GOOD: Dependencies cached separately
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
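
The payoff shows up in the build output: after a source-only change, the dependency layer is reused. A sketch, assuming BuildKit and an app.py in the build context:

# First build: every layer is executed
docker build -t myapp .

# Change only application code, then rebuild
echo "# comment" >> app.py
docker build -t myapp .
# => the "RUN pip install" step is reported as CACHED;
#    only the final COPY layer is rebuilt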

Multi-Stage Builds

Reduce image size dramatically:

# Build stage
FROM golang:1.21 AS builder
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /app/server

# Runtime stage
FROM gcr.io/distroless/static-debian12
COPY --from=builder /app/server /server
ENTRYPOINT ["/server"]

Result: the final image is roughly 10MB, versus around 1GB if shipped on the golang:1.21 base
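
You can verify the difference by building each stage separately (the tags below are hypothetical):

# Build only the first stage
docker build --target builder -t myapp:builder .

# Build the full multi-stage image (final stage)
docker build -t myapp:final .

# Compare sizes
docker images myapp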

Docker Networking

# Network modes
docker run --network=bridge nginx   # Default, NAT
docker run --network=host nginx     # Host network stack
docker run --network=none nginx     # No networking
docker run --network=container:other nginx  # Share with another

# Create custom network
docker network create --driver bridge my-network
docker run --network=my-network --name web nginx
docker run --network=my-network --name api my-api
# 'web' and 'api' can communicate by name
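
Name-based lookups work because user-defined networks get Docker's embedded DNS; the default bridge network does not. A quick check using a throwaway busybox container (image choice is just for convenience):

# Resolve the 'web' container by name from inside the network
docker run --rm --network=my-network busybox nslookup web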

Interview Questions

Q: "A container is using too much memory and gets OOM killed. How do you prevent this?"

# Set memory limit
docker run --memory="512m" --memory-swap="512m" app

# In Kubernetes
resources:
  limits:
    memory: "512Mi"
  requests:
    memory: "256Mi"
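
To confirm that memory pressure, not something else, killed the process, check the OOM flags (container and pod names below are placeholders):

# Docker: was the container killed by the kernel OOM killer?
docker inspect -f '{{.State.OOMKilled}}' <container>

# Kubernetes: reason for the previous termination (e.g. OOMKilled)
kubectl get pod <pod> \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated.reason}'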

Q: "Explain the difference between ENTRYPOINT and CMD"

Aspect        | ENTRYPOINT                  | CMD
Purpose       | Defines the executable      | Default arguments
Override      | --entrypoint flag           | Append args to docker run
Combination   | CMD becomes arguments to ENTRYPOINT

# Example: CMD as default args to ENTRYPOINT
ENTRYPOINT ["python", "app.py"]
CMD ["--port", "8080"]

# docker run myapp          → python app.py --port 8080
# docker run myapp --debug  → python app.py --debug
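
Overrides work differently for each (myapp is the hypothetical image from above):

# Arguments after the image name replace CMD only
docker run myapp --port 9090           # python app.py --port 9090

# --entrypoint replaces ENTRYPOINT and clears the image's CMD;
# any remaining args become the new command's arguments
docker run -it --entrypoint /bin/sh myapp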

Q: "How would you debug a container that crashes immediately?"

# Override entrypoint to get a shell
docker run -it --entrypoint /bin/sh myimage

# Check logs
docker logs <container>

# Inspect the image
docker history myimage
docker inspect myimage

# Run in the foreground so stdout and stderr are both visible
docker run --rm myimage 2>&1
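
Two more angles that come up in practice: the exit code often points at the cause, and you can copy files out of a stopped container (names below are placeholders):

# Exit code of the crashed container
# 137 = SIGKILL (often OOM), 126 = not executable, 127 = command not found
docker ps -a --filter name=mycontainer
docker inspect -f '{{.State.ExitCode}}' mycontainer

# Pull logs or config out of the stopped container's filesystem
docker cp mycontainer:/app/logs ./crash-logs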

Next, we'll explore Kubernetes architecture, the orchestration layer that sits on top of containers.
