Pod Fundamentals and Architecture
A pod is the smallest deployable unit in Kubernetes. Pods are ephemeral resources that are created and destroyed rapidly as workloads scale and recover, unlike containers run directly on a container engine, which are typically managed as long-lived individual units.
What Pods Actually Are
Each pod gets its own IP address within the cluster. All containers inside the same pod communicate via localhost and share storage volumes. Think of a pod as a wrapper around one or more containers that work together.
Single-container pods run one application container. Multi-container pods typically include a main application plus sidecar containers for logging, monitoring, or networking.
Pod Lifecycle Phases
Understand these five phases for the CKA exam:
- Pending: Waiting for resources to become available
- Running: The pod is bound to a node and at least one container is running or starting
- Succeeded: All containers terminated successfully and will not be restarted
- Failed: All containers terminated and at least one exited with an error
- Unknown: Kubernetes cannot determine the pod state
Critical Architecture Concepts
Kubernetes automatically creates pause containers to hold pod network namespaces. This happens behind the scenes but is important for understanding networking.
Pods interact with nodes through kubelet, the node agent. CNI (Container Network Interface) plugins handle pod-to-pod networking across your cluster.
Know when to use raw pods versus higher-level objects like Deployments and StatefulSets. Most production workloads use Deployments, not raw pods. Practice creating pods from scratch with kubectl and YAML manifests to build real fluency.
Pod Manifest Syntax and Configuration
Creating pods requires mastery of YAML manifest structure. Every pod manifest needs four essential sections.
Required Manifest Fields
Every pod manifest starts with these fields:
- apiVersion: Set to v1 for pods
- kind: Set to Pod
- metadata: Contains name and labels for identification
- spec: Contains all container definitions and pod settings
Within spec, you define containers as an array. Each container requires a name and image field at minimum.
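A minimal manifest exercising all four required fields might look like the following sketch (the name, label, and nginx image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web            # must be unique within the namespace
  labels:
    app: web           # labels let selectors and Services find the pod
spec:
  containers:          # an array; each entry needs at least name and image
    - name: web
      image: nginx:1.25
```

Apply it with kubectl apply -f pod.yaml, or generate a similar skeleton with kubectl run web --image=nginx:1.25 --dry-run=client -o yaml.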
Container Configuration Options
Common fields you'll use frequently:
- ports: Define containerPort and protocol for networking
- resources: Set requests and limits for CPU and memory
- volumeMounts: Access shared storage volumes
- env: Define environment variables for the application
- livenessProbe/readinessProbe: Configure health checks
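Several of these fields combined in one container spec might look like this sketch (the image, port, variable, and volume names are placeholders):

```yaml
spec:
  containers:
    - name: api
      image: example/api:1.0        # placeholder image
      ports:
        - containerPort: 8080
          protocol: TCP
      env:
        - name: LOG_LEVEL           # example environment variable
          value: debug
      resources:
        requests:                   # minimum reserved for scheduling
          cpu: 250m
          memory: 128Mi
        limits:                     # hard cap on consumption
          cpu: 500m
          memory: 256Mi
      volumeMounts:
        - name: data
          mountPath: /var/lib/app
  volumes:
    - name: data
      emptyDir: {}                  # simple pod-lifetime scratch volume
```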
Understanding Resources
Resource requests define the minimum resources Kubernetes reserves for your pod. This affects scheduling decisions. Resource limits cap maximum resource usage to prevent runaway consumption.
CPU uses millicores (m), where 1000m equals one full CPU core. Memory uses binary units like Mi (mebibytes) and Gi (gibibytes).
Advanced Configuration
Init containers run to completion before the main containers start, useful for setup tasks like downloading configuration. Container security context lets you set runAsUser, fsGroup, and other security controls.
Study image pull policies (Always, IfNotPresent, Never) and how imagePullSecrets work for private registries. Understand restart policies (Always, OnFailure, Never) for appropriate pod behavior.
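All three policies live in the pod spec; a sketch assuming a docker-registry secret named regcred already exists in the namespace:

```yaml
spec:
  restartPolicy: OnFailure          # Always (default), OnFailure, or Never
  imagePullSecrets:
    - name: regcred                 # assumed pre-created registry secret
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder private image
      imagePullPolicy: IfNotPresent # Always, IfNotPresent, or Never
```

The secret itself is created separately, for example with kubectl create secret docker-registry regcred plus the registry credentials.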
Probe Types
Common probe types include:
- httpGet: Send HTTP requests to check health
- tcpSocket: Open TCP connections to verify availability
- exec: Run commands inside the container
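These handlers attach to any of the probe fields; a sketch with assumed paths and ports:

```yaml
containers:
  - name: app
    image: example/app:1.0        # placeholder image
    livenessProbe:
      httpGet:                    # healthy if the endpoint returns 2xx/3xx
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:
      tcpSocket:                  # ready once the port accepts connections
        port: 8080
      periodSeconds: 5
```

An exec handler works the same way: exec.command runs inside the container and exit code 0 counts as success.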
Practice writing minimal yet functional manifests from memory, as the exam requires pod creation under time pressure.
Pod Lifecycle and Advanced Management
Understanding the complete pod lifecycle is essential for the CKA exam. The journey from creation to deletion involves multiple stages.
Pod Creation and Scheduling
When you create a pod, the API server validates the manifest and stores it in etcd. The scheduler then assigns the pod to an appropriate node based on resource requests, taints/tolerations, and affinity rules.
Once scheduled, kubelet on the target node pulls the container image and starts containers in order.
Health Checking During Execution
Liveness probes check if containers are still running. Failed liveness probes trigger container restarts. Readiness probes determine when a container is ready to accept traffic.
Startup probes handle applications with long initialization periods. They delay liveness and readiness checks until the application finishes starting up.
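A startup probe holds the other probes off until it succeeds; with the illustrative values below, the application gets up to failureThreshold × periodSeconds = 300 seconds to start before the container is restarted:

```yaml
startupProbe:
  exec:
    command: ["cat", "/tmp/started"]   # assumed file the app writes when ready
  failureThreshold: 30
  periodSeconds: 10
```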
For multi-container pods, containers start in the specified order. However, they may not complete initialization in that order. Configure probes properly to handle this.
Termination Process
When you delete a pod, Kubernetes sends SIGTERM to containers. A grace period (default 30 seconds) allows containers to clean up connections and persist state.
If the pod doesn't stop within the grace period, Kubernetes forces termination with SIGKILL. Understanding this process helps you design graceful shutdown logic.
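Both knobs are set in the pod spec; a sketch assuming the application handles SIGTERM itself:

```yaml
spec:
  terminationGracePeriodSeconds: 60   # default is 30
  containers:
    - name: app
      image: example/app:1.0          # placeholder image
      lifecycle:
        preStop:                      # runs before SIGTERM is delivered
          exec:
            command: ["sh", "-c", "sleep 5"]  # give load balancers time to drain
```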
Container Patterns
Sidecar containers augment the main application. Common sidecars handle logging, service mesh proxies, or monitoring.
Ephemeral containers can be added to running pods for debugging without modifying the pod spec. This is a powerful exam troubleshooting technique.
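Ephemeral containers are typically attached with kubectl debug (these commands need a live cluster; the pod, container, and image names are examples):

```shell
# Attach a busybox ephemeral container to a running pod,
# sharing the process namespace of the "app" container
kubectl debug mypod -it --image=busybox --target=app

# Ephemeral containers are recorded under spec.ephemeralContainers
kubectl get pod mypod -o jsonpath='{.spec.ephemeralContainers[*].name}'
```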
Practical Commands
Master these kubectl commands for the exam:
- kubectl exec: Access running pods interactively
- kubectl logs: View container output and messages
- kubectl describe: Get detailed pod state and events
- kubectl port-forward: Forward local ports to container ports
Understanding how to track pod events and interpret status conditions is vital for troubleshooting during the exam.
Resource Management and Scheduling
Resource management is central to Kubernetes operation and critical for CKA exam success. Proper configuration affects scheduling, performance, and cluster stability.
Understanding Requests and Limits
Resource requests tell the scheduler the minimum resources a pod needs to function properly. This influences which nodes can accept the pod. Resource limits prevent pods from consuming excessive resources and starving other workloads.
When a container exceeds its memory limit, the kernel OOM-kills it and kubelet restarts it according to the pod's restart policy. When CPU limits are exceeded, the kernel throttles the process instead of killing it.
CPU and Memory Measurements
CPU is measured in millicores (m). One full CPU core equals 1000m. Memory uses binary units like Mi (mebibytes) and Gi (gibibytes).
For example, a pod with 500m CPU and 256Mi memory requests half a CPU and 256 mebibytes of RAM.
QoS Classes and Eviction
QoS (Quality of Service) classes determine pod eviction priority during resource pressure. Know these three tiers:
- Guaranteed: Requests equal limits, highest priority, evicted last
- Burstable: At least one request or limit is set, but not all requests equal limits; medium priority, evicted second
- BestEffort: No requests or limits, lowest priority, evicted first
When nodes become resource-constrained, Kubernetes evicts BestEffort and Burstable pods before Guaranteed pods. This makes proper configuration essential for critical applications.
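The QoS class is derived from the resource stanza rather than set directly. This sketch yields Guaranteed because requests equal limits for every resource in the container (image and values are illustrative):

```yaml
spec:
  containers:
    - name: critical-app
      image: example/app:1.0        # placeholder image
      resources:
        requests:
          cpu: 500m
          memory: 256Mi
        limits:
          cpu: 500m                 # equal to the request
          memory: 256Mi             # equal to the request
```

You can confirm the assigned class with kubectl get pod (name) -o jsonpath='{.status.qosClass}'.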
Scheduling Controls
Pod affinity and anti-affinity rules control scheduling relative to other pods. Node affinity lets you prefer or require specific nodes based on labels.
Taints on nodes combined with pod tolerations prevent inappropriate placement. For example, taint a GPU node so only GPU workloads can run there.
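The taint is applied to the node with kubectl, and the workload opts in through a matching toleration (the node name and key/value pairs are examples):

```yaml
# First taint the node: kubectl taint nodes gpu-node-1 gpu=true:NoSchedule
spec:
  tolerations:
    - key: gpu
      operator: Equal
      value: "true"
      effect: NoSchedule
  nodeSelector:
    gpu: "true"     # assumed node label; actually steers the pod to GPU nodes
```

Note that a toleration only permits placement; it does not attract the pod to the tainted node, which is why the nodeSelector (or node affinity) is paired with it.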
Autoscaling Considerations
Horizontal Pod Autoscalers (HPA) scale replica counts based on resource metrics. Proper resource specification is essential for autoscaling to function correctly.
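An HPA on the autoscaling/v2 API scales on CPU utilization measured against the pods' CPU requests, which is why missing requests break autoscaling (the target Deployment name is an example):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # assumed existing Deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # percent of the pods' CPU request
```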
Calculate total resource capacity and understand bin-packing algorithms that schedulers use to efficiently place pods on nodes.
Troubleshooting and Debugging Pods
Pod troubleshooting is a critical CKA exam skill that appears in practical scenarios. Master the systematic approach to diagnosing issues.
Starting Point: Pod Status
Begin with kubectl get pods to view pod status at a glance. Then use kubectl describe pod (name) to examine detailed information including events, conditions, and resource allocation.
Recognize these common status indicators:
- CrashLoopBackOff: Containers repeatedly failing and restarting
- ImagePullBackOff: Cannot retrieve the container image
- Pending: Scheduling or resource issues preventing placement
Viewing Logs and Output
Use kubectl logs (pod) to view application output. Add the -f flag for streaming logs in real time.
For restarted containers, use the --previous flag to examine logs from the previous run. For multi-container pods, specify the container name with -c flag to view specific container logs.
Interactive Debugging
kubectl exec -it (pod) -- (command) allows interactive access to running containers. Use this for real-time diagnosis and testing.
kubectl run quickly creates test pods for network testing and verification. This helps isolate whether issues are application-specific or cluster-wide.
Investigating Common Issues
For pods that won't start:
- Check node capacity with kubectl top nodes
- Review scheduling events in pod description
- Verify resource requests match available capacity
For network connectivity issues:
- Verify ClusterIP and service endpoints exist
- Test DNS resolution from inside a pod
- Check network policies aren't blocking traffic
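The checks above can be run from inside the cluster with a throwaway pod (requires a live cluster; service and namespace names are examples):

```shell
# Launch a disposable pod with an interactive shell
kubectl run nettest --rm -it --image=busybox -- sh

# From inside the pod: test DNS resolution and service reachability
nslookup kubernetes.default
wget -qO- http://my-service.my-namespace.svc.cluster.local

# Back on the workstation: confirm the Service actually has endpoints
kubectl get endpoints my-service -n my-namespace
```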
Advanced Techniques
Inspect resource definitions with kubectl get pod (name) -o yaml to identify configuration errors. Pod disruption budgets protect pods during maintenance.
Check cluster-level events with kubectl get events to see issues affecting scheduling or execution. For performance issues, examine resource usage against requests and limits.
Ephemeral containers allow attaching debugging containers to running pods without modifying original specifications. This is a powerful advanced troubleshooting technique.
For init container failures, use kubectl logs (pod) -c (init-container-name) to view that container's output; add the --previous flag only if the init container itself has been restarted.
