Understanding Kubernetes Architecture and Core Resources

Containerization and Orchestration

Containerization allows applications to be packaged with their dependencies, ensuring consistency across different computing environments. While technologies like Docker provide isolation for file systems, CPU, and memory, managing individual containers at scale introduces significant challenges. Specifically, handling container failures, scaling horizontally during traffic spikes, and discovering services becomes complex without a dedicated orchestration layer.

To address these orchestration needs, several tools have emerged, including Docker Swarm, Apache Mesos (often paired with Marathon), and Kubernetes. Among these, Kubernetes (K8s) has become the industry standard. Drawing on lessons from Google's internal Borg system, K8s is an open-source platform that automates the deployment, scaling, and operation of application containers across clusters of hosts.

Kubernetes Core Functions

Kubernetes operates as a cluster management system that maintains the desired state of applications. Its primary capabilities include:

  • Self-Healing: Automatically restarts containers that fail, replaces or reschedules containers when nodes die, and kills containers that don't respond to user-defined health checks.
  • Auto-scaling: Automatically scales applications up or down based on CPU usage or other custom metrics.
  • Service Discovery: Allows containers to find each other automatically without manual intervention.
  • Load Balancing: Distributes network traffic across multiple container instances to ensure stability.
  • Rolling Updates and Rollbacks: Facilitates gradual updates to application code without downtime and allows quick reversion if issues arise.
  • Storage Orchestration: Automatically mounts the storage system of your choice, such as local storage or public cloud providers.
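As a minimal sketch of how self-healing is wired up in practice, the following Pod manifest (names, image, and probe timings are illustrative) defines a liveness probe; if the probe fails repeatedly, the Kubelet restarts the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: web
    image: nginx:1.21
    livenessProbe:
      httpGet:                 # the Kubelet polls this HTTP endpoint
        path: /
        port: 80
      initialDelaySeconds: 5   # grace period before the first probe
      periodSeconds: 10        # probe interval thereafter
```

A failing liveness probe triggers a container restart on the same node; it does not reschedule the Pod elsewhere.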

Cluster Architecture

A Kubernetes cluster consists of a set of worker machines, called Nodes, that run containerized applications. Every cluster has at least one worker node. The worker node(s) host the Pods that are the components of the application workload. The control plane manages the worker nodes and the Pods in the cluster.

Control Plane Components

The control plane is responsible for managing the state of the cluster. It comprises the following components:

  • API Server: The front end of the control plane. It receives all REST requests for the cluster, whether from users via kubectl or from other cluster components, and validates and configures data for API objects before persisting state.
  • Scheduler: Watches for newly created Pods with no assigned node and selects a node for them to run on based on resource requirements, hardware/software/policy constraints, and affinity/anti-affinity specifications.
  • Controller Manager: Runs controller processes that regulate the state of the cluster. Each controller watches the current state and works to move it toward the desired state. Examples include the Node Controller (responding when nodes become unavailable) and the ReplicaSet controller (maintaining the desired number of Pod replicas).
  • etcd: A consistent and highly available key-value store used as Kubernetes' backing store for all cluster data.
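The resource requirements the Scheduler weighs are declared per container. As a sketch (the values and the disktype=ssd node label are illustrative), a Pod can request CPU and memory and constrain placement with a nodeSelector:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  nodeSelector:
    disktype: ssd        # only nodes carrying this label qualify
  containers:
  - name: app
    image: nginx:1.21
    resources:
      requests:          # minimum free capacity the Scheduler must find
        cpu: "250m"
        memory: "64Mi"
      limits:            # hard caps enforced at runtime
        cpu: "500m"
        memory: "128Mi"
```

If no node satisfies both the requests and the nodeSelector, the Pod stays Pending until one does.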

Node Components

Node components run on every node, maintaining running pods and providing the Kubernetes runtime environment.

  • Kubelet: The primary agent that runs on each node. It ensures that containers described in PodSpecs are running and healthy.
  • Kube-proxy: Maintains network rules on each node. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
  • Container Runtime: The software responsible for running containers (e.g., Docker, containerd).

Workflow Example

Consider a scenario where a user deploys a web server (e.g., Nginx):

  1. The user sends a deployment request via kubectl to the API Server.
  2. The API Server validates the request and persists the Deployment object in etcd.
  3. The Controller Manager notices the new Deployment and creates the corresponding ReplicaSet and Pod objects via the API Server.
  4. The Scheduler detects the unassigned Pods and, using its scoring algorithm, binds each one to the most suitable Worker Node based on available resources, recording the decision through the API Server.
  5. The Kubelet on the chosen node sees the Pod bound to it and instructs the Container Runtime (e.g., containerd or Docker) to pull the image and start the container within a Pod.
  6. Kube-proxy updates network rules so traffic can reach the new Pod once a Service exposes it.

Key Concepts and Terminology

  • Pod: The smallest deployable unit in Kubernetes. A Pod encapsulates one or more containers sharing storage and network resources.
  • Controller: A control loop that watches the cluster state and drives the actual state toward the desired state — for example, ensuring the actual number of Pods matches the number defined in a Deployment spec.
  • Service: An abstraction which defines a logical set of Pods and a policy by which to access them. Services provide a stable IP address and DNS name for a set of Pods.
  • Label: Key-value pairs attached to objects (like Pods) intended for specifying identifying attributes of objects that are meaningful and relevant to users.
  • Namespace: A mechanism to partition cluster resources between multiple users or teams. It provides a scope for names.

Resource Management Strategies

Kubernetes manages all state via resource objects. There are three primary ways to interact with these resources:

  1. Imperative Commands: Operating directly on live objects using CLI verbs (e.g., kubectl run, kubectl expose).
  2. Imperative Object Configuration: Operating on live objects using configuration files (YAML/JSON) passed to specific commands (e.g., kubectl create -f config.yaml, kubectl delete -f config.yaml).
  3. Declarative Object Configuration: Pointing kubectl apply at configuration files, where the cluster ensures the current state converges toward the declared configuration (e.g., kubectl apply -f config.yaml).
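Declarative management revolves around files like the following minimal Deployment manifest (the names and file path are illustrative); applying the same file repeatedly is safe, because the cluster only acts on the difference between declared and current state:

```yaml
# deployment.yaml -- applied with: kubectl apply -f deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
```

Editing replicas in the file and re-applying it scales the workload without any imperative scale command.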

Practical Implementation

Namespace Management

Namespaces isolate resources for different teams or projects.

Commands

# List all namespaces
kubectl get ns

# Create a namespace named 'project-alpha'
kubectl create namespace project-alpha

# Describe a specific namespace
kubectl describe ns project-alpha

# Delete a namespace
kubectl delete ns project-alpha

Pod Operations

Pods are transient. Controllers usually manage them, but they can be created individually.

# Run a simple nginx pod in 'project-alpha'
kubectl run web-server --image=nginx:1.21 --namespace=project-alpha

# List pods in the namespace
kubectl get pods -n project-alpha

# View detailed pod information
kubectl describe pod web-server -n project-alpha

# Access the pod (via IP)
kubectl get pods -n project-alpha -o wide
# curl [POD_IP]

# Delete the pod (Note: if managed by a controller, a replacement pod will be created)
kubectl delete pod web-server -n project-alpha

Labeling and Selecting

Labels are used to organize and select subsets of objects.

# Add a label to a pod
kubectl label pod web-server tier=frontend -n project-alpha

# Update a label
kubectl label pod web-server tier=backend -n project-alpha --overwrite

# List pods with a specific label
kubectl get pods -l tier=backend -n project-alpha

# Remove a label
kubectl label pod web-server tier- -n project-alpha

Deployment Management

Deployments manage the lifecycle of replicated Pods (scaling, updating).

# Create a deployment with 3 replicas
kubectl create deployment app-backend --image=nginx:1.21 --replicas=3 -n project-alpha

# List deployments
kubectl get deploy -n project-alpha

# Scale the deployment
kubectl scale deployment app-backend --replicas=5 -n project-alpha

# Delete the deployment (this deletes associated pods)
kubectl delete deployment app-backend -n project-alpha

Service Configuration

Services expose applications to the network.

# Expose a deployment as a ClusterIP (internal only)
kubectl expose deployment app-backend --name=svc-backend --port=80 --target-port=80 -n project-alpha

# Expose a deployment as a NodePort (external access)
kubectl expose deployment app-backend --name=svc-backend-external --port=80 --target-port=80 --type=NodePort -n project-alpha

# List services
kubectl get svc -n project-alpha

# Delete a service
kubectl delete svc svc-backend -n project-alpha
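The NodePort service created by kubectl expose above corresponds roughly to the following manifest. The nodePort value shown is an assumption; if the field is omitted, Kubernetes allocates one from the 30000-32767 range:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-backend-external
  namespace: project-alpha
spec:
  type: NodePort
  selector:
    app: app-backend     # kubectl create deployment sets this label
  ports:
  - port: 80             # port on the Service's cluster IP
    targetPort: 80       # port on the backing containers
    nodePort: 30080      # reachable on every node's IP (illustrative)
```

With this in place, the application is reachable at any node's IP on the nodePort, in addition to the internal cluster IP.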

Configuration Example (YAML)

The following YAML defines a Namespace, a Pod, a Deployment, and a Service in a single file, with the documents separated by --- markers.

apiVersion: v1
kind: Namespace
metadata:
  name: prod-environment
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  namespace: prod-environment
  labels:
    app: web-app
spec:
  containers:
  - name: nginx-container
    image: nginx:1.21
    ports:
    - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deploy
  namespace: prod-environment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend-app
  template:
    metadata:
      labels:
        app: backend-app
    spec:
      containers:
      - name: backend
        image: redis:latest
        ports:
        - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: backend-service
  namespace: prod-environment
spec:
  selector:
    app: backend-app
  ports:
    - protocol: TCP
      port: 6379
      targetPort: 6379
  type: ClusterIP

Applying this configuration:

kubectl apply -f k8s-config.yaml

Tags: kubernetes k8s container-orchestration devops cloud-native

Posted on Fri, 08 May 2026 07:57:58 +0000 by Mikemcs