Cluster Architecture
Provision three CentOS 7 instances with the following roles and specifications. The control plane node requires at least 2 vCPUs; otherwise, kubeadm initialization will fail.
| Hostname | IP Address | Role | OS | Specs |
|---|---|---|---|---|
| ctrl-plane | 192.168.10.10 | Control Plane | CentOS 7 | 2 vCPU / 4 GiB |
| worker-1 | 192.168.10.11 | Worker | CentOS 7 | 2 vCPU / 4 GiB |
| worker-2 | 192.168.10.12 | Worker | CentOS 7 | 2 vCPU / 4 GiB |
Node Preparation (All Nodes)
Configure hostnames on each instance:
# Control plane
hostnamectl set-hostname ctrl-plane
# Worker 1
hostnamectl set-hostname worker-1
# Worker 2
hostnamectl set-hostname worker-2
Establish internal DNS resolution by appending entries to /etc/hosts:
cat <<EOF >> /etc/hosts
192.168.10.10 ctrl-plane
192.168.10.11 worker-1
192.168.10.12 worker-2
EOF
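A quick sanity check that name resolution works (a small sketch; getent prints the matching /etc/hosts entry, and the fallback message only flags a name that is not resolvable yet):

```shell
# Verify each node name resolves; prints the matching entry,
# or a warning if the name is not resolvable yet.
for h in ctrl-plane worker-1 worker-2; do
  getent hosts "$h" || echo "$h not resolvable yet"
done
```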
Bridge Netfilter and Packet Forwarding
Enable bridge traffic processing through iptables and activate IPv4 forwarding. Persist the settings under /etc/sysctl.d/99-kubernetes.conf:
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
Load the br_netfilter kernel module and apply the parameters:
modprobe br_netfilter
sysctl --system
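To confirm the parameters took effect, read them back through /proc, which requires no root. Note the bridge entries only exist once br_netfilter is loaded:

```shell
# Each of these should print 1 once the settings are applied.
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
  || echo "bridge-nf-call-iptables missing: br_netfilter not loaded"
```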
IPVS Load Balancing Modules
Unlike the default iptables mode, IPVS uses kernel hash tables for Service lookups and offers configurable scheduling algorithms (round-robin, weighted round-robin, source hashing), so it scales better as the number of Services grows. Install the management utilities:
yum install -y ipset ipvsadm
Create a modules-load manifest so the required kernel modules load at boot:
cat <<EOF > /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
Apply the configuration immediately by restarting the loader service (systemd-modules-load.service is a static unit that runs at every boot, so it does not need to be enabled), then confirm the modules are present:
systemctl restart systemd-modules-load.service
lsmod | grep ip_vs
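Loading these modules does not by itself switch kube-proxy to IPVS; the proxy mode is selected at cluster initialization. A KubeProxyConfiguration fragment such as the following (a sketch to be appended to the kubeadm init configuration created later) enables it; kube-proxy falls back to iptables if the IPVS prerequisites are missing:

```yaml
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
```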
Disable Swap
By default, the kubelet refuses to start while swap is enabled. Disable it permanently:
swapoff -a
sed -i.bak '/swap/s/^/#/' /etc/fstab
free -h | grep -i swap
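To see exactly what the sed expression does before touching the real file, it can be run against sample fstab content (illustrative lines only):

```shell
# The /swap/ address selects lines mentioning swap; s/^/#/ prefixes them
# with a comment marker. Non-matching lines pass through untouched.
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 swap swap defaults 0 0\n' \
  | sed '/swap/s/^/#/'
# → the swap line comes back as "#/dev/sda2 swap swap defaults 0 0"
```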
Install Docker Engine
Add the Aliyun Docker CE repository and install a validated release:
yum install -y yum-utils
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce-20.10.9-3.el7
Configure the Docker daemon to use the systemd cgroup driver:
mkdir -p /etc/docker
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}
EOF
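Before starting Docker it is worth validating the file: a malformed daemon.json prevents the daemon from starting at all. A minimal check, assuming python3 is available (jq would work equally well); the DAEMON_JSON variable is only there so the check can be tried against a test copy first:

```shell
# Validate daemon.json syntax; prints OK on success.
DAEMON_JSON=${DAEMON_JSON:-/etc/docker/daemon.json}
python3 -m json.tool "$DAEMON_JSON" >/dev/null 2>&1 \
  && echo "daemon.json OK" || echo "daemon.json missing or invalid"
```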
Start and enable the service:
systemctl enable --now docker
docker info --format '{{.CgroupDriver}}'
Install Kubernetes Components
Add the Kubernetes package repository (Aliyun mirror); the exact 1.23.0 packages are pinned at install time:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the toolchain:
yum install -y kubeadm-1.23.0-0 kubelet-1.23.0-0 kubectl-1.23.0-0
Ensure kubelet uses the systemd cgroup driver:
cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cgroup-driver=systemd
EOF
Enable the kubelet service (it will start automatically after cluster initialization):
systemctl enable kubelet
Extend Certificate Validity to 100 Years
By default, kubeadm generates certificates valid for one year. To avoid future rotation, compile a custom kubeadm binary with a 100-year lifespan.
Install the Go compiler:
wget https://studygolang.com/dl/golang/go1.17.13.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.17.13.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
go version
Download and extract the Kubernetes 1.23.0 source tree:
wget https://github.com/kubernetes/kubernetes/archive/v1.23.0.tar.gz -O k8s-src.tar.gz
tar -zxf k8s-src.tar.gz
cd kubernetes-1.23.0
Modify the certificate duration constants. Edit cmd/kubeadm/app/constants/constants.go and set:
CertificateValidity = time.Hour * 24 * 365 * 100
Edit staging/src/k8s.io/client-go/util/cert/cert.go and update the NotAfter field:
NotAfter: now.Add(duration365d * 100).UTC(),
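The two substitutions can be scripted with sed. They are demonstrated here against the literal source lines as they appear in the v1.23.0 tree, so the patterns can be verified (e.g. with grep) before running sed -i on the real files:

```shell
# Multiply the kubeadm leaf-certificate validity (1 year) by 100.
echo 'CertificateValidity = time.Hour * 24 * 365' \
  | sed 's/time\.Hour \* 24 \* 365$/time.Hour * 24 * 365 * 100/'
# Extend the CA validity from the default 10 years to 100.
echo 'NotAfter: now.Add(duration365d * 10).UTC(),' \
  | sed 's/duration365d \* 10/duration365d * 100/'
```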
Compile the kubeadm binary:
yum install -y rsync jq
make WHAT=cmd/kubeadm GOFLAGS="-v"
If permission errors occur for code generators, adjust the execute bit and retry:
chmod +x _output/bin/prerelease-lifecycle-gen _output/bin/deepcopy-gen
make WHAT=cmd/kubeadm GOFLAGS="-v"
Back up the original binary and install the patched version:
cp /usr/bin/kubeadm /usr/bin/kubeadm.orig
cp _output/bin/kubeadm /usr/bin/kubeadm
kubeadm version
Initialize the Control Plane
Generate a default configuration template:
kubeadm config print init-defaults > init-config.yaml
Edit init-config.yaml to reflect your environment:
localAPIEndpoint:
  advertiseAddress: 192.168.10.10
nodeRegistration:
  name: ctrl-plane
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
Run the initialization. The --upload-certs flag uploads the control-plane certificates to a cluster Secret so that additional control-plane nodes can retrieve them when joining in high-availability setups:
kubeadm init --config init-config.yaml --upload-certs
After initialization completes, configure kubectl access for your user:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, in a root shell, point the current session at the admin config directly:
export KUBECONFIG=/etc/kubernetes/admin.conf
Join Worker Nodes
On each worker node, execute the join command emitted by kubeadm init. The token and hash below are illustrative; use the actual values from your initialization output:
kubeadm join 192.168.10.10:6443 --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
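Join tokens expire after 24 hours by default. If the original command is lost or stale, a fresh one can be printed on the control plane (the fallback echo below merely documents the command for hosts where kubeadm is not installed):

```shell
# Print a ready-to-paste join command with a newly minted token.
command -v kubeadm >/dev/null \
  && kubeadm token create --print-join-command \
  || echo "run on the control plane: kubeadm token create --print-join-command"
```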
Return to the control plane and inspect node registration:
kubectl get nodes
The nodes will appear in the NotReady state until the CNI plugin is installed.
Verify Certificate Expiration
Confirm the century-long validity:
kubeadm certs check-expiration
All user and CA certificates should show approximately 99 years of residual time.
Deploy Calico CNI
Calico provides high-performance BGP-based networking and network policy enforcement. Download the manifest and apply it:
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
kubectl apply -f calico.yaml
Monitor the rollout until every pod reaches the Running state:
kubectl get pods -n kube-system -w
Once all Calico pods are ready, confirm cluster health:
kubectl get nodes
All three nodes should now report Ready.
Deploy a Sample Application
Validate end-to-end connectivity by deploying an NGINX workload exposed via NodePort:
cat <<EOF > web-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:stable-alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-demo-svc
spec:
  type: NodePort
  selector:
    tier: frontend
  ports:
  - port: 8080
    targetPort: 80
    nodePort: 30080
EOF
Apply the manifest:
kubectl apply -f web-demo.yaml
Inspect the created resources:
kubectl get pods
kubectl get svc web-demo-svc
Access the application through any cluster node:
http://192.168.10.10:30080/
http://192.168.10.11:30080/
http://192.168.10.12:30080/
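Or from the command line, a small smoke test that probes the NodePort on every node (adjust the IPs to your environment; an HTTP status of 200 means the Service is routing correctly, 000 means the node was unreachable):

```shell
# Probe the NodePort on each node; prints "<status> <ip>" per node.
for ip in 192.168.10.10 192.168.10.11 192.168.10.12; do
  curl -s --connect-timeout 3 -o /dev/null -w "%{http_code} $ip\n" \
    "http://$ip:30080/" || true
done
```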