A Critical Walkthrough of Setting Up Kubernetes and KubeSphere with KubeKey

Objective

The goal is to critically evaluate the user experience of KubeKey (kk) from the perspective of a former community member, identifying friction points in the installation workflow, specifically when deploying a Kubernetes cluster and integrating KubeSphere.

Environment

The evaluation was performed on a provided Linux virtual machine (VM) acting as the sole node for the cluster.

Initial Setup and Dependencies

The starting point is the Quick Start guide. Running the create command immediately reveals a common issue:

./kk create cluster

The output appears successful at first glance, but closer inspection reveals that conntrack and socat are missing. Because these messages are printed as ordinary info logs rather than errors, they are easy to overlook.

Improvement Suggestions:

  1. Log Severity: Missing dependencies should be flagged as ERROR rather than info logs to force user attention.
  2. Actionable Logs: The output should detect the OS (e.g., CentOS vs Ubuntu) and print the exact installation command (e.g., yum install -y conntrack socat).
  3. Clarity: The dependency checklist table should distinguish between mandatory requirements and optional components to prevent user discouragement.
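Suggestion 2 could be prototyped with a small helper that maps the distribution ID from /etc/os-release to the matching install command. This is a hypothetical sketch, not actual KubeKey code; the function name and the OS-to-command mapping are assumptions.

```shell
#!/bin/sh
# Hypothetical sketch of an actionable dependency hint (not KubeKey code).
# Maps an os-release ID to the install command for conntrack/socat.
suggest_install() {
    os_id="$1"   # e.g. "centos", "ubuntu"
    case "$os_id" in
        centos|rhel|fedora)
            echo "yum install -y conntrack socat" ;;
        ubuntu|debian)
            echo "apt-get install -y conntrack socat" ;;
        *)
            echo "please install conntrack and socat with your package manager" ;;
    esac
}

# In a real preflight check, the ID would come from the node itself:
#   . /etc/os-release && suggest_install "$ID"
suggest_install centos
# → yum install -y conntrack socat
```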

Resolving Prerequisites

Instead of relying solely on the lengthy docs, install the required tools directly:

yum -y install conntrack-tools socat

Execute the creation command again. If the network environment is restricted, ensure the zone is set to China to speed up mirror downloads:

export KKZONE=cn
./kk create cluster

Improvement Suggestions:

  4. Feedback Loops: When downloading images, implement timeouts or progress bars. Long periods of silence often lead users to assume the process is frozen and terminate it with Ctrl+C.
  5. Log Consistency: Avoid using WARN for critical failures. A Failed message accompanied by WARN creates confusion about whether the issue is fatal.

Once complete, verify the system pods:

kubectl get pod -A

A successful output should show all pods in Running state with 1/1 readiness.
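For reference, a healthy single-node install produces output roughly like the abbreviated, illustrative sample below; the exact pod set depends on the CNI and the versions installed, and names and ages will differ:

```text
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   calico-node-xxxxx                 1/1     Running   0          5m
kube-system   coredns-xxxxxxxxxx-xxxxx          1/1     Running   0          5m
kube-system   kube-apiserver-node1              1/1     Running   0          6m
```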

Cluster Teardown

To test an installation that includes KubeSphere, the existing cluster must be removed. KubeKey handles the removal of cluster components but preserves the kubectl binary and downloaded images.

./kk delete cluster

Integrating KubeSphere

Generating a configuration file that includes KubeSphere is straightforward:

./kk create config --with-kubesphere
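The generated file (config-sample.yaml by default) describes a Cluster object. A trimmed sketch of its shape is shown below; the apiVersion and some field names vary between KubeKey releases (e.g. v1alpha1 vs v1alpha2), and all values here are placeholders:

```yaml
# Trimmed, illustrative sketch of a KubeKey cluster config.
# Field names follow the v1alpha2 API but vary by kk version;
# addresses, credentials, and versions are placeholders.
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: root, password: "changeme"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
  kubernetes:
    version: v1.21.5
```

With --with-kubesphere, KubeKey also appends a ks-installer ClusterConfiguration section to the same file, which is where KubeSphere components are enabled or disabled.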

Attempt 1: Docker Conflict

Initial attempts failed because Docker was pre-installed manually. KubeKey expects to manage the container runtime or requires a specific setup that conflicts with manual installations.

Improvement Suggestions:

  6. Error Handling: If a known conflict (like a Docker version mismatch) occurs, provide a link to the documentation or a specific fix command rather than a generic warning.
  7. Documentation Structure: The "Note" regarding Docker installation in the docs is misleading due to indentation, making it look like a global requirement rather than a specific note for building from source.
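A preflight check for this kind of conflict could be as simple as the sketch below. It is hypothetical, not KubeKey's actual logic; the hint text is an assumption about what an actionable message might look like.

```shell
#!/bin/sh
# Hypothetical preflight sketch for suggestion 6: detect a pre-installed
# Docker and surface a fix hint, instead of failing mid-install.
if command -v docker >/dev/null 2>&1; then
    echo "detected existing docker: $(docker --version 2>/dev/null || echo unknown)"
    echo "hint: remove it (yum remove docker) or configure kk to reuse it"
else
    echo "no pre-installed docker found; kk will provision its own runtime"
fi
```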

Attempt 2: Clean Slate

Removing the manual Docker installation allows KubeKey to provision its own dependencies.

yum remove docker
export KKZONE=cn
./kk create cluster --with-kubesphere

The installation process features a dynamic progress indicator.

Improvement Suggestions:

  8. Versioning: Documentation should be consistent—use either specific version numbers or clear placeholders like [version] in all examples.
  9. Input Handling: Accept y, Y, and yes interchangeably to improve CLI usability.
  10. State Detection: If a user interrupts an install (Ctrl+C) and runs create again, KubeKey should detect the dirty state (e.g., leftover ks-installer pods) and prompt for a delete action.
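Suggestion 9 amounts to normalizing the answer before comparing it. A minimal sketch, with a hypothetical helper name:

```shell
#!/bin/sh
# Hypothetical sketch for suggestion 9: treat y, Y, yes, YES, Yes
# all as confirmation.
is_yes() {
    # Lowercase the answer, then accept "y" or "yes".
    ans=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
    [ "$ans" = "y" ] || [ "$ans" = "yes" ]
}

if is_yes "Y"; then
    echo "confirmed"
else
    echo "aborted"
fi
# → confirmed
```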

Attempt 3: Session Management and Debugging

A terminal disconnection (due to an SSH timeout) or a forgotten KKZONE variable can corrupt the state. After a failed partial install, simply running create again might lead to image pull errors (e.g., ImagePullBackOff) for KubeSphere components.

Debugging involves checking the installer logs:

kubectl describe pod -n kubesphere-system $(kubectl get pod -n kubesphere-system | grep ks-installer | awk '{print $1}')
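The grep/awk pipeline above just pulls the first column of the row whose name starts with ks-installer. That extraction can be isolated into a small helper (names here are hypothetical) and exercised against canned kubectl-style output, without a live cluster:

```shell
#!/bin/sh
# Hypothetical helper: extract the ks-installer pod name from
# `kubectl get pod` style output supplied on stdin.
installer_pod() {
    awk '/^ks-installer/ {print $1}'
}

# Canned sample resembling `kubectl get pod -n kubesphere-system` output:
sample='NAME                           READY   STATUS    RESTARTS   AGE
ks-installer-7bd6b699cf-2qs5x  1/1     Running   0          3m'

printf '%s\n' "$sample" | installer_pod
# → ks-installer-7bd6b699cf-2qs5x
```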

Improvement Suggestions:

  11. Image Validation: Ensure the default configuration points to valid, existing image tags to prevent ErrImagePull errors during standard installs.

Attempt 4: Success

Specifying the version explicitly usually resolves tagging issues:

./kk create cluster --with-kubesphere v3.2.1

While the installation might show some restarts during the rolling deployment, final verification should show a healthy state:

kubectl get pod -A

Accessing the Dashboard

After successful deployment, access the KubeSphere console via the NodePort or a configured LoadBalancer. The default login credentials (usually admin/P@88w0rd) grant access to the dashboard.
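In a default install the console is served by the ks-console service, documented to listen on NodePort 30880. A trivial sketch composing the console URL from a node IP, with the port as an assumption to verify on your own cluster (e.g., via `kubectl get svc ks-console -n kubesphere-system`):

```shell
#!/bin/sh
# Sketch: compose the console URL from a node IP and the default
# KubeSphere console NodePort (30880 per the KubeSphere docs; confirm
# the actual port on your cluster before relying on it).
console_url() {
    node_ip="$1"
    port="${2:-30880}"
    echo "http://${node_ip}:${port}"
}

console_url 192.168.0.2
# → http://192.168.0.2:30880
```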

Tags: kubernetes KubeSphere KubeKey devops Cloud Native

Posted on Fri, 15 May 2026 10:53:24 +0000 by irandoct