(Screenshot: kubectl get pods output on a Kubernetes cluster provisioned using kubeadm on Ubuntu 22.04)

Simple Single-node Kubernetes Cluster via kubeadm on Ubuntu 22.04

There are many tools out there to provision single-node Kubernetes clusters, but kubeadm is the way to go for a production-like set-up. Although creating a cluster with kubeadm takes more effort, its configuration options let you tweak the cluster to your needs. Following this post, you can easily create a single-node Kubernetes cluster using kubeadm on Ubuntu 22.04.

Install general dependencies

You need to install a few packages that the commands we use later depend on.

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl

Install containerd

Although we have a few container runtimes to choose from, we’re going with containerd. Before we install containerd, we’ll create its configuration file.

curl -fsSLo containerd-config.toml \
sudo mkdir /etc/containerd
sudo mv containerd-config.toml /etc/containerd/config.toml

Without delay, you can install containerd from its official GitHub repository, as recommended, using the following commands:

curl -fLo containerd-1.6.14-linux-amd64.tar.gz \
  https://github.com/containerd/containerd/releases/download/v1.6.14/containerd-1.6.14-linux-amd64.tar.gz

# Extract the binaries
sudo tar Cxzvf /usr/local containerd-1.6.14-linux-amd64.tar.gz

# Install containerd as a service
sudo curl -fsSLo /etc/systemd/system/containerd.service \

sudo systemctl daemon-reload
sudo systemctl enable --now containerd

Install runc

Installing runc from its official GitHub repository is the recommended way.

curl -fsSLo runc.amd64 \
sudo install -m 755 runc.amd64 /usr/local/sbin/runc

Install CNI network plugins

Install Container Network Interface network plugins from their official GitHub repository.

curl -fLo cni-plugins-linux-amd64-v1.1.1.tgz \
  https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.1.1.tgz
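The `tar Cxzvf DIR FILE` form used here is old-style option syntax: `C` changes into DIR before extracting, so it is equivalent to `tar -C DIR -xzvf FILE`. A quick throwaway demo of that behavior (all paths under /tmp are illustrative):

```shell
# Demonstrate tar's C (change-directory) flag on a throwaway archive
rm -rf /tmp/cni-demo && mkdir -p /tmp/cni-demo/src /tmp/cni-demo/dst
echo "hello" > /tmp/cni-demo/src/plugin
tar -czf /tmp/cni-demo/demo.tgz -C /tmp/cni-demo/src plugin

# Same semantics as: tar Cxzf /tmp/cni-demo/dst /tmp/cni-demo/demo.tgz
tar -C /tmp/cni-demo/dst -xzf /tmp/cni-demo/demo.tgz
cat /tmp/cni-demo/dst/plugin   # prints: hello
```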

Forward IPv4 and let iptables see bridged network traffic

You need to enable the overlay and br_netfilter kernel modules. Additionally, you need to allow iptables to see bridged network traffic.

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe -a overlay br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

Install kubeadm, kubelet & kubectl

You need to ensure the versions of kubeadm, kubelet and kubectl are compatible.

# Add Kubernetes GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

# Add Kubernetes apt repository
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

# Fetch package list
sudo apt-get update

sudo apt-get install -y kubelet kubeadm kubectl

# Prevent them from being updated automatically
sudo apt-mark hold kubelet kubeadm kubectl

Ensure swap is disabled

You have to disable swap because Kubernetes does not support it. See the GitHub issue regarding swap on Kubernetes for details.

# See if swap is enabled
swapon --show

# Turn off swap
sudo swapoff -a

# Remove swap entries from /etc/fstab so swap stays off after a reboot
sudo sed -i -e '/swap/d' /etc/fstab
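The sed expression deletes every fstab line that mentions swap. You can preview its effect on a throwaway copy first (the sample entries below are made up for illustration):

```shell
# Throwaway sample fstab (hypothetical entries, for illustration only)
cat > /tmp/fstab.sample <<'EOF'
UUID=1234-abcd / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Same expression as above, but printing the result instead of editing in place
sed -e '/swap/d' /tmp/fstab.sample   # only the root filesystem line survives
```

Note that `/swap/d` drops any line containing the word "swap", so double-check your fstab if a non-swap mount happens to contain that string.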

Create the cluster using kubeadm

It’s only a single command to initialise the cluster, but the cluster won’t be very functional in a single-node environment until we make some changes. Note that we’re providing the "--pod-network-cidr" parameter as required by our CNI plugin (Flannel).

# 10.244.0.0/16 is the default pod network CIDR expected by Flannel
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
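Flags like this can also be supplied declaratively. A minimal sketch of the equivalent kubeadm configuration file, assuming the v1beta3 API used by recent kubeadm releases:

```yaml
# Hypothetical kubeadm-config.yaml; pass it with: sudo kubeadm init --config kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16   # Flannel's default pod CIDR
```

A config file is handy once you start tweaking more than one or two settings, since kubeadm flags and config keys cannot be mixed freely.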

Configure kubectl

To access the cluster, we have to configure kubectl.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
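If you are running as root, the kubeadm documentation offers an alternative to copying the file: point kubectl at admin.conf directly via the KUBECONFIG environment variable.

```shell
# Alternative for the root user: use admin.conf in place without copying it
export KUBECONFIG=/etc/kubernetes/admin.conf
```

This only lasts for the current shell session, whereas copying to ~/.kube/config makes the access persistent for your user.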

Untaint node

We must untaint the node to allow pods to be scheduled on our single-node cluster. Otherwise, your pods will be stuck in the Pending state.

# On recent Kubernetes versions only the control-plane taint exists; the
# master variant may report "taint not found", which is safe to ignore.
kubectl taint nodes --all node-role.kubernetes.io/master-
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
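For context, the taint being removed lives on the Node object itself: scheduling is blocked because regular pods carry no matching toleration. The relevant fragment of the node spec looks roughly like this:

```yaml
# Illustrative fragment of "kubectl get node <name> -o yaml" on a control-plane node
spec:
  taints:
  - key: node-role.kubernetes.io/control-plane
    effect: NoSchedule
```

The trailing "-" in the kubectl taint commands above is what removes this entry.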

Install a CNI plugin

For networking to function, you must install a Container Network Interface (CNI) plugin. Here, we’re installing Flannel.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Install helm

To install packages onto the cluster, we’re installing Helm v3.

curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

Install a CSI driver

We need to install a Container Storage Interface (CSI) driver for the storage to work. We’ll install OpenEBS.

# Add openebs repo to helm
helm repo add openebs https://openebs.github.io/charts

kubectl create namespace openebs

helm --namespace=openebs install openebs openebs/openebs

Install a test application

To test the cluster, you can deploy WordPress. Note that we need to specify the storage class provided by our CSI driver.

# Add bitnami repo to helm
helm repo add bitnami https://charts.bitnami.com/bitnami

helm install wordpress bitnami/wordpress \
  --set global.storageClass=openebs-hostpath # assumes the default OpenEBS hostpath storage class

We’ve just provisioned a cluster using kubeadm on Ubuntu 22.04!

You have successfully created a single-node Kubernetes Cluster using kubeadm on Ubuntu 22.04, and the cluster has everything you need to install your application.

14 thoughts on “Simple Single-node Kubernetes Cluster via kubeadm on Ubuntu 22.04”

  1. Dear Oliver,
    thanks for these wonderful, detailed instructions; they helped me very much.
    There was one problem with containerd version 1.5.13, which produced an error during `kubeadm init` (which installs Kubernetes 1.26):
    `Starting with kubernetes 1.26, containerd 1.5.9 is out of support [0]`
    see https://containerd.io/releases/#kubernetes-support
    The solution was to install the latest version (1.6.14) of containerd:
    curl -fsSLo containerd-1.6.14-linux-amd64.tar.gz https://github.com/containerd/containerd/releases/download/v1.6.14/containerd-1.6.14-linux-amd64.tar.gz
    # Extract the binaries
    sudo tar Cxzvf /usr/local containerd-1.6.14-linux-amd64.tar.gz
    Exchanging these three lines in the description above made the installation successful.
    Thanks, feri

  2. I am getting this error at the taint level

    kubectl taint nodes --all node-role.kubernetes.io/master-
    error: taint "node-role.kubernetes.io/master" not found

    this command worked fine
    kubectl taint nodes --all node-role.kubernetes.io/control-plane-

    any ideas?

  3. Great article, thanks. I added the following to streamline my containerd setup:

    sudo crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
    sudo crictl config image-endpoint unix:///var/run/containerd/containerd.sock

  4. This appears to be not working on an Ubuntu 22.04 EC2 instance (us-west):

    sudo kubeadm init --pod-network-cidr= --image-repository=k8s.gcr.io
    [init] Using Kubernetes version: v1.26.1
    [preflight] Running pre-flight checks
    [preflight] Pulling images required for setting up a Kubernetes cluster
    [preflight] This might take a minute or two, depending on the speed of your internet connection
    [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
    error execution phase preflight: [preflight] Some fatal errors occurred:
    [ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:v1.9.3: output: E0128 18:45:39.311321 3099 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"k8s.gcr.io/coredns:v1.9.3\": failed to resolve reference \"k8s.gcr.io/coredns:v1.9.3\": k8s.gcr.io/coredns:v1.9.3: not found" image="k8s.gcr.io/coredns:v1.9.3"
    time="2023-01-28T18:45:39Z" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"k8s.gcr.io/coredns:v1.9.3\": failed to resolve reference \"k8s.gcr.io/coredns:v1.9.3\": k8s.gcr.io/coredns:v1.9.3: not found"
    , error: exit status 1
    [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
    To see the stack trace of this error execute with --v=5 or higher

    With default image-repository, all image-pulls are failing.

    1. Hi David,

      Thanks for your comment. Looks like Kubernetes gpg keys got updated. I’ve tested the whole guide so all should be working now.

      Regarding external IPs, by default, Kubernetes services put up a load balancer on the cloud, so the service can only be accessed via the service port from localhost. If not on the cloud it’s best to use an ingress controller such as ingress-nginx.
