In this tutorial, we'll go through the step-by-step process of installing a Kubernetes cluster using Kubeadm and Containerd.
I'm going to use a machine running Ubuntu Server 24.04 LTS, so if you're using a different OS you might have to adapt some commands.
This tutorial was written for Kubernetes version 1.32; if you're using a newer version, some steps might have changed.
Also, keep in mind that all the following steps are executed as root unless otherwise stated, so commands won't be prefixed with sudo.
Set up nodes
The following steps should be run on all nodes.
Enable IPv4 packet forwarding
We should enable IPv4 packet forwarding so that pod traffic can be routed between nodes.
sysctl net.ipv4.ip_forward=1
To make this change persistent across reboots, we should modify the /etc/sysctl.conf file.
# Uncomment the next line to enable packet forwarding for IPv4
- #net.ipv4.ip_forward=1
+ net.ipv4.ip_forward=1
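To verify the setting is active, you can query it; the command should print net.ipv4.ip_forward = 1.
sysctl net.ipv4.ip_forward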
Install dependencies
There are some common dependencies we need to make sure are installed on our system.
apt update
apt install ca-certificates curl apt-transport-https gpg
Install containerd.io
To install containerd.io, we need to add the Docker repository to our system.
First, we add Docker's official GPG key:
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc
Next, we add the Docker repository:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
tee /etc/apt/sources.list.d/docker.list > /dev/null
Now we can install containerd.io:
apt update
apt install containerd.io
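To confirm containerd was installed correctly, you can print its version (the exact output depends on the containerd.io release you received):
containerd --version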
Configure systemd cgroup driver for containerd
First, we need to create a containerd configuration file at /etc/containerd/config.toml.
mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml > /dev/null
Now, we should enable the systemd cgroup driver for the CRI in /etc/containerd/config.toml.
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
BinaryName = ""
CriuImagePath = ""
CriuPath = ""
CriuWorkPath = ""
IoGid = 0
IoUid = 0
NoNewKeyring = false
NoPivotRoot = false
Root = ""
ShimCgroup = ""
- SystemdCgroup = false
+ SystemdCgroup = true
Instead of editing the file manually, you can run the following command:
sed -i "s/SystemdCgroup = false/SystemdCgroup = true/g" "/etc/containerd/config.toml"
Now we restart containerd and check its status to make sure it is working.
systemctl restart containerd
systemctl status containerd
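If you want to double-check the cgroup driver change before moving on, a quick grep should show SystemdCgroup = true:
grep SystemdCgroup /etc/containerd/config.toml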
Install kubeadm, kubelet and kubectl
First, we should download Kubernetes' official GPG key:
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.32/deb/Release.key | \
gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
Next, we add the Kubernetes repository to our system:
echo \
"deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /" | \
tee /etc/apt/sources.list.d/kubernetes.list
We can now install the Kubernetes packages and hold them at the installed version so they aren't upgraded unintentionally.
apt update
apt install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
Finally, we can enable the kubelet service:
systemctl enable --now kubelet
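Don't worry if the kubelet restarts every few seconds at this point; it's waiting in a crash loop for kubeadm to tell it what to do. To confirm the tools are installed and held, you can check their versions and the hold list:
kubeadm version -o short
kubectl version --client
apt-mark showhold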
Set up Kubernetes cluster
The following steps should only be run on the control plane node.
The control plane node is the one in charge of supervising and administering the k8s cluster.
Initialize the k8s control plane
To initialize the control plane, we're going to specify the network CIDR for our pods. 192.168.0.0/16 is the default pod network for Calico, and unless you have a good reason to change it, it's best to keep it as is.
kubeadm init --pod-network-cidr=192.168.0.0/16
Once you run this command, it will output a kubeadm join command; keep it safe, as we will use it later.
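If you lose the join command, there's no need to reinitialize anything; you can print a fresh one from the control plane node at any time:
kubeadm token create --print-join-command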
Prepare non-root user
To operate our Kubernetes cluster, it's better to use a non-root user.
If you already have one with sudo permissions, log in with it; if you don't, run the following commands to create one:
adduser kubernetes
usermod -aG sudo kubernetes
su - kubernetes
To manage the cluster, the user must have the k8s config file at ~/.kube/config.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
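Alternatively, if you'd rather manage the cluster as root, you can point kubectl at the admin config directly:
export KUBECONFIG=/etc/kubernetes/admin.conf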
To check that the user can manage the cluster, we can run the following command:
kubectl get nodes
It should output something like the following:
NAME STATUS ROLES AGE VERSION
eu-central-1.binarycomet.net NotReady control-plane 15m v1.32.3
The node shows NotReady because we haven't deployed a network plugin yet; that's what the next step takes care of. The following steps should be run using this user.
Deploy a Container Network Interface (CNI)
Pods require a CNI to communicate with each other. There are a few options; we'll use the Calico operator.
First, we download the configuration files:
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.4/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.28.4/manifests/custom-resources.yaml
If you're installing a newer Kubernetes version, check the Calico releases at https://github.com/projectcalico/calico/releases
If you changed the pod-network-cidr while initializing the cluster, you should update the CIDR in the custom-resources.yaml configuration file.
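For example, assuming you had initialized the cluster with a hypothetical CIDR of 10.244.0.0/16, you could patch the manifest with sed instead of editing it by hand (192.168.0.0/16 is the value the file ships with):
sed -i 's|cidr: 192.168.0.0/16|cidr: 10.244.0.0/16|' custom-resources.yaml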
Next, we import the configuration files to our cluster.
kubectl create -f ./tigera-operator.yaml
kubectl create -f ./custom-resources.yaml
After a few seconds, check the pod status to make sure everything is working:
kubectl get pods -n calico-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-88ff6f9d5-v4pp2 1/1 Running 0 102s
calico-node-mh8gd 1/1 Running 0 102s
calico-typha-6fc55bd49d-s62bq 1/1 Running 0 102s
csi-node-driver-2r96g 2/2 Running 0 102s
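Once all the Calico pods are Running, the control plane node should move from NotReady to Ready. You can confirm it before joining the workers:
kubectl get nodes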
Join the worker nodes to the cluster
The following steps should be run only on worker nodes.
Prepare non-root user
As we did on the control plane node, we're going to use a non-root user.
If you already have one with sudo permissions, log in with it; if you don't, run the following commands to create one:
adduser kubernetes
usermod -aG sudo kubernetes
su - kubernetes
Join cluster
Using this user, we can now run the kubeadm join command we kept safe earlier. Since kubeadm join requires root privileges, we prefix it with sudo:
sudo kubeadm join <control-plane-host>:<port> --token <token> \
--discovery-token-ca-cert-hash sha256:<hash>
Check worker node status
To check the worker node status, we should run the following command on the control plane node.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
eu-central-1.binarycomet.net Ready control-plane 78m v1.32.3
eu-central-2.binarycomet.net Ready <none> 53m v1.32.3
eu-central-3.binarycomet.net Ready <none> 25m v1.32.3
Set the worker role to the worker node
The worker node is already part of the cluster, but it has no role yet, which is why ROLES shows <none>. To give it the worker role, we should run the following command:
kubectl label node <node-name> node-role.kubernetes.io/worker=worker
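For example, using the two worker hostnames from the output above:
kubectl label node eu-central-2.binarycomet.net node-role.kubernetes.io/worker=worker
kubectl label node eu-central-3.binarycomet.net node-role.kubernetes.io/worker=worker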
We check the status again to make sure the node has obtained the worker role.
kubectl get nodes
NAME STATUS ROLES AGE VERSION
eu-central-1.binarycomet.net Ready control-plane 78m v1.32.3
eu-central-2.binarycomet.net Ready worker 53m v1.32.3
eu-central-3.binarycomet.net Ready worker 25m v1.32.3
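As an optional final sanity check (the pod name here is just an example), you can run a throwaway pod and verify it gets scheduled onto one of the workers:
kubectl run test-nginx --image=nginx
kubectl get pods -o wide
kubectl delete pod test-nginx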