This lab has two stages:

  1. Set up Kubernetes v1.31 on the control plane and worker nodes using kubeadm.
  2. Upgrade the cluster from v1.31 to v1.32.

Note:

The control plane and worker nodes can be deployed on any cloud provider; this lab uses AWS EC2 instances running Ubuntu.

Steps:

  1. Create a VPC and subnets. In this lab the VPC uses CIDR block 77.132.0.0/16 with 3 public subnets of block size /24.
  2. Deploy 3 EC2 instances, one as the control plane node and the others as worker nodes.
  3. Create 2 security groups, one for the control plane node and one for the worker nodes.
  4. Set up a network access control list (NACL).
  5. Associate an Elastic IP with each instance.
  6. Attach an internet gateway to the VPC.
  7. Create a route table and associate it with the subnets.
  8. Once the infrastructure (e.g., the EC2 instances) is up and running, proceed with the Kubernetes setup. A quick check is sketched below.
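As a quick sanity check after provisioning, you can list the three instances and their state with the AWS CLI (a minimal sketch; it assumes the AWS CLI is configured and uses the Name tags given in the Pulumi program below):

aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=master,worker1,worker2" \
  --query 'Reservations[].Instances[].[Tags[?Key==`Name`]|[0].Value,State.Name,PublicIpAddress]' \
  --output table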

Note:
These are the ports needed by the control plane node's components:

  kube-apiserver            port 6443
  etcd                      ports 2379-2380
  kube-scheduler            port 10259
  kube-controller-manager   port 10257
  SSH (port 22)             only from my IP address (/32)

And for the worker nodes:

  kubelet             port 10250
  kube-proxy          port 10256
  NodePort services   ports 30000-32767
  SSH (port 22)       only from my IP address (/32)

We add these ports to the control plane and worker security groups. A quick reachability check is sketched below.
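Once the security groups are in place, you can spot-check TCP reachability of the key ports (a minimal sketch; <MASTER_PRIVATE_IP> and <WORKER_PRIVATE_IP> are placeholders, netcat is assumed to be installed, and a port only accepts connections once its service is actually running):

# from a worker node
nc -zv <MASTER_PRIVATE_IP> 6443    # kube-apiserver (after kubeadm init)
nc -zv <WORKER_PRIVATE_IP> 10250   # kubelet on a peer worker (after join)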

Before starting the setup, update and upgrade the packages: sudo apt update -y && sudo apt upgrade -y
Steps to set up the control plane (master) node:

  1. Disable swap.
  2. Update kernel params.
  3. Install the container runtime (containerd).
  4. Install runc.
  5. Install the CNI plugins.
  6. Install kubeadm, kubelet, and kubectl, and check their versions.
  7. Initialize the control plane node using kubeadm; the output prints the token that lets you join the worker nodes.
  8. Install the Cilium CNI plugin.
  9. Use kubeadm join on the workers to join the master.

Optional: you can change the hostname on the nodes, e.g. worker1 and worker2 for the worker nodes.

Steps to set up the worker nodes:

  1. Disable swap.
  2. Update kernel params.
  3. Install the container runtime (containerd).
  4. Install runc.
  5. Install the CNI plugins.
  6. Install kubeadm, kubelet, and kubectl, and check their versions.
  7. Use kubeadm join to join the master.

Note: copy the kubeconfig from the control plane node to the worker nodes at ~/.kube/config.

On the control plane node:
cd .kube/
cat config   # and copy all of it
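Alternatively, you can copy it over SSH from the control plane node (a minimal sketch; the ubuntu user is the default on Ubuntu AMIs, mykey2 is the worker key pair from the Pulumi program below, and <WORKER_EIP> is a placeholder for a worker's Elastic IP):

# run on the control plane node; repeat for each worker
ssh -i mykey2.pem ubuntu@<WORKER_EIP> 'mkdir -p $HOME/.kube'
scp -i mykey2.pem $HOME/.kube/config ubuntu@<WORKER_EIP>:~/.kube/config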

Note:
You can provision the infrastructure in the AWS console or as infrastructure as code.
In this lab, I used Pulumi with Python to provision the resources.
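The program below reads its values from the stack config. A minimal sketch of setting them (block1 matches the VPC CIDR from the steps above; the subnet CIDRs, instance type, and AMI ID are example values or placeholders, not taken from the original):

pulumi config set block1 77.132.0.0/16
pulumi config set cidr1 77.132.1.0/24
pulumi config set cidr2 77.132.2.0/24
pulumi config set cidr3 77.132.3.0/24
pulumi config set any_ipv4_traffic 0.0.0.0/0
pulumi config set myips <MY_IP>/32
pulumi config set ami <UBUNTU_AMI_ID>
pulumi config set instance-type t3.medium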

"""An AWS Python Pulumi program"""
import pulumi , pulumi_aws as aws , json

cfg1=pulumi.Config()

vpc1=aws.ec2.Vpc(
"vpc1",
aws.ec2.VpcArgs(
cidr_block=cfg1.require(key='block1'),
tags={
"Name": "vpc1"
}
)
)

# internet gateway attached to the VPC
intgw1 = aws.ec2.InternetGateway(
    "intgw1",
    aws.ec2.InternetGatewayArgs(
        vpc_id=vpc1.id,
        tags={"Name": "intgw1"}
    )
)

zones = "us-east-1a"
publicsubnets = ["subnet1", "subnet2", "subnet3"]
cidr1 = cfg1.require(key="cidr1")
cidr2 = cfg1.require(key="cidr2")
cidr3 = cfg1.require(key="cidr3")

cidrs = [cidr1, cidr2, cidr3]

# create the three public subnets, replacing each name in the list
# with its Subnet resource so it can be referenced later
for i in range(len(publicsubnets)):
    publicsubnets[i] = aws.ec2.Subnet(
        publicsubnets[i],
        aws.ec2.SubnetArgs(
            vpc_id=vpc1.id,
            cidr_block=cidrs[i],
            map_public_ip_on_launch=False,
            availability_zone=zones,
            tags={"Name": publicsubnets[i]}
        )
    )

# route table with a default route through the internet gateway
table1 = aws.ec2.RouteTable(
    "table1",
    aws.ec2.RouteTableArgs(
        vpc_id=vpc1.id,
        routes=[
            aws.ec2.RouteTableRouteArgs(
                cidr_block=cfg1.require(key="any_ipv4_traffic"),
                gateway_id=intgw1.id
            )
        ],
        tags={"Name": "table1"}
    )
)

# associate the route table with each public subnet
associate1 = aws.ec2.RouteTableAssociation(
    "associate1",
    aws.ec2.RouteTableAssociationArgs(
        subnet_id=publicsubnets[0].id,
        route_table_id=table1.id
    )
)

associate2 = aws.ec2.RouteTableAssociation(
    "associate2",
    aws.ec2.RouteTableAssociationArgs(
        subnet_id=publicsubnets[1].id,
        route_table_id=table1.id
    )
)

associate3 = aws.ec2.RouteTableAssociation(
    "associate3",
    aws.ec2.RouteTableAssociationArgs(
        subnet_id=publicsubnets[2].id,
        route_table_id=table1.id
    )
)

# NACL rules: rule 100 allows SSH from my IP and matches before the
# explicit deny at 101; rule 200 allows all remaining traffic
ingress_traffic = [
    aws.ec2.NetworkAclIngressArgs(
        from_port=22,
        to_port=22,
        protocol="tcp",
        cidr_block=cfg1.require(key="myips"),
        icmp_code=0,
        icmp_type=0,
        action="allow",
        rule_no=100
    ),
    aws.ec2.NetworkAclIngressArgs(
        from_port=22,
        to_port=22,
        protocol="tcp",
        cidr_block=cfg1.require(key="any_ipv4_traffic"),
        icmp_code=0,
        icmp_type=0,
        action="deny",
        rule_no=101
    ),
    aws.ec2.NetworkAclIngressArgs(
        from_port=0,
        to_port=0,
        protocol="-1",
        cidr_block=cfg1.require(key="any_ipv4_traffic"),
        icmp_code=0,
        icmp_type=0,
        action="allow",
        rule_no=200
    )
]

egress_traffic = [
    aws.ec2.NetworkAclEgressArgs(
        from_port=22,
        to_port=22,
        protocol="tcp",
        cidr_block=cfg1.require(key="myips"),
        icmp_code=0,
        icmp_type=0,
        action="allow",
        rule_no=100
    ),
    aws.ec2.NetworkAclEgressArgs(
        from_port=22,
        to_port=22,
        protocol="tcp",
        cidr_block=cfg1.require(key="any_ipv4_traffic"),
        icmp_code=0,
        icmp_type=0,
        action="deny",
        rule_no=101
    ),
    aws.ec2.NetworkAclEgressArgs(
        from_port=0,
        to_port=0,
        protocol="-1",
        cidr_block=cfg1.require(key="any_ipv4_traffic"),
        icmp_code=0,
        icmp_type=0,
        action="allow",
        rule_no=200
    )
]

nacls1 = aws.ec2.NetworkAcl(
    "nacls1",
    aws.ec2.NetworkAclArgs(
        vpc_id=vpc1.id,
        ingress=ingress_traffic,
        egress=egress_traffic,
        tags={"Name": "nacls1"}
    )
)

# associate the NACL with each public subnet
nacllink1 = aws.ec2.NetworkAclAssociation(
    "nacllink1",
    aws.ec2.NetworkAclAssociationArgs(
        network_acl_id=nacls1.id,
        subnet_id=publicsubnets[0].id
    )
)

nacllink2 = aws.ec2.NetworkAclAssociation(
    "nacllink2",
    aws.ec2.NetworkAclAssociationArgs(
        network_acl_id=nacls1.id,
        subnet_id=publicsubnets[1].id
    )
)

nacllink3 = aws.ec2.NetworkAclAssociation(
    "nacllink3",
    aws.ec2.NetworkAclAssociationArgs(
        network_acl_id=nacls1.id,
        subnet_id=publicsubnets[2].id
    )
)

# control plane security group: SSH from my IP; API server, etcd, and
# the scheduler/controller-manager port range from inside the VPC
masteringress = [
    aws.ec2.SecurityGroupIngressArgs(
        from_port=22,
        to_port=22,
        protocol="tcp",
        cidr_blocks=[cfg1.require(key="myips")]
    ),
    aws.ec2.SecurityGroupIngressArgs(
        from_port=6443,
        to_port=6443,
        protocol="tcp",
        cidr_blocks=[vpc1.cidr_block]
    ),
    aws.ec2.SecurityGroupIngressArgs(
        from_port=2379,
        to_port=2380,
        protocol="tcp",
        cidr_blocks=[vpc1.cidr_block]
    ),
    aws.ec2.SecurityGroupIngressArgs(
        from_port=10249,
        to_port=10260,
        protocol="tcp",
        cidr_blocks=[vpc1.cidr_block]
    )
]

masteregress = [
    aws.ec2.SecurityGroupEgressArgs(
        from_port=0,
        to_port=0,
        protocol="-1",
        cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]
    ),
]

mastersecurity = aws.ec2.SecurityGroup(
    "mastersecurity",
    aws.ec2.SecurityGroupArgs(
        vpc_id=vpc1.id,
        name="mastersecurity",
        ingress=masteringress,
        egress=masteregress,
        tags={"Name": "mastersecurity"}
    )
)

# worker security group: SSH from my IP; kubelet, kube-proxy, and the
# NodePort range from inside the VPC; HTTP/HTTPS from anywhere
workeringress = [
    aws.ec2.SecurityGroupIngressArgs(
        from_port=22,
        to_port=22,
        protocol="tcp",
        cidr_blocks=[cfg1.require(key="myips")]
    ),
    aws.ec2.SecurityGroupIngressArgs(
        from_port=10250,
        to_port=10250,
        protocol="tcp",
        cidr_blocks=[vpc1.cidr_block]
    ),
    aws.ec2.SecurityGroupIngressArgs(
        from_port=10256,
        to_port=10256,
        protocol="tcp",
        cidr_blocks=[vpc1.cidr_block]
    ),
    aws.ec2.SecurityGroupIngressArgs(
        from_port=30000,
        to_port=32767,
        protocol="tcp",
        cidr_blocks=[vpc1.cidr_block]
    ),
    aws.ec2.SecurityGroupIngressArgs(
        from_port=80,
        to_port=80,
        protocol="tcp",
        cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]
    ),
    aws.ec2.SecurityGroupIngressArgs(
        from_port=443,
        to_port=443,
        protocol="tcp",
        cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]
    ),
]

workeregress = [
    aws.ec2.SecurityGroupEgressArgs(
        from_port=0,
        to_port=0,
        protocol="-1",
        cidr_blocks=[cfg1.require(key="any_ipv4_traffic")]
    ),
]

workersecurity = aws.ec2.SecurityGroup(
    "workersecurity",
    aws.ec2.SecurityGroupArgs(
        vpc_id=vpc1.id,
        name="workersecurity",
        ingress=workeringress,
        egress=workeregress,
        tags={"Name": "workersecurity"}
    )
)

# one control plane instance and two workers, one per subnet
master = aws.ec2.Instance(
    "master",
    aws.ec2.InstanceArgs(
        ami=cfg1.require(key='ami'),
        instance_type=cfg1.require(key='instance-type'),
        vpc_security_group_ids=[mastersecurity.id],
        subnet_id=publicsubnets[0].id,
        availability_zone=zones,
        key_name="mykey1",
        tags={"Name": "master"},
        ebs_block_devices=[
            aws.ec2.InstanceEbsBlockDeviceArgs(
                device_name="/dev/sdm",
                volume_size=8,
                volume_type="gp3"
            )
        ]
    )
)

worker1 = aws.ec2.Instance(
    "worker1",
    aws.ec2.InstanceArgs(
        ami=cfg1.require(key='ami'),
        instance_type=cfg1.require(key='instance-type'),
        vpc_security_group_ids=[workersecurity.id],
        subnet_id=publicsubnets[1].id,
        availability_zone=zones,
        key_name="mykey2",
        tags={"Name": "worker1"},
        ebs_block_devices=[
            aws.ec2.InstanceEbsBlockDeviceArgs(
                device_name="/dev/sdb",
                volume_size=8,
                volume_type="gp3"
            )
        ],
    )
)

worker2 = aws.ec2.Instance(
    "worker2",
    aws.ec2.InstanceArgs(
        ami=cfg1.require(key='ami'),
        instance_type=cfg1.require(key='instance-type'),
        vpc_security_group_ids=[workersecurity.id],
        subnet_id=publicsubnets[2].id,
        key_name="mykey2",
        tags={"Name": "worker2"},
        ebs_block_devices=[
            aws.ec2.InstanceEbsBlockDeviceArgs(
                device_name="/dev/sdc",
                volume_size=8,
                volume_type="gp3"
            )
        ],
    )
)

# allocate one Elastic IP per node, then attach them to the instances
eips = ["eip1", "eip2", "eip3"]
for i in range(len(eips)):
    eips[i] = aws.ec2.Eip(
        eips[i],
        aws.ec2.EipArgs(
            domain="vpc",
            tags={"Name": eips[i]}
        )
    )

eiplink1 = aws.ec2.EipAssociation(
    "eiplink1",
    aws.ec2.EipAssociationArgs(
        allocation_id=eips[0].id,
        instance_id=master.id
    )
)

eiplink2 = aws.ec2.EipAssociation(
    "eiplink2",
    aws.ec2.EipAssociationArgs(
        allocation_id=eips[1].id,
        instance_id=worker1.id
    )
)

eiplink3 = aws.ec2.EipAssociation(
    "eiplink3",
    aws.ec2.EipAssociationArgs(
        allocation_id=eips[2].id,
        instance_id=worker2.id
    )
)

pulumi.export("master_eip" , value=eips[0].public_ip )

pulumi.export("worker1_eip", value=eips[1].public_ip )

pulumi.export("worker2_eip", value=eips[2].public_ip )

pulumi.export("master_private_ip", value=master.private_ip)

pulumi.export("worker1_private_ip" , value=worker1.private_ip)

pulumi.export( "worker2_private_ip" , value=worker2.private_ip )

Notes:

  1. Remember that not all CNI plugins support network policies;
    you can use the Calico or Cilium plugins.

  2. kubeadm, kubelet, and kubectl should all be the same package version.
    In this lab I used v1.31, which resolves to 1.31.7.
    References:
    https://kubernetes.io/docs/reference/networking/ports-and-protocols/
    https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
    https://github.com/opencontainers/runc/releases/
    https://github.com/containerd/containerd/releases
    https://github.com/opencontainers/runc

CKA 2024-2025 series:
https://www.youtube.com/watch?v=6_gMoe7Ik8k&list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC

Shell scripts for the control plane, worker1, and worker2 nodes:

On the control plane node, use master.sh:

#!/bin/bash

# disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
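# optional check (from the kubeadm install docs): confirm the modules
# are loaded and the sysctl values took effect
lsmod | grep -e br_netfilter -e overlay
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward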

# install container runtime: containerd 1.7.27

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz

sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins

curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"

sudo mkdir -p /opt/cni/bin

sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings   # ensure the keyrings dir exists (needed on older releases)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd

sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

# initialize the control plane node using kubeadm

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=77.132.100.19 --node-name master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

sudo chmod 777 .kube/
sudo chmod 777 -R /etc/kubernetes/

export KUBECONFIG=/etc/kubernetes/admin.conf
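# quick check: the control plane should now respond; the node will show
# NotReady until the cilium CNI is installed below
kubectl get nodes
kubectl get pods -n kube-system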

# install helm v3

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
sudo chmod 777 get_helm.sh
./get_helm.sh

# install the cilium CLI

CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
if [ "$(uname -m)" = "aarch64" ]; then CLI_ARCH=arm64; fi
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz{,.sha256sum}


# install the cilium CNI using helm

helm repo add cilium https://helm.cilium.io/

helm install cilium cilium/cilium --version 1.17.2 --namespace kube-system

# wait until cilium reports ready (this must run after the install)
cilium status --wait

On the worker1 node, use worker1.sh:

#!/bin/bash

# disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

# install container runtime: containerd 1.7.27

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz

sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins

curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"

sudo mkdir -p /opt/cni/bin

sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings   # ensure the keyrings dir exists (needed on older releases)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd

sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

# set the hostname and join the cluster (this is a worker node, so
# there is nothing to initialize here)

sudo hostname worker1

# prepare the kubectl config dir; copy the kubeconfig over from the
# control plane node (see the note above), since workers do not get
# /etc/kubernetes/admin.conf
mkdir -p $HOME/.kube

sudo kubeadm join 77.132.100.19:6443 --token h0oyqy.f26d28ikx6acev65 \
--discovery-token-ca-cert-hash sha256:3551b3a9363969229557eabb5a51a9d3851e335b49f0f2f2ededc535c2fdf77d
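The token and discovery hash above come from the kubeadm init output on the control plane node. Bootstrap tokens expire (after 24 hours by default); if yours has expired, print a fresh join command on the control plane node:

kubeadm token create --print-join-command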

On the worker2 node, use worker2.sh:

#!/bin/bash

# disable swap

sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# update kernel params

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

# install container runtime: containerd 1.7.27

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.27/containerd-1.7.27-linux-amd64.tar.gz

sudo tar Cxzvf /usr/local containerd-1.7.27-linux-amd64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable containerd --now

# install runc

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.amd64

sudo install -m 755 runc.amd64 /usr/local/sbin/runc

# install cni plugins

curl -LO "https://github.com/containernetworking/plugins/releases/download/v1.6.2/cni-plugins-linux-amd64-v1.6.2.tgz"

sudo mkdir -p /opt/cni/bin

sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.6.2.tgz

# install kubeadm, kubelet, kubectl

sudo apt-get update -y
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
sudo mkdir -p -m 755 /etc/apt/keyrings   # ensure the keyrings dir exists (needed on older releases)
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.31/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -y
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

kubeadm version
kubelet --version
kubectl version --client

# configure crictl to work with containerd

sudo crictl config runtime-endpoint unix:///run/containerd/containerd.sock
sudo chmod 777 -R /var/run/containerd/

# set the hostname and join the cluster (this is a worker node, so
# there is nothing to initialize here)

sudo hostname worker2

# prepare the kubectl config dir; copy the kubeconfig over from the
# control plane node (see the note above), since workers do not get
# /etc/kubernetes/admin.conf
mkdir -p $HOME/.kube

sudo kubeadm join 77.132.100.19:6443 --token h0oyqy.f26d28ikx6acev65 \
--discovery-token-ca-cert-hash sha256:3551b3a9363969229557eabb5a51a9d3851e335b49f0f2f2ededc535c2fdf77d

Second stage:

After the control plane and worker nodes are all up and running Kubernetes v1.31, we upgrade the cluster to v1.32, which resolves to package version 1.32.3.

Use kubectl get nodes to confirm that the nodes currently report version v1.31.7.

  1. Upgrade the control plane node and its components:

Check the configured Kubernetes package repository:

pager /etc/apt/sources.list.d/kubernetes.list

Output:

deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /

Change the permissions on /etc/apt/sources.list.d/kubernetes.list so it can be edited:

sudo chmod 777 /etc/apt/sources.list.d/kubernetes.list

Change v1.31 to v1.32 using nano /etc/apt/sources.list.d/kubernetes.list (or the sed one-liner below), so the line reads:

deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /
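Equivalently, the repository switch can be done non-interactively (a one-liner sketch using sed):

sudo sed -i 's|/core:/stable:/v1.31/|/core:/stable:/v1.32/|' /etc/apt/sources.list.d/kubernetes.list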

Determine which version to upgrade to:

sudo apt update
sudo apt-cache madison kubeadm

sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.32.3-1.1' && \
sudo apt-mark hold kubeadm

sudo kubeadm upgrade plan

sudo kubeadm upgrade apply v1.32.3

kubectl drain master --ignore-daemonsets

sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.32.3-1.1' kubectl='1.32.3-1.1' && \
sudo apt-mark hold kubelet kubectl

sudo systemctl daemon-reload
sudo systemctl restart kubelet

kubectl uncordon master

Upgrade the worker1 node:

pager /etc/apt/sources.list.d/kubernetes.list

deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /

sudo chmod 777 /etc/apt/sources.list.d/kubernetes.list

Change v1.31 to v1.32 so the line reads:

deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /

sudo apt update
sudo apt-cache madison kubeadm

sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.32.3-1.1' && \
sudo apt-mark hold kubeadm

sudo kubeadm upgrade node

kubectl drain worker1 --ignore-daemonsets

sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.32.3-1.1' kubectl='1.32.3-1.1' && \
sudo apt-mark hold kubelet kubectl

sudo systemctl daemon-reload
sudo systemctl restart kubelet

kubectl uncordon worker1

Upgrade the worker2 node:

pager /etc/apt/sources.list.d/kubernetes.list

deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.31/deb/ /

sudo chmod 777 /etc/apt/sources.list.d/kubernetes.list

Change v1.31 to v1.32 so the line reads:

deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.32/deb/ /

sudo apt update
sudo apt-cache madison kubeadm

sudo apt-mark unhold kubeadm && \
sudo apt-get update && sudo apt-get install -y kubeadm='1.32.3-1.1' && \
sudo apt-mark hold kubeadm

sudo kubeadm upgrade node

kubectl drain worker2 --ignore-daemonsets

sudo apt-mark unhold kubelet kubectl && \
sudo apt-get update && sudo apt-get install -y kubelet='1.32.3-1.1' kubectl='1.32.3-1.1' && \
sudo apt-mark hold kubelet kubectl

sudo systemctl daemon-reload
sudo systemctl restart kubelet

kubectl uncordon worker2

Run kubectl get nodes to see the node versions change to v1.32.3.
Also check:
kubeadm version
kubelet --version
kubectl version --client

Note:

kubectl drain <node> evicts the node's pods and marks the node unschedulable.
kubectl uncordon <node> marks the node schedulable again so pods can run on it (example below).
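For example, the worker1 upgrade above follows this cycle (illustrative; the STATUS comments are what kubectl get nodes reports for a cordoned node):

kubectl drain worker1 --ignore-daemonsets   # STATUS becomes Ready,SchedulingDisabled
# ... upgrade the node's packages and restart kubelet ...
kubectl uncordon worker1                    # STATUS returns to Ready
kubectl get nodes                           # verify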

References:

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/change-package-repository/

https://kubernetes.io/docs/tasks/administer-cluster/kubeadm/upgrading-linux-nodes/

https://docs.cilium.io/en/stable/installation/k8s-install-kubeadm/#installation-using-kubeadm

CKA series 2024-2025:

https://www.youtube.com/watch?v=NtX75Ze47EU&list=PLl4APkPHzsUUOkOv3i62UidrLmSB8DcGC&index=35