
K3s Cluster with HA Using Kube-VIP

A small three-node cluster (the minimum needed for HA with Kubernetes) with an HA virtual IP for the Kubernetes control plane.

Hardware used

3 Proxmox virtual machines with the following configuration:

  • 32GB Disk
  • 2 vCPU cores
  • 2GB RAM
  • Fixed IP (4 needed in total, including the VIP)

Take 4 IPs out of your DHCP pool or IP list and assign them as you wish; static is best for this.

In this instance I have used the following:

  • 192.168.1.15/25 - K3S Node 1
  • 192.168.1.16/25 - K3S Node 2
  • 192.168.1.17/25 - K3S Node 3

I plan on adding two worker nodes on .18 and .19 in the future.

  • 192.168.1.20/25 - Kube VIP HA IP address

Preconfiguring the Nodes

Note:

If you are using Ubuntu 22.04 Server with k3sup, and k3sup has not yet been updated due to this issue:
SSH key format deprecated in Ubuntu 22.04
then you should be able to add the following to your sshd config in
/etc/ssh/sshd_config

PubkeyAcceptedKeyTypes=+ssh-rsa
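After editing, restart the SSH daemon so the change takes effect (on Ubuntu the service unit is called ssh):

sudo systemctl restart ssh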


Create an SSH key if you do not have one already (you can adjust the type to whatever suits you)

ssh-keygen
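For example, to generate an ed25519 key, which also sidesteps the RSA deprecation mentioned above:

ssh-keygen -t ed25519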

Ensure that you have copied your SSH keys over to your nodes:

ssh-copy-id username@hostname
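You will need to do this for each of the three nodes. With the example IPs from above (substitute your own username and addresses):

for host in 192.168.1.15 192.168.1.16 192.168.1.17; do
    ssh-copy-id username@$host
done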

Grab the installer for K3S

Now that the nodes allow your SSH key to log in without a password, we can grab k3sup, the utility we are going to use to bootstrap K3s onto the machines.

Open a terminal window and grab the following package (Windows users can grab the EXE from the releases page):

curl -sLS https://get.k3sup.dev | sh

Now install K3sup

sudo install k3sup /usr/local/bin/

To be 100% sure it is installed and working, you can run

k3sup --help
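You can also print the installed version:

k3sup version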

Setting up the first node

To configure the first node you will need to pass the following arguments into K3sup

k3sup install \
	--host=FIRSTHOSTNAME \
	--user=USERNAME \
	--cluster \
	--tls-san YOURVIP \
	--k3s-extra-args="--disable servicelb,traefik --node-taint node-role.kubernetes.io/master=true:NoSchedule"

Replace “FIRSTHOSTNAME” with your first K3s server hostname.
Replace “USERNAME” with the username you copied the SSH key to earlier.
Replace “YOURVIP” with the IP you reserved for the VIP earlier.

This will install and configure your first K3s node.
We have set the node taint so that no workloads run on the master nodes, as it’s good practice to run workloads on worker nodes.
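k3sup saves a kubeconfig file in the directory you ran it from (it prints the exact path when it finishes). A quick way to confirm the node is up:

export KUBECONFIG=$(pwd)/kubeconfig
kubectl get nodes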

Installing Kube-VIP

We are going to SSH into the first node and deploy a manifest into the auto-deployment folder, as any file found in the following path is automatically deployed into the cluster.

/var/lib/rancher/k3s/server/manifests/

Grab the file and add it into the above folder:

curl -s https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml
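Note that writing into /var/lib/rancher requires root, so if you are not logged in as root, something like this should work:

sudo sh -c 'curl -s https://kube-vip.io/manifests/rbac.yaml > /var/lib/rancher/k3s/server/manifests/kube-vip-rbac.yaml'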

Edit kube-vip-rbac.yaml

Now, in your favourite editor, edit the kube-vip-rbac.yaml file and ensure the following block is within the file under the rules section. This is so that kube-vip can announce and update IPs on the cluster’s behalf:

  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["list", "get", "watch", "update", "create"]

Save the file and change back to the home directory.

Identifying network interfaces and setting VIP

Find which interface your main network connection is using.
This is usually ens18 or eth0, but it will vary depending on the build of your nodes.
In this scenario it’s ens18. In a terminal, type

ip a

This will list your interfaces and their IP addresses.
Now we want to export these as environment variables (substitute your own VIP and interface name) so that the command to build the next manifest will work.

export VIP=192.168.1.20
export INTERFACE=ens18
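If you would rather detect the default interface automatically, something like this should work on most Linux distributions (a sketch, assuming a single default route):

export INTERFACE=$(ip route | awk '/^default/ {print $5; exit}')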

You will now want to fetch the container image for kube-vip:

crictl pull docker.io/plndr/kube-vip:latest

It’s good practice to pin it to a specific version, but in this instance I am using the latest release. Now create an alias for running kube-vip:

alias kube-vip="ctr run --rm --net-host docker.io/plndr/kube-vip:latest vip /kube-vip"

Generate a manifest to install Kube-Vip

We are going to place kube-vip into the same auto-deployment folder that we used before.

kube-vip manifest daemonset \
    --arp \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --leaderElection \
    --taint \
    --services \
    --inCluster | tee /var/lib/rancher/k3s/server/manifests/kube-vip.yaml

Edit kube-vip.yaml

We now want to edit the kube-vip.yaml file so that kube-vip tolerates the taint we set on the master nodes; without this toleration it would only be scheduled on worker nodes, and the control plane VIP needs to run on the masters. Ensure the following block is present in the DaemonSet’s pod spec:

  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
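Once K3s picks up the manifest, you can confirm the DaemonSet is running directly on the node (the DaemonSet name may vary between kube-vip versions):

k3s kubectl get daemonset -n kube-system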


So at this point we should be able to ping the VIP.
(Substitute my VIP below for your VIP.)

ping 192.168.1.20

This is not HA yet as we have not joined the other nodes into the cluster.

You will want to edit the kubeconfig file that was generated by k3sup, pointing it at your VIP and not the IP of the first node you added into the cluster.
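For example, using the IPs from this post, change the server line in the kubeconfig file from

server: https://192.168.1.15:6443

to

server: https://192.168.1.20:6443

(6443 is the default K3s API port.)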

Add the remaining nodes

To add the remaining nodes we will want to join them with the following

k3sup join \
	--host=NODE2 \
	--server-user=USERNAME \
	--server-host=VIP \
	--user=USERNAME \
	--server \
	--k3s-extra-args="--disable servicelb,traefik --node-taint node-role.kubernetes.io/master=true:NoSchedule"

For the server host we use the VIP instead of the IP of node 1:

k3sup join \
	--host=NODE3 \
	--server-user=USERNAME \
	--server-host=VIP \
	--user=USERNAME \
	--server \
	--k3s-extra-args="--disable servicelb,traefik --node-taint node-role.kubernetes.io/master=true:NoSchedule"

Wait for the nodes to spin up and install K3s; at that point you should be able to see the nodes in the cluster:

kubectl get nodes -o wide

To check that the kube-vip pods are running:

kubectl get pods -o wide -n kube-system
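To narrow the output down to just kube-vip (assuming the pods carry kube-vip in their names, as the generated DaemonSet's pods do):

kubectl get pods -o wide -n kube-system | grep kube-vip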

NOTE: I ran into the following error when running this on Proxmox:

Fatal glibc error: CPU does not support x86-64-v2

Set the CPU type to “host” in the VM settings to resolve it.
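If you prefer the Proxmox CLI over the web UI, the same change can be made with qm (substitute your VM’s ID for 100; the VM needs a restart for the new CPU type to take effect):

qm set 100 --cpu host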
