Kubernetes installation on Rocky Linux 8.5 with Helm and Rancher

Sun, Feb 13, 2022 12-minute read

Prerequisites

Swap

Swap is bad when running containers, since it can expose data from one container to another, so kubernetes refuses to run while swap is enabled.

So turn it off by doing:

sudo swapoff -a

Edit /etc/fstab

And comment out any swap partition so it looks like:

#UUID=3d0751dd-102b-4941-9174-3c104ccc16c9 none                    swap    defaults        0 0

Then run

systemctl daemon-reload

And reboot.

Sometimes commenting out the entry in fstab is not enough - you may need to actually delete the swap partition as well - otherwise “magic” in the kernel seems to detect a swap partition and mount it anyway.

To delete the swap partition use fdisk or similar.
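
As a sketch, assuming the swap partition happens to be /dev/sda2 (check with lsblk or swapon --show first), removing it could look like this:

# find the swap device first
swapon --show
lsblk
sudo swapoff /dev/sda2     # example device - use your own
sudo fdisk /dev/sda        # then: d (delete the swap partition), w (write changes)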

NTP

It is important that all nodes in the cluster have the same view of the time to prevent issues with certificates etc.

So install an NTP client:

sudo dnf install -y chrony
sudo systemctl enable --now chronyd
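
You can verify that time synchronisation is actually working by asking chrony:

chronyc sources
chronyc tracking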

Container runtime

Running a kubernetes cluster requires a container runtime, since the pods run as containers inside whatever runtime you are using.

So make sure that the machine you want to install kubernetes on has a working docker (or other container runtime) installation.
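
If the machine does not have a runtime yet, a minimal sketch for installing Docker CE on Rocky Linux 8 via the upstream CentOS repository could look like the following (skip this if you already have a working runtime):

sudo dnf install -y dnf-plugins-core
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker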

iptables configuration

It is required that iptables can see bridged traffic - the first part is to ensure that the module br_netfilter is loaded.

This can be checked and enforced by running first

lsmod | grep br_netfilter

If that turns up empty, then you can enable it now, by running:

sudo modprobe br_netfilter

To ensure that it is also enabled after a reboot, it needs to be added to a module load configuration, by doing:

cat <<EOT | sudo tee /etc/modules-load.d/kubernetes.conf
br_netfilter
EOT

The kernel also needs to be told that iptables should see the bridged traffic - this is done by adding the following sysctl configuration:

cat <<EOT | sudo tee /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOT

Finally, tell sysctl to reload the configuration:

sudo sysctl --system
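
To confirm that the module is loaded and the settings are active, you can query them directly:

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables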

cgroup driver

You need to ensure that docker and kubernetes are using the same cgroup driver. Docker does not default to systemd (the kubelet's default), so that needs to be changed if you want kubernetes to use docker.

This means you need to add a docker config file.

Otherwise the cluster init will fail and when you check the logs you will see the following:

Feb 12 15:22:03 kube218.room.dom kubelet[7158]: E0212 15:22:03.476637    7158 server.go:302] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
Feb 12 15:22:03 kube218.room.dom systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 12 15:22:03 kube218.room.dom systemd[1]: kubelet.service: Failed with result 'exit-code'.

To add a docker configuration file you can do this:

cat<<EOT | sudo tee /etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "2001:db8:1::/64",
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOT

Restart docker after having added the above configuration file.
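
The restart plus a quick sanity check could look like this:

sudo systemctl restart docker
sudo docker info | grep -i 'cgroup driver'    # should now report: systemd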

Kubernetes basic installation

Package manager setup

The first thing we have to do is add the required repository. This is done by running the following:

cat <<EOT | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOT

When that is done you run:

sudo dnf update

Now the package manager is configured and you are ready to install kubernetes.

SELinux configuration

Before installing kubernetes you need to configure SELinux so it is compatible with the current kubernetes version. Basically you need to disable SELinux (set it to permissive mode); this is required so containers can access the host filesystem, which parts of kubernetes require.

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

If the above command fails - you can just create the file /etc/selinux/config - and add the following line to the file.

SELINUX=permissive
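
You can confirm the current SELinux mode at any time with getenforce - after the change (and after the reboot below) it should report Permissive:

getenforce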

Reboot - to make sure that SELinux has been properly disabled - and when the machine comes up again, it is time to install the required kubernetes software:

Kubernetes software

All prerequisites are accounted for and we can now install kubernetes.

sudo dnf install -y kubelet-1.22.6-0 kubeadm-1.22.6-0 kubectl-1.22.6-0 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet

Notice the --disableexcludes flag - it is important, since the repository was set up to exclude these packages, which means a normal dnf update does not touch them. To be able to install (or deliberately upgrade) them at all, --disableexcludes needs to be added to the dnf command line.

Also, the reason I use these specific versions of kubernetes is that Rancher 2.6 only supports Kubernetes up to 1.22.x.
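
If you want to double-check which versions actually got installed, you can ask the binaries directly:

kubelet --version
kubeadm version -o short
kubectl version --client --short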

Certificates

If you want to use the built-in certificates that kubernetes will generate, fine - but if you want to use your own CA, then you have to place

  • ca.crt
  • ca.key

Inside the folder:

/etc/kubernetes/pki
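
If you go that route, placing the files could look roughly like this - the source paths are only examples, point them at wherever your CA material lives:

sudo mkdir -p /etc/kubernetes/pki
sudo cp /path/to/my-ca/ca.crt /path/to/my-ca/ca.key /etc/kubernetes/pki/
sudo chmod 600 /etc/kubernetes/pki/ca.key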

If your CA is signed by a root CA that is not universally trusted, you need to add that root to the system trust store like this:

cat <<EOT | sudo tee /etc/pki/ca-trust/source/anchors/root.dom.pem
-----BEGIN CERTIFICATE-----
MIIF2zCCA8OgAwIBAgIBADANBgkqhkiG9w0BAQ0FADBSMRQwEgYDVQQDEwtST09U
LkRPTSBDQTELMAkGA1UEBhMCREsxETAPBgNVBAcTCFJvZWRvdnJlMQ0wCwYDVQQK
EwRST09UMQswCQYDVQQLEwJDQTAgFw0xODExMjIxODE5NTVaGA8yMTE4MTAyOTE4
MTk1NVowUjEUMBIGA1UEAxMLUk9PVC5ET00gQ0ExCzAJBgNVBAYTAkRLMREwDwYD
VQQHEwhSb2Vkb3ZyZTENMAsGA1UEChMEUk9PVDELMAkGA1UECxMCQ0EwggIiMA0G
CSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQDLxsQ73t+ZbDHIePaVc68GjMQdu+qn
bWtlh+pNs3GGi76cJLmWiu4QVfJzYP1HpQPy3dREVUAFYP3iFpMdLx7Invb9CuKU
i/UIP08j/AbgFSQ6weiPecz5f4IAgBEtHWb9EexBXuowVouKNf8qnKwnyL9SspoQ
46VXB8I8w5+12gL1bQPYenVPL0gci4k95IKzVgTV25pwfB3/mEJ8L+Wd9mhCcPMs
LJ3vjlR1z+x3xPqGhrMLxwPZfu1+Bx9psPx6Ds1PsXcGEptapzkhfvb9BYpvR331
++1tlXYO0oxr0c0l1gJ11wuEtT/dVrY6ZSKBcTUadpKLBMYeA5AKQKB47ALOCJDy
/P1Oo46UHu+yFk5/oFYsYhWIch7p/E1Txu9rezCRGUeeU6FOSgcCTv2njfCGlsuU
tKALDSmciw5DOVpNjGG/kuAg9KEkZF4YKXIlDdsFr10OV4g8fS2BwYyEbStL6t3J
X9up8Q7eo5gC32KHd9lF/mtZqE6rkimkjUpNousicuri/xWagZN1lTseGWbKpfM/
vKcnfLXOyhdW5tYmVQBO512/MQ/AN3jNlxQwMp0CWNNxkj89GRblW0+EoCTQYTjk
ftZ9CgvNjew7sJjjY58smUVm4E+GSV+y1gjj4PCrW8mkFrOFy5eTR8OczsgmYPCk
KLwip542E0UV7wIDAQABo4G5MIG2MB0GA1UdDgQWBBRkuHAp/Dxp3bbx0Bo9kEVq
E6VRdTB6BgNVHSMEczBxgBRkuHAp/Dxp3bbx0Bo9kEVqE6VRdaFWpFQwUjEUMBIG
A1UEAxMLUk9PVC5ET00gQ0ExCzAJBgNVBAYTAkRLMREwDwYDVQQHEwhSb2Vkb3Zy
ZTENMAsGA1UEChMEUk9PVDELMAkGA1UECxMCQ0GCAQAwDAYDVR0TBAUwAwEB/zAL
BgNVHQ8EBAMCAQYwDQYJKoZIhvcNAQENBQADggIBAITd2MPPDl+BUx/mqZwpvd7u
zlQR9QIYh4H5omBjaGqFGM67dVhNHLZ8p0s3tRqKJ+ykb/Rw3Z4m61ncSpvDO0So
HYWMyP4XH7biD8RAR3LbNqJdm9CF8idLNT4RJnnHxIQqeug9OaHOEKbfGVkUW3zM
WPafYNoGBFszczFlq3BlbNxtSUzS2beRGI17ykeYA/fATG87pbjG6sEzoDrJODC6
6P+bPmiCiNuOs+7LjVNPvpIFp1AuA5GyqXSeudil+KFZ/su2vCPpEhEi/MuoqWmr
PTpGv7VRDHnsp2rB6M1lUlQzRG037YpwAmQfM9dA6uzUXBjYhe83xKDa2skcsq+G
e702XbwPLo9AekOFW7Z+wgM30Ehls/XmITarWU5vqJzVUw3iCa9yY88U/XXUOl7S
Akt0U9F+d1D3wNfwC9zfi1VtixUEjmdBO5zi73W9O9ELZsggKMTRh1w7jwmWcOKy
73WtgNZ47ndaJXB2PcIDt1h+4GYCqVJ7JwguutiLTqVuw+7qUpcfSegAcxwZi0oi
4Yx/d0/RhfUN9FPnDujuUyEhN8Jo4BtJHoi5kNOE0fsAoxTLaKsnIWeRmYDMc+i4
I6RRCI7wdt3xyyDA99PGTPGdzcRy3E5XlaenLLtMzw/4LHtKJGl+5sul971dhGsg
kyvUIspq2v4q7n6QokuV
-----END CERTIFICATE-----
EOT

sudo update-ca-trust
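
To check that the certificate was actually added to the system trust store, something like this should list it (the grep pattern matches the CN of my CA - adjust it to yours):

trust list | grep -i "root.dom"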

Kubernetes configuration & installation

If you want to speed up the kubernetes init you can download the required images beforehand by running:

kubeadm config images pull

Installing with a configuration file

It is possible to do the cluster init via command line arguments, but if you want to do the more advanced stuff you need to create a configuration file - another advantage of a configuration file is that you can easily do repeated installations.

To get a template for a configuration file, you can just call

kubeadm config print init-defaults > ./cluster-init.yml

Then you can simply edit cluster-init.yml and pass it to kubeadm when you are ready to init the cluster.

Personally I have made a whole partition for my cluster/docker installations, mounted at /k8s.

I have then bind-mounted /k8s/docker as /var/lib/docker via my /etc/fstab, and also put the kubernetes configs into /k8s/config:

UUID=19013e1e-501c-4a19-8d8f-09d7f2161c6b /k8s                    ext4    defaults        0 0
/k8s/docker            /var/lib/docker                            none    nofail,bind     0 0
/k8s/config            /etc/kubernetes                            none    nofail,bind     0 0
/k8s/dockerconfig      /etc/docker                                none    nofail,bind     0 0

And my cluster init file then contains /k8s/etcd as the storage location for the etcd data.
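
If you also use a non-default etcd data directory like this, make sure it exists before running the init - the path is just my layout:

sudo mkdir -p /k8s/etcd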

My basic cluster init config file looks like this:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.4.1
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  imagePullPolicy: IfNotPresent
  name: kube1.root.dom
  taints: null
---
apiServer:
  timeoutForControlPlane: 8m
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: r00t
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /k8s/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: 1.22.6
controlPlaneEndpoint: "k8s.root.dom:6443"
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

Then with the configuration file at hand in my homedir I call:

#pull images before actually initing the cluster
sudo kubeadm config images pull
sudo kubeadm init --config ~/cluster-init.yml --upload-certs >> cluster-join.txt && cat ./cluster-join.txt

This pulls the images, inits the cluster, saves the output to cluster-join.txt and then prints that file, which contains information you need both now and for each node you want to join to the cluster.

When the cluster init has finished without errors, you should see the following:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s.root.dom:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:a32be5a71a6902bde72623c7ef99f8e5fe84a9f66c4de7e99535f4b54166259a

Then you can configure your local home dir to have the required bits for cluster interaction via:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
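
At this point you can do a quick sanity check of the control plane - note that the node will show as NotReady until a pod network (Calico, further down) has been installed:

kubectl get nodes
kubectl get pods -n kube-system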

If you, like me, have pre-populated your machines with the CA certs, then the above join command will fail, and you have to add the following argument to the join command:

--ignore-preflight-errors=FileAvailable--etc-kubernetes-pki-ca.crt

So the full command line becomes:

kubeadm join k8s.root.dom:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:a32be5a71a6902bde72623c7ef99f8e5fe84a9f66c4de7e99535f4b54166259a \
        --ignore-preflight-errors=FileAvailable--etc-kubernetes-pki-ca.crt

If at some point you want to join another control-plane node, you need to create a new token and certificate key - this takes several steps, which could look like the following:

# generate a new bootstrap token
token=$(kubeadm token generate)
# re-upload the control-plane certificates; the certificate key is printed after the last ':'
certkeyout=$(sudo kubeadm init phase upload-certs --upload-certs)
# split the output on ':' so that a[1] contains the certificate key (declare -p just shows the array for inspection)
readarray -td ':' a <<<"$certkeyout";declare -p a
certkey=${a[1]}
# create the token and print the full join command for an additional control-plane node
sudo kubeadm token create $token --print-join-command --certificate-key $certkey

Which will output something similar to:

kubeadm join k8s.root.dom:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:a32be5a71a6902bde72623c7ef99f8e5fe84a9f66c4de7e99535f4b54166259a \
        --control-plane --certificate-key 7b8e93cd3fab4f46f901d2f6526535128821f97308c236e127873f52e69fdf3e

Feature flags

Since I want to run bind inside my k8s cluster, I need to be able to expose the same port on different protocols. Kubernetes does not allow this by default; it is hidden behind a feature flag: MixedProtocolLBService

To enable it, SSH to all control-plane nodes and run the following:

sudo nano /etc/kubernetes/manifests/kube-apiserver.yaml

Find the line that starts with - command: and scroll all the way down to where the command list ends, then add a line with

    - --feature-gates=MixedProtocolLBService=true

so it ends up looking similar to the lines:

    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --feature-gates=MixedProtocolLBService=true

Save and exit and run the following:

sudo systemctl daemon-reload
sudo systemctl restart kubelet
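
The kube-apiserver runs as a static pod, so editing its manifest makes kubelet recreate it. If you want to confirm the flag made it in, something along these lines should show it:

kubectl -n kube-system get pods -l component=kube-apiserver -o yaml | grep feature-gates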

When the services have restarted, it should be possible to add a kubernetes service with type: LoadBalancer that exposes the same port using different protocols, similar to:

apiVersion: v1
kind: Service
metadata:
  name: bind            # example name - adjust to your deployment
spec:
  selector:
    app: bind           # example selector - must match your pod labels
  ports:
    - name: dns-udp
      protocol: UDP
      port: 53
      targetPort: 53
    - name: dns-tcp
      protocol: TCP
      port: 53
      targetPort: 53
  type: LoadBalancer

Kubernetes Networking

To make your pods capable of talking to each other, you need a networking component.

I will use Calico.

cd ~/
curl https://projectcalico.docs.tigera.io/manifests/calico.yaml -O
kubectl apply -f calico.yaml

If you need to customise Calico, modify calico.yaml before running the kubectl apply step above.
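
Once the Calico pods are up, the nodes should go Ready - you can watch the progress with:

kubectl get pods -n kube-system -w
kubectl get nodes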

Helm installation

sudo curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
sudo chmod 700 get_helm.sh
sudo ./get_helm.sh
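
A quick check that Helm ended up on the PATH and works:

helm version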

Rancher SSL prerequisites

For rancher we are using the Helm chart - which is why we installed Helm in the previous step.

Before we can do that we need to create some “secrets” that rancher will use for its certificates - so we use the same CA as we use for kubernetes.

This requires that we pre-create some data inside kubernetes that the rancher installation will detect and pick up, instead of it creating its own dummy CA certificates.

First we need to create the namespace that rancher will use. This is done simply by entering:

kubectl create namespace cattle-system

Then we create a secret with any root CA certificates, in case the CA you want to use is not a root certificate itself - like mine: I created an intermediate CA just for docker & kubernetes usage.

kubectl -n cattle-system create secret generic tls-ca --from-file=/etc/kubernetes/pki/cacerts.pem
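
If you do not already have a cacerts.pem, it simply needs to contain the PEM chain of your CA. As a sketch, assuming the intermediate ca.crt from /etc/kubernetes/pki and the root certificate added earlier, it could be built like this:

cat /etc/kubernetes/pki/ca.crt /etc/pki/ca-trust/source/anchors/root.dom.pem | sudo tee /etc/kubernetes/pki/cacerts.pem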

When that is done, we can finally create the last two secrets needed, using the same CA as we added to kubernetes.

#create ca cert for rancher
kubectl -n cattle-system create secret tls tls-rancher-internal-ca --cert=/etc/kubernetes/pki/ca.crt --key=/etc/kubernetes/pki/ca.key
kubectl -n cattle-system create secret tls tls-rancher --cert=/etc/kubernetes/pki/ca.crt --key=/etc/kubernetes/pki/ca.key

Rancher installation

Now we can install rancher - first by adding the helm repo

helm repo add rancher-latest https://releases.rancher.com/server-charts/latest

And then finally installing rancher itself from the helm repo we added just before.

helm install rancher rancher-latest/rancher --namespace cattle-system --set hostname=k8s.root.dom --set replicas=1 --set privateCA=true --set ingress.tls.source=secret

Notice the arguments I added:

--set privateCA=true 
--set ingress.tls.source=secret

This tells the rancher installation that it should expect certificates in a secret and that we are using custom CA certificates.

When the installation is completed you should get a message like:

NAME: rancher
LAST DEPLOYED: Sun Feb 13 20:31:20 2022
NAMESPACE: cattle-system
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Rancher Server has been installed.

NOTE: Rancher may take several minutes to fully initialize. Please standby while Certificates are being issued, Containers are started and the Ingress rule comes up.

Check out our docs at https://rancher.com/docs/

If you provided your own bootstrap password during installation, browse to https://k8s.root.dom to get started.

If this is the first time you installed Rancher, get started by running this command and clicking the URL it generates:

```
echo https://k8s.root.dom/dashboard/?setup=$(kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}')
```

To get just the bootstrap password on its own, run:

```
kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'
```

Happy Containering!
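
You can wait for the Rancher deployment to finish rolling out with:

kubectl -n cattle-system rollout status deploy/rancher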

When rancher is installed you cannot access it unless you have a working loadbalancer in your cluster.

So what you can do instead is make rancher reachable directly on one of the nodes in the cluster.

For that to be possible we have to change the rancher service inside kubernetes from using a ClusterIP to a NodePort.

kubectl edit service rancher -n cattle-system

In the editor scroll all the way down to the bottom until you see

  type: ClusterIP

This you change to:

  type: NodePort

Press Escape to get the vi command prompt and enter :wq

Alternatively, if you don't like the vi editor, you can instead export the current service to YAML and then edit it.

kubectl get service rancher -n cattle-system -o yaml > rancher.yml

Edit rancher.yml and change ClusterIP -> NodePort.

Then you apply the YAML, which tells kubernetes to update the service accordingly.

kubectl apply -f ./rancher.yml

When that has been done you execute

kubectl get service -n cattle-system

And you should see something similar to:

NAME              TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
rancher           NodePort    10.97.206.200    <none>        80:31037/TCP,443:31926/TCP   7m50s
rancher-webhook   ClusterIP   10.109.225.71    <none>        443/TCP                      5m52s
webhook-service   ClusterIP   10.106.221.248   <none>        443/TCP                      5m52s

In the PORT(S) column it now shows both the internal service port and the external node port.

The node ports are the high-numbered ones (by default in the 30000-32767 range), and they are exposed on every node in the cluster.

You can then open up a browser and go to https://k8s.root.dom:31926 (the HTTPS node port from the output above).

Getting the bootstrap password to log on is done by running:

kubectl get secret --namespace cattle-system bootstrap-secret -o go-template='{{.data.bootstrapPassword|base64decode}}{{ "\n" }}'

After you have logged in, you can use Rancher to manage the kubernetes cluster.

In my opinion Rancher is much easier to use than the Kubernetes Dashboard - and it is most certainly also prettier, and definitely funnier.

Just look at this little nugget inside the user preferences.

Rancher User Preferences Yaml Editor

And I must agree - neither VIM nor Emacs is very easy to figure out :-)