Deploying Kubernetes using lightweight runtimes on CentOS Stream 8

Posted by Dale Stirling on 24 February 2022

containers, kubernetes, Centos, Linux

Thanks to the introduction of the Kubernetes Container Runtime Interface (CRI), there are now additional runtimes available for running containers within the Kubernetes orchestration ecosystem.

In this article, we will look at how we can use a lightweight runtime layer to reduce the resource overhead of running containers.

The tools we are going to utilise to meet this objective are the Cri-o CRI implementation and the Crun OCI runtime.

Cri-o is a container runtime interface implementation that is purpose-built to run containers in Kubernetes and does not carry the extra features of other CRI implementations such as containerd. It is able to stay lean by handing off many tasks to libpod and other tools from the Container Tools project.

Crun is a lightweight and performant OCI runtime developed in C. Like podman, buildah and others, it is part of the Containers project. It is able to run containers under tighter resource limits than runc, as illustrated in the example below:

# podman --runtime /usr/bin/runc run --rm --memory 4M fedora echo it works

Error: container_linux.go:346: starting container process caused "process_linux.go:327: getting pipe fds for pid 13859 caused \"readlink /proc/13859/fd/0: no such file or directory\"": OCI runtime command not found error

# podman --runtime /usr/bin/crun run --rm --memory 4M fedora echo it works

it works

 

In the example above, we see that when running the same echo command within the fedora container image, the container can execute with only 4 megabytes of memory using crun but raises an error when executed using runc.

This Kubernetes topology can be deployed to CentOS Stream 8 relatively quickly using RPM packages, making ongoing management of the Kubernetes platform easier as new versions are released.

To begin, start with a clean install of CentOS Stream 8 and, during installation, enable the Container Tools AppStream module so that these tools are installed through the installer GUI.

Once this is done, we can move on with preparing the Centos OS to run the Kubernetes stack.

First, we disable the swap partition, as the kubelet expects swap to be off so that performance stays predictable if the platform becomes oversubscribed for CPU or memory.

sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab

 

Next, we need to ensure that the correct modules are loaded into the kernel. These modules allow the CentOS kernel to manage traffic as required by Kubernetes.

sudo modprobe br_netfilter
sudo modprobe overlay
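To confirm the modules have loaded, you can check the kernel's module list:

lsmod | grep -E 'br_netfilter|overlay'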

 

Then these modules need to be loaded at every boot. Files under /etc/modules-load.d that name the modules ensure they are loaded automatically:

echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
echo overlay | sudo tee /etc/modules-load.d/overlay.conf

 

Many of the tutorials I read while looking into this called for the local firewall on the host to be disabled. This is not necessary: with a few simple commands you can add the required firewalld rules. So that traffic can flow on the Service and Pod networks, we need to enable masquerading in firewalld.

sudo firewall-cmd --add-masquerade --permanent
sudo firewall-cmd --reload

 

While working on the firewall we can add the ports required for the cluster to operate with the service layer. Make sure you review your existing firewalld configuration to ensure that this will not impact the current state of your host. These rules will enable external nodes to connect to cluster services and allow for internal connectivity to the service layer.

sudo firewall-cmd --zone=public --permanent --add-port={6443,2379,2380,10250,10251,10252}/tcp
sudo firewall-cmd --zone=public --permanent --add-rich-rule 'rule family=ipv4 source address=172.17.0.0/16 accept'
sudo firewall-cmd --reload
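To review what is already configured in the zone, before or after applying the rules, you can list the active configuration:

sudo firewall-cmd --zone=public --list-all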

 

Next, we use sysctl to enable the kernel settings that allow packet forwarding and let bridged packets traverse iptables. Both of these are required for the Container Network Interface (CNI) plugins to operate. The settings are stored in a public GitHub Gist to make the setup simpler.

 

sudo curl -o /etc/sysctl.d/99-kubernetes-cri.conf https://gist.githubusercontent.com/dalethestirling/316eae008bb123b783f90cb5ef8633b0/raw/56b34f93fb0528aa6818141fbd3e0f5f36db39b1/99-kubernetes-cri.conf
sudo sysctl --system
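For reference, a sysctl file for a Kubernetes CRI setup normally contains settings along these lines (the exact contents of the Gist may differ):

net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1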

 

Now the host is prepared to deploy Kubernetes. To do this we first need to deploy our container runtime and CRI.

To deploy Cri-o we will be using the EPEL repositories for CentOS Stream 8. I chose this path over the steps in the Cri-o docs, as the repository referenced there did not work at the time of writing. Additionally, the packages in the EPEL repository are built against CentOS. Cri-o is distributed as an AppStream module, so this will also need to be enabled before the packages can be installed.

sudo dnf install epel-release
sudo dnf module enable cri-o

 

Now we can install Cri-o through package management, allowing easy updates as new versions are released.

sudo dnf install cri-o
sudo systemctl daemon-reload
sudo systemctl enable crio --now

 

Now that we have the CRI installed, we need to configure Cri-o to set the cgroup manager.

 

sudo mkdir -p /etc/crio/crio.conf.d
sudo curl -o /etc/crio/crio.conf.d/02-cgroup-manager.conf https://gist.githubusercontent.com/dalethestirling/316eae008bb123b783f90cb5ef8633b0/raw/a0881b89a86420b874d6716205d669c0dbdd63cd/02-cgroup-manager.conf
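This drop-in tells Cri-o which cgroup manager to use; as a sketch, it will contain something like the following (the Gist may differ in detail):

[crio.runtime]
conmon_cgroup = "system.slice"
cgroup_manager = "systemd"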

 

And restart the service to have Cri-o accept the new configuration that was created.

sudo systemctl restart crio

 

Currently Cri-o is running containers with runc, as this is the default runtime installed when deploying components from the Container Tools project in CentOS Stream.

To complete our lightweight toolchain we can simply install Crun and configure Cri-o to use it.

sudo dnf install crun

 

Adding the config to Cri-o can be done using the crio.conf.d directory we created earlier. After this config is added Cri-o will need a restart.

sudo curl -o /etc/crio/crio.conf.d/03-runtime-crun.conf https://gist.githubusercontent.com/dalethestirling/316eae008bb123b783f90cb5ef8633b0/raw/8b019a4e3c607a09d4d80e5989ac56657a72fd08/03-runtime-crun.conf
sudo systemctl restart crio
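For reference, the drop-in that points Cri-o at crun will typically look something like this sketch (the Gist contents may differ):

[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"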

 

Now that we have a runtime, we can install Kubernetes to manage containers on top of it. The first step is to add the upstream Kubernetes repository that holds the vanilla packages.

sudo curl -o /etc/yum.repos.d/kubernetes.repo https://gist.githubusercontent.com/dalethestirling/316eae008bb123b783f90cb5ef8633b0/raw/56b34f93fb0528aa6818141fbd3e0f5f36db39b1/kubernetes.repo
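At the time of writing the upstream repository definition looked roughly like this, and the Gist follows the same pattern (treat it as a sketch rather than the exact file):

[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl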

 

This repository definition utilises the exclude parameter to ensure that the Kubernetes RPMs are not updated without your knowledge, so that updates to Kubernetes can be planned and the impact on workloads minimised.

To install the Kubernetes RPMs we need to override the exclude with the --disableexcludes option for the Kubernetes repository.

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

 

Now that the packages are in place we need to enable and start the kubelet service.

sudo systemctl enable --now kubelet
sudo systemctl restart kubelet

 

As Cri-o has been configured to use the systemd cgroup manager, we need to align the kubelet so that it also uses systemd as its cgroup driver. This can be done by injecting the following config.

sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo curl -o /etc/systemd/system/kubelet.service.d/11-cgroups.conf https://gist.githubusercontent.com/dalethestirling/316eae008bb123b783f90cb5ef8633b0/raw/1e1ed6922c7f01db971ab21508ad28aba27e5933/11-cgroups.conf
sudo systemctl daemon-reload && sudo systemctl restart kubelet
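The drop-in simply passes the cgroup driver flag to the kubelet; a minimal version would look something like this (the exact Gist contents may differ):

[Service]
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"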

 

This adds the required options so that the kubelet interacts with the correct CPU and memory cgroup interfaces.

Once the kubelet is running, we can use kubeadm to pull all of the cluster images so that when we initialise the cluster it can deploy the correct components.

sudo kubeadm config images pull

 

This step can take some time depending on connectivity.

Once you have pulled the images, you can stand up the cluster by using kubeadm to initialise it.

curl -o /tmp/kubeadm-config.yaml https://gist.githubusercontent.com/dalethestirling/316eae008bb123b783f90cb5ef8633b0/raw/08a76210c9cc5e85e005bf812d731a4fcec24932/kubeadm-config.yaml
sudo kubeadm init --config /tmp/kubeadm-config.yaml
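The kubeadm-config.yaml points kubeadm at the Cri-o socket and aligns the kubelet with the systemd cgroup driver; a minimal sketch of such a file (the pod subnet shown here is an assumption, and the Gist may differ):

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd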

 

This will deploy all of the containers that make up the control plane and generate the required keys to interact with the API server.
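kubeadm prints the commands for giving your user access to the cluster at the end of its output; they are typically:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config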

Now we have a functioning Kubernetes cluster, and you should be able to use kubectl to interact with it. As we are building a single-node cluster, we need to label the node as a worker as well as a master.

kubectl label node localhost.localdomain node-role.kubernetes.io/worker=worker

 

As the cluster is a single node, it carries all the roles. Master hosts are tainted NoSchedule by default, which means no workload Pods can be scheduled. This taint can be removed with the following:

kubectl taint node localhost.localdomain node-role.kubernetes.io/master:NoSchedule-

 

In my case, the workloads on the node then proceeded to reschedule, which made a mess for several minutes and intermittently impacted the ability to interact with the API server. This sorted itself out and the cluster became stable.

You should now have a working cluster that you can manage with kubectl and deploy your container workloads to.
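A quick check confirms the node is Ready and the control plane Pods are running:

kubectl get nodes
kubectl get pods -A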

The use of a lightweight container runtime and tooling allows for greater utilisation of the underlying compute in the container cluster you have created, meaning more containers can run on the same hardware.

All the config files discussed in this article are available in this GitHub Gist to speed up the deployment of your own lightweight cluster: https://gist.github.com/dalethestirling/316eae008bb123b783f90cb5ef8633b0

 

 
