Creating Kubernetes Cluster With CRI-O Container Runtime

Exploring CRI-O Container Runtime and how to set up a Kubernetes Cluster with it.

The Container Runtime Interface (CRI) is one of the important parts of a Kubernetes cluster. It is a plugin interface that allows the kubelet to use different container runtimes. Recently, the CRI-O container runtime was announced as a CNCF Graduated project, so I thought of creating a hands-on lab on CRI-O, covering how to set up a single-node Kubernetes cluster with Kubeadm and CRI-O.

What is CRI-O?

CRI-O is a lightweight container runtime for Kubernetes. It is an implementation of the Kubernetes CRI that uses Open Container Initiative (OCI)-compatible runtimes for running pods. It supports runc and Kata Containers as container runtimes, but any OCI-compatible runtime can be integrated.

It is an open-source, community-driven project that supports OCI-based container registries. It is maintained by contributors from Red Hat, Intel, and other companies. It also ships with a monitoring program known as conmon. Conmon is an OCI container runtime monitor that handles the communication between CRI-O and runc for a single container.

The figure below shows how CRI-O works within a Kubernetes cluster to create containers in a pod.

Figure 1: CRI-O in Kubernetes Cluster

Read more about the architecture of CRI-O here. The networking of the pod is set up through CNI, and CRI-O can be used with any CNI plugin.

Now, let’s see how to set up a Kubernetes cluster with Kubeadm and CRI-O as the container runtime.

Kubernetes Cluster With Kubeadm and CRI-O

In this section, we will see how to set up a single-node Kubernetes cluster with Kubeadm and CRI-O as the container runtime. For this, I have used an Ubuntu 22.04 VM with 2 CPUs and 2 GB of memory (the minimum requirement for Kubeadm).

Install Kubeadm, Kubelet, and Kubectl

  • First, disable swap so that the kubelet works properly.
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
swapoff -a
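The sed expression above comments out every fstab line containing a swap entry, while swapoff -a disables swap immediately. A quick dry run on a throwaway copy (the sample fstab entries below are made up) shows the effect:

```shell
# Dry run of the swap-disabling sed on a throwaway file; no system files are touched.
# The sample fstab entries are hypothetical.
printf '/dev/sda1 / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > /tmp/fstab-demo
sed -i '/ swap / s/^\(.*\)$/#\1/g' /tmp/fstab-demo
cat /tmp/fstab-demo
# The swap line is now commented out; the root filesystem line is untouched.
```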
  • Install the prerequisite packages and add the Kubernetes apt repository. (Note: creating /etc/apt/keyrings first ensures the gpg command below has a place to write the key.)
apt-get update && apt-get install -y apt-transport-https ca-certificates curl
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-archive-keyring.gpg

echo "deb [signed-by=/etc/apt/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
  • To install a Kubernetes cluster of a specific version, specify the version like below.
apt-get update && apt-get install -y kubelet=1.26.3-00 kubeadm=1.26.3-00 kubectl=1.26.3-00

Here, I will be setting up a Kubernetes cluster of version 1.26.3.

  • Check the version of the CLI tools.
kubeadm version
kubectl version
kubelet --version
  • Put a hold on these three packages so that they do not get upgraded when we update the system.
apt-mark hold kubelet kubeadm kubectl

Install CRI-O

Complete the prerequisites for installing any container runtime.

  • Enable br_netfilter and overlay modules and make iptables see bridged traffic.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
# Apply sysctl params without reboot
sudo sysctl --system
  • Verify the modules are loaded with the following commands.
lsmod | grep br_netfilter
lsmod | grep overlay
  • Check that the below-mentioned variables are set to 1, which lets iptables see bridged traffic.
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
  • Install CRI-O by setting the OS and VERSION variables. Set OS according to your system and VERSION according to the Kubernetes version you wish to set up; it should match the Kubeadm/kubelet version.
OS=xUbuntu_22.04
VERSION=1.26
echo "deb [signed-by=/usr/share/keyrings/libcontainers-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list

echo "deb [signed-by=/usr/share/keyrings/libcontainers-crio-archive-keyring.gpg] https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/ /" > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.list
mkdir -p /usr/share/keyrings

curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-archive-keyring.gpg

curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/Release.key | gpg --dearmor -o /usr/share/keyrings/libcontainers-crio-archive-keyring.gpg
apt-get update && apt-get install cri-o cri-o-runc cri-tools -y
  • Start and enable the CRI-O service and check its status.
systemctl start crio.service
systemctl enable crio.service
systemctl status crio.service
Figure 2: CRI-O Service Status
  • One can also see the runtime info with the following command.
crictl info
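Under the hood, CRI-O reads its configuration from /etc/crio/crio.conf (and drop-in files under /etc/crio/crio.conf.d/). A rough sketch of the runtime-related settings is shown below; the values are illustrative defaults, not a prescription:

```toml
# /etc/crio/crio.conf (illustrative fragment)
[crio.runtime]
# OCI runtime used to run containers (Kata Containers can be registered as well)
default_runtime = "runc"
# Path to the conmon binary that monitors each container
conmon = "/usr/bin/conmon"
# Cgroup manager; the kubelet's cgroup driver should match this
cgroup_manager = "systemd"
```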

Set Cluster With Kubeadm

  • Pull the images for Kubernetes version 1.26.3.
kubeadm config images pull --kubernetes-version v1.26.3
kubeadm config images list
  • Create the cluster control-plane node.
kubeadm init --kubernetes-version v1.26.3
Figure 3: Kubernetes Cluster Control-Plane Node Bootstrapping
  • Create the config file in the ~/.kube directory to access the Kubernetes cluster.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
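Alternatively, when working as the root user, kubeadm's own output suggests pointing kubectl at the admin kubeconfig directly instead of copying it:

```shell
# As root, kubectl can use the admin kubeconfig directly instead of a local copy
export KUBECONFIG=/etc/kubernetes/admin.conf
```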
  • Remove the taint from the control-plane node.
kubectl taint nodes --all node-role.kubernetes.io/control-plane-
  • Check the cluster nodes and verify the container runtime is CRI-O.
kubectl get nodes -o wide
Figure 4: Kubernetes Single Node Cluster With CRI-O Container Runtime

With that, the process of creating a single-node cluster is complete. Now let's install a CNI plugin, create a pod, and expose it via a Service. We will also verify that the pod is running with the CRI-O container runtime.

Install CNI

  • I have used Cilium as the CNI plugin and installed it with Helm.
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm repo add cilium https://helm.cilium.io/
helm install cilium cilium/cilium --version 1.13.4 --namespace kube-system
Figure 5: Cilium CNI Installation
  • Wait till the Cilium pods get into the Running state.
kubectl get pods -n kube-system
  • Create a pod with nginx as its image.
kubectl run nginx --image=nginx
kubectl get pods
  • Verify CRI-O as container runtime is used in pod creation.
kubectl describe pod nginx | grep -i container
  • Expose the pod with the NodePort service.
kubectl apply -f nginx-svc.yaml
kubectl get svc
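The nginx-svc.yaml file applied above is not shown in the article; a minimal version consistent with the rest of the walkthrough could look like the sketch below. The service name, NodePort 30000, and the run=nginx selector (the label that kubectl run applies) are assumptions on my part:

```shell
# Sketch of the nginx-svc.yaml applied above; the service name and port values
# are assumptions based on the walkthrough (NodePort 30000, pod label run=nginx)
cat <<EOF > nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30000
EOF
```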
  • Access the application in the browser on port 30000.
Figure 6: Nginx Application on Browser

Alternatively, access the application through the cluster node IP (shown in the kubectl get nodes -o wide output) on NodePort 30000.

curl http://10.0.0.102:30000
Figure 7: Nginx Application on Terminal

Yay!! A single-node Kubernetes cluster of version 1.26.3 is ready with CRI-O as the container runtime.
