Introduction To Kata Containers

Learn how to provide more isolation and security to containers.

Container technology has been widely adopted for packaging applications so that they can be moved easily across platforms and infrastructures. Today, several containerization platforms that follow the Open Container Initiative (OCI) standards have been developed, such as containerd.

Containers achieve this portability by sharing the host operating system's kernel and resources. This sharing, however, can let one container over-consume resources and leave other containers waiting for them.

Sharing the same kernel is also considered less secure and provides weaker isolation between containers. To address this, Kata Containers, an open-source project managed by the Open Infrastructure Foundation, has been introduced.

In this hands-on lab, we will take a deeper dive into Kata Containers, based on what I learned while working through the Kata Containers documentation, which I personally found cluttered.

Lab Setup

Install the base OS (Ubuntu) and the necessary developer tools such as Git, Vim, and wget.
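For example, on a fresh Ubuntu machine these tools can be installed with apt (the package names below are the usual ones; adjust them to your environment):

sudo apt-get update
sudo apt-get install -y git vim wget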

About Kata Containers

Kata Containers aims to build a secure, OCI-compatible container runtime that enhances the security and isolation of container workloads by putting each one of them in a lightweight virtual machine, using hardware virtualization. Every virtual machine runs its own kernel.
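Since Kata Containers relies on hardware virtualization, it is worth confirming that the host CPU exposes the vmx (Intel) or svm (AMD) flags; a non-zero count means virtualization extensions are available. This check assumes a bare-metal Linux host; on cloud instances, nested virtualization may need to be enabled separately.

grep -Ec '(vmx|svm)' /proc/cpuinfo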

Brief History

Initially, Intel launched a project called Clear Containers with the goal of addressing CoW (Copy on Write) security concerns in containers through virtualization. Later on, it was merged with Hyper.sh's runV project, and the combined project is what we now refer to as Kata Containers.

Features of Kata Containers

Following are the features of Kata Containers:

  • Security: Each container runs with a dedicated and isolated kernel. Kata Containers also supports multiple hypervisors such as QEMU, Cloud Hypervisor, and Firecracker, and can easily be integrated with containerd.
  • Compatibility: It works seamlessly with Docker and Kubernetes by providing kata-runtime as a container runtime.
  • Performance: It delivers performance consistent with other Linux containers, with increased isolation. It also supports various architectures such as AMD64, ARM, IBM p-series, and IBM z-series.
  • Simplicity: There is no need to nest containers inside VMs or to compromise on container speed.

How Are Kata Containers Different from Traditional Containers?

Traditional containers use runC as the container runtime, which relies on kernel features such as cgroups and namespaces to provide isolation while sharing the host kernel, as shown in Figure 1.1. Kata Containers, on the other hand, make containers more isolated by running each of them in its own lightweight VM with the help of hardware virtualization, as shown in Figure 1.2.

Figure 1: Traditional Containers vs Kata Containers (Image credits: https://katacontainers.io/learn/)

Working of Kata Containers with Kubernetes

When a Kubernetes cluster is set up with a Container Runtime Interface (CRI) implementation such as containerd or CRI-O (a high-level runtime), a container runtime shim also gets installed. The shim sits between the CRI runtime (containerd/CRI-O) and the low-level container runtime runC (the default runtime) so that the two can communicate smoothly, and it is this low-level runtime that actually runs the containers in the pod.
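For reference, this wiring is visible in containerd's configuration; the default runC handler typically appears in /etc/containerd/config.toml roughly like this (an excerpt for containerd 1.x; exact plugin paths can differ between versions, so treat it as a sketch):

# /etc/containerd/config.toml (excerpt)
[plugins."io.containerd.grpc.v1.cri".containerd]
  default_runtime_name = "runc"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    # shim v2 for runC: containerd talks to this shim, the shim launches runC
    runtime_type = "io.containerd.runc.v2"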

Kata Containers can also be installed on a Kubernetes cluster with kata-runtime for running containers in a lightweight VM; containerd or CRI-O is required for this. A different shim compatible with Kata Containers is required, containerd-shim-kata-v2, which acts as a bridge between containerd and kata-runtime, the runtime provided by Kata Containers to run containers with an isolated kernel and namespaces.
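When kata-deploy (covered below) configures a node, it adds runtime entries of this kind to the same containerd configuration. The snippet below is only an illustrative sketch; the exact handler names and runtime_type strings depend on the Kata Containers and containerd versions installed:

# /etc/containerd/config.toml (excerpt, illustrative)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu]
  # the kata-qemu handler resolves to the containerd-shim-kata-v2 binary
  runtime_type = "io.containerd.kata-qemu.v2"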

Figure 2: Running Containers with runC and kata-runtime

There are multiple ways to install Kata Containers; we will use one of them to configure it in a Kubernetes cluster, as discussed in the next section.

Lab with Kata Containers

In this section, we will see how to install Kata Containers in a Kubernetes cluster and how to run a pod using the kata-runtime.

Prerequisites

A running Kubernetes cluster with containerd or CRI-O as the container runtime, on nodes that support hardware virtualization.

Installation of Kata Containers

There are several ways to install Kata Containers, but the preferred way to install it in a cluster is via kata-deploy. kata-deploy runs as a DaemonSet pod inside the kube-system namespace and installs all the binaries and artifacts needed to run Kata Containers on the nodes of a running Kubernetes cluster.

  • Create and provision the RBAC roles required by the kata-deploy pod.
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
  • Then create a kata-deploy pod by deploying its stable version.
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
  • Check the kata-deploy pod status inside the kube-system namespace.
kubectl get pods -n kube-system
kubectl -n kube-system wait --timeout=10m --for=condition=Ready -l name=kata-deploy pod
  • Check the Kata Containers labels on the node.
kubectl get nodes --show-labels | grep kata
  • After this, configure a runtime class for Kata Containers by creating a Kubernetes resource of kind RuntimeClass.
# runtimeclass.yaml
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: kata-qemu
handler: kata-qemu
overhead:
  podFixed:
    memory: "160Mi"
    cpu: "250m"
scheduling:
  nodeSelector:
    katacontainers.io/kata-runtime: "true"

kubectl apply -f runtimeclass.yaml

Currently, we are creating a runtime class with the handler kata-qemu (line 6), which runs the Kata containers in a QEMU VM. Other handlers can be used depending on the platform: kata-clh is used with Cloud Hypervisor and kata-fc is used with Firecracker.
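For instance, if the nodes were prepared for Cloud Hypervisor instead of QEMU, the runtime class would look almost identical, only swapping the handler (an illustrative sketch; the file name is made up and the overhead values would need tuning for that hypervisor):

# runtimeclass-clh.yaml (illustrative)
kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: kata-clh
handler: kata-clh
scheduling:
  nodeSelector:
    katacontainers.io/kata-runtime: "true"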

In the runtime class, a pod overhead (line 7) has also been defined; it declares the additional memory and CPU that each Kata pod consumes on top of its containers' own requests (for the VM and guest kernel), so that the scheduler can account for these resources.
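As an illustration with hypothetical numbers: if a container in the pod requests 256Mi of memory and 500m of CPU, the scheduler reserves 256Mi + 160Mi = 416Mi and 500m + 250m = 750m for the pod, so the cost of the VM and guest kernel is accounted for without being charged to the container itself.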

  • See more information about the kata-qemu runtime class with the following commands.
kubectl get runtimeclass
kubectl describe runtimeclass kata-qemu
  • Test the runtime class by creating an Nginx pod that uses it.
# nginx-kata.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata-qemu
  containers:
  - name: nginx
    image: teamcloudyuga/nginx:latest

Here, the kata-qemu runtime class has been specified inside the spec attribute as runtimeClassName: kata-qemu (line 7), which tells Kubernetes to use kata-runtime for running this pod's containers.

kubectl apply -f nginx-kata.yaml
kubectl get pods
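
As an optional sanity check (not part of the original steps), compare the kernel version inside the Kata pod with the node's kernel; since every Kata pod boots its own guest kernel, the two normally differ:

# kernel inside the Kata pod (guest VM)
kubectl exec nginx-kata -- uname -r
# kernel on the node, for comparison
uname -r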

What Next?

I would highly recommend checking out the hands-on lab from Pradipta Banerjee, in which he explores building container images inside a container with Kata Containers for secure builds.

You can also check out the Kata Containers documentation for more details.

Conclusion

In this hands-on lab, we learned what Kata Containers is and what problems it solves, installed it on a Kubernetes cluster, and ran a pod with it.
