Revisiting Container Image Builds Inside a Container

1 March 2022
kata

More Flexibility and Improved Isolation Using Kata Containers

A commonly used approach to building container images as part of a DevOps pipeline is a builder image running in a Kubernetes cluster. The builder image leverages docker, kaniko, or buildah.

With Kata Containers maturing into a production-ready container runtime and seeing increased adoption, it offers great potential for improving the hosted build and development environment approach in the following two areas:

  • Better handling of the noisy neighbor problem: building a container image inside one container shouldn't affect any other container workloads running on the same host.
  • Improved isolation: handling unique and privileged requirements (e.g., docker-in-docker requiring a privileged container) without impacting current host settings or policies.

Think peace of mind for Operations and increased flexibility for developers at the same time. You continue to use your favorite builder or development images (based on docker, buildah, or kaniko) but with the added benefit of running them as Kata containers.

Isolated Build and Development Environments with Kata Containers
Figure 1: Isolated Build and Development Environments with Kata Containers

So if you are currently hosting build and development environments on Kubernetes, or are planning to host one in the future, here are some examples to help you get started with exploring Kata Containers.

Installation of Kata Containers in a Kubernetes Cluster

To get started with Kata Containers in a Kubernetes cluster there are some prerequisites to complete.

Prerequisites

Installation of Kata Containers

There are several ways to install Kata Containers, but the preferred way to install it in a cluster is via kata-deploy. kata-deploy runs as a DaemonSet inside the kube-system namespace and installs all the binaries and artifacts needed to run Kata Containers on a running Kubernetes cluster.

  • First, create the RBAC roles for the kata-deploy pod:
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
  • Then create the kata-deploy pod in the kube-system namespace:
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
  • Check the kata-deploy pod status:
kubectl get pods -n kube-system
kubectl -n kube-system wait --timeout=10m --for=condition=Ready -l name=kata-deploy pod
  • Check the Kata Containers labels on the nodes:
kubectl get nodes --show-labels | grep kata
  • After this, configure a runtime class for Kata Containers by creating a Kubernetes resource of kind: RuntimeClass.
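The embedded runtimeclass.yaml did not render here; a minimal sketch, consistent with the kata-qemu handler that kata-deploy configures, would look like this:

```yaml
# runtimeclass.yaml — maps the kata-qemu handler installed by kata-deploy
# to a RuntimeClass that pods can reference via runtimeClassName
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: kata-qemu
handler: kata-qemu
```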
kubectl apply -f runtimeclass.yaml
  • See more information about the kata-qemu runtime class:
kubectl get runtimeclass
kubectl describe runtimeclass kata-qemu
  • Then test the runtime by creating an Nginx pod that uses the above runtime class named kata-qemu:
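The embedded nginx-kata.yaml did not render here; a minimal sketch of a pod that selects the kata-qemu runtime class would look like this:

```yaml
# nginx-kata.yaml — a standard Nginx pod, but scheduled with the
# Kata Containers runtime via runtimeClassName
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata-qemu
  containers:
    - name: nginx
      image: nginx
```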
kubectl apply -f nginx-kata.yaml
kubectl get pods

Examples with Kata Containers

The examples below demonstrate the basic building blocks of using Kata Containers for container image building; they are not a complete solution for hosted development environments on Kubernetes.

Using buildah builder image with vfs

  • Create a buildah container with vfs as the storage driver in a namespace called sandboxed-builds:
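The embedded buildah.yaml did not render here. A minimal sketch, assuming the upstream quay.io/buildah/stable image and the kata-qemu runtime class (the sleep command simply keeps the pod alive for interactive builds):

```yaml
# buildah.yaml — long-running buildah pod for vfs-based builds,
# isolated inside a Kata Containers VM
apiVersion: v1
kind: Pod
metadata:
  name: buildah
  namespace: sandboxed-builds
spec:
  runtimeClassName: kata-qemu
  containers:
    - name: buildah
      image: quay.io/buildah/stable
      command: ["sleep", "infinity"]
```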
kubectl create ns sandboxed-builds
kubectl apply -f buildah.yaml
kubectl get pods -n sandboxed-builds
  • Create an example Dockerfile to try out the build step inside the container:
kubectl exec buildah -n sandboxed-builds -- mkdir /build
cat Dockerfile
kubectl cp Dockerfile sandboxed-builds/buildah:/build
  • Build the container image with buildah
kubectl exec buildah -n sandboxed-builds -- buildah bud --storage-driver vfs -f /build/Dockerfile .

Using kaniko builder image

  • Create a kaniko container in a namespace called sandboxed-builds:
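The embedded kaniko.yaml did not render here. A minimal sketch, assuming the standard kaniko executor image, a git-hosted build context (the repository URL is a placeholder you must supply), and the docker-regcred secret created below for pushing to the registry:

```yaml
# kaniko.yaml — one-shot kaniko build pod running inside a Kata VM;
# pushes the built image using credentials from the docker-regcred secret
apiVersion: v1
kind: Pod
metadata:
  name: kaniko
  namespace: sandboxed-builds
spec:
  runtimeClassName: kata-qemu
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/<YOUR_REPO>.git   # placeholder build context
        - --destination=<YOUR_OWN_REGISTRY_PATH>:latest
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker
  volumes:
    - name: docker-config
      secret:
        secretName: docker-regcred
        items:
          - key: .dockerconfigjson
            path: config.json
```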

Create a Kubernetes secret with your Docker credentials to push the image to your Docker account, and replace <YOUR_OWN_REGISTRY_PATH> with your own Docker registry path.

kubectl create secret docker-registry docker-regcred -n sandboxed-builds \
  --docker-username=<YOUR_USERNAME> \
  --docker-password=<YOUR_PASSWORD> \
  --docker-email=<YOUR_EMAILID> \
  --docker-server=https://index.docker.io/v1/
kubectl apply -f kaniko.yaml
kubectl get pods -n sandboxed-builds
kubectl logs kaniko -n sandboxed-builds

Using buildah builder image with emptyDir or PersistentVolume and fuse-overlayfs

Using fuse-overlayfs results in better performance compared to vfs and is preferred.

fuse-overlayfs requires the /dev/fuse device to be present.

  • Create a buildah container with an emptyDir volume in a namespace called sandboxed-builds:
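The embedded buildah-emptydir.yaml did not render here. A minimal sketch, assuming the quay.io/buildah/stable image with an emptyDir mounted at the container-storage path; the privileged flag exposes /dev/fuse for fuse-overlayfs, and with Kata its scope is confined to the pod's VM rather than the host:

```yaml
# buildah-emptydir.yaml — buildah pod using fuse-overlayfs, with
# container storage backed by an emptyDir volume inside the Kata VM
apiVersion: v1
kind: Pod
metadata:
  name: buildah-emptydir
  namespace: sandboxed-builds
spec:
  runtimeClassName: kata-qemu
  containers:
    - name: buildah
      image: quay.io/buildah/stable
      command: ["sleep", "infinity"]
      securityContext:
        privileged: true   # grants access to /dev/fuse inside the VM
      volumeMounts:
        - name: container-storage
          mountPath: /var/lib/containers
  volumes:
    - name: container-storage
      emptyDir: {}
```

A PersistentVolume claim can be substituted for the emptyDir volume if build storage should survive pod restarts.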
kubectl apply -f buildah-emptydir.yaml
kubectl get pods -n sandboxed-builds
  • Create an example Dockerfile to try out the build step inside the container:
kubectl exec buildah-emptydir -n sandboxed-builds -- mkdir /build
kubectl cp Dockerfile sandboxed-builds/buildah-emptydir:/build
  • Build the container image.
kubectl exec buildah-emptydir -n sandboxed-builds -- buildah bud -f /build/Dockerfile .

Using docker (DinD) builder image

  • Create a DinD container with an emptyDir volume in a namespace called sandboxed-builds:
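The embedded dind.yaml did not render here. A minimal sketch, assuming the official docker:dind image (TLS disabled for simplicity); DinD requires a privileged container, which is exactly the case where Kata helps, since the privilege is scoped to the pod's VM:

```yaml
# dind.yaml — docker-in-docker pod, with the privileged daemon
# contained inside a Kata VM and /var/lib/docker on an emptyDir
apiVersion: v1
kind: Pod
metadata:
  name: dind
  namespace: sandboxed-builds
spec:
  runtimeClassName: kata-qemu
  containers:
    - name: dind
      image: docker:dind
      securityContext:
        privileged: true   # required by the Docker daemon
      env:
        - name: DOCKER_TLS_CERTDIR
          value: ""        # disable TLS for this local-only daemon
      volumeMounts:
        - name: docker-storage
          mountPath: /var/lib/docker
  volumes:
    - name: docker-storage
      emptyDir: {}
```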
kubectl apply -f dind.yaml
kubectl get pods -n sandboxed-builds
  • Create an example Dockerfile to try out the build step.
kubectl exec dind -n sandboxed-builds -- mkdir /build
kubectl cp Dockerfile sandboxed-builds/dind:/build
  • Build the container image.
kubectl exec dind -n sandboxed-builds -- docker build -f /build/Dockerfile .

If you plan to use a memory-backed ephemeral volume, remember that it's a tmpfs-mounted volume using the Kata VM memory. Typically, the tmpfs volume size is 50% of the VM RAM. By default, the tmpfs volumes will be approximately 1G, since the default Kata VM memory is 2G.

You can either increase the default Kata VM memory size as required, or you can use the following approach to create a tmpfs volume of the required size.

You can try out the two examples below yourself; remember that they require more memory.

The example below shows how to provision ~3G of storage for containers using tmpfs.

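The embedded manifest did not render here. As an assumption-laden sketch: raising the Kata VM memory via a pod annotation (which requires the corresponding `enable_annotations` entry in the Kata configuration on the node) makes the ~50%-of-RAM tmpfs large enough, and a memory-backed emptyDir with a size limit carves out the ~3G:

```yaml
# buildah-tmpfs.yaml — sketch of a builder pod with ~3G of
# memory-backed (tmpfs) container storage inside a larger Kata VM
apiVersion: v1
kind: Pod
metadata:
  name: buildah-tmpfs
  namespace: sandboxed-builds
  annotations:
    # Kata VM memory in MiB; must be allowed via enable_annotations
    io.katacontainers.config.hypervisor.default_memory: "6144"
spec:
  runtimeClassName: kata-qemu
  containers:
    - name: buildah
      image: quay.io/buildah/stable
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: container-storage
          mountPath: /var/lib/containers
  volumes:
    - name: container-storage
      emptyDir:
        medium: Memory   # tmpfs backed by the Kata VM's RAM
        sizeLimit: 3Gi
```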

There are other configurations described here that you can try for your specific use case.

An alternative approach, using a loop-mounted file image instead of an ephemeral or persistent volume, provides good performance and doesn't depend on VM memory.

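The embedded code block did not render here. As a hedged sketch, run inside the builder container (it assumes mkfs.ext4 and mount privileges, which inside a Kata pod are confined to the VM; paths and the 3G size are illustrative):

```shell
# Inside the builder container: back container storage with a
# file-based filesystem instead of tmpfs or a volume
truncate -s 3G /disk.img                       # sparse 3G backing file
mkfs.ext4 -q /disk.img                         # format it as ext4
mount -o loop /disk.img /var/lib/containers    # loop-mount as storage dir
```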

I hope this helps you get started with Kata Containers for creating hosted build and development environments on Kubernetes.

Conclusion

In this hands-on lab, we have seen how to set up Kata Containers in a Kubernetes cluster and how to use them to build images with different image builders.

About the Author

Pradipta Banerjee


Senior Principal Software Engineer, Red Hat

Pradipta is currently working on container isolation and confidential computing. He is a strong believer in self-learning and hands-on problem-solving. Connect with him for any help with container security, digitization, or technology adoption for improving livelihoods.