Enabling Core Dumps with Kata Containers


Learn how to handle core dumps with Kata Containers

22 July 2022
kata
coredump
kubernetes

In the previous hands-on lab, we discussed the use of Kata containers for building isolated dev and build environments.  

In this hands-on lab, let’s take things a step further by discussing how to handle core dumps with Kata containers. There are a variety of reasons why you may need to analyze an application core dump — to identify bugs and memory leaks, or just to discover what caused the program to crash.

However, one of the challenges when working with containers is accessing the core-dump files. Let's dig into the reasons why this problem exists and find out what to do about it.

On Linux systems, core-dump behavior is controlled by two parameters on the host:

  • Core dump size (ulimit -c): Setting this to 0 disables core dumps.
  • /proc/sys/kernel/core_pattern: This specifies the location (and name format) of the core-dump file, or points to a core-dump helper. On systemd-based setups it is usually set to the core-dump helper: “|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h %e”, and the core-dump file is generated under “/var/lib/systemd/coredump/”. Details of the format specifiers can be found in the core(5) man page.
Note: The core-dump location and file name pattern can be changed by the administrator as desired.
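
You can inspect both settings on the host, for example:

ulimit -c
cat /proc/sys/kernel/core_pattern

Running ulimit -c unlimited removes the size limit for the current shell.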

Now, for a container to access the core-dump file, the specific host path needs to be made available inside the container. However, exposing host paths is inherently risky in a Kubernetes cluster and makes this difficult to use as a general-purpose mechanism.

Also, the kernel core_pattern setting is system-wide, which means you can’t have different settings for different processes, whether they are host processes or containers.

This is where Kata containers can make your life easier.

Since a Kata container (a Kata pod, to be specific) has its own kernel, it’s possible to have different kernel settings for different pods as needed.

Figure 1: Kata Containers with different kernel settings for different pods

Let’s look at a few options to see how this can be used in practice. But before that, let’s install and configure Kata Containers in the Kubernetes cluster.

Lab for Core Dump with Kata Containers

Prerequisites

Installation of Kata Containers

The easiest way to deploy Kata Containers in a Kubernetes cluster is via kata-deploy. It runs as a DaemonSet inside the kube-system namespace and installs all the binaries and artifacts needed to run Kata containers.

  • Create and provision the RBAC roles needed by the kata-deploy pod.
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-rbac/base/kata-rbac.yaml
  • Then create the kata-deploy pod by deploying its stable version.
kubectl apply -f https://raw.githubusercontent.com/kata-containers/kata-containers/main/tools/packaging/kata-deploy/kata-deploy/base/kata-deploy-stable.yaml
  • Check the kata-deploy pod status inside the kube-system namespace.
kubectl get pods -n kube-system
kubectl -n kube-system wait --timeout=10m --for=condition=Ready -l name=kata-deploy pod
  • Check the "kata" label on the nodes 
kubectl get nodes --show-labels | grep kata
  • After this, configure a runtime class for Kata Containers by creating a Kubernetes resource of kind: RuntimeClass.
kubectl apply -f runtimeclass.yaml
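
The contents of runtimeclass.yaml are not reproduced here; below is a minimal sketch of what the manifest might look like. The overhead values are illustrative assumptions and may differ from those shipped with kata-deploy:

kind: RuntimeClass
apiVersion: node.k8s.io/v1
metadata:
  name: kata-qemu   # name referenced by pods via runtimeClassName
handler: kata-qemu  # Kata runtime handler installed by kata-deploy
overhead:
  podFixed:         # extra resources accounted for every pod using this class
    memory: "160Mi"
    cpu: "250m"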

Here we create a runtime class named kata-qemu, which will be used to create pods running inside a QEMU VM. Other runtime classes can be used depending on the hypervisor: kata-clh is used with Cloud Hypervisor and kata-fc is used with Firecracker.

The runtime class also defines a pod overhead, which sets the memory and CPU overheads accounted for any pod using this runtime class.

  • See more information about the kata-qemu runtime class:
kubectl get runtimeclass
kubectl describe runtimeclass kata-qemu

Using an InitContainer to set the core-dump location and pattern

  • You can use an InitContainer to specify the core dump location and filename pattern. You can extend the buildah environment example from the last hands-on lab as shown below:
kubectl create ns sandboxed-builds
kubectl apply -f buildah-env.yaml
kubectl get pods -n sandboxed-builds
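
For reference, here is a minimal sketch of what buildah-env.yaml might look like. The images, the privileged initContainer, and the exact core_pattern value are illustrative assumptions, not the original manifest:

apiVersion: v1
kind: Pod
metadata:
  name: buildah-env
  namespace: sandboxed-builds
spec:
  runtimeClassName: kata-qemu   # run this pod inside a Kata VM
  initContainers:
  - name: set-core-pattern
    image: busybox
    securityContext:
      privileged: true          # needed to write under /proc/sys; affects only the guest kernel
    # A relative pattern makes the dump land in the crashing process's working directory.
    command: ["sh", "-c", "echo 'core.%p.%u.%g.%s.%t.%c.%h.%e' > /proc/sys/kernel/core_pattern"]
  containers:
  - name: buildah
    image: quay.io/buildah/stable
    command: ["sleep", "infinity"]

Because every container in a Kata pod shares the same guest kernel, the pattern set by the initContainer applies to the whole pod.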

Let's access the container shell and verify the creation of a core dump.

kubectl exec -it -n sandboxed-builds buildah-env -- bash
cat /proc/sys/kernel/core_pattern
sleep 20 &

Replace <JOB_ID> with the process ID displayed after running the above command, then send a SIGSEGV to the process to create an application core dump.

kill -SIGSEGV <JOB_ID>

The application core dump will be generated in the current working directory of the process (here, the root directory).

ls /

You should see a file similar to the one below:

core.1249159.0.0.11.1658374361.18446744073709551615.597050bc1f25.sleep.14
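
To analyze the dump, load it into a debugger together with the binary that crashed. A minimal sketch, assuming gdb is available in (or can be installed into) the container image:

gdb $(which sleep) /core.1249159.0.0.11.1658374361.18446744073709551615.597050bc1f25.sleep.14
(gdb) bt

The bt command prints the backtrace at the moment the process was killed.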

Note:
 1. If you want to store the core dump under a specific path, that path needs to exist inside the container. Otherwise, a core dump will not be generated.
 2. Using the root directory “/” as the path to store the generated dump will not work for containers running with a non-root USERID.

Using a Persistent Volume to store the core dump

Storing the core dump in a persistent volume ensures that the dump remains available across container restarts.

  • For this to work, the core_pattern needs to specify a directory path along with the filename pattern. The example below assumes that core_pattern points under /core-dump and that the persistent volume is mounted at /core-dump.

Again, you can create the buildah-env-pvc pod and specify the PVC as a volume in it, as sketched below.
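
Here is a minimal sketch of the two manifests. The PVC name and size, the images, and the exact core_pattern value are illustrative assumptions:

pvc.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: core-dump-pvc
  namespace: sandboxed-builds
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

buildah-env-pvc.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: buildah-env-pvc
  namespace: sandboxed-builds
spec:
  runtimeClassName: kata-qemu
  initContainers:
  - name: set-core-pattern
    image: busybox
    securityContext:
      privileged: true
    # Direct core dumps into the mounted persistent volume.
    command: ["sh", "-c", "echo '/core-dump/core.%p.%u.%g.%s.%t.%c.%h.%e' > /proc/sys/kernel/core_pattern"]
  containers:
  - name: buildah
    image: quay.io/buildah/stable
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: core-dump
      mountPath: /core-dump
  volumes:
  - name: core-dump
    persistentVolumeClaim:
      claimName: core-dump-pvc

Apply the manifests and check the status: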

kubectl apply -f pvc.yaml
kubectl apply -f buildah-env-pvc.yaml
kubectl get pods -n sandboxed-builds
kubectl get pvc -n sandboxed-builds

This should get you started. 

Conclusion

In this hands-on lab, we learned how to handle application core dumps with Kata containers.


About the Authors

Oshi Gupta

DevOps Engineer & Technical Writer, CloudYuga

Oshi Gupta works as a DevOps Engineer and Technical Writer at CloudYuga Technologies. She is CKA certified and was selected for the LFX mentorship in Spring 2022 for CNCF Kyverno. She loves writing blogs and is keen to learn about various cloud-native technologies. Besides this, she loves cooking, badminton, traveling, and yoga.

Pradipta Banerjee

Senior Principal Software Engineer, Red Hat

Pradipta is currently working on container isolation and confidential computing. He is a strong believer in self-learning and hands-on problem-solving. Connect with him for any help with container security, digitization, or technology adoption for improving livelihoods.