Introduction to Confidential Containers

Get familiar with the Kata Containers-based Confidential Containers stack

Confidential Containers (CoCo) is a CNCF sandbox project which aims to integrate existing Confidential Computing (CC) infrastructure support and technologies with the cloud-native world.


In this lab, we'll deploy the Kata Containers-based CoCo stack shown in the diagram below.

Further, we’ll demonstrate the container image management capability of the CoCo stack, whereby the container images are downloaded inside the Kata VM and not on the Kubernetes cluster node.

Figure 1: Kata Containers based CoCo Stack

Please note that in this blog we are deploying the CoCo stack on regular (non-CC) hardware to help you get familiar with it.

Install a KinD Kubernetes cluster

  • Install Docker as a prerequisite for the KinD cluster setup
sudo apt update && sudo apt install docker.io -y
  • Install the kubectl CLI to interact with the cluster
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update && sudo apt-get install -y kubectl
  • Install KinD from its binaries
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.16.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
  • Verify the KinD installation
kind version
  • Now, create a single-node Kubernetes cluster via KinD and name it coco
kind create cluster --name coco

Wait a few moments for the cluster to be provisioned.

  • Check the cluster node's status and verify that the Kubernetes version is 1.24 or later and that containerd is the CRI runtime.
kubectl get nodes -o wide 
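If you prefer a scripted check, the following queries pull the same details straight from the node object (a minimal sketch; both jsonpath fields are standard Kubernetes node status fields):

# Kubelet (Kubernetes) version of the first node
kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.kubeletVersion}{"\n"}'
# CRI runtime and version; expect something like containerd://1.6.x
kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.containerRuntimeVersion}{"\n"}'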

Setup the CoCo Stack

Setting up the CoCo stack is super easy, courtesy of the CoCo operator.

  • The CoCo operator requires at least one cluster node with the label node-role.kubernetes.io/worker=. Set the environment variable NODENAME to the cluster node's name.
export NODENAME=coco-control-plane
kubectl label node $NODENAME node-role.kubernetes.io/worker=

Verify the role on the node by running:

kubectl get nodes -o wide 
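Since the ROLES column comes from the node's labels, you can also check the label directly (a small sketch using kubectl's standard --show-labels flag):

kubectl get node "$NODENAME" --show-labels | grep node-role.kubernetes.io/worker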
  • Deploy the CoCo Operator by running the following command:
kubectl apply -f https://raw.githubusercontent.com/confidential-containers/operator/main/deploy/deploy.yaml

The operator deploys all resources under the confidential-containers-system namespace.

Wait until each pod has the STATUS of Running.

kubectl get pods -n confidential-containers-system --watch
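If you'd rather block until the operator pods are ready instead of watching manually, kubectl wait works too (the 300s timeout is an arbitrary choice):

kubectl wait --for=condition=Ready pods --all -n confidential-containers-system --timeout=300s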

The operator is responsible for creating the custom resource definition (CRD), which is then used for creating a custom resource (CR).

The operator creates the ccruntime CRD as can be observed by running the following command:

kubectl get crd | grep ccruntime
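You should see a single matching line similar to the following (the creation timestamp will differ):

ccruntimes.confidentialcontainers.org   <creation timestamp>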

The complete CRD can be seen by running the following command:

kubectl explain --recursive=true ccruntimes.confidentialcontainers.org

You can also see the details of the CRD in the following source file: https://github.com/confidential-containers/operator/blob/main/api/v1beta1/ccruntime_types.go#L90.

  • Create the CR by running the following command:
kubectl apply -f https://raw.githubusercontent.com/confidential-containers/operator/main/config/samples/ccruntime.yaml

This triggers the deployment of the Kata Containers-based CoCo stack on the KinD cluster.

Wait until each pod has the STATUS of Running.

kubectl get pods -n confidential-containers-system --watch
  • Verify that the RuntimeClasses were created.
kubectl get runtimeclass
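The output should look roughly like this (handler names match the RuntimeClass names; ages will vary):

NAME            HANDLER         AGE
kata            kata            ...
kata-clh        kata-clh        ...
kata-clh-tdx    kata-clh-tdx    ...
kata-qemu       kata-qemu       ...
kata-qemu-sev   kata-qemu-sev   ...
kata-qemu-tdx   kata-qemu-tdx   ...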

Following is a brief explanation of each of the RuntimeClasses:

kata – standard Kata runtime using the QEMU hypervisor, including all CoCo building blocks, for non-Confidential Computing (CC) hardware

kata-clh – standard Kata runtime using the Cloud Hypervisor, including all CoCo building blocks, for non-CC hardware

kata-clh-tdx – using the Cloud Hypervisor, with TD-Shim, and support for Intel TDX CC hardware

kata-qemu – same as kata

kata-qemu-tdx – using QEMU, with TDVF, and support for Intel TDX CC hardware

kata-qemu-sev – using QEMU, and support for AMD SEV hardware

Since we are running the Kata Containers-based CoCo stack on non-CC hardware, we can use only the kata and kata-clh RuntimeClasses. Further, only the Cloud Hypervisor works with Kata on a KinD Kubernetes cluster. Consequently, we'll be using only the kata-clh RuntimeClass.

Creating a sample workload

The sample workload is intended to show how the CoCo building blocks work together.

A key aspect of CoCo when using Kata Containers is that the container images are downloaded inside the Kata VM and not on the cluster node, since the cluster node is not trusted. The Kata VM is the trusted entity from an end-user standpoint.

Further, the container images can be signed and encrypted, with signature verification and decryption performed inside the Kata VM. The decryption keys are made available to the Kata VM by an attestation service after successful attestation. We'll cover the attestation process in a future lab guide.
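To give a flavor of what image encryption looks like, here is an illustrative skopeo invocation (not required for this lab) that encrypts an image while copying it to a registry; the key file pub.pem and the destination registry are hypothetical:

# Encrypt an image during the copy (illustrative; pub.pem and the destination registry are hypothetical)
skopeo copy --encryption-key jwe:./pub.pem \
  docker://docker.io/bitnami/nginx:1.22.0 \
  docker://registry.example.com/nginx:1.22.0-encrypted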

  • For the sample workload, we’ll use the bitnami/nginx image described in the following nginx.yaml:
# nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: bitnami/nginx:1.22.0
    name: nginx
  dnsPolicy: ClusterFirst
  runtimeClassName: kata-clh
  • Before creating the pod from the above manifest, verify that the container image doesn't exist on the cluster node via crictl.

Log in to the KinD cluster node by executing the following commands in the terminal.

CID=$(docker ps -qf "name=coco-control-plane")
docker exec -it $CID bash

You should be able to get a shell to the KinD cluster node (as shown below).

root@coco-control-plane:/# 

Now, verify whether the container image exists on the KinD cluster node by executing the following command in the terminal.

crictl images | grep bitnami/nginx

You should see an empty output.

  • Now create the pod by executing the following command:
kubectl apply -f nginx.yaml
  • Ensure the pod was created successfully and is in the Running state:
kubectl get pods
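As before, you can block until the pod reports Ready instead of polling (the timeout is arbitrary):

kubectl wait --for=condition=Ready pod/nginx --timeout=120s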

Now re-verify that the container image doesn't exist on the KinD cluster node by running the following command in the node shell you opened earlier:

crictl images | grep bitnami/nginx

Again, you should see an empty output.
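To double-check that the workload itself is healthy, you can port-forward to the pod and fetch the default page (a quick sketch; the bitnami/nginx image serves on port 8080 by default):

kubectl port-forward pod/nginx 8080:8080 &
curl -s http://localhost:8080 | head -n 4
kill %1   # stop the background port-forward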

If you have made it this far, give yourself a shout-out :-). Don't worry if you get stuck on the instructions or need more details on the CoCo stack; help is right around the corner.
