Kubernetes Auditing

Learn what Kubernetes auditing is, and how to apply audit policies and store audit logs.

24 October 2021

In general, auditing means inspection; in Kubernetes, auditing refers to a set of records documenting the sequence of actions in a cluster. The cluster audits the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself.

But why do we need auditing in Kubernetes when we already have logging?

We need it because logs only answer questions like which resources were used or what configuration changed in a pod; they don't tell us where an action came from or who initiated it. Auditing is what answers those questions.

Auditing allows cluster administrators to answer the following questions:

  • what happened?
  • when did it happen?
  • who initiated it?
  • on what did it happen?
  • where was it observed?
  • from where was it initiated?
  • to where was it going?

Audit records begin their lifecycle inside the kube-apiserver component. Each request, at each stage of its execution, generates an audit event, which is pre-processed according to a policy and written to a backend: the policy determines what is recorded, and the backend stores the records.

Figure 1: Working of auditing in Kubernetes

Each request can generate an audit event at one or more stages of its execution. The stages are:

  • RequestReceived: Events generated as soon as the audit handler receives the request, before it is delegated down the handler chain.
  • ResponseStarted: Events generated once the response headers are sent, but before the response body is sent. This stage is only generated for long-running requests (e.g. watch).
  • ResponseComplete: Events generated once the response body has been completed and no more bytes will be sent.
  • Panic: Events generated when a panic occurs.
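
Stages also interact with the audit policy: a policy can skip generating events at certain stages via omitStages. As a minimal sketch (an assumed example, not the exact policy used in this lab):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
# Skip generating audit events at the RequestReceived stage cluster-wide.
omitStages:
  - "RequestReceived"
rules:
  # Record everything else at the Metadata level.
  - level: Metadata
```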

Audit Policy

Audit policies are rules about what events should be recorded and what data they should include. The audit policy object structure is defined in the audit.k8s.io API group. When an event is processed, it's compared against the list of rules in order. The first matching rule sets the audit level of the event. The defined audit levels are:

  • None - don't log events that match this rule.
  • Metadata - log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
  • Request - log event metadata and request body but no response body. This does not apply to non-resource requests.
  • RequestResponse - log event metadata, request, and response bodies. This does not apply to non-resource requests.

It's important to define at least one rule in the policy file. The kube-apiserver is then pointed at the policy via the --audit-policy-file flag, which takes the path to the policy file.
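
As a sketch, a minimal policy file could look like the following (the specific rules here are illustrative assumptions, not the exact policy used in the lab):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record changes to pods at the RequestResponse level.
  - level: RequestResponse
    resources:
      - group: ""
        resources: ["pods"]
  # Don't log watch requests by system:kube-proxy on endpoints or services.
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
      - group: ""
        resources: ["endpoints", "services"]
  # Log everything else at the Metadata level.
  - level: Metadata
```

Rules are matched top to bottom, so the broad Metadata rule must come last.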


Audit backends

Audit backends persist audit events to external storage. The kube-apiserver provides two backends:

  • Log backend, which writes events to the filesystem as JSON lines
  • Webhook backend, which sends events to an external HTTP API

For the hands-on lab, we will implement Kubernetes auditing and store the logs with the log backend on the host's local filesystem.

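Each backend is configured via kube-apiserver flags: --audit-log-path for the log backend, and --audit-webhook-config-file for the webhook backend. The webhook flag points at a kubeconfig-format file describing the remote service; a minimal sketch (the server URL is a placeholder assumption):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: audit-webhook
    cluster:
      # Placeholder endpoint of an external service that accepts audit events.
      server: https://audit-collector.example.com/events
contexts:
  - name: default
    context:
      cluster: audit-webhook
current-context: default
```
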
  • Create an audit policy file
  • Copy the policy file to the /etc/kubernetes/ directory
cp audit-policy.yaml /etc/kubernetes/
  • Make a copy of the current kube-apiserver configuration file in the /tmp directory
cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp
  • Update the copy of the kube-apiserver configuration file in the /tmp directory to add the auditing configuration: the audit options go inside spec.containers.command, and two volumes (with their respective volume mounts) are added for the policy file and the log file.
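
The relevant parts of the updated manifest might look like the following sketch, assuming the policy lives at /etc/kubernetes/audit-policy.yaml and logs go to /var/log/audit.log (adjust the paths to your setup):

```yaml
# Fragment of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        # ... existing flags ...
        - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
        - --audit-log-path=/var/log/audit.log
        - --audit-log-maxage=30    # days to retain old audit log files
        - --audit-log-maxbackup=3  # number of rotated log files to keep
      volumeMounts:
        - mountPath: /etc/kubernetes/audit-policy.yaml
          name: audit-policy
          readOnly: true
        - mountPath: /var/log/audit.log
          name: audit-log
  volumes:
    - name: audit-policy
      hostPath:
        path: /etc/kubernetes/audit-policy.yaml
        type: File
    - name: audit-log
      hostPath:
        path: /var/log/audit.log
        type: FileOrCreate
```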

  • Move the current /etc/kubernetes/manifests/kube-apiserver.yaml file out of the manifests directory, keeping it as a backup
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver-bkp.yaml
  • Copy the updated kube-apiserver.yaml file from /tmp back into the manifests directory
cp /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
  • Wait a little while for the kube-apiserver pod to restart, then verify the cluster responds
kubectl get nodes

  If you face any issues after running the above command, check the pod's logs under /var/log/pods

Check the Auditing Setup

  • Now perform some operations and check the audit logs in the /var/log/audit.log file.
kubectl run mypod --image=nginx:alpine 
  • You will get an event like the following in the log file.

And if you parse it with a JSON parser, it will look something like the following:

Figure 2: Logs in JSON Parser
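
As a rough sketch of what such an audit event looks like and how to pretty-print it, here is an illustrative, abbreviated event (not a real record from this lab); in the lab you would pipe from /var/log/audit.log instead of a sample file:

```shell
# Write one illustrative (abbreviated) audit event to a sample file.
cat > /tmp/audit-sample.log <<'EOF'
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","stage":"ResponseComplete","verb":"create","user":{"username":"kubernetes-admin"},"sourceIPs":["192.168.49.1"],"objectRef":{"resource":"pods","namespace":"default","name":"mypod"}}
EOF

# python3 -m json.tool pretty-prints a JSON document read from stdin.
tail -n 1 /tmp/audit-sample.log | python3 -m json.tool
```

The same pipeline works against the real log: each line of the log backend's output is one JSON event.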


In this hands-on lab, we looked at Kubernetes auditing and its implementation.


About the Author

Oshi Gupta

DevOps Engineer & Technical Writer, CloudYuga

Oshi Gupta works as a DevOps Engineer and Technical Writer at CloudYuga Technologies. She is CKA certified and was selected for LFX mentorship in Spring 2022 for CNCF Kyverno. She loves writing blogs and is keen to learn about various cloud-native technologies. Besides this, she loves cooking, badminton, traveling, and yoga.