In this blog, we will learn about Kubernetes auditing, how to apply audit policies, and how to store audit logs.
In general, auditing means inspection, and Kubernetes auditing refers to a set of records documenting the sequence of actions in a cluster. The cluster inspects the activities generated by users, by applications that use the Kubernetes API, and by the control plane itself.
But why do we need auditing in Kubernetes when we already have logging?
We need it because logs only answer questions like which resources were used or which pod configuration changed; they don't tell us where an action came from or who initiated it. Auditing exists to answer exactly those questions.
Basically, auditing allows cluster administrators to answer the following questions:
- what happened?
- when did it happen?
- who initiated it?
- on what did it happen?
- where was it observed?
- from where was it initiated?
- to where was it going?
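Each of these questions maps onto a field of the audit event that the API server records. A simplified, illustrative event (field names follow the `audit.k8s.io/v1` Event schema; the values are made up) might look like:

```yaml
verb: create                                      # what happened?
requestReceivedTimestamp: "2021-10-22T01:02:39Z"  # when did it happen?
user:
  username: kubernetes-admin                      # who initiated it?
objectRef:                                        # on what did it happen?
  resource: pods
  namespace: default
  name: mypod
sourceIPs: ["192.168.197.127"]                    # from where was it initiated?
requestURI: /api/v1/namespaces/default/pods       # to where was it going?
```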
Audit records begin their lifecycle inside the kube-apiserver component. Each request, at each stage of its execution, generates an audit event, which is pre-processed according to a policy and written to a backend: the policy determines what is recorded, and the backend stores the records.
Each request generates events at one or more of the following stages:
- RequestReceived: Events generated as soon as the audit handler receives the request, before it is processed further.
- ResponseStarted: Once the response headers are sent, but before the response body is sent. This stage is only generated for long-running requests (e.g. watch).
- ResponseComplete: Once the response body has been completed.
- Panic: Events generated when a panic occurs.
Audit Policy
Audit policies are rules about what events should be recorded and what data they should include. The audit policy object structure is defined in the audit.k8s.io API group. When an event is processed, it’s compared against the list of rules in order. The first matching rule sets the audit level of the event. The defined audit levels are:
- None – don’t log events that match this rule.
- Metadata – log request metadata (requesting user, timestamp, resource, verb, etc.) but not request or response body.
- Request – log event metadata and request body but no response body. This does not apply to non-resource requests.
- RequestResponse – log event metadata, request, and response bodies. This does not apply to non-resource requests.
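Because the first matching rule wins, a restrictive rule must come before a catch-all. A minimal sketch of such a policy (illustrative only, not the lab's policy file):

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Matches first: secrets are logged at Metadata only, so request/response
  # bodies containing sensitive data never reach the audit log.
  - level: Metadata
    resources:
    - group: ""
      resources: ["secrets"]
  # Everything else falls through to this catch-all rule.
  - level: RequestResponse
```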
It’s important to define at least one rule in the policy file and to point the kube-apiserver at it via the `--audit-policy-file` flag, which takes the path to the policy file.
Audit backends
Audit backends persist audit events to external storage. The kube-apiserver provides two backends:
- Log backend, which writes events to the filesystem as JSON lines
- Webhook backend, which sends events to an external HTTP API
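For the log backend, the kube-apiserver also exposes log-rotation flags; the values below are illustrative, not the ones used in this lab:

```yaml
    - --audit-log-path=/var/log/audit.log
    - --audit-log-maxage=30      # days to keep old audit log files
    - --audit-log-maxbackup=10   # number of rotated files to retain
    - --audit-log-maxsize=100    # size in MB before the log file is rotated
    # The webhook backend is configured with a kubeconfig-format file instead:
    # - --audit-webhook-config-file=/etc/kubernetes/audit-webhook.yaml
```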
For the hands-on lab, we will implement Kubernetes auditing and store the logs with the log backend in the local filesystem on the host.
- Create an audit policy file:

```yaml
# audit-policy.yaml
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
  - "RequestReceived"
rules:
  # Log pod changes at RequestResponse level
  - level: RequestResponse
    resources:
    - group: ""
      # Resource "pods" doesn't match requests to any subresource of pods,
      # which is consistent with the RBAC policy.
      resources: ["pods"]
  # Log "pods/log", "pods/status" at Metadata level
  - level: Metadata
    resources:
    - group: ""
      resources: ["pods/log", "pods/status"]
  # Don't log requests to a configmap called "controller-leader"
  - level: None
    resources:
    - group: ""
      resources: ["configmaps"]
      resourceNames: ["controller-leader"]
  # Don't log watch requests by the "system:kube-proxy" on endpoints or services
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
    - group: "" # core API group
      resources: ["endpoints", "services"]
  # Don't log authenticated requests to certain non-resource URL paths.
  - level: None
    userGroups: ["system:authenticated"]
    nonResourceURLs:
    - "/api*" # Wildcard matching.
    - "/version"
  # Log the request body of configmap changes in kube-system.
  - level: Request
    resources:
    - group: "" # core API group
      resources: ["configmaps"]
    # This rule only applies to resources in the "kube-system" namespace.
    # The empty string "" can be used to select non-namespaced resources.
    namespaces: ["kube-system"]
  # Log configmap and secret changes in all other namespaces at the Metadata level.
  - level: Metadata
    resources:
    - group: "" # core API group
      resources: ["secrets", "configmaps"]
  # Log all other resources in core and extensions at the Request level.
  - level: Request
    resources:
    - group: "" # core API group
    - group: "extensions" # Version of group should NOT be included.
  # A catch-all rule to log all other requests at the Metadata level.
  - level: Metadata
    # Long-running requests like watches that fall under this rule will not
    # generate an audit event in RequestReceived.
    omitStages:
      - "RequestReceived"
```
- Copy the policy file into the `/etc/kubernetes/` directory:

```shell
cp audit-policy.yaml /etc/kubernetes/
```
- Make a copy of the current kube-apiserver configuration file in the `/tmp` directory:

```shell
cp /etc/kubernetes/manifests/kube-apiserver.yaml /tmp
```
- Update the copy of the kube-apiserver configuration file in the `/tmp` directory to add the auditing configuration, as follows:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.197.107:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/audit.log
    - --advertise-address=192.168.197.107
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.22.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 192.168.197.107
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 192.168.197.107
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 192.168.197.107
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit
      readOnly: true
    - mountPath: /var/log/audit.log
      name: audit-log
      readOnly: false
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
    name: audit
  - name: audit-log
    hostPath:
      path: /var/log/audit.log
      type: FileOrCreate
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
```
Here, the audit options have been added inside `spec.containers.command`:

```yaml
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-path=/var/log/audit.log
```
Two volumes have been added to support the auditing configuration and log files:

```yaml
  volumes:
  - hostPath:
      path: /etc/kubernetes/audit-policy.yaml
      type: File
    name: audit
  - name: audit-log
    hostPath:
      path: /var/log/audit.log
      type: FileOrCreate
```
along with the respective volume mounts:

```yaml
    volumeMounts:
    # ...
    - mountPath: /etc/kubernetes/audit-policy.yaml
      name: audit
      readOnly: true
    - mountPath: /var/log/audit.log
      name: audit-log
      readOnly: false
```
- Move the current `/etc/kubernetes/manifests/kube-apiserver.yaml` file out of the manifests directory, keeping a backup in `/tmp` (this stops the running kube-apiserver static pod):

```shell
mv /etc/kubernetes/manifests/kube-apiserver.yaml /tmp/kube-apiserver-bkp.yaml
```
- Copy the updated `kube-apiserver.yaml` file from `/tmp` back into the manifests directory:

```shell
cp /tmp/kube-apiserver.yaml /etc/kubernetes/manifests/
```
- Wait for some time for the `kube-apiserver` pod to start, then verify that the API server is responding:

```shell
kubectl get nodes
```
If you face any issues after running the above command, check the pod’s logs under `/var/log/pods`.
Check the Auditing Setup
- Now perform some operations and check the audit logs in the `/var/log/audit.log` file. For example, create a pod:

```shell
kubectl run mypod --image=nginx:alpine
```
- You will get an event like the following in the log file:

```json
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"5d41a883-3fbc-4316-b5e7-1ec68216695f","stage":"ResponseComplete","requestURI":"/api/v1/namespaces/default/pods?fieldManager=kubectl-run","verb":"create","user":{"username":"kubernetes-admin","groups":["system:masters","system:authenticated"]},"sourceIPs":["192.168.197.127"],"userAgent":"kubectl/v1.22.1 (linux/amd64) kubernetes/632ed30","objectRef":{"resource":"pods","namespace":"default","name":"mypod","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":201},"requestObject":{"kind":"Pod","apiVersion":"v1","metadata":{"name":"mypod","creationTimestamp":null,"labels":{"run":"mypod"}},"spec":{"containers":[{"name":"mypod","image":"nginx:alpine","resources":{},"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","securityContext":{},"schedulerName":"default-scheduler","enableServiceLinks":true},"status":{}},"responseObject":{"kind":"Pod","apiVersion":"v1","metadata":{"name":"mypod","namespace":"default","uid":"c097146d-3cd2-46c5-b2f9-5e2220780e46","resourceVersion":"1171","creationTimestamp":"2021-10-22T01:02:39Z","labels":{"run":"mypod"},"managedFields":[{"manager":"kubectl-run","operation":"Update","apiVersion":"v1","time":"2021-10-22T01:02:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:labels":{".":{},"f:run":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"mypod\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},"spec":{"volumes":[{"name":"kube-api-access-64ttc","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"mypod","image":"nginx:alpine","resources":{},"volumeMounts":[{"name":"kube-api-access-64ttc","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Pending","qosClass":"BestEffort"}},"requestReceivedTimestamp":"2021-10-22T01:02:39.508829Z","stageTimestamp":"2021-10-22T01:02:39.515551Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
```
Parsed with a JSON parser (for example `python -m json.tool` or `jq`), the structure becomes much easier to read.
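As a sketch of how such an entry can be consumed programmatically, the snippet below uses only Python's standard `json` module; the event string is a trimmed-down illustration of the log line above, not the full record:

```python
import json

# A trimmed-down audit event (illustrative; the real /var/log/audit.log
# entry carries many more fields, one JSON object per line).
line = ('{"kind":"Event","apiVersion":"audit.k8s.io/v1",'
        '"level":"RequestResponse","stage":"ResponseComplete","verb":"create",'
        '"user":{"username":"kubernetes-admin"},'
        '"sourceIPs":["192.168.197.127"],'
        '"objectRef":{"resource":"pods","namespace":"default","name":"mypod"},'
        '"requestReceivedTimestamp":"2021-10-22T01:02:39.508829Z"}')

event = json.loads(line)
ref = event["objectRef"]

# Answer the auditing questions from the event's fields.
print("who: ", event["user"]["username"])
print("what:", event["verb"], ref["resource"] + "/" + ref["name"], "in", ref["namespace"])
print("when:", event["requestReceivedTimestamp"])
print("from:", event["sourceIPs"][0])
```

Running it prints `kubernetes-admin` as the initiator, `create pods/mypod in default` as the action, and `192.168.197.127` as the source IP.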
Conclusion
In this blog, we learned about Kubernetes auditing and walked through its implementation with an audit policy and a log backend.