In this blog, we will learn how a pod can expose its own information to the containers running inside it.
In a Kubernetes cluster, an application running inside a container generally has no information about the pod or the cluster it runs in, because we build applications to be portable.
However, pod information can be exposed to the application container, and this is most useful for observability: the Downward API provides this metadata in a structured way, giving observability tools rich context about the workload without requiring changes to the application itself.
The Downward API lets a pod expose its own information to its containers, either through environment variables or through volume files. This avoids creating a tight coupling to the Kubernetes API.
In other words, containers can consume information about themselves and the pod without using a Kubernetes client or talking to the API server.
Pod fields and Container fields
There are two types of metadata that can be exposed with the Downward API:
- Pod metadata
- Container metadata
Pod metadata includes the pod's name, namespace, node, IP address, labels, and annotations, while container metadata includes items such as the CPU and memory limits of the container.
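For container fields such as resource limits, the Downward API uses resourceFieldRef rather than fieldRef. Here is a minimal sketch (the container name busybox-container and the environment variable names are illustrative, not part of the examples below):

env:
- name: MY_CPU_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: busybox-container
      resource: limits.cpu
- name: MY_MEM_LIMIT
  valueFrom:
    resourceFieldRef:
      containerName: busybox-container
      resource: limits.memory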
Also, there are two ways of exposing pod information to containers:
- Environment variables
- Volume files
Exposing Pod Information through Environment Variables
A pod can use environment variables to expose pod fields and container fields to the container running inside it.
Most environment variables use the value field to carry a literal value, but the Downward API instead uses the valueFrom field, which allows you to specify a fieldRef to select a field from the pod's definition.
The fieldRef field is a structure with an apiVersion field and a fieldPath field. The fieldPath field is an expression designating a field of the pod, and the apiVersion field is the version of the API schema that the fieldPath is written in terms of. If the apiVersion field is not specified, it defaults to the API version of the enclosing object.
The fieldRef is evaluated, and the resulting value of the fieldPath is used as the value for the environment variable.
For example, to expose the pod name as an environment variable, valueFrom specifies a fieldRef whose fieldPath is metadata.name, which tells Kubernetes where to find the pod name.
...
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
...
This lets users publish pod information, such as the pod name, namespace, and IP address, as environment variables.
# pod_env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: busybox-container
    image: busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
  restartPolicy: Never
Apply the above pod manifest, which exposes the pod's name, namespace, and IP address to the container through environment variables. Since the container simply runs env, you can see these variables in the pod's logs.
kubectl apply -f pod_env.yaml
kubectl get pods
kubectl logs pod1
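Among the printed environment variables, you should see entries along these lines (the namespace assumes the pod was created in the default namespace, and the IP is whatever the cluster assigned):

MY_POD_NAME=pod1
MY_POD_NAMESPACE=default
MY_POD_IP=<the pod's assigned IP>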
Exposing Pod Information through Volume Files
A pod can use a downwardAPI volume to expose pod fields and container fields to the containers running inside it.
Downward API values are written as files to a mounted volume. This is done using the downwardAPI volume type: each item in the volume represents a file to be created, and its fieldPath references the pod field to be exposed.
The downward API volume also permits exposing more complex data, such as labels and annotations.
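For instance, here is a minimal sketch of a downwardAPI volume item that exposes the pod's annotations as a file (the volume name podinfo is illustrative):

volumes:
- name: podinfo
  downwardAPI:
    items:
    - path: "annotations"
      fieldRef:
        fieldPath: metadata.annotations

A useful property of the volume approach is that Kubernetes refreshes these files when labels or annotations change, whereas environment variables are set only once, at container startup.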
In the example below, an init container prints the pod information from the downward API volumes; the main Python container then mounts the same volumes and serves the pod details with the help of a Flask app, which we will expose via a Service.
The teamcloudyuga/python-downwardapi:v2 image takes care of the Flask app code.
# pod_volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod2
  labels:
    app: flask
spec:
  initContainers:
  - name: main-busybox
    image: busybox
    # With "sh -c", only the first argument after -c runs as the script,
    # so the commands are combined into a single string. The paths match
    # the volume mounts declared below.
    command: ["sh", "-c", "echo Labels && cat /etc/pod/labels && echo Pod name && cat /etc/podname/podname && echo Pod namespace && cat /etc/podns/podns"]
    volumeMounts:
    - name: labels
      mountPath: /etc/pod
    - name: podname
      mountPath: /etc/podname
    - name: podns
      mountPath: /etc/podns
  dnsPolicy: Default
  volumes:
  - name: labels
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
  - name: podname
    downwardAPI:
      items:
      - path: "podname"
        fieldRef:
          fieldPath: metadata.name
  - name: podns
    downwardAPI:
      items:
      - path: "podns"
        fieldRef:
          fieldPath: metadata.namespace
  containers:
  - name: python
    image: teamcloudyuga/python-downwardapi:v2
    ports:
    - containerPort: 5000
    volumeMounts:
    - name: labels
      mountPath: /app/labels
    - name: podname
      mountPath: /app/podname
    - name: podns
      mountPath: /app/podns
The Python Flask code in the teamcloudyuga/python-downwardapi:v2 image looks like this:
from flask import Flask, render_template

app = Flask(__name__)

@app.route("/")
def data():
    # Read the pod information from the downward API volume files.
    with open("/app/labels/labels", "r") as f:
        res1 = f.read()
    with open("/app/podname/podname", "r") as f:
        res2 = f.read()
    with open("/app/podns/podns", "r") as f:
        res3 = f.read()
    return render_template('index.html', res1=res1, res2=res2, res3=res3)

if __name__ == '__main__':
    # The pod spec and the Service target port 5000, so the app listens there.
    app.run(host='0.0.0.0', port=5000)
This code reads the mounted downward API volume files to fetch the labels, pod name, and pod namespace, and passes them to an HTML template.
Now, apply the pod_volume.yaml file and check the status of pod2.
kubectl apply -f pod_volume.yaml
kubectl get pods
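Once the init container finishes, the pod should reach the Running state; the output will look roughly like this (restarts and age will differ):

NAME   READY   STATUS    RESTARTS   AGE
pod2   1/1     Running   0          30s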
Now apply the service.yaml below to expose the pod via a NodePort Service.
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: pod2
  labels:
    app: flask
spec:
  selector:
    app: flask
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 5000
    nodePort: 30000
    protocol: TCP
kubectl apply -f service.yaml
kubectl get svc
Access the application on port 30000 of any node, and it will show the pod information: the labels, the pod name, and the namespace.
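For example, assuming <node-ip> is the IP address of one of your cluster nodes:

curl http://<node-ip>:30000/

This should return the rendered HTML page containing the pod's labels, name, and namespace.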
Conclusion
In this blog, we learned about the Downward API in Kubernetes and saw how to use it to expose pod information through both environment variables and volume files.