Kubernetes Tip: How To Gracefully Handle Pod Shutdown?

Learn about the graceful termination of pods and the difference between the SIGTERM and SIGKILL signals during container deletion.

4 September 2022


Shutting down an application Pod is as important as starting it up. During shutdown, the application needs to release all the resources it holds, finish processing in-flight requests, and so on. Shutting applications down properly reduces the chance of request failures.
There are many use cases where Pod deletion needs to be handled gracefully. A few examples:

  • logs of the terminating pod must be shipped to a remote location
  • in-flight requests/jobs must be processed before the pod is deleted
  • certain rules/fields must be updated before shutdown

Such use cases require an understanding of how the shutdown process works, which in turn helps in designing the system better.

Here is a flowchart that explains the flow of events when a pod gets deleted:

Figure 1: Pod Deletion Flowchart

In this flowchart, consider a user deleting the pod using either the delete CLI or the delete API. While deletion can be triggered in many ways, such as auto-scaling or rolling updates, the flow of events remains the same as described in the picture and elaborated below.

  • API Server: It receives the delete-pod request from the user, sets the status of the pod to the Terminating state, and updates this information in the etcd database. This triggers the Endpoint Controller and the kubelet.
  • Endpoint Controller: This is responsible for managing endpoints. As soon as it receives the message, it removes the pod's endpoints from all Services; the endpoints are then removed from kube-proxy, iptables, Ingress, and anything else related to the pod. At this point, the pod stops receiving new traffic.
  • kubelet: This does the actual grunt work. Remember that the kubelet and the Endpoint Controller work asynchronously, that is, one does not affect the other.
  • For each container in the pod, the kubelet checks whether a preStop hook is configured and, if so, runs it before initiating the container's shutdown.
  • The kubelet then sends a SIGTERM signal to the container's main process (PID 1), asking it to shut down gracefully.
SIGTERM is the signal used to shut a process down gracefully: it requests that the process begin terminating on its own. If the process has a handler for this signal, its termination logic runs; otherwise the signal is effectively ignored, because a container's main process runs as PID 1, and the kernel does not apply default signal actions to PID 1.
SIGKILL, on the other hand, kills the process immediately. SIGTERM can be handled, ignored, or blocked, but SIGKILL cannot be handled or blocked.
  • After the termination grace period, the kubelet forcefully kills any remaining containers that did not shut down on their own by sending them the SIGKILL signal.
  • After completing all these operations, the kubelet tells the API server to remove the pod object completely.
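The SIGTERM handling described above can be sketched in a small shell script. This is a minimal illustration, not part of the lab; it sends itself SIGTERM to show that a trap handler runs instead of the process dying unhandled:

```shell
#!/bin/sh
# Sketch: how a process handles SIGTERM gracefully, the way a
# container's main process (PID 1) should. The trap replaces the
# default action with a cleanup routine.
graceful=0
on_term() {
  graceful=1
  echo "SIGTERM received, finishing in-flight work..."
}
trap on_term TERM

echo "working (PID $$)"
kill -TERM $$    # simulate the kubelet sending SIGTERM
echo "graceful=$graceful"
```

Running it prints the cleanup message and `graceful=1`, showing the handler was invoked rather than the process being terminated abruptly.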

By default, the termination grace period is 30 seconds, but it is configurable via the terminationGracePeriodSeconds field of the Pod spec.

In the lab's pod YAML file, the termination grace period is set to 120 seconds using the terminationGracePeriodSeconds parameter.
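A sketch of how that looks in a Pod spec (the container name and image here are illustrative, not necessarily the lab's exact manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
spec:
  terminationGracePeriodSeconds: 120   # default is 30 seconds
  containers:
    - name: main-container
      image: nginx
```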

Alternatively, one can use the CLI and provide a grace period as an option during the delete operation.

kubectl delete pods <pod_name> --grace-period=<time_period>

The recommended way to delete a pod is to configure a preStop hook and a termination grace period for applications, so that the removal of the pod's Services and endpoints (performed by the Endpoint Controller) always occurs before the kubelet sends the SIGTERM signal. This helps in handling the last few requests sent to the service.
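Such a preStop hook can be as simple as a short sleep, giving the Endpoint Controller time to remove the pod from Service endpoints before SIGTERM is sent. A sketch (the 10-second value is an arbitrary example, tune it to your environment):

```yaml
lifecycle:
  preStop:
    exec:
      command: ["/bin/sh", "-c", "sleep 10"]
```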

To ensure execution in the correct order, a preStop hook is called during a container's termination sequence. This hook executes commands during the container's terminating phase so that the container's processes can be closed properly and the services attached to it removed safely.

Lab: Sidecar Containers and the Termination Grace Period of Pods

You can start the lab setup by clicking on the Lab Setup button on the right side of the screen. Please note that there are app-specific URLs exposed specifically for the hands-on lab purpose.

Our lab has been set up with all necessary tools like base OS (Ubuntu), developer tools like Git, Vim, wget, and others. 

Overview of the Lab

In this lab, a Pod is created that runs two containers. The first is the main container, which runs the nginx application; the second is a sidecar container that executes a bash script.

The bash script outputs a numeric counter, which is stored in a volume outside the pod.

The main container reads this counter through the shared volume and serves it via the nginx application.

Figure 2: Pod Architecture

Graceful Time Period

Here our pod spec contains the terminationGracePeriodSeconds parameter, which tells Kubernetes how long to wait between sending SIGTERM and sending SIGKILL to the containers.

Create a Volume for Pod

We will create a Persistent Volume and a Persistent Volume Claim, which the pod will use for persistent storage and for sharing data between the sidecar container and the main container.
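The lab's exact volume.yaml isn't reproduced here; a minimal sketch (the capacity, access mode, and resource names are assumptions) might look like:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv
spec:
  capacity:
    storage: 100Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/nginx   # matches the path read with `cat` later in the lab
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
```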

Apply the volume manifest for the pod:

kubectl apply -f volume.yaml

To check the Persistent Volume and Persistent Volume Claim that were created:

kubectl get pv,pvc

Create the Pod with the Sidecar Container and the Main Container

Here the sidecar container executes a bash script that runs a counter in the pod and stores its output in the volume storage directory.

Note the volumeMounts location used for storing the data. Also observe that the args section contains a small bash script that generates a sequence of numbers (1 2 3 . . .) at a regular interval.

Creating the YAML file for the main container:

Note that the volumeMounts path for storing and retrieving data is the same as that of the sidecar container.

Complete YAML file, along with the Service, for creating the Pod:

Notice the terminationGracePeriodSeconds field, which sets the maximum time the pod is given to shut down all of its containers.
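The lab's exact manifest isn't reproduced here; a sketch of such a pod.yaml (image tags, container names, mount paths, and the Service name are assumptions, while the pod name, node port, and grace period come from the lab):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pv-pod
  labels:
    app: nginx
spec:
  terminationGracePeriodSeconds: 120
  volumes:
    - name: shared-data
      persistentVolumeClaim:
        claimName: nginx-pvc
  containers:
    - name: sidecar-container        # writes the counter into the shared volume
      image: busybox
      command: ["/bin/sh", "-c"]
      args:
        - |
          i=1
          while true; do
            echo "$i" >> /usr/share/nginx/html/index.html
            i=$((i + 1))
            sleep 1
          done
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: main-container           # serves the counter via nginx
      image: nginx
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      nodePort: 30000   # matches the `curl localhost:30000` check below
```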
Create the Pod containers:

kubectl apply -f pod.yaml

Check the status of running pod:

kubectl get pods

Check the stored counter and the web page

To check the counter stored in the volume and the page served by the main container, we will run two watch commands: a curl command to check the web page and a cat command to monitor the counter.

To monitor the counter:

Execute the following command to monitor the sidecar container's output.

watch "cat /mnt/nginx/index.html | tail -n $((LINES - 2))"

To monitor the nginx app:

Create another tab and execute the following command to monitor the nginx app container output.

watch "curl localhost:30000 | tail -n $((LINES - 2))"

You can also view the nginx app in a browser: click the app-port URL under the Lab URLs section.

Delete the Pod

Open another tab and execute the following command to delete the pod:

kubectl delete pod nginx-pv-pod
Figure 3: Pod deleted

Here you will observe that the main container, i.e. the web server, terminates almost instantly.

Figure 4: Nginx container deleted.

The sidecar container, on the other hand, keeps working until the terminationGracePeriodSeconds period expires; until then, the counter keeps running.

Figure 5: Sidecar container.

This is because the main container, the nginx application, has a handler for the SIGTERM signal and therefore starts terminating on its own, whereas the sidecar container performs no such signal handling, which is why it keeps working until it is forcefully killed by SIGKILL.
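If you wanted the sidecar to exit promptly as well, its script could install a SIGTERM handler. A sketch of the sidecar container spec with a trap added (image, names, and paths are assumptions, not the lab's exact manifest):

```yaml
- name: sidecar-container
  image: busybox
  command: ["/bin/sh", "-c"]
  args:
    - |
      # Handle SIGTERM so the container exits before the grace period expires.
      trap 'echo "sidecar: SIGTERM received, exiting"; exit 0' TERM
      i=1
      while true; do
        echo "$i" >> /usr/share/nginx/html/index.html
        i=$((i + 1))
        sleep 1
      done
```

With this change, the kubelet's SIGTERM would stop the counter immediately instead of waiting for SIGKILL.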


In this hands-on lab, we saw how pod termination works, how the grace period affects the pod deletion time, and the difference between the SIGTERM and SIGKILL signals.


About the Authors

Bhargav Bhikkaji


Founder, Tailwinds.ai

Bhargav is the founder of Tailwinds.ai, which provides managed SRE for your organization. Bhargav has 20+ years of experience in the industry and holds 17 patents in the fields of computer architecture, networking, and security. He has worked in many roles, from systems engineer to architect, and was part of the Dell CTO organization of the Networking division.

Ayushman Mishra


Intern at CloudYuga

A final-year undergraduate student in Computer Science, working as a technical writer intern at CloudYuga. Previously, he worked with the DoK community on developing a database-related project.