Introduction to Terraform with Kubernetes Provider


2 November 2022
kubernetes
IaC
kind
Terraform

Exploring Terraform and deploying an app with its Kubernetes provider.

Managing infrastructure has always been an important part of the software development life cycle. As software systems have grown more complex, creating and maintaining infrastructure manually has become a tedious and error-prone task. This is where Infrastructure as Code (IaC) comes into play.

What is Infrastructure as Code?

Infrastructure as Code (IaC) is the IT practice of managing and provisioning infrastructure through code instead of manual processes.

In simple words, IaC automates the repetitive tasks of configuring and managing infrastructure (servers, VMs, databases, etc.) by writing scripts or configuration files. Codifying the process not only reduces manual work but also reduces the chance of errors and ensures that we provision the same environment every time.

Some of the most popular IaC tools are Chef, Puppet, Ansible, Pulumi, and Terraform. In this lab we will focus on Terraform and how to utilise it as an IaC tool for automating Kubernetes infrastructure.

About Terraform

Terraform is an open-source IaC provisioning tool developed by HashiCorp. It is used to automate tasks such as spinning up VMs, setting up services on cloud platforms like Azure, provisioning local or cloud-based databases, and more.

Terraform helps us build and manage these resources in parallel across various providers. It not only manages each resource individually but also connects them to the underlying platform APIs through provider plugins, automating the whole process of infrastructure management with configuration files written in HashiCorp Configuration Language (HCL) instead of a graphical user interface (GUI).

It should be noted that, unlike some other IaC tools (such as Ansible or Chef), Terraform configuration files are always written in a declarative format: we do not spell out every step of how the automation should be performed, we simply describe what the final result should be. Terraform then compares that desired state with the current state (recorded in the terraform.tfstate file) and works out the steps needed to get there.

How Terraform works

Terraform automates the whole process of infrastructure provisioning with the help of configuration files (written with the .tf file extension).

Figure 1: Terraform Infrastructure

It performs this operation with the help of 2 main components:

1. Terraform Core:

The Terraform core takes its input from two sources:

  • The first input source is the Terraform state, which stores up-to-date information about the infrastructure in a state file (usually terraform.tfstate). It keeps track of the resources created by Terraform and maps them to real-world resources.
  • The second input source is the user-defined configuration file, which tells Terraform what resources need to be created (for example a main.tf file written in HashiCorp Configuration Language, HCL). The configuration file is written according to the provider used.

For example, the following configuration creates a Kubernetes Pod named terraform-example that runs the nginx image.
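The sketch below follows the Pod example from the Kubernetes provider documentation; the exact image tag is illustrative:

resource "kubernetes_pod" "example" {
  metadata {
    # Name of the Pod as it will appear in the cluster
    name = "terraform-example"
  }

  spec {
    container {
      name  = "nginx"
      # Illustrative image tag; any nginx tag works the same way
      image = "nginx:1.21.6"
    }
  }
}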

After reading the state and configuration files, Terraform compares the current state with the desired configuration and creates a plan. Internally it builds a graph of resources to work out how they relate to each other and to existing resources, and which resources need to be created, updated, or destroyed to reach the desired state.

2. Terraform Provider:

The second component of the architecture is the provider plugin for a specific technology. Providers enable Terraform to work with virtually any platform or service that exposes an accessible API.

Providers can be found on the Terraform Registry, where they are maintained by HashiCorp, official Terraform partners, or community members.

Some of the most popular provider plugins are Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP), which are developed and maintained by the HashiCorp team together with the respective cloud providers.

For example, the following Kubernetes provider block connects an existing Kubernetes cluster with Terraform, allowing us to provision Pods and Deployments in the same way we would with the kubectl command.
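A minimal sketch, assuming the cluster credentials live in the default kubeconfig location:

provider "kubernetes" {
  # Path to the kubeconfig file used to authenticate against the cluster
  config_path = "~/.kube/config"
}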

The providers are responsible for the actual interaction with the platform and for creating the infrastructure.

Terraform Workflow

The core Terraform workflow consists of the following stages:

Figure 2: Terraform Workflow

1. Write stage: In the first stage of the workflow, you create the configuration files that define or modify the underlying resources, which may span multiple cloud providers and services. The configuration can be written in the following ways:

  • HCL, the default language used to define resources in Terraform.
  • The Cloud Development Kit for Terraform (CDKTF), which allows users to define resources in familiar programming languages such as Python, Go, and TypeScript.

2. Init stage: terraform init initializes the working directory that contains the configuration files.
It performs the following tasks:

  • Backend initialization: configures the backend where Terraform stores its state (the local directory by default, or a remote backend such as an object store or Terraform Cloud).
  • Child module installation: downloads any child modules referenced by the main configuration.
  • Plugin installation: downloads the provider plugins declared in the configuration from the Terraform Registry.

3. Plan stage: terraform plan creates an execution plan, built from the resource graph, describing the infrastructure Terraform will create, update, or destroy based on the existing infrastructure and your configuration. The state file is compared with the user-created configuration, and the changes to be applied are shown to the user.

4. Apply stage: On approval from the user, terraform apply performs the proposed operations in the correct order and creates any additional resources required. This is where the actual changes to the infrastructure happen. Even if we have not defined explicit dependencies in the configuration, Terraform automatically identifies the resource dependencies and executes the changes in the right order.

5. Destroy stage: Once the infrastructure is no longer needed, the terraform destroy command deletes all the resources managed by the current configuration and state.

Terraform code format

As seen in the configuration files, Terraform code is written in HashiCorp Configuration Language (HCL), which defines resources using blocks and arguments:

Block - Defined within curly braces, a block contains a set of arguments in key-value format. Blocks carry the information about infrastructure platforms and resources.

Key-value pair - The arguments or resource parameters are written in a <key> = <value> format, where the key is the argument name for the given resource type and the value represents the desired state of that attribute. The available arguments for a particular provider are usually found in the provider's documentation.

For example, the arguments for managing Kubernetes infrastructure are written according to the Kubernetes provider documentation.
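As a small, illustrative sketch (this namespace resource is only an example, not one of the lab files), a block and its arguments look like this:

resource "kubernetes_namespace" "demo" {   # block: resource type and resource name
  metadata {                               # nested block
    name = "demo"                          # argument in key = value format
  }
}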

Lab Setup

You can start the lab setup by clicking on the Lab Setup button on the right side of the screen. Please note that app-specific URLs are exposed specifically for the hands-on lab.

Our lab environment comes with the base OS (Ubuntu) and developer tools such as Git, Vim, and wget preinstalled.

In this hands-on lab, we will use Terraform with the Kubernetes provider to deploy a sample RSVP application on a kind cluster.

Kubernetes cluster creation

Create a Kubernetes cluster using kind, which will then be used to deploy our application.

kind create cluster

This creates a single-node Kubernetes cluster, which can be verified using kubectl.

kubectl get nodes

Terraform Installation

Install Terraform, which will be used to manage the Kubernetes cluster. First add the HashiCorp GPG key and APT repository:

wget -O- https://apt.releases.hashicorp.com/gpg | gpg --dearmor | sudo tee /usr/share/keyrings/hashicorp-archive-keyring.gpg

echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

Update the package index and install Terraform:

sudo apt update && sudo apt install terraform -y

Provision Kubernetes provider in Terraform

After Terraform is installed, create a provider file that Terraform uses to install the necessary provider plugins. Since we will be deploying our application through Kubernetes, we will use the Kubernetes provider from the Terraform Registry.
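A sketch of such a provider file, here assumed to be called kubernetes.tf; the version constraint is illustrative:

terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.0"
    }
  }
}

provider "kubernetes" {
  # kind writes the cluster credentials to the default kubeconfig location
  config_path = "~/.kube/config"
}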

Here you can observe that the Kubernetes provider is given the path to the ~/.kube/config file through the config_path argument, through which Terraform authenticates against and connects to the current Kubernetes cluster.

Create the RSVP configuration files

We now create the HCL configuration files for the Deployments and Services of our RSVP application.

Create frontend.tf to manage the front-end Deployment and its Service.
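A condensed sketch of what frontend.tf can contain; the resource types, resource names, and the fixed node_port come from the lab, while the container image, labels, and ports are assumptions based on the CloudYuga RSVP demo application:

resource "kubernetes_deployment" "rsvp" {
  metadata {
    name = "rsvp"
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "rsvp"
      }
    }

    template {
      metadata {
        labels = {
          app = "rsvp"
        }
      }

      spec {
        container {
          name  = "rsvp-app"
          image = "teamcloudyuga/rsvpapp"   # assumed front-end image

          env {
            name  = "MONGODB_HOST"
            value = "mongodb"               # must match the back-end Service name
          }

          port {
            container_port = 5000
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "rsvp" {
  metadata {
    name = "rsvp"
  }

  spec {
    selector = {
      app = "rsvp"
    }

    type = "NodePort"

    port {
      port        = 80
      target_port = 5000
      node_port   = 30000   # fixed port used later to reach the app
    }
  }
}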

Here we are creating two resources, one of type kubernetes_deployment and one of type kubernetes_service, both with the same resource name rsvp.

Create backend.tf to manage the back-end (database) Deployment and its Service.
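A condensed sketch of backend.tf along the same lines; the MongoDB image, labels, and the Kubernetes Service name (mongodb, matched by the MONGODB_HOST value above) are assumptions:

resource "kubernetes_deployment" "rsvp-db" {
  metadata {
    name = "rsvp-db"
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        appdb = "rsvpdb"
      }
    }

    template {
      metadata {
        labels = {
          appdb = "rsvpdb"
        }
      }

      spec {
        container {
          name  = "rsvp-db"
          image = "mongo:3.3"   # assumed database image

          port {
            container_port = 27017
          }
        }
      }
    }
  }
}

resource "kubernetes_service" "rsvpdb" {
  metadata {
    name = "mongodb"   # Service name the front end resolves via MONGODB_HOST
  }

  spec {
    selector = {
      appdb = "rsvpdb"
    }

    port {
      port = 27017
    }
  }
}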

Here also we are creating two resources of type kubernetes_deployment and kubernetes_service, this time with different resource names, rsvp-db and rsvpdb respectively.

After creating the Terraform configuration files for the Kubernetes provider and the RSVP application, we initialise Terraform.

terraform init

This initialises the working directory and downloads the Kubernetes provider plugin.

Figure 3: terraform-init

Create the plan for applying the configuration files.

terraform plan

This creates the plan that Terraform will execute.

Apply the configuration using the plan created above.

terraform apply

After displaying the plan, Terraform asks the user for approval. If the answer is yes, Terraform starts implementing the plan, which in this case creates the Kubernetes Deployments and Services.

Figure 4: terraform-apply

You can observe the Deployments and Services created in Kubernetes through kubectl commands.

To verify the deployments created:

kubectl get deployments

To verify the services created:

kubectl get services

To view the deployed application from the command line, we first have to get the node IP address and the port number (defined in frontend.tf as a fixed port through the node_port key) of the front-end Service.

kubectl get node -o wide

Select the node IP from the output (in this case 172.18.0.2).

Figure 5: Node IP

Use the curl command to load the application.

curl http://<node_ip>:30000/

Here port 30000 is the fixed port defined for the NodePort Service in the frontend.tf configuration file using the key-value pair node_port = 30000.

Conclusion

In this hands-on lab we learned how to use Terraform to automate and manage our infrastructure as code, and deployed a sample application on Kubernetes with the help of the Terraform Kubernetes provider.


About the Authors

Oshi Gupta

DevOps Engineer & Technical Writer, CloudYuga

Oshi Gupta is currently working as a DevOps Engineer and Technical Writer at CloudYuga, where she works on Kubernetes and other cloud-native technologies.

Ayushman Mishra

Intern at CloudYuga

A final-year Computer Science undergraduate working as a technical writer intern at CloudYuga. Previously, he worked with the DoK community on developing a database-related project.