Exploring kind to set up single- and multi-node local clusters on Linux
Kubernetes is one of the most widely used orchestrators for automating the deployment, scaling, and management of containerized apps. However, you wouldn't want to take the risk of testing directly on a production cluster. To address this, a Kubernetes cluster can be built locally using tools such as kind, Docker Desktop, or minikube, and experiments can then be run on that local cluster instead.
kind provides one of the easiest ways to set up and run Kubernetes clusters locally. It runs cluster nodes as Docker containers, and can be used for local development, quality assurance (QA), or CI/CD.
In this hands-on lab, we are going to explore kind: we will set up a Kubernetes cluster, interact with it, and get to know its features.
Features of kind
kind provides many advanced functionalities that give it an edge over other tools. They are as follows:
- Multi-node clusters
- Control-plane High-Availability (HA)
- Mapping ports to the host machine
- Setting Kubernetes version
- Enabling feature gates in your cluster
- Configuring kind to use a proxy
- Exporting cluster logs
Prerequisites
Before using this tool, let's go through the prerequisites for installing kind.

First, update the package index:

```shell
apt-get update
```
Docker
Run the following command to install Docker:

```shell
apt install docker.io -y
```
Kubectl
Although kind does not strictly require kubectl, for this demo we are aiming to set up a fully functioning development environment, so we will install kubectl in order to perform basic Kubernetes operations on our cluster.
```shell
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl
```
Installation
There are many ways to install kind. One of the recommended ways is to install it from the released binaries. Run the following commands in the terminal:

```shell
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.11.1/kind-linux-amd64
chmod +x ./kind
mv ./kind /usr/local/bin/kind
```
Now, confirm the installation by running the following command:
```shell
kind version
```
Creating the Cluster
We will begin by creating the Kubernetes cluster. Run the following command to create a local cluster.
```shell
kind create cluster
```
The above command creates a single-node cluster named "kind": it fetches the node image "kindest/node" and runs a container from it to act as the cluster node.
You can confirm the corresponding docker container by running the command:
```shell
docker ps
```
The default name of the cluster is "kind". If you want a cluster with a specific name, you can use the `--name` flag, for example `kind create cluster --name dev-cluster`.
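If your kind version supports it, the cluster name can also be set in the config file itself via the top-level name field, instead of being passed with `--name` on every invocation. A minimal sketch (the name dev-cluster is just an example):

```yaml
# Sketch: names the cluster in the config file instead of via --name
# (the value "dev-cluster" is illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: dev-cluster
```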
Interacting with the Cluster
Getting Cluster
To list all the clusters, you can use the following command:

```shell
kind get clusters
```
By default, a cluster consists of a single Kubernetes node running as a Docker container named kind-control-plane.
We can also confirm the nodes through kubectl:

```shell
kubectl get nodes
```
Cluster Details
Once a cluster is ready, we can check its details using the cluster-info command of kubectl:

```shell
kubectl cluster-info --context kind-kind
```
Delete Cluster
You can delete your cluster at any time using the command:
```shell
kind delete cluster
```

The above command deletes the default "kind" cluster. Use the `--name` flag to delete a specific cluster, for example `kind delete cluster --name cluster1`.
Configuration
Configuration lets you provide custom specifications, either through flags or by modifying config files as per your requirements.
Initially, let’s begin with the basic config file.
The lab setup has been configured with the desired config files, such as config.yaml. You can check the files by running the following commands:

```shell
ls
cat config.yaml
```
You should see output like:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
```
To create a cluster from this file, run:

```shell
kind create cluster --config=config.yaml
```
Now, we will go through different configuration options.
Nodes
We can define the nodes field in the config file. The default is one node hosting a control plane.
You can pass additional configurations to customize your cluster. For example, the following configuration creates a multi-node cluster with one control plane and two worker nodes.
```yaml
apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
- role: worker
- role: worker
```
Run the following command to create a multi-node cluster:
```shell
kind create cluster --config kind-config.yaml --name kind-multi-node
```
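The feature list earlier mentioned control-plane High-Availability. Using the same config format, a sketch of an HA-style local cluster simply declares multiple control-plane roles (the exact topology here is illustrative):

```yaml
# Sketch: three control-plane nodes and one worker for a local HA experiment
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: control-plane
- role: control-plane
- role: worker
```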
Multiple versions
Using a different node image allows you to change the Kubernetes version of the cluster. To specify another image, use the `--image` flag.
We can have two different versions of clusters running on the same machine. Run the following commands to create two different clusters with name “cluster1” and “cluster2”:
```shell
kind create cluster --name cluster1 --image kindest/node:v1.21.1
kind create cluster --name cluster2 --image kindest/node:v1.22.0
```
Consider checking kind's release notes if you want to use a different image for your cluster, since node images are built for specific kind releases.
Feature Gates
A feature gate is a high-level tool to turn individual features on and off. Technically, feature gates are a set of key=value pairs that describe Kubernetes features. You can turn these features on or off cluster-wide using the `--feature-gates` flag on each Kubernetes component.
In the config file, you can write as:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  "PodSecurity": true
```
To enable a feature gate, add `"Name": true` to the config file; to disable one, add `"Name": false`.
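For illustration, several gates can be combined in one file. The gate names below are examples only; which gates exist, and their defaults, depend on your Kubernetes version:

```yaml
# Sketch: enables one feature gate and disables another (names are examples)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  "PodSecurity": true
  "CSIMigration": false
```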
NOTE: Be careful when using feature gates, as not all of them are tested.
Runtime Config
The runtime config is a set of key=value pairs that enable or disable built-in APIs. It may be used, for example, to disable alpha/beta APIs. The following are some of the valid ways to write it:
- v1=true|false for the core API group
- <group>/<version>=true|false for a specific API group and version
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
runtimeConfig:
  "api/alpha": "false"
```
The above example controls all API versions of the form v[0-9]+alpha[0-9]+.
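To illustrate the <group>/<version> form from the list above, a hedged sketch; the exact group/version strings available depend on your Kubernetes release:

```yaml
# Sketch: disables a single API group/version (batch/v2alpha1 is illustrative)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
runtimeConfig:
  "batch/v2alpha1": "false"
```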
In the following sections, we will explore some advanced Kubernetes concepts, like setting up Ingress and creating a local registry through kind.
Ingress
Ingress is a Kubernetes object that manages external access to the services running in the cluster.
Set up an Ingress Controller
We'll require an ingress controller to act as a bridge between Kubernetes services and external clients. In kind, ingress can be set up by specifying a few options, such as port mappings and node labels, while creating the cluster.
There are two steps involved during the setup.
1. Create/configure a cluster
We start by creating a kind cluster with the extraPortMappings and node-labels directives.

- extraPortMappings allow the local host to make requests to the ingress controller over ports 80/443. Extra port mappings can be used to forward ports from the host to the kind nodes, and are a cross-platform option for getting traffic into your kind cluster.
- node-labels only allow the ingress controller to run on specific node(s) matching the label selector. Node labels are a way to group nodes with similar characteristics, so applications can specify where to run.
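Putting both directives together, a config along the following lines (modeled on the pattern in kind's ingress guide; the ingress-ready label is the one the kind-specific NGINX deployment expects) labels the control-plane node and maps ports 80/443 to the host:

```yaml
# Sketch based on kind's ingress guide: label the node and map ports 80/443
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
```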
```shell
kind create cluster --config=cluster-for-ingress.yaml --name ingress-cluster
```
2. Deploy an Ingress Controller
There are many implementations of ingress controllers. Here we will use Ingress NGINX, which provides a deployment manifest we can apply directly from GitHub:
```shell
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
```
The manifest forwards the hostPorts to the ingress controller, sets taint tolerations, and schedules it onto the custom-labelled node.
Finally, we're all set to deploy our service. For demo purposes, we are using a simple http-echo web server available as a Docker image.
Consider checking the service.yaml file, which contains all the configuration required for our service.
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: foo-app
  labels:
    app: foo
spec:
  containers:
  - name: foo-app
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=foo"
---
kind: Service
apiVersion: v1
metadata:
  name: foo-service
spec:
  selector:
    app: foo
  ports:
  # Default port used by the image
  - port: 5678
---
kind: Pod
apiVersion: v1
metadata:
  name: bar-app
  labels:
    app: bar
spec:
  containers:
  - name: bar-app
    image: hashicorp/http-echo:0.2.3
    args:
    - "-text=bar"
---
kind: Service
apiVersion: v1
metadata:
  name: bar-service
spec:
  selector:
    app: bar
  ports:
  # Default port used by the image
  - port: 5678
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - http:
      paths:
      - pathType: Prefix
        path: "/foo"
        backend:
          service:
            name: foo-service
            port:
              number: 5678
      - pathType: Prefix
        path: "/bar"
        backend:
          service:
            name: bar-service
            port:
              number: 5678
```
Now, deploy our service by running the following command:
```shell
kubectl apply -f service.yaml
```
Note: If you get the error "Error: Internal error occurred: failed calling webhook \"validate.nginx.ingress.kubernetes.io\": an error on the server (\"\") has prevented the request from succeeding", this is most likely because of the feature "Add validation support for networking.k8s.io/v1". For now, just delete the validating webhook configuration:

```shell
kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission
```
Then, deploy the service again.
We can check the status of all the services using kubectl:

```shell
kubectl get services
```
You should be able to see the following output:
```
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
foo-service   ClusterIP   10.96.49.126    <none>        5678/TCP   24s
bar-service   ClusterIP   10.96.249.219   <none>        5678/TCP   24s
kubernetes    ClusterIP   10.96.0.1       <none>        443/TCP    26m
```
Verification
You should now be able to access the applications through the cluster's URL. Append /foo or /bar to the URL to see the respective application's response.
Local Registry
A registry is a storage and content-delivery system that holds named Docker images, available in different tagged versions.
As we have configured a local Kubernetes cluster, now let’s try to set up a local registry as well on it.
Our goal is to keep everything local, including the Docker registry.
Check the shell script in the terminal, which creates a Kubernetes cluster with a local Docker registry enabled:
```shell
cat setup.sh
```
```shell
#!/bin/sh
set -o errexit

# create registry container unless it already exists
reg_name='local-kind-registry'
reg_port='5000'
if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
  docker run \
    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --name "${reg_name}" \
    registry:2
fi

# create a cluster with the local registry enabled in containerd
cat <<EOF | kind create cluster --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
containerdConfigPatches:
- |-
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."localhost:${reg_port}"]
    endpoint = ["http://${reg_name}:5000"]
EOF

# connect the registry to the cluster network if not already connected
if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
  docker network connect "kind" "${reg_name}"
fi

# Document the local registry
# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: local-registry-hosting
  namespace: kube-public
data:
  localRegistryHosting.v1: |
    host: "localhost:${reg_port}"
    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
EOF
```
NOTE: You can find the same shell script in the official kind documentation.
The above shell script creates a Docker registry called local-kind-registry, which runs locally on port 5000. It first inspects the current environment for an existing registry and sets up a new one if none is found. The registry itself is simply a container of the registry image available on Docker Hub; we could even start it with a plain docker run command.

The embedded configuration file runs a single-node Kubernetes cluster and adds some configuration to the containerd interface to allow pulling images from the local Docker registry.

Once the Docker registry is available, the shell script configures kind to use this local registry for pulling container images during deployments. The last step is to connect the kind cluster's network with the local Docker registry's network.
Run the command below to create the cluster with the local registry:

```shell
bash setup.sh
```
Make sure the Docker registry is running:

```shell
docker logs -f local-kind-registry
```
Now, we will try to use this registry:

- Pull (or build) an image from a public registry:

```shell
docker pull gcr.io/google-samples/hello-app:1.0
```

- Tag the image so that it points to your local registry:

```shell
docker tag gcr.io/google-samples/hello-app:1.0 localhost:5000/hello-app:1.0
```

- Push it:

```shell
docker push localhost:5000/hello-app:1.0
```

- And use the image:

```shell
kubectl create deployment hello-server --image=localhost:5000/hello-app:1.0
```
Verify the deployment as:

```shell
kubectl get deployments
```
Conclusion
In this hands-on lab, we learned how to install kind, create a Kubernetes cluster, and interact with that cluster. We also covered many basic commands and a few advanced concepts, like setting up an ingress controller, exposing our services through it, and setting up a local registry.