Single master cluster setup using Kubeadm
`kubeadm` is a tool built to provide best-practice "fast paths" for creating Kubernetes clusters. It uses simple commands such as `kubeadm init` and `kubeadm join` to provide a better end-user experience for cluster creation & maintenance.
It enables Kubernetes administrators to quickly and easily bootstrap minimum viable clusters that are fully compliant with Certified Kubernetes guidelines.
As a de facto cluster creation tool, it is used under the hood by projects like minikube, kops, etc.
`kubeadm` is focused on bootstrapping Kubernetes clusters on existing infrastructure and performing an essential set of maintenance tasks. Its scope of work includes:
- Creating a control plane node (master node) with `kubeadm init` & joining other master and worker nodes to it with the `kubeadm join` command.
- Various utilities for performing management tasks on already bootstrapped clusters, such as control plane upgrades and token and certificate renewal (a few representative commands are sketched below).
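As a rough illustration of those maintenance tasks, here is a minimal sketch of the relevant subcommands. Note that the exact layout varies between kubeadm versions; for example, certificate renewal lived under `kubeadm alpha certs` around v1.16:

```bash
# Check which control plane upgrades are available, then apply one
kubeadm upgrade plan
kubeadm upgrade apply <version>

# List existing bootstrap tokens and create a fresh one
kubeadm token list
kubeadm token create

# Renew all control plane certificates (v1.16-era syntax;
# newer releases use "kubeadm certs renew all")
kubeadm alpha certs renew all
```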
What kubeadm is not supposed to do
- Infrastructure provisioning and direct infrastructure manipulation (as done by kops for cloud/on-prem or minikube for local setups)
- Third-party networking integration
- Monitoring/logging etc.
- Specific cloud provider integration (this is handled well by the Cloud Controller Manager, which will be discussed in detail in another blog post)
Creating a single control plane cluster using kubeadm
Now let’s get our hands dirty with the installation & configuration of `kubeadm` to deploy a cluster with a single master and two worker nodes.
Prerequisites
- 3 virtual machines with Ubuntu 18.04 installed and sudo privileges. Name one node `master` & the other two `worker1` & `worker2` respectively.
- 2 GB or more of RAM per machine.
- 2 CPUs or more on all nodes.
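One way to assign those hostnames (assuming a systemd-based Ubuntu 18.04 image, which ships `hostnamectl`) is:

```bash
# Run the matching command on each VM
sudo hostnamectl set-hostname master    # on the master VM
sudo hostnamectl set-hostname worker1   # on the first worker
sudo hostnamectl set-hostname worker2   # on the second worker
```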
End Goal
- Set up a Kubernetes cluster with a single master node & two worker nodes
- Complete the networking setup for the cluster so that Pods can talk to each other
Steps to be followed on all nodes
- Install a container runtime on all three nodes (Docker in the current tutorial)
Ref: https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker
```
# Install Docker CE
## Set up the repository:
### Install packages to allow apt to use a repository over HTTPS
apt-get update && apt-get install apt-transport-https ca-certificates curl software-properties-common

### Add Docker's official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

### Add Docker apt repository.
add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"

## Install Docker CE.
apt-get update && apt-get install docker-ce=18.06.2~ce~3-0~ubuntu

# Setup daemon.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

mkdir -p /etc/systemd/system/docker.service.d

# Restart docker.
systemctl daemon-reload
systemctl restart docker
```
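As a quick sanity check (not part of the original reference steps), you can verify that Docker restarted cleanly and picked up the systemd cgroup driver configured in `daemon.json` above:

```bash
# The service should be active, and "Cgroup Driver" should read "systemd"
systemctl is-active docker
docker info | grep -i "cgroup driver"
```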
Installing kubeadm, kubelet and kubectl
We will install the following packages on all three nodes:
- `kubeadm`: the tool to bootstrap the cluster.
- `kubelet`: the component that runs on all of the machines in your cluster as a systemd service and does things like starting pods and containers.
- `kubectl`: the command-line utility to talk to your cluster.
```
apt-get update && apt-get install -y apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```
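After installation, it is worth confirming that all three nodes picked up matching versions, since kubeadm expects the kubelet and kubectl to stay within the supported version skew of the control plane:

```bash
# All three should report the same minor version on every node
kubeadm version -o short
kubelet --version
kubectl version --client --short
```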
Set up the control plane on the node with hostname `master`

```
kubeadm init --pod-network-cidr=192.168.0.0/16
```
After running `kubeadm init` successfully, we get output confirming that the master node is initialized, along with commands to set up `kubectl` and a command for joining worker nodes to this master node.
Note: We are using Calico as the network plugin, so we pass the `--pod-network-cidr=192.168.0.0/16` option, which matches Calico's default pod network CIDR.
- Set up `kubectl`:

```
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
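Alternatively, if you are working as the root user, you can skip the copy and point `kubectl` directly at the admin kubeconfig (this alternative is also printed in the `kubeadm init` output):

```bash
export KUBECONFIG=/etc/kubernetes/admin.conf
```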
Set up the Kubernetes network by installing the Calico network plugin:

```
kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
```
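The Calico pods can take a minute or two to become ready, and CoreDNS stays in `Pending` until a network plugin is up. You can watch their progress in the `kube-system` namespace:

```bash
# Watch until the calico-node and coredns pods report Running
kubectl get pods -n kube-system -w
```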
Finally, check the status of the `master` node:

```
kubectl get nodes

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   9m    v1.16.0
```
Set up worker nodes
- Copy the `kubeadm join` command from the output of `kubeadm init` and run it on the worker nodes to join them to the cluster.
- To regenerate the `kubeadm join` command, run the command below on the master node & use its output to join the worker nodes.
```
kubeadm token create --print-join-command
```
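The printed join command has the following general shape; the token and CA certificate hash below are placeholders, not real values from this cluster:

```bash
# Run on each worker node, substituting the values printed on the master
kubeadm join 157.245.118.253:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```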
Final status check of cluster
- Run `kubectl get nodes` on the master node again to check the status of the worker nodes
```
kubectl get nodes

NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   21m   v1.16.0
worker1   Ready    <none>   9m    v1.16.0
worker2   Ready    <none>   9m    v1.16.0
```
Find cluster information
```
kubectl cluster-info

Kubernetes master is running at https://157.245.118.253:6443
KubeDNS is running at https://157.245.118.253:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
```
As seen in the `kubectl cluster-info` output above, our single control plane setup is complete & the cluster is up and running: the control plane is running on node `master` at https://157.245.118.253:6443, and KubeDNS is running at the endpoint mentioned above. Next, we can check the health of the control plane components.
- Check the health status of various components of the control plane.
```
kubectl get componentstatus

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
```
All the control plane components are up & in a healthy state.
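As an optional final smoke test (not part of the original steps), you can confirm that scheduling and pod networking work end to end by running a throwaway deployment and checking that its pods land on the workers with IPs from the Calico range:

```bash
# Create a throwaway deployment and scale it to two replicas
kubectl create deployment nginx --image=nginx
kubectl scale deployment nginx --replicas=2

# Pod IPs should come from 192.168.0.0/16 and the pods should be
# scheduled on worker1/worker2
kubectl get pods -o wide

# Clean up
kubectl delete deployment nginx
```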
Conclusion
As seen above, `kubeadm` has made setting up a single control plane a real easy & no-sweat job.
In the next article of the `kubeadm` series, we will discuss the steps for setting up an HA multi-master control plane.