Playing with k0s

28 April 2022

In this post, we'll get an introduction to k0s by setting up a single-node Kubernetes cluster.

k0s is a Kubernetes distribution released at the end of 2020. It is shipped as a single binary without any OS dependencies and is thus described as a zero-friction/zero-deps/zero-cost Kubernetes distribution.

The latest k0s release:

  • ships a certified and CIS-benchmarked Kubernetes (1.20 / 1.21 / 1.22 versions available)
  • uses containerd as the default container runtime
  • supports Intel (x86-64) and ARM (ARM64) architectures
  • uses in-cluster etcd or SQLite, or an external PostgreSQL / MySQL database
  • uses the Kube-Router network plugin by default
  • provides built-in security features (RBAC, PSP, Network Policies, ...)
  • provides built-in cluster features (CoreDNS, Metrics Server, HPA, ...)
  • ... and many other things...

Pretty neat, right? We’ll now see how to create a single-node k0s cluster.

Lab Setup

You can start the lab setup by clicking on the Lab Setup button on the right side of the screen. Please note that an app-specific URL is exposed specifically for this hands-on lab.

Our lab has been set up with all the necessary tools: a base OS (Ubuntu) and developer tools like Git, Vim, wget, and others.

Lab with k0s for single node cluster

In this hands-on lab, we will see how to install k0s and use it to create a single-node Kubernetes cluster.

Installation of k0s

First, we need to get the latest k0s release. This can be done with this convenient installation script:

curl -sSLf https://get.k0s.sh | sudo sh

This installs k0s in /usr/local/bin/k0s. We can list all the available commands by running the binary without any argument:

k0s

Let's check the current version:

k0s version

Default k0s configuration

When running a k0s cluster, the default configuration options are used, but it is also possible to modify them to better match specific needs.

The default configuration options can be retrieved with:

k0s config create

This configuration file allows us to define:

  • The startup options of the API server, the controller manager, and the scheduler
  • The storage that could be used to save the cluster information (etcd)
  • The network plugin and its configuration
  • The version of the container images of the control-plane components
  • Some additional helm charts that should be deployed when the cluster is started
  • ...
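For instance, a trimmed-down configuration keeping only a couple of those properties could look like the sketch below (field names follow the k0s v1beta1 ClusterConfig format; the file generated by k0s config create contains many more options):

```yaml
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  network:
    # Kube-Router is the default network plugin
    provider: kuberouter
  storage:
    # in-cluster etcd is the default storage backend
    type: etcd
```

Any field left out simply keeps its default value, so a custom configuration file only needs to list the properties you want to change.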

🔥 To override some of those properties, save the output of the previous command to a file, modify it to match your needs, and then use it when running k0s.

Create a Controller Node

Once the k0s binary is installed, we can set up a single-node k0s cluster:

sudo k0s install controller --single

A systemd unit file has been created but the controller is not started yet:

sudo systemctl status k0scontroller

🔥 If you need configuration options different from the default ones, you can provide a configuration file through the -c flag of the sudo k0s install controller command.

Start the cluster

First, start the cluster:

sudo k0s start

Next, verify it has been started properly:

sudo k0s status

It takes a few tens of seconds for the cluster to be up and running. In the following step you will configure your local kubectl binary to communicate with the cluster's API Server.

You can also check in systemd that k0scontroller is now started:

sudo systemctl status k0scontroller

Accessing the cluster

As k0s comes with its own kubectl subcommand, you can communicate with the API Server directly from the current (control-plane / master) node. Wait a few seconds until the node reaches the Ready state.

k0s kubectl get node

Note: in a real k0s environment, we would not ssh into a control-plane node to run kubectl commands but would use an admin machine instead. To do so, we could retrieve the kubeconfig file generated during the cluster creation (located in /var/lib/k0s/pki/admin.conf), copy it onto our machine, and change the server property so it points to the IP address of the VM instead of the default localhost.
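A sketch of that workflow from the admin machine, assuming the file has been copied to the current directory as admin.conf and that 203.0.113.10 stands in for the VM's IP address (both are placeholders):

```shell
# Point the copied kubeconfig at the VM instead of localhost
# (203.0.113.10 is a placeholder -- use your VM's real IP address)
sed -i 's|https://localhost:6443|https://203.0.113.10:6443|' admin.conf

# Use it with a local kubectl
kubectl --kubeconfig ./admin.conf get node
```

The exact server line to replace may differ depending on your k0s version, so check the server: field in the copied file first.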

Testing the whole thing

Let's now run a Deployment based on the ghost image (Ghost is an open-source blogging platform) and expose it through a NodePort Service.

Note: both Deployment and Service are defined in the ghost.yaml file.
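The lab provides this file; as a hedged sketch, such a manifest could look like the following (the resource names, the app=ghost label, and NodePort 30000 come from the commands in this lab; Ghost's default port 2368 and the remaining details are assumptions, and the lab's actual file may differ):

```yaml
# Hypothetical sketch of ghost.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ghost
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ghost
  template:
    metadata:
      labels:
        app: ghost
    spec:
      containers:
      - name: ghost
        image: ghost
        ports:
        - containerPort: 2368   # Ghost's default listening port
---
apiVersion: v1
kind: Service
metadata:
  name: ghost
spec:
  type: NodePort
  selector:
    app: ghost
  ports:
  - port: 2368
    targetPort: 2368
    nodePort: 30000   # fixed NodePort used by this lab
```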

k0s kubectl apply -f ./ghost.yaml

Make sure the resources have been created correctly:

k0s kubectl get deploy,pods,svc

You can wait for the ghost pod to be ready with the following command:

k0s kubectl wait po -l app=ghost --for=condition=ready

Using the VM's IP address and the NodePort 30000, we can access the ghost web interface.
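As a quick sanity check from the node itself (replace localhost with the VM's IP address when testing from another machine):

```shell
# The Service should answer on the NodePort once the pod is ready
curl -sI http://localhost:30000 | head -n 1
```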

Now access the app from the app-port-30000 URL under the Lab-Urls section; you should see the page shown in the image below.

Figure 1: Ghost App

Cleanup

Remove the ghost Deployment and Service:

k0s kubectl delete deploy/ghost svc/ghost

To remove k0s from the system, you first need to stop it:

sudo k0s stop

and then reset it:

sudo k0s reset

Conclusion

In this hands-on lab, we learned about k0s, how to install it, and how to set up a single-node cluster with it. In future labs, we'll explore k0s in more detail.


About the Author

Luc Juggery

Software Engineer @TRAXxs // Freelance Docker & Kubernetes trainer (CKA / CKAD)

Software engineer with 18+ years of experience in big companies and startups. Co-founder of 2 startups located in Sophia-Antipolis, southern France.