In Part 3 of this Argo Rollouts series, we explored the canary deployment strategy with analysis and deployed a sample app using the Argo Rollouts controller in a Kubernetes cluster. In this hands-on lab, we will explore the canary deployment strategy with traffic management, deploying a sample app that routes traffic through the Nginx ingress controller using Argo Rollouts.
Traffic Management in Kubernetes
Traffic management becomes vital when discussing progressive delivery and a strategy like canary deployment. The key here is to ensure that we can correctly shift traffic to respective application versions. We want the right set of users to receive the version that was intended for them. Further, the data plane should be intelligent enough to route incoming traffic to the intended version.
Traffic routing can be achieved using multiple methods, some of the common ones being:
- Raw Routing: you specify a percentage, and that fraction of users is routed to the new version while the rest are sent to the stable version.
- Header-based Routing: route traffic based on specific headers sent as part of a request.
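To make these two methods concrete: with the Nginx ingress controller (even outside of Argo Rollouts), both styles are expressed as annotations on a second, "canary" Ingress that shadows the stable one. The sketch below is illustrative; the hostname, Ingress name, and Service name are placeholders, not values from the demo chart.

```yaml
# Hypothetical canary Ingress; names and host are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-canary
  annotations:
    # Marks this Ingress as the canary twin of the stable Ingress
    nginx.ingress.kubernetes.io/canary: "true"
    # Raw routing: send roughly 20% of traffic to the canary backend
    nginx.ingress.kubernetes.io/canary-weight: "20"
    # Header-based routing: requests with the header "canary: always"
    # go to the canary regardless of weight (header takes precedence)
    nginx.ingress.kubernetes.io/canary-by-header: "canary"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-canary-svc
                port:
                  number: 80
```

Managing these annotations by hand for every rollout step is tedious, which is exactly the gap Argo Rollouts fills.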
Kubernetes itself doesn't offer any traffic management capabilities. At most, you can create a Service object, which provides some of these features but not all. That's where service meshes come in: using CRDs, they add traffic management capabilities to Kubernetes.
Argo Rollouts comes with multiple options for traffic management, letting you choose from a suite of service meshes and ingress controllers. It modifies the mesh or ingress configuration to match the requirements of a rollout. In this hands-on lab, we'll use Nginx Ingress to route traffic for our canary deployment.
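In practice, this means the traffic split is declared in the Rollout spec itself: you point Argo Rollouts at your stable Ingress, and it creates and updates the canary Ingress and its annotations for you as the rollout steps through its weights. A minimal sketch, with placeholder names rather than the demo chart's actual values:

```yaml
# Illustrative Rollout; all names are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: nginx
          image: nginx:1.20.0
  strategy:
    canary:
      canaryService: demo-canary   # Service selecting the canary pods
      stableService: demo-stable   # Service selecting the stable pods
      trafficRouting:
        nginx:
          # Existing Ingress for the stable Service; Argo Rollouts
          # clones it with canary annotations during an update
          stableIngress: demo-ingress
      steps:
        - setWeight: 20   # shift ~20% of traffic to the canary
        - pause: {}       # wait here for manual promotion
```

The `steps` list can hold as many `setWeight`/`pause` pairs as you need; an empty `pause: {}` waits indefinitely until you promote.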
You can start the lab setup by clicking on the Lab Setup button on the right side of the screen. Please note that there are app-specific URLs exposed specifically for the hands-on lab purpose.
Our lab has been set up with all necessary tools like base OS (Ubuntu), developer tools like Git, Vim, wget, and others.
Lab of Argo Rollout with Canary Deployment And Traffic Management using Nginx Controller
Once we trigger the lab through the LAB SETUP button, a terminal and an IDE come up with a Kubernetes cluster already running. This can be verified by running the `kubectl get nodes` command.
- Clone the Argo Rollouts example GitHub repo (or, preferably, fork it first):
git clone https://github.com/NiniiGit/argo-rollouts-example.git
- Install Helm 3, as it will be needed later in the demo:
```shell
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version
```
Installation of Argo Rollouts controller
- Create a namespace for the Argo Rollouts controller and install it with the commands below (more about the installation can be found here):
```shell
kubectl create namespace argo-rollouts
kubectl apply -n argo-rollouts -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml
```
You will see that the controller and other components have been deployed. Wait for the pods to reach the Running state, which you can check with:
kubectl get all -n argo-rollouts
- Install the Argo Rollouts kubectl plugin with `curl` for easy interaction with the Rollout controller and resources:
```shell
curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64
chmod +x ./kubectl-argo-rollouts-linux-amd64
sudo mv ./kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts
kubectl argo rollouts version
```
- Argo Rollouts also comes with its own GUI, which you can access with the command below:
kubectl argo rollouts dashboard
Then click the `argo-rollout-app` URL on the right side, under the LAB-URLs section. You will be presented with the UI shown below (currently it won't show anything, since we are yet to deploy any Argo Rollouts-based app).
Now, let's go ahead and deploy the sample app with the canary deployment strategy and traffic management via the Nginx controller.
Canary Deployment And Traffic Management With Argo Rollouts
You must be wondering how to test internally whether the canary (or blue-green) version of a service is really working and handling traffic, i.e., client requests, before releasing it as a beta feature or, subsequently, to a larger audience.
For this, we will use the Nginx controller and pass additional header values with our client requests, ensuring that those requests always route to the canary version of our sample Nginx service.
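In Rollout terms, pinning requests that carry a particular header to the canary is done with `additionalIngressAnnotations` under the nginx traffic-routing block; Argo Rollouts copies these (with the `nginx.ingress.kubernetes.io/` prefix added) onto the canary Ingress it generates. Below is a sketch of what such a stanza could look like for the `canary: yep` header used later in this lab; the service and Ingress names are placeholders and the demo chart's actual values may differ.

```yaml
# Illustrative fragment of a Rollout's canary strategy.
strategy:
  canary:
    canaryService: demo-canary
    stableService: demo-stable
    trafficRouting:
      nginx:
        stableIngress: demo-ingress
        # Keys here become nginx.ingress.kubernetes.io/* annotations
        # on the generated canary Ingress: any request carrying the
        # header "canary: yep" is routed to the canary pods.
        additionalIngressAnnotations:
          canary-by-header: canary
          canary-by-header-value: yep
```

Requests without the header still follow the normal weight-based split, so regular users are unaffected while testers can opt in.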
We have created a sample Nginx service, wrapped up as a Helm chart, available in the `argo-traffic-management-demo` folder of the cloned repo.
- We will be running this demo in the `ic-demo` namespace, so let's create it:
kubectl create ns ic-demo
- Before deploying the Helm chart, let's update the ingress controller values.yaml. Access the OPEN IDE, which will open a VS Code-like editor in another tab, then open the values.yaml file and update the entry below:
- Deploy the Helm chart, which will create all the necessary Kubernetes objects in the `ic-demo` namespace:
```shell
cd argo-rollouts-example
helm install demo-nginx -n ic-demo ./argo-traffic-management-demo
```
- Verify the deployment status by running the following.
helm list -n ic-demo
- Now, you can check the status of your rollouts using the following command.
kubectl argo rollouts get rollout demo-nginx -n ic-demo
- If you look at the above command output, you can see the rollout object has been created with the given replica count (1). Since this is the first revision, there is no canary ReplicaSet yet. Now you can update `values.yaml` and run a `helm upgrade` to update the rollout object.
Access the OPEN IDE, which will open a VS Code-like editor in another tab, then open the `argo-traffic-management-demo/values.yaml` file and update the image tag from the existing `1.20.0` as shown below:
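The edit itself amounts to a one-line change to the image tag in `values.yaml`. The fragment below is illustrative: the new tag value and the exact key structure are assumptions, so match them to the actual keys in the chart.

```yaml
# argo-traffic-management-demo/values.yaml (fragment; key layout and
# new tag value are illustrative, not taken from the demo chart)
image:
  repository: nginx
  tag: "1.21.0"   # was "1.20.0"; any newer tag triggers a new revision
```

Changing the tag is enough for Argo Rollouts to detect a new pod template and start a canary revision on the next `helm upgrade`.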
- Now, let's run a `helm upgrade` to update the rollout object:
```shell
cd argo-rollouts-example
helm upgrade demo-nginx -n ic-demo ./argo-traffic-management-demo
```
- Now watch the rollout object status
kubectl argo rollouts get rollout demo-nginx -n ic-demo
You should see both stable and canary sections in the output: a new ReplicaSet has been created for the canary pods.
Now access the OPEN IDE, which will open a VS Code-like editor in another tab, and open a split terminal.
In one of the terminals, run the command below to tail the live logs of the pod running as the canary service. Replace <canary-pod-name> with your canary pod's name:
kubectl logs -f <canary-pod-name> -n ic-demo
In the other terminal, access the app through the exposed ingress URL, passing the custom header values. Replace <your-host-name> with the hostname of the URL:
curl -H "canary: yep" -IL http://<your-host-name>/helloe/jsbdsdsd.html
You will see that our curl requests (which mimic client requests) always land on the canary version of the service, as shown in the screenshot below:
- After manual verification, promote the rollout object:
kubectl argo rollouts promote demo-nginx -n ic-demo
- Now let's delete this test setup
helm delete demo-nginx -n ic-demo
In this hands-on lab, we saw how to add traffic management capabilities to Argo Rollouts using Nginx Ingress. Achieving canary deployment with fine-grained traffic management via Argo Rollouts is simple and provides much better automated control over rolling out a new version of your application.
That brings us to the end of the four-part series on Progressive Delivery with Argo Rollouts. We explored the basics of deployment strategies, followed by hands-on examples of blue-green and canary rollouts. We also looked into more complex concepts like Analysis with canary deployments and finally ended with traffic management using Nginx Ingress.
You can find all the parts of this Argo Rollouts Series below:
- Part 1: Progressive Delivery With Argo Rollouts: Blue-Green Deployment
- Part 2: Progressive Delivery With Argo Rollouts: Canary Deployment
- Part 3: Progressive Delivery With Argo Rollouts: Canary with Analysis
- Part 4: Progressive Delivery With Argo Rollouts: Canary & Traffic Management