Kubernetes native network security policies by example.
The Kubernetes network model is a “flat” network design: all pods on one node can communicate freely with all pods on all other nodes, without being address translated or blocked.
The networking is implemented by a CNI (Container Network Interface) plugin. Examples of popular CNIs are Cilium, Calico, kubenet, AWS CNI, Azure CNI, and many others. Pod-to-pod networking can be achieved by implementing encapsulation and/or routing, using standard Linux networking or, more recently, eBPF.
In order to implement micro-segmentation or zero-trust networking, we need some kind of isolation mechanism. This can be achieved with native Kubernetes network policies (only supported by some CNIs) or with more advanced, project-specific network policies (e.g. Cilium or Calico).
Network policies basically define:
- which pods the network policies apply to (the pod selector)
- the policy type (Ingress and/or Egress)
- to/from access rules
We’ll dive deeper into these as we progress through the labs.
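As a minimal sketch (not one of the lab policies, and with placeholder names and labels), this is where each of those elements lives in a NetworkPolicy manifest:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy          # placeholder name
spec:
  podSelector:                  # on what pods we apply the policy
    matchLabels:
      app: example
  policyTypes:                  # Ingress, Egress, or both
  - Ingress
  ingress:                      # to/from access rules
  - from:
    - podSelector:
        matchLabels:
          role: client
    ports:
    - protocol: TCP
      port: 80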
Creating a simple lab environment
- Let’s create a simple lab environment in which we’ll deploy some nginx pods as web servers and a Kubernetes Service in a dedicated namespace.
kubectl create ns prod-nginx
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: prod-nginx
  labels:
    app: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        env: prod
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
EOF
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-clusterip
  namespace: prod-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
EOF
- Let’s verify that everything is set up correctly. Take a close look at the labels, because they are essential to understanding network policies.
kubectl get pod -n prod-nginx -o wide --show-labels
- Check if the service is configured correctly.
kubectl get svc -n prod-nginx -o wide --show-labels
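You can also confirm that the Service selector actually matches the nginx pods by listing its endpoints; the three pod IPs should show up:
kubectl get endpoints my-nginx-clusterip -n prod-nginx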
Connectivity testing
- To avoid typing the IP address of a pod over and over again for testing, let’s pick a random nginx pod’s IP and store it in an environment variable.
POD=$(kubectl get pods -n prod-nginx -l app=nginx -o jsonpath='{range .items[0]}{@.status.podIP}{"\n"}{end}')
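A quick sanity check that the variable actually holds an IP address:
echo $POD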
- Create a pod with the name debug in the prod-nginx namespace, passing the above pod’s IP as an environment variable.
kubectl run -it --rm -n prod-nginx --image xxradar/hackon \
  --env="POD=$POD" debug
- Look up the service my-nginx-clusterip in DNS.
dig +search my-nginx-clusterip
- Access the service by name, the pod by IP, and an external website.
curl http://my-nginx-clusterip
curl http://$POD
curl http://cloudyuga.guru
exit
Use these previous steps to test connectivity in the next parts of the lab.
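If you prefer a single non-interactive check, a sketch like this runs all three tests in one throwaway pod (assuming the same xxradar/hackon image, which ships sh, dig, and curl):
kubectl run conncheck -n prod-nginx --rm -i --restart=Never \
  --env="POD=$POD" --image xxradar/hackon -- sh -c \
  'dig +search +short my-nginx-clusterip;
   curl -s --max-time 5 -o /dev/null -w "service: %{http_code}\n" http://my-nginx-clusterip;
   curl -s --max-time 5 -o /dev/null -w "pod IP:  %{http_code}\n" http://$POD'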
Note: Network policies take effect immediately and affect running pods, so there is no need to restart or recreate pods for a policy to apply.
Default Deny network policy
This first policy will block all traffic entering or leaving all the pods in the namespace.
Important: By default, pods have no network policies applied and can connect to every other pod. Once a pod is ‘selected’ by .spec.podSelector, only traffic matching the rules in the Ingress and Egress sections of the manifest is allowed.
Note: This network policy is applied in the prod-nginx namespace, so it will only affect pods in this namespace.
Note: .spec.podSelector.matchLabels: {} is a special kind of selector: it selects every pod in the namespace the network policy is applied in.
Note: No rules are defined in the Ingress or Egress sections, which results in implicitly denying all traffic (technically, ‘allowing’ an empty rule set).
kubectl apply -n prod-nginx -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  - Egress
EOF
You can check whether the network policy is applied:
kubectl get netpol -n prod-nginx
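For a more detailed, human-readable view of the rules, you can also describe the policy:
kubectl describe netpol default-deny -n prod-nginx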
Run the connectivity procedure as described above.
Results:
- DNS resolution: failed
- Access to service name: failed
- Access to the pod IP: failed
Egress DNS network policy
This is the most overlooked caveat when applying a default-deny strategy: the policy also blocks DNS traffic from leaving the pods, and name resolution is crucial for almost all network connectivity.
To fix DNS resolution, we must allow our pods to connect to the kube-system namespace, and more specifically to the pods with the label k8s-app=kube-dns.
In some Kubernetes distributions the labeling might differ slightly, but you can add the necessary labels if required.
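To see which labels your distribution uses, inspect the DNS pods and the kube-system namespace:
kubectl get pods -n kube-system -l k8s-app=kube-dns --show-labels
kubectl get ns kube-system --show-labels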
kubectl label ns kube-system kubernetes.io/metadata.name=kube-system --overwrite
kubectl apply -n prod-nginx -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
EOF
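Optionally, as a hardening step beyond what this lab needs, the same rule can be narrowed to the DNS ports only. A sketch (applying it replaces the allow-dns policy above):
kubectl apply -n prod-nginx -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          k8s-app: kube-dns
    ports:                      # only DNS traffic is allowed out
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
EOF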
kubectl get netpol -n prod-nginx
Run the connectivity procedure as described above.
Results:
- DNS resolution: working
- Access to service name: failed
- Access to the pod IP: failed
Ingress network policy for nginx
In the next network policy, we’ll take care of traffic entering the nginx pods. The ingress rule allows all pods (within this namespace) to connect to TCP/80.
Note: This policy defines no Egress rules, but the previously created Egress policies still apply!
kubectl apply -n prod-nginx -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-http
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - podSelector:
        matchLabels: {}
    ports:
    - protocol: TCP
      port: 80
EOF
kubectl get netpol -n prod-nginx
Run the connectivity procedure as described above.
Results:
- DNS resolution: working
- Access to service name: failed
- Access to pod IP: failed
Egress network policy for client
Although traffic towards the nginx pods is now allowed, our debug pod is still ‘locked down’ by the default-deny policy and limited to DNS egress. Adding an Egress policy that lets pods with the label run=debug (a label kubectl run set automatically on our debug pod) reach all pods in the namespace solves this.
kubectl apply -n prod-nginx -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-debug-egress
spec:
  podSelector:
    matchLabels:
      run: debug
  egress:
  - to:
    - podSelector:
        matchLabels: {}
EOF
kubectl get netpol -n prod-nginx
Run the connectivity procedure as described above.
Results:
- DNS resolution: working
- Access to service name: working
- Access to pod IP: working
Please note that traffic to other destinations is still not allowed from our debug pod.
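You can verify this from inside the debug pod; with the current policies, a request to an external site should resolve but then time out:
curl --max-time 5 http://cloudyuga.guru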
Access from a different namespace
kubectl apply -n prod-nginx -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-http-other-ns
spec:
  podSelector:
    matchLabels:
      app: nginx
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: debug
      podSelector:
        matchLabels:
          mode: debug
    ports:
    - protocol: TCP
      port: 80
EOF
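Note the YAML structure: namespaceSelector and podSelector here belong to the same from entry, so both conditions must match (a pod labeled mode=debug in a namespace labeled project=debug). Written as two separate list items instead, the selectors would be OR’ed, which is a much broader rule, as this sketch shows:
  ingress:
  - from:
    - namespaceSelector:        # any pod in a namespace labeled project=debug ...
        matchLabels:
          project: debug
    - podSelector:              # ... OR any pod labeled mode=debug in prod-nginx
        matchLabels:
          mode: debug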
To test the network policy, create a new namespace myhackns, label it project=debug, and create a debug pod with the label mode=debug.
kubectl create ns myhackns
kubectl label ns myhackns project=debug
kubectl run -it --rm -n myhackns --image xxradar/hackon \
  -l mode=debug othernspod
curl my-nginx-clusterip.prod-nginx
curl http://cloudyuga.guru
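Expected results:
- Access to the service name: working (allowed by allow-http-other-ns)
- Access to external sites: working (no egress policies apply to pods in myhackns)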
Conclusion
Network policies are an effective way to implement segmentation and isolation between pods and namespaces, but they can be complicated to troubleshoot. Projects like Cilium provide debugging and logging functionality in their open-source versions.