K3S: Networking and Service Discovery
Understanding CNI (Container Network Interface) in K3s
Kubernetes networking can feel like magic—until it breaks. Then, it’s just dark sorcery. Enter CNI (Container Network Interface), the unsung hero managing network connectivity between Pods.
What is CNI and its role in Kubernetes networking?
CNI is the plugin interface Kubernetes uses to wire up Pod networking: it hands each Pod an IP address and makes sure Pods can reach one another across nodes. Without it, your Pods would be isolated islands, like an office full of developers who refuse to communicate.
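You can see the result of that work in the IP addresses the CNI assigns to every Pod:
kubectl get pods -A -o wide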
Default CNI plugin used in K3s
By default, K3s comes with Flannel, a lightweight overlay network that’s simple and effective. Other options include Calico, Cilium, and WeaveNet, each with their own networking superpowers.
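On a K3s node you can peek at the CNI configuration Flannel generates; the path below assumes a default K3s install:
sudo cat /var/lib/rancher/k3s/agent/etc/cni/net.d/10-flannel.conflist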
Checking the current CNI configuration
kubectl get pods -n kube-system | grep flannel
Configuring different CNI plugins
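Before swapping in another plugin, the bundled Flannel has to be disabled. A minimal sketch using the standard K3s install script; the flags assume a recent K3s release:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--flannel-backend=none --disable-network-policy" sh -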
Want a custom networking solution? With Flannel out of the way, install a different CNI plugin such as Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
Configuring DNS and Service Discovery
If networking is the backbone of Kubernetes, CoreDNS is the brain. It ensures services can find each other without a scavenger hunt.
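A quick way to confirm CoreDNS is up and answering; the label and service name assume the default K3s deployment:
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl get svc -n kube-system kube-dns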
Verifying DNS resolution inside a Pod
kubectl exec -it mypod -- nslookup myservice
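Short names like myservice resolve through the Pod's DNS search path; the fully qualified form works too (this example assumes the service lives in the default namespace):
kubectl exec -it mypod -- nslookup myservice.default.svc.cluster.local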
Exposing services using ClusterIP
The ClusterIP service type gives a set of Pods a stable virtual IP and DNS name for communication inside the cluster; it is not reachable from outside.
Creating a ClusterIP service
apiVersion: v1
kind: Service
metadata:
  name: myapp-clusterip
spec:
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Listing and testing services
kubectl get svc
kubectl exec -it mypod -- curl myapp-clusterip
Managing Ingress Controllers (Traefik, Nginx)
A Service gets you basic connectivity, but an Ingress gives you proper HTTP routing by hostname and path. In K3s, Traefik ships as the default Ingress controller.
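You can confirm Traefik is actually running before relying on it; the namespace and service name assume a default K3s install:
kubectl get pods -n kube-system | grep traefik
kubectl get svc -n kube-system traefik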
Deploying an Ingress resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  rules:
  - host: myapp.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp-clusterip
            port:
              number: 80
Applying and verifying Ingress
kubectl apply -f myapp-ingress.yaml
kubectl get ingress
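With the Ingress applied, you can test routing through Traefik from outside the cluster; the hostname matches the rule above and <node-ip> is a placeholder for one of your nodes:
curl -H "Host: myapp.local" http://<node-ip>/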
Switching from Traefik to Nginx Ingress Controller
On K3s, disable the bundled Traefik first (for example by restarting the server with the --disable=traefik flag) so two controllers don't both claim your Ingress resources, then deploy Nginx Ingress:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
Load Balancing and Networking Best Practices
Understanding NodePort and LoadBalancer services
- NodePort: Opens the same port (in the 30000–32767 range by default) on every node, so the application can be reached externally at <node-ip>:<nodePort>.
- LoadBalancer: Asks the infrastructure for an external load balancer or IP and routes traffic to the service automatically (if your environment supports it).
Creating a NodePort service
apiVersion: v1
kind: Service
metadata:
  name: myapp-nodeport
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30007
Accessing applications via NodePort
curl http://<node-ip>:30007
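If you are not sure which IP to use, kubectl will list the node addresses:
kubectl get nodes -o wide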
Using MetalLB for LoadBalancer services in K3s
K3s ships with a minimal built-in service load balancer (ServiceLB, formerly Klipper-lb) that reuses node IPs; if you want dedicated external addresses on bare metal, MetalLB is the usual choice.
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/main/manifests/metallb.yaml
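Installing MetalLB is only half the job: it still needs to know which addresses it may hand out. A minimal sketch using the IPAddressPool and L2Advertisement resources from recent MetalLB releases; the address range is a placeholder for your own network:
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
  - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - default-pool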
Defining a LoadBalancer service
apiVersion: v1
kind: Service
metadata:
  name: myapp-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: myapp
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Hands-On Exercise
Now, let’s put all this networking knowledge into action:
- Deploy an application and expose it via ClusterIP
- Configure Ingress using Traefik and test HTTP routing
- Implement a NodePort service and access the application externally
- Set up MetalLB for LoadBalancer services in K3s
Master these, and you’ll be well on your way to Kubernetes networking greatness—without breaking everything (hopefully). 🚀