K3s: Serverless & Edge Computing
Deploying Knative Workloads in K3s
Serverless computing: because why should we pay for idle resources? Knative brings serverless capabilities to Kubernetes, letting you run workloads that consume resources only when needed.
What is Knative, and why use it for serverless applications?
Knative is a Kubernetes-based serverless framework that scales workloads to zero when idle. It's a great fit for event-driven applications, APIs, and microservices that don't need to run 24/7.
Installing Knative in K3s
kubectl apply -f https://github.com/knative/serving/releases/latest/download/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/latest/download/serving-core.yaml

Deploying a serverless function with Knative
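Before deploying anything, it's worth confirming the Serving control plane came up cleanly. One way to check, assuming the default knative-serving namespace:

```shell
# Wait for the Knative Serving control-plane pods to become Ready,
# then list them to confirm the install succeeded.
kubectl wait --for=condition=Ready pods --all -n knative-serving --timeout=300s
kubectl get pods -n knative-serving
```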
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-example
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: "Knative on K3s!"

Applying and verifying the deployment
kubectl apply -f knative-service.yaml
kubectl get ksvc

Once deployed, Knative automatically scales your function down to zero when idle and spins it back up on the next request. Magic? Almost.
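To see scale-to-zero in action, you can hit the URL Knative assigns to the service and watch the pods come back. This sketch assumes a working ingress layer (such as Kourier) and the service name from the manifest above:

```shell
# Resolve the URL Knative assigned to the service and send a request.
URL=$(kubectl get ksvc knative-example -o jsonpath='{.status.url}')
curl "$URL"

# After the idle window passes (roughly a minute by default),
# the pod count drops back to zero:
kubectl get pods -l serving.knative.dev/service=knative-example
```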
Running Lightweight AI/ML Models on Edge Devices
AI on the edge? Yes, please. Instead of sending all data to the cloud, we can run AI/ML inference locally using TensorFlow or PyTorch models in K3s.
Deploying TensorFlow or PyTorch models in K3s
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-model
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ai-model
  template:
    metadata:
      labels:
        app: ai-model
    spec:
      containers:
        - name: model-server
          image: tensorflow/serving
          ports:
            - containerPort: 8501

Accessing the AI model inference API
curl -X POST http://<node-ip>:8501/v1/models/model:predict -d '{"inputs": [1,2,3,4]}'

With this setup, you can run AI workloads at the edge, reducing latency and dependence on cloud resources.
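Note that the Deployment above does not create a Service, so the container port isn't reachable from outside the cluster on its own. A minimal sketch for exposing it, assuming a NodePort is acceptable in your edge setup:

```shell
# Expose the model server on a NodePort so external clients can reach it.
kubectl expose deployment ai-model --type=NodePort --port=8501 --target-port=8501

# Note the NodePort Kubernetes assigned; use it in the curl call above.
kubectl get svc ai-model
```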
Optimizing K3s for IoT and Edge Applications
K3s is already lightweight, but running it on a Raspberry Pi or an IoT device requires even more fine-tuning.
Using K3s with Raspberry Pi or small devices
- Run K3s with reduced memory consumption:
curl -sfL https://get.k3s.io | sh -s - --disable traefik --disable servicelb

- Use ARM-based container images for optimal performance.
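Before pulling images, it helps to confirm what architecture each node actually reports, so you pick matching (e.g. arm64) builds:

```shell
# Print the CPU architecture of each node as seen by Kubernetes
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.architecture}{"\n"}{end}'
```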
Configuring lightweight storage and networking for edge devices
- Use the default SQLite backend instead of etcd for cluster-state storage (K3s uses SQLite automatically on a single-server setup).
- Enable mTLS for secure communication between IoT devices.
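On a default single-server install you can verify both points from the node itself; these paths are K3s defaults and may differ if you changed the data directory:

```shell
# The SQLite state database K3s uses in place of etcd
sudo ls -lh /var/lib/rancher/k3s/server/db/

# The certificates K3s generates for TLS between cluster components
sudo ls /var/lib/rancher/k3s/server/tls/
```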
Enabling automatic updates and rollbacks for IoT deployments
- Use K3s with GitOps (ArgoCD or FluxCD) for remote updates.
- Implement rollback policies to prevent breaking edge devices.
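As one concrete sketch (repository owner, name, and path are placeholders), Flux can be bootstrapped against a Git repository holding the edge manifests; from then on, an update is a git push and a bad change can be undone with a git revert:

```shell
# Bootstrap Flux against a Git repo of edge manifests (names are hypothetical)
flux bootstrap github \
  --owner=my-org \
  --repository=edge-fleet \
  --branch=main \
  --path=clusters/edge
```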
Case Studies of K3s in Production
Real-world applications of K3s go beyond just lab experiments. Here are some actual use cases:
Industrial Automation
- K3s runs in factories to process sensor data locally and trigger alerts.
- Reduces network latency by avoiding constant cloud communication.
AI at the Edge
- AI-powered security cameras use K3s to process video streams locally.
- Faster response times for facial recognition and anomaly detection.
Cloud-Native IoT Deployments
- Smart agriculture systems use K3s clusters on IoT gateways to analyze soil data.
- Saves bandwidth by only sending relevant alerts to the cloud.
Hands-On Exercise
Now it’s time to build and deploy real-world workloads:
- Deploy a Knative serverless function in K3s.
- Run an AI/ML inference model on an edge device using K3s.
- Optimize a K3s cluster for low-power IoT devices.
Master this, and you’ll be ready to deploy Kubernetes at the edge—whether it’s a Raspberry Pi in your basement or an AI-driven drone in the sky. 🚀