K3S: Storage Management

Understanding Persistent Volumes (PV) and Persistent Volume Claims (PVC)

Storage in Kubernetes is like keeping track of your socks in the laundry—things disappear unless you manage them properly. Persistent Volumes (PV) ensure your data survives beyond the lifespan of a Pod.

What are Persistent Volumes (PV) and why are they needed?

A PV is a piece of storage provisioned in the cluster, while a Persistent Volume Claim (PVC) is how an application requests that storage. Think of it as Kubernetes’ way of making sure storage isn’t a free-for-all.

Difference between PV and PVC

  • PV: The actual storage resource, defined by an administrator or provisioned dynamically via a StorageClass.
  • PVC: An application’s request for storage, which Kubernetes matches against an available PV with sufficient capacity and compatible access modes.

Example YAML for creating a Persistent Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data"

Creating a Persistent Volume Claim (PVC)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi

Applying PV and PVC

kubectl apply -f my-pv.yaml
kubectl apply -f my-pvc.yaml
kubectl get pv,pvc
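
Mounting the PVC in a Pod

Once the claim binds, a Pod can consume it through a volume. A minimal sketch (the Pod name, image, and mount path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  containers:
    - name: app
      image: nginx
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pvc

Kubernetes binds my-pvc to my-pv because the requested 2Gi fits within the PV’s 5Gi capacity and both use the ReadWriteOnce access mode.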

Using Different Storage Classes in K3s

Understanding Storage Classes in Kubernetes

StorageClasses enable dynamic provisioning of storage: instead of an administrator pre-allocating a PV, Kubernetes creates one on demand when a PVC requests it. K3s ships with Rancher’s local-path provisioner as its default StorageClass.

Listing available storage classes

kubectl get storageclass

Defining a custom StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-storage
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer

Assigning a StorageClass to a PVC

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-fast-pvc
spec:
  storageClassName: fast-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Applying the StorageClass and PVC

kubectl apply -f fast-storage.yaml
kubectl apply -f my-fast-pvc.yaml
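
Verifying the PVC binding

Because fast-storage uses volumeBindingMode: WaitForFirstConsumer, the claim will show a Pending status until a Pod that references it is scheduled; this is expected behavior, not an error:

kubectl get pvc my-fast-pvc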

Local Storage vs. Cloud Storage Options

Local Storage

  • Uses the host machine’s disk for storage (hostPath volumes or K3s’s built-in local-path provisioner)
  • Best for single-node or on-prem clusters, but ties data to a specific node and doesn’t scale well

Cloud Storage Providers

  • AWS EBS, Google Persistent Disk, and Azure Disk provide managed storage
  • Example AWS EBS StorageClass (note: the in-tree kubernetes.io/aws-ebs provisioner is deprecated; newer clusters use the EBS CSI driver, ebs.csi.aws.com):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-ebs
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4

Implementing NFS and Longhorn Storage in K3s

Using NFS as a Storage Provider

NFS (Network File System) allows multiple nodes to read and write the same storage, making it a good fit for multi-node K3s clusters where several Pods need shared, ReadWriteMany access.

Setting up an NFS-based Persistent Volume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.100
    path: "/data"
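
Creating an NFS-backed Persistent Volume Claim

A PVC for this volume requests the same ReadWriteMany access mode. A sketch (the claim name is illustrative); setting storageClassName to an empty string prevents K3s’s default local-path class from dynamically provisioning a volume instead of binding to nfs-pv:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi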

Using Longhorn for Dynamic Storage in K3s

Longhorn is a lightweight, distributed block storage system for Kubernetes, designed for simplicity and high availability.

Installing Longhorn in K3s

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/longhorn.yaml

For production clusters, pin a specific release tag in the URL rather than master.

Verifying Longhorn installation

kubectl get pods -n longhorn-system

Creating a Longhorn-based StorageClass

Note that the Longhorn installation already creates a default StorageClass named longhorn; you only need to define one yourself if you want to customize its settings.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
provisioner: driver.longhorn.io
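
Requesting Longhorn storage with a PVC

A PVC can then request Longhorn-backed storage by class name, and Longhorn will provision a replicated block volume on demand. A minimal sketch (the claim name and size are illustrative):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: longhorn-pvc
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi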

Hands-On Exercise

Now it’s time to put all this storage knowledge into action:

  • Set up a Persistent Volume (PV) and Persistent Volume Claim (PVC) in K3s
  • Configure and deploy an application with persistent storage
  • Install and test Longhorn storage in K3s
  • Use NFS for multi-node storage across a K3s cluster

With these skills, you’ll be managing Kubernetes storage like a pro—no more lost data, no more missing socks. 🚀