
What is Kubernetes StorageClass and PVC? Explained Simply

PersistentVolume, PersistentVolumeClaim, and StorageClass in Kubernetes explained from scratch — how storage works, how to use it, and common mistakes.

DevOpsBoys · Apr 21, 2026 · 5 min read

One of the hardest parts of learning Kubernetes is understanding how storage works. Containers are ephemeral — when a pod dies, everything inside it is gone. But databases, file uploads, and application data need to survive pod restarts.

Kubernetes solves this with three objects: PersistentVolume (PV), PersistentVolumeClaim (PVC), and StorageClass. Here's how they work.


The Problem: Container Storage is Temporary

Without persistent storage:

Pod starts → container creates /data/database.db
Pod crashes → Kubernetes restarts the pod
New container starts → /data/database.db is GONE

Your data is lost every time a pod restarts. That's fine for stateless apps (web servers, APIs) but breaks any stateful workload (databases, message queues, file storage).


PersistentVolume (PV) — The Storage Resource

A PersistentVolume is a piece of storage that exists independently of any pod. It's like a disk attached to your Kubernetes cluster.

PVs can be:

  • An AWS EBS volume
  • A Google Cloud Persistent Disk
  • An NFS mount
  • A DigitalOcean Block Storage volume
  • A local disk on the node

yaml
# Manually created PV (static provisioning)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce          # Only one node can mount it read-write
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  hostPath:
    path: /data/my-volume    # On the node (for local dev only!)

Manually creating PVs works but doesn't scale. That's where StorageClass comes in.
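
With static provisioning, a PVC claims an existing PV by matching its storageClassName and access mode, with a request no larger than the PV's capacity. A minimal sketch that would bind to the PV above (the claim name is illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-claim
spec:
  accessModes:
    - ReadWriteOnce          # must be satisfiable by the PV
  storageClassName: manual   # matches the PV's storageClassName
  resources:
    requests:
      storage: 10Gi          # <= the PV's 10Gi capacity
```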


StorageClass — Automatic Volume Provisioning

A StorageClass is a template that tells Kubernetes how to automatically create PersistentVolumes when they're requested.

Instead of a sysadmin manually creating EBS volumes, a StorageClass defines how volumes should be created — which provisioner to use, what type, what performance tier.

yaml
# AWS EBS StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # Default class
provisioner: ebs.csi.aws.com     # EBS CSI driver
parameters:
  type: gp3                       # EBS volume type (gp3 = latest gen)
  iops: "3000"
  throughput: "125"
  encrypted: "true"
reclaimPolicy: Delete             # Delete EBS volume when PVC is deleted
volumeBindingMode: WaitForFirstConsumer   # Create volume when pod is scheduled
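
WaitForFirstConsumer delays volume creation until a pod using the PVC is actually scheduled, so the volume lands in the same availability zone as the node. The alternative is Immediate, which provisions as soon as the PVC is created. A sketch of a hypothetical class using it:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd-immediate
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: Immediate   # volume created before any pod is scheduled
                               # (risk: it may land in a zone with no suitable node)
```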

Cloud Kubernetes services (EKS, GKE, AKS) come with default StorageClasses pre-installed:

bash
kubectl get storageclass
# NAME              PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE
# gp2 (default)     kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer
# gp3               ebs.csi.aws.com         Delete          WaitForFirstConsumer

PersistentVolumeClaim (PVC) — Requesting Storage

A PersistentVolumeClaim is a request for storage by a pod. It's how pods ask for a volume.

yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-pvc
  namespace: production
spec:
  accessModes:
    - ReadWriteOnce           # Must match the PV's access mode
  storageClassName: fast-ssd  # Use this StorageClass
  resources:
    requests:
      storage: 20Gi           # Request 20Gi

When you apply this:

  1. Kubernetes sees the PVC request
  2. Finds a matching StorageClass (fast-ssd)
  3. Calls the EBS CSI driver to create a 20Gi gp3 EBS volume
  4. Creates a PersistentVolume automatically
  5. Binds the PVC to that PV

bash
kubectl get pvc
# NAME           STATUS   VOLUME              CAPACITY   ACCESS MODES   STORAGECLASS
# database-pvc   Bound    pvc-abc123def456     20Gi       RWO            fast-ssd

STATUS: Bound = the PVC has been matched to a PV. Your pod can now use it.
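
One related StorageClass setting worth knowing: if the class sets allowVolumeExpansion: true, a bound PVC can later be grown by editing spec.resources.requests.storage (shrinking is not supported). A sketch under that assumption:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
allowVolumeExpansion: true   # permits increasing the size of bound PVCs later
```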


Using a PVC in a Pod

yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        env:
        - name: POSTGRES_PASSWORD
          value: "mysecretpassword"   # demo only; use a Secret in production
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data   # Where data is stored
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: database-pvc               # Reference the PVC

Now when Postgres writes data to /var/lib/postgresql/data, it goes to the EBS volume — not the container's filesystem. Pod restarts don't lose data.


Access Modes Explained

| Mode | Short | What it means |
| --- | --- | --- |
| ReadWriteOnce | RWO | One node can mount read-write |
| ReadOnlyMany | ROX | Many nodes can mount read-only |
| ReadWriteMany | RWX | Many nodes can mount read-write |
| ReadWriteOncePod | RWOP | One pod (not just node) can mount read-write |

EBS volumes only support ReadWriteOnce: they can attach to just one node at a time. That means EBS-backed PVCs work with single-replica Deployments or with StatefulSets, where each pod gets its own volume.

EFS (NFS) supports ReadWriteMany — multiple pods across multiple nodes can read and write simultaneously.

  • Use EBS (RWO) for: databases, single-instance stateful apps
  • Use EFS (RWX) for: shared file storage, where multiple pods need the same files
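
Using EFS requires installing the EFS CSI driver and pointing a StorageClass at an existing file system. A sketch under those assumptions (the fileSystemId is a placeholder you'd replace with your own):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared-efs
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap             # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0   # placeholder: your EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-files
spec:
  accessModes:
    - ReadWriteMany            # many nodes can mount read-write
  storageClassName: shared-efs
  resources:
    requests:
      storage: 5Gi             # EFS is elastic, but a number is still required
```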

Reclaim Policy

What happens to the PV and underlying storage when a PVC is deleted?

| Policy | What happens |
| --- | --- |
| Delete | PV and cloud volume are deleted automatically |
| Retain | PV stays, cloud volume stays, manual cleanup needed |
| Recycle | Deprecated — don't use |

For production databases: use Retain to avoid accidental data loss. For dev/test ephemeral environments: use Delete to avoid orphaned volumes.

yaml
kind: StorageClass            # reclaimPolicy is set on the StorageClass, not the PVC
metadata:
  name: safe-production
reclaimPolicy: Retain         # Don't auto-delete our data!

StatefulSet + PVCs (The Right Pattern for Databases)

A Deployment doesn't work well for databases: every replica mounts the same PVC, which RWO volume types can't support across nodes. Use StatefulSet instead:

yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
spec:
  serviceName: postgres
  replicas: 3
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:15
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:         # Each pod gets its OWN PVC
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: fast-ssd
      resources:
        requests:
          storage: 20Gi

StatefulSet creates:

  • data-postgres-0 PVC → for postgres-0 pod
  • data-postgres-1 PVC → for postgres-1 pod
  • data-postgres-2 PVC → for postgres-2 pod

Each pod gets its own dedicated volume. This is the correct pattern for distributed databases (PostgreSQL, MySQL, Cassandra).
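
One detail the StatefulSet above depends on: serviceName: postgres must refer to a headless Service, which gives each pod a stable DNS name (postgres-0.postgres, and so on). A minimal sketch:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  clusterIP: None      # headless: no virtual IP; per-pod DNS records instead
  selector:
    app: postgres
  ports:
  - port: 5432
```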


Debugging PVC Issues

bash
# PVC stuck in Pending?
kubectl describe pvc my-pvc
# Look for: Events section — explains why it's not binding
 
# Common reasons for Pending:
# 1. No default StorageClass
# 2. StorageClass doesn't exist
# 3. No available PV (static provisioning, no matching PV)
# 4. Quota exceeded (ResourceQuota on PVC storage)
# 5. Node hasn't been scheduled yet (WaitForFirstConsumer)
 
# Check StorageClasses
kubectl get storageclass
 
# Check PVs
kubectl get pv
kubectl describe pv <pv-name>
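
Pending reason 4 above comes from namespace quotas. A hypothetical ResourceQuota that would cap PVC storage in the production namespace looks like this:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota
  namespace: production
spec:
  hard:
    persistentvolumeclaims: "10"   # max number of PVCs in the namespace
    requests.storage: 100Gi        # total storage all PVCs may request combined
```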

Quick Reference

bash
# List PVCs
kubectl get pvc -n production
 
# List PVs
kubectl get pv
 
# List StorageClasses
kubectl get storageclass
 
# Delete a PVC (careful — may delete data if reclaimPolicy=Delete)
kubectl delete pvc my-pvc
 
# Describe to debug
kubectl describe pvc my-pvc
kubectl describe pv <pv-name>

Summary

| Object | Purpose |
| --- | --- |
| PersistentVolume (PV) | Actual storage resource (EBS disk, NFS share) |
| PersistentVolumeClaim (PVC) | Pod's request for storage |
| StorageClass | Template for auto-creating PVs dynamically |

Flow: Pod needs storage → creates PVC → StorageClass provisions PV → PVC binds to PV → Pod mounts the PVC.

For most production workloads: use a StatefulSet with volumeClaimTemplates for stateful apps, and set reclaimPolicy: Retain on your StorageClass to avoid accidental data loss.

Practice Kubernetes storage on a real cluster — KodeKloud has dedicated labs for PVC, StorageClass, and StatefulSets. Spin up a test cluster on DigitalOcean with $200 free credit.
