🚨 The Problem

When deploying Prometheus with persistence enabled, you may encounter this issue:

kubectl get pvc -n prometheus

The Alertmanager PVC is stuck in Pending:

storage-prometheus-alertmanager-0   Pending

Describing the PVC reveals the reason:

kubectl describe pvc storage-prometheus-alertmanager-0 -n prometheus

Output:

no persistent volumes available for this claim and no storage class is set

🔧 Root Cause

The event message means the claim was created without a storageClassName and no default StorageClass stepped in to provision a volume for it. Even if your cluster ships with a storage class such as gp2, it may not be marked as the default, and the Prometheus Helm chart does not pick one for you: unless you set the storage class explicitly in the chart values, the PVC has nothing to bind to.
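
A quick way to confirm whether a default exists is to read the default-class annotation on the class you expect to be used (gp2 here, as on a stock EKS cluster; adjust the name to match yours):

kubectl get storageclass gp2 -o jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'

If this prints nothing or false, PVCs created without an explicit storageClassName have nothing to fall back to and stay Pending.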

✅ Solution

Specify the storage class name for both Alertmanager and Server volumes:

resource "helm_release" "prometheus" {
  name       = "prometheus"
  repository = "https://prometheus-community.github.io/helm-charts"
  chart      = "prometheus"
  version    = local.prometheus_version
  namespace  = "prometheus"
  create_namespace = false

  # Storage class for the Prometheus server's data volume
  set {
    name  = "server.persistentVolume.storageClass"
    value = "gp2"
  }

  # Storage class for the Alertmanager's data volume
  set {
    name  = "alertmanager.persistentVolume.storageClass"
    value = "gp2"
  }
}
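
If you install the chart with the Helm CLI instead of Terraform, the equivalent flags look like this (a sketch, assuming the prometheus-community repository has already been added locally):

helm upgrade --install prometheus prometheus-community/prometheus \
  --namespace prometheus \
  --set server.persistentVolume.storageClass=gp2 \
  --set alertmanager.persistentVolume.storageClass=gp2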

🔍 Validation

Check if the PVCs are now bound:

kubectl get pvc -n prometheus

Expected:

NAME                                STATUS    VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
prometheus-server                   Bound     pvc-...   8Gi        RWO            gp2            ...
storage-prometheus-alertmanager-0   Bound     pvc-...   2Gi        RWO            gp2            ...
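
To confirm that the underlying EBS volumes were actually provisioned, you can also list the PersistentVolumes; both should show Bound and reference the claims above:

kubectl get pv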

You can also verify the storage class:

kubectl get storageclass

Expected:

NAME   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2    kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   false                  ...
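
If the class is not annotated as (default), you can optionally mark it as the cluster-wide default so that future charts fall back to it without explicit values (shown here for gp2; setting it per chart as above also works):

kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'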

🧼 Summary

When deploying stateful workloads with Helm, check whether the chart needs the storage class set explicitly, or whether your cluster actually has a default StorageClass for its PVCs to fall back to. PVCs stuck in Pending are usually a hint that no StorageClass was picked up at all.