How to Resize a Kubernetes StatefulSet’s Volumes

Kubernetes StatefulSets are used to deploy stateful applications inside your cluster. Each Pod in the StatefulSet can access local persistent volumes that stick with it even after it’s rescheduled. This allows Pods to maintain individual state that’s separate from their neighbors in the set.

Unfortunately these volumes come with a big limitation: Kubernetes doesn’t provide a way to resize them from the StatefulSet object. The spec.resources.requests.storage property of the StatefulSet’s volumeClaimTemplates field is immutable, preventing you from applying any capacity increases you require. This article will show you how to work around the problem.

Creating a StatefulSet

Copy this YAML and save it to ss.yaml:

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - name: nginx
    port: 80
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  serviceName: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - name: web
          containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

Apply the YAML to your cluster with Kubectl:

$ kubectl apply -f ss.yaml
service/nginx created
statefulset.apps/nginx created

You’ll need a storage class and provisioner in your cluster to run this example. It creates a StatefulSet that runs three replicas of an NGINX web server.
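If you’re not sure whether a suitable storage class exists, you can list the ones available in your cluster before applying the manifest (the names and provisioners you see will depend on your environment):

$ kubectl get storageclass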

While this isn’t representative of when StatefulSets should be used, it’s enough to demonstrate the volume problems you can face. A volume claim with 1 Gi of storage is mounted to NGINX’s data directory. Your web content could outgrow this relatively small allowance as your service scales. However, you can’t simply raise the volumeClaimTemplates.spec.resources.requests.storage field to 10Gi, as shown below.
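Here’s what that edit to ss.yaml would look like; only the claim template changes while the rest of the manifest stays the same:

  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi    # increased from 1Gi

Applying this modified manifest reports the following error: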

$ kubectl apply -f ss.yaml
service/nginx unchanged
The StatefulSet "nginx" is invalid: spec: Forbidden: updates to statefulset spec for fields aside from 'replicas', 'template', 'updateStrategy', 'persistentVolumeClaimRetentionPolicy' and 'minReadySeconds' are forbidden

This happens because the majority of the fields in a StatefulSet’s manifest are immutable after creation.

Manually Resizing StatefulSet Volumes

You can bypass the restriction by manually resizing the persistent volume claim (PVC). You’ll then need to recreate the StatefulSet to release and rebind the volume from your Pods. This will trigger the actual volume resize event.

First use Kubectl to find the PVCs associated with your StatefulSet:

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-ccb2c835-e2d3-4632-b8ba-4c8c142795e4   1Gi        RWO         
data-nginx-1   Bound    pvc-1b0b27fe-3874-4ed5-91be-d8e552e515f2   1Gi        RWO         
data-nginx-2   Bound    pvc-4b7790c2-3ae6-4e04-afee-a2e1bae4323b   1Gi        RWO

There are three PVCs because there are three replicas in the StatefulSet. Each Pod gets its own individual volume.

Now use kubectl edit to adjust the capacity of each volume:

$ kubectl edit pvc data-nginx-0

The PVC’s YAML manifest will appear in your editor. Find the spec.resources.requests.storage field and change it to your new desired capacity:

# ...
spec:
  resources:
    requests:
      storage: 10Gi
# ...

Save and close the file. Kubectl should report that the change has been applied to your cluster.

persistentvolumeclaim/data-nginx-0 edited
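If you’d rather not open an editor for each claim, kubectl patch can apply the same change non-interactively. A minimal sketch, assuming the 10Gi target and the PVC names listed above:

$ for pvc in data-nginx-0 data-nginx-1 data-nginx-2; do
    kubectl patch pvc "$pvc" --type merge \
      -p '{"spec":{"resources":{"requests":{"storage":"10Gi"}}}}'
  done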

Now repeat these steps for the StatefulSet’s remaining PVCs. Listing your cluster’s persistent volumes should then show the new size against each one:

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM               
pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   10Gi       RWO            Delete           Bound    default/data-nginx-2
pvc-33af452d-feff-429d-80cd-a45232e700c1   10Gi       RWO            Delete           Bound    default/data-nginx-0
pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   10Gi       RWO            Delete           Bound    default/data-nginx-1

The claims will keep their old size for now:

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-33af452d-feff-429d-80cd-a45232e700c1   1Gi        RWO         
data-nginx-1   Bound    pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   1Gi        RWO         
data-nginx-2   Bound    pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   1Gi        RWO

This is because the volume can’t be resized while Pods are still using it.
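Depending on your storage driver, the pending resize is also visible in the claim’s status conditions, typically as a FileSystemResizePending condition. You can inspect it with kubectl describe, using the PVC names from this example:

$ kubectl describe pvc data-nginx-0

Look for the Conditions section towards the end of the output.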

Recreating the StatefulSet

Complete the resize by releasing the volume claim from the StatefulSet that’s holding it. Delete the StatefulSet, but use the orphan cascading mechanism so its Pods remain in your cluster. This helps minimize downtime.

$ kubectl delete statefulset --cascade=orphan nginx
statefulset.apps "nginx" deleted

Next edit your original YAML file to include the new volume size in the spec.resources.requests.storage field. Then use kubectl apply to recreate the StatefulSet in your cluster:

$ kubectl apply -f ss.yaml
service/nginx unchanged
statefulset.apps/nginx created

The new StatefulSet will assume ownership of the previously orphaned Pods because they’ll already meet its requirements. The volumes may get resized at this point, but in many cases you’ll have to manually initiate a rollout that restarts your Pods:

$ kubectl rollout restart statefulset nginx

The rollout proceeds sequentially, targeting one Pod at a time. This ensures your service remains accessible throughout.
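You can watch the rollout’s progress if you want to know when every replica has been replaced:

$ kubectl rollout status statefulset nginx

The command blocks until the restart has completed across all of the Pods.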

Now your PVCs should show the new size:

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES
data-nginx-0   Bound    pvc-33af452d-feff-429d-80cd-a45232e700c1   10Gi       RWO         
data-nginx-1   Bound    pvc-49f3a1c5-b780-4580-9eae-17a1f002e9f5   10Gi       RWO         
data-nginx-2   Bound    pvc-0a0d0b15-241f-4332-8c34-a24b61944fb7   10Gi       RWO

Try connecting to one of your Pods to verify the increased capacity is visible from within:

$ kubectl exec -it nginx-0 -- bash
root@nginx-0:/# df -h /usr/share/nginx/html
Filesystem                                                                 Size  Used Avail Use% Mounted on
/dev/disk/by-id/scsi-0DO_Volume_pvc-33af452d-feff-429d-80cd-a45232e700c1  9.9G  4.5M  9.4G   1% /usr/share/nginx/html

The Pod is reporting the expected 10 Gi of storage.

Summary

Kubernetes StatefulSets let you run stateful applications in Kubernetes with persistent storage volumes that are scoped to individual Pods. However, the flexibility this enables ends when you need to resize one of your volumes. This is a missing feature which currently requires several manual steps to be completed in sequence.

The Kubernetes maintainers are aware of the issue. There’s an open feature request to develop a solution which should eventually let you initiate volume resizes by editing a StatefulSet’s manifest. This would be much quicker and safer than the current situation.

One final caveat is that volume resizes depend on a storage driver that supports dynamic expansion. This feature only became generally available in Kubernetes v1.24, and not all drivers, Kubernetes distributions, and cloud platforms support it. You can check whether yours does by running kubectl get sc and looking for true in the ALLOWVOLUMEEXPANSION column of the storage class you’re using with your StatefulSets.
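For example, the output might look roughly like this; the class name and provisioner shown here are placeholders and will differ in your cluster:

$ kubectl get sc
NAME                 PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
standard (default)   example.com/driver   Delete          Immediate           true                   90d

If the column shows false, check your provider’s documentation before relying on the resize procedure described above.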




