Experiment on Persistent Volume Access Mode

This page documents some experiments on changing the access mode while a PV or PVC is in use. The testing environment is VMware TKG - Project Pacific.

Dynamic provisioning

Create PVC and Pod

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: daniel-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: gcstorage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gcstorage

Make sure the gcstorage storage class referenced above is available in the cluster.
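
A quick check, using the same guest-cluster kubectl alias as the rest of this page:

kubectl-gc get storageclass gcstorage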

apiVersion: v1
kind: Pod
metadata:
  name: daniel-pod
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "wcp-docker-ci.artifactory.eng.vmware.com/vmware/photon:1.0"
    # The script writes some text into the mounted persistent volume and then stays alive, which keeps the Pod running and the volume accessible.
    command: ["/bin/sh", "-c", "echo 'hello' > /data/persistent/index.html && chmod o+rX /data /data/persistent/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: gc-persistent-storage
      mountPath: /data/persistent
  volumes:
  - name: gc-persistent-storage
    persistentVolumeClaim:
      claimName: daniel-pvc

Use kubectl apply to create the PVC and Pod.
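
For example, assuming the two manifests above are saved as daniel-pvc.yaml and daniel-pod.yaml (assumed filenames):

kubectl-gc apply -f daniel-pvc.yaml
kubectl-gc apply -f daniel-pod.yaml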

After creation, the Pod should be Running and the PVC should be Bound.

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc get pvc daniel-pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
daniel-pvc   Bound    pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f   1Gi        RWO            gcstorage      10m

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc get pod daniel-pod
NAME         READY   STATUS    RESTARTS   AGE
daniel-pod   1/1     Running   0          10m

Modify the access mode under PVC

Modify the access mode under PVC to ReadOnlyMany
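
A sketch of the attempt: change accessModes to ReadOnlyMany in the PVC manifest (daniel-pvc.yaml is an assumed filename) and re-apply it:

kubectl-gc apply -f daniel-pvc.yaml

The API server rejects the change: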

The PersistentVolumeClaim "daniel-pvc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims

Modify the access mode under PV (which is dynamically created)

Modify the access mode under PV to ReadOnlyMany

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc edit pv pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f
persistentvolume/pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f edited

Something strange then shows up in the PVC: its spec and status are inconsistent:

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gcstorage
  volumeMode: Filesystem
  volumeName: pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f
status:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 1Gi
  phase: Bound

However, kubectl get shows the updated access mode for both the PVC and the PV:

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc get pvc,pv
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/blog-content   Bound    pvc-386ef37c-94f3-4efa-9670-fde4e6f531a1   2Gi        RWO            gcstorage      3h7m
persistentvolumeclaim/daniel-pvc     Bound    pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f   1Gi        ROX            gcstorage      19m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
persistentvolume/pvc-386ef37c-94f3-4efa-9670-fde4e6f531a1   2Gi        RWO            Delete           Bound    default/blog-content   gcstorage               3h7m
persistentvolume/pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f   1Gi        ROX            Delete           Bound    default/daniel-pvc     gcstorage               19m

After changing the access mode to ReadOnlyMany, the Pod can still write data to the mounted volume.
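
One way to verify from inside the running Pod (a minimal check; the mount path matches the volumeMounts above):

kubectl-gc exec daniel-pod -- sh -c 'echo still-writable >> /data/persistent/index.html && cat /data/persistent/index.html'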

Conclusion:

  • After changing the access mode under PV to ReadOnlyMany, the Pod runs fine and the volume is still writable

  • If the Pod is deleted and redeployed, it cannot come up again. That is because VsphereVolume does not support ReadOnlyMany at this moment. Details below:
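
A sketch of the redeploy, assuming the Pod manifest above is saved as daniel-pod.yaml:

kubectl-gc delete pod daniel-pod
kubectl-gc apply -f daniel-pod.yaml
kubectl-gc get events --field-selector involvedObject.name=daniel-pod

The new Pod never reaches Running, and the events report the attach failure: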

7s          Warning   FailedAttachVolume       pod/daniel-pod   AttachVolume.Attach failed for volume "pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f" : rpc error: code = Internal desc = Validation for PublishVolume Request: {VolumeId:15aef1dd-8d27-4f46-a1da-0d0eb6a32fb9-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f NodeId:cluster-md-0-55fddb665c-pmkhm VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:MULTI_NODE_READER_ONLY >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1576057825043-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} has failed. Error: rpc error: code = InvalidArgument desc = Volume capability not supported

Static provisioning

Create PV, PVC and Pod

apiVersion: v1
kind: PersistentVolume
metadata:
  name: daniel-staticpv-csi-driver
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  storageClassName: daniel-static-provisioning-storageclass
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: "csi.vsphere.vmware.com"
    volumeAttributes:
      type: "vSphere CNS Block Volume"
    volumeHandle: "95786abe-8f14-48fe-8f9d-c40094eaadc4-d842b247-5029-4f13-b8e8-999de471fb31"

The volumeHandle above needs to reference a valid PVC in the Supervisor Cluster.
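
A sketch of how the handle can be looked up, run against the Supervisor Cluster context and assuming a Supervisor PVC named supervisor-pvc (an assumed name): resolve the PVC to its PV, then read the CSI volume handle from the PV spec.

kubectl get pvc supervisor-pvc -o jsonpath='{.spec.volumeName}'
kubectl get pv <pv-name-from-above> -o jsonpath='{.spec.csi.volumeHandle}'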

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: daniel-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: daniel-static-provisioning-storageclass
---
apiVersion: v1
kind: Pod
metadata:
  name: daniel-pod
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "wcp-docker-ci.artifactory.eng.vmware.com/vmware/photon:1.0"
    # The script writes some text into the mounted persistent volume and then stays alive, which keeps the Pod running and the volume accessible.
    command: ["/bin/sh", "-c", "echo 'hello' > /data/persistent/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: gc-persistent-storage
      mountPath: /data/persistent
  volumes:
  - name: gc-persistent-storage
    persistentVolumeClaim:
      claimName: daniel-pvc
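
Apply the three manifests. The PV and PVC filenames below match the commands shown later on this page; the Pod filename is assumed:

kubectl-gc apply -f static-provision-pv.yaml
kubectl-gc apply -f static-provision-pvc.yaml
kubectl-gc apply -f static-provision-pod.yaml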

Modify the access mode under PVC

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc apply -f static-provision-pvc.yaml
The PersistentVolumeClaim "daniel-pvc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims

Modify the access mode under PV (Statically created)

Change the access mode to ReadOnlyMany in the PV manifest and re-apply:

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc apply -f static-provision-pv.yaml
persistentvolume/daniel-staticpv-csi-driver configured
root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS                              REASON   AGE
daniel-staticpv-csi-driver                 3Gi        ROX            Delete           Bound    default/daniel-pvc     daniel-static-provisioning-storageclass            3h
pvc-386ef37c-94f3-4efa-9670-fde4e6f531a1   2Gi        RWO            Delete           Bound    default/blog-content   gcstorage                                          7h40m

However, the same inconsistency between spec and status occurs as in the dynamic provisioning scenario.

And the Pod can still write data to the persistent volume after the access mode is changed.

Conclusion:

The same conclusions as in the dynamic provisioning scenario.
