Experiment on Persistent Volume Access Mode

This page runs a few experiments on changing the access mode of a PersistentVolume (PV) or PersistentVolumeClaim (PVC) while it is in use. The testing environment is VMware TKG (Project Pacific).

Dynamic provisioning

Create PVC and Pod

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: daniel-pvc
  annotations:
    volume.beta.kubernetes.io/storage-class: gcstorage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gcstorage

Make sure the gcstorage StorageClass is available before creating the claim.
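If the class does not exist yet, a minimal StorageClass for the vSphere CSI provisioner might look like the sketch below (an illustration only; the real gcstorage class in this environment is provided by the platform, and parameters depend on your setup):

```yaml
# Sketch of a StorageClass for the vSphere CSI driver.
# The name must match the storageClassName referenced by the PVC.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcstorage
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
```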

apiVersion: v1
kind: Pod
metadata:
  name: daniel-pod
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "wcp-docker-ci.artifactory.eng.vmware.com/vmware/photon:1.0"
    # The script writes a line into the mounted persistent volume, then loops forever so the Pod stays Running and the volume stays attached.
    command: ["/bin/sh", "-c", "echo 'hello' > /data/persistent/index.html && chmod o+rX /data /data/persistent/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: gc-persistent-storage
      mountPath: /data/persistent
  volumes:
  - name: gc-persistent-storage
    persistentVolumeClaim:
      claimName: daniel-pvc

Use kubectl apply to create the PVC and the Pod.

After creation, the Pod should be Running and the PVC Bound:

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc get pvc daniel-pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
daniel-pvc   Bound    pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f   1Gi        RWO            gcstorage      10m

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc get pod daniel-pod
NAME         READY   STATUS    RESTARTS   AGE
daniel-pod   1/1     Running   0          10m

Modify the access mode under PVC

Attempting to change the PVC's access mode to ReadOnlyMany (for example with kubectl edit) is rejected:

The PersistentVolumeClaim "daniel-pvc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
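As the error says, the only spec field a bound claim lets you change is resources.requests, which drives volume expansion (and only when the StorageClass sets allowVolumeExpansion: true). A sketch of the one edit that is accepted:

```yaml
# Allowed edit on a bound PVC: growing the storage request
# (requires allowVolumeExpansion: true on the StorageClass).
spec:
  accessModes:
  - ReadWriteOnce   # immutable; editing this is rejected
  resources:
    requests:
      storage: 2Gi  # grown from 1Gi; this change is accepted
```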

Modify the access mode under PV (which is dynamically created)

Unlike the PVC, the dynamically provisioned PV can be edited to ReadOnlyMany:

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc edit pv pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f
persistentvolume/pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f edited

Strangely, the PVC's spec and status then disagree: spec still shows the original ReadWriteOnce, while status reflects the PV's new ReadOnlyMany:

spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gcstorage
  volumeMode: Filesystem
  volumeName: pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f
status:
  accessModes:
  - ReadOnlyMany
  capacity:
    storage: 1Gi
  phase: Bound

kubectl get, however, reports the updated mode, since it reads the access mode from status:

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc get pvc,pv
NAME                                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/blog-content   Bound    pvc-386ef37c-94f3-4efa-9670-fde4e6f531a1   2Gi        RWO            gcstorage      3h7m
persistentvolumeclaim/daniel-pvc     Bound    pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f   1Gi        ROX            gcstorage      19m

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
persistentvolume/pvc-386ef37c-94f3-4efa-9670-fde4e6f531a1   2Gi        RWO            Delete           Bound    default/blog-content   gcstorage               3h7m
persistentvolume/pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f   1Gi        ROX            Delete           Bound    default/daniel-pvc     gcstorage               19m
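One quick way to see the divergence is to print the PVC's spec and status access modes side by side (using the PVC name from above):

```sh
# spec keeps the originally requested mode; status mirrors the edited PV
kubectl get pvc daniel-pvc -o jsonpath='{.spec.accessModes}{" "}{.status.accessModes}'
```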

Even after the access mode is changed to ReadOnlyMany, the Pod can still write data to the mounted volume.
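A quick way to confirm this from outside the Pod (using the Pod name from above):

```sh
# Append to the file on the mounted volume and read it back;
# if the mount were actually read-only, the write would fail.
kubectl exec daniel-pod -- sh -c 'echo still-writable >> /data/persistent/index.html && tail -n1 /data/persistent/index.html'
```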

Conclusion:

  • After changing the PV's access mode to ReadOnlyMany, the already-running Pod keeps working and the volume remains writable; the new mode is not enforced on a volume that is already attached.
  • If the Pod is deleted and redeployed, however, the attach fails, because the CSI driver rejects the MULTI_NODE_READER_ONLY capability for a vSphere CNS block volume:

7s          Warning   FailedAttachVolume       pod/daniel-pod   AttachVolume.Attach failed for volume "pvc-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f" : rpc error: code = Internal desc = Validation for PublishVolume Request: {VolumeId:15aef1dd-8d27-4f46-a1da-0d0eb6a32fb9-bee5e54e-62ef-4abf-a40f-b6d3aa461d2f NodeId:cluster-md-0-55fddb665c-pmkhm VolumeCapability:mount:<fs_type:"ext4" > access_mode:<mode:MULTI_NODE_READER_ONLY >  Readonly:false Secrets:map[] VolumeContext:map[storage.kubernetes.io/csiProvisionerIdentity:1576057825043-8081-csi.vsphere.vmware.com type:vSphere CNS Block Volume] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0} has failed. Error: rpc error: code = InvalidArgument desc = Volume capability not supported

Static provisioning

Create PV, PVC and Pod

apiVersion: v1
kind: PersistentVolume
metadata:
  name: daniel-staticpv-csi-driver
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  storageClassName: daniel-static-provisioning-storageclass
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  csi:
    driver: "csi.vsphere.vmware.com"
    volumeAttributes:
      type: "vSphere CNS Block Volume"
    volumeHandle: "95786abe-8f14-48fe-8f9d-c40094eaadc4-d842b247-5029-4f13-b8e8-999de471fb31"

The volumeHandle above must reference a valid volume in the Supervisor Cluster.
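One way to obtain such a handle (assuming the backing volume already exists as a PV in the Supervisor Cluster; the PV name below is a placeholder) is to read it from that PV:

```sh
# Run against the Supervisor Cluster context.
kubectl get pv <supervisor-pv-name> -o jsonpath='{.spec.csi.volumeHandle}'
```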

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: daniel-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: daniel-static-provisioning-storageclass

apiVersion: v1
kind: Pod
metadata:
  name: daniel-pod
spec:
  restartPolicy: Never
  containers:
  - name: hello
    image: "wcp-docker-ci.artifactory.eng.vmware.com/vmware/photon:1.0"
    # The script writes a line into the mounted persistent volume, then loops forever so the Pod stays Running and the volume stays attached.
    command: ["/bin/sh", "-c", "echo 'hello' > /data/persistent/index.html && while true ; do sleep 2 ; done"]
    volumeMounts:
    - name: gc-persistent-storage
      mountPath: /data/persistent
  volumes:
  - name: gc-persistent-storage
    persistentVolumeClaim:
      claimName: daniel-pvc

Modify the access mode under PVC

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc apply -f static-provision-pvc.yaml
The PersistentVolumeClaim "daniel-pvc" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims

Modify the access mode under PV (Statically created)

Modify the PV's access mode to ReadOnlyMany and re-apply:

root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc apply -f static-provision-pv.yaml
persistentvolume/daniel-staticpv-csi-driver configured
root@420f435c39833406727a02f08bc4a7b6 [ ~/daniel-tests ]# kubectl-gc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS                              REASON   AGE
daniel-staticpv-csi-driver                 3Gi        ROX            Delete           Bound    default/daniel-pvc     daniel-static-provisioning-storageclass            3h
pvc-386ef37c-94f3-4efa-9670-fde4e6f531a1   2Gi        RWO            Delete           Bound    default/blog-content   gcstorage                                          7h40m

However, the same spec/status inconsistency seen in the dynamic provisioning case appears here as well.

And the Pod can still write data to the persistent volume after the change.

Conclusion:

Same conclusion as in the dynamic provisioning scenario.




Note: if the Pod is deleted and redeployed, it cannot come up, because the vSphere CSI driver does not support ReadOnlyMany for this volume type at this moment (see the FailedAttachVolume event above).