K8S storage e2e experiment under VMware vSphere
Environment
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl version
Client Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.3-beta.0.15+93d6c59069f682", GitCommit:"93d6c59069f6824fb05e926ea094966b47ed8b28", GitTreeState:"clean", BuildDate:"2019-08-15T03:43:36Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.3-beta.0.15+93d6c59069f682", GitCommit:"93d6c59069f6824fb05e926ea094966b47ed8b28", GitTreeState:"clean", BuildDate:"2019-08-15T03:38:09Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
My K8s cluster is running on VMware PKS.
E2E steps
Create StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: wcpglobal-storage-profile
parameters:
  storagePolicyID: eb806c5f-f214-4f34-9fa8-5387020354ea
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
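Assuming the manifest above is saved as wcpglobal-storage-profile-storageclass.yaml (the same file name used when re-creating the StorageClass later in this experiment), apply it with:

kubectl apply -f wcpglobal-storage-profile-storageclass.yaml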
Once the StorageClass is created, you will see the following:
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get storageclass wcpglobal-storage-profile -o yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2019-10-01T20:56:02Z"
  name: wcpglobal-storage-profile
  resourceVersion: "7201419"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/wcpglobal-storage-profile
  uid: 2ca6493e-b53b-40b5-bf56-340382e0dc84
parameters:
  storagePolicyID: eb806c5f-f214-4f34-9fa8-5387020354ea
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: Immediate
Create PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
  namespace: my-podvm-ns
spec:
  storageClassName: wcpglobal-storage-profile
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
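Assuming this manifest is saved as pvc.yaml (the file name used when deleting the claim later), apply it with:

kubectl apply -f pvc.yaml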
After you apply it, you should see the following details after a while (the PVC status might show Pending briefly):
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pvc -n my-podvm-ns
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
demo-pvc Bound pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO wcpglobal-storage-profile 15m
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pvc -n my-podvm-ns -o yaml
apiVersion: v1
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"demo-pvc","namespace":"my-podvm-ns"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"5Gi"}},"storageClassName":"wcpglobal-storage-profile"}}
      pv.kubernetes.io/bind-completed: "yes"
      pv.kubernetes.io/bound-by-controller: "yes"
      volume.beta.kubernetes.io/storage-provisioner: csi.vsphere.vmware.com
    creationTimestamp: "2019-10-02T21:59:58Z"
    finalizers:
    - kubernetes.io/pvc-protection
    name: demo-pvc
    namespace: my-podvm-ns
    resourceVersion: "7544761"
    selfLink: /api/v1/namespaces/my-podvm-ns/persistentvolumeclaims/demo-pvc
    uid: bbc40450-e8f9-4699-95dc-af3a09710b5f
  spec:
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi
    storageClassName: wcpglobal-storage-profile
    volumeMode: Filesystem
    volumeName: pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f
  status:
    accessModes:
    - ReadWriteOnce
    capacity:
      storage: 5Gi
    phase: Bound
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Create Pod
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  namespace: my-podvm-ns
spec:
  volumes:
    - name: demo-storage
      persistentVolumeClaim:
        claimName: demo-pvc
  containers:
    - name: demo-pod
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: demo-storage
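Assuming this manifest is saved as pod.yaml (the file name used when deleting the Pod later), apply it with:

kubectl apply -f pod.yaml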
After you apply the above YAML, your Pod may stay in Pending status for a while. Since this StorageClass uses Immediate binding, the volume was already provisioned when the PVC was created; what remains is attaching and mounting that volume to the node before the container can start. How long this takes depends on the storage provider.
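While the Pod is Pending, you can follow its progress in the Events section of kubectl describe, for example:

kubectl describe pod demo-pod -n my-podvm-ns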
After a while, your pod should be up and running:
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pods -n my-podvm-ns
NAME READY STATUS RESTARTS AGE
demo-pod 1/1 Running 0 9m31s
You should also see that a PV was created automatically:
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pv -n my-podvm-ns
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-0c85c6fa-db94-4fbd-b8f5-7f3b8db85ca1 5Gi RWO Delete Released my-podvm-ns/demo-pvc wcpglobal-storage-profile 35m
What happens to the Pod, PV, and PVC if the StorageClass gets deleted?
I have the following running now:
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pods,pvc,pv,storageclass -n my-podvm-ns
NAME READY STATUS RESTARTS AGE
pod/demo-pod 1/1 Running 0 18m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/demo-pvc Bound pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO wcpglobal-storage-profile 21m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-1588c800-98ed-4a06-9d9d-48df8b010b75 1Gi RWO Delete Bound storage-policy-test/wcp-profile wcp-profile-1jxtwc1tm3 22d
persistentvolume/pvc-76cada0f-6685-4684-a0ce-bda9e20bcbf2 1Gi RWO Delete Bound storage-policy-test/wcp-profile-second wcp-profile-1jxtwc1tm3 22d
persistentvolume/pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO Delete Bound my-podvm-ns/demo-pvc wcpglobal-storage-profile 21m
NAME PROVISIONER AGE
storageclass.storage.k8s.io/0-wcp-test-storage-policyname-1jxtwc1tm3 csi.vsphere.vmware.com 22d
storageclass.storage.k8s.io/demo-storageclass csi.vsphere.vmware.com 26m
storageclass.storage.k8s.io/wcp-profile-1jxtwc1tm3 csi.vsphere.vmware.com 46m
storageclass.storage.k8s.io/wcpglobal-storage-profile csi.vsphere.vmware.com 25h
Let me try to delete the StorageClass; the deletion succeeds:
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl delete storageclass wcpglobal-storage-profile
storageclass.storage.k8s.io "wcpglobal-storage-profile" deleted
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get storageclass
NAME PROVISIONER AGE
0-wcp-test-storage-policyname-1jxtwc1tm3 csi.vsphere.vmware.com 22d
demo-storageclass csi.vsphere.vmware.com 27m
wcp-profile-1jxtwc1tm3 csi.vsphere.vmware.com 47m
Surprisingly, the Pod, PVC, and PV are still there:
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pods,pvc,pv -n my-podvm-ns
NAME READY STATUS RESTARTS AGE
pod/demo-pod 1/1 Running 0 21m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/demo-pvc Bound pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO wcpglobal-storage-profile 24m
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-1588c800-98ed-4a06-9d9d-48df8b010b75 1Gi RWO Delete Bound storage-policy-test/wcp-profile wcp-profile-1jxtwc1tm3 22d
persistentvolume/pvc-76cada0f-6685-4684-a0ce-bda9e20bcbf2 1Gi RWO Delete Bound storage-policy-test/wcp-profile-second wcp-profile-1jxtwc1tm3 22d
persistentvolume/pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO Delete Bound my-podvm-ns/demo-pvc wcpglobal-storage-profile 24m
If you kubectl exec into the demo Pod, you will see that the filesystem still has the 5Gi of capacity you claimed through the PVC:
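For example (assuming a shell is available in the nginx image, which it is by default):

kubectl exec -it demo-pod -n my-podvm-ns -- /bin/bash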
root@demo-pod:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 253M 14M 240M 6% /
/dev/sdb 4.9G 20M 4.9G 1% /usr/share/nginx/html
tmpfs 245M 12K 245M 1% /run/secrets/kubernetes.io/serviceaccount
Now, if you try to create another PVC that references the StorageClass you just deleted, the new PVC will stay in Pending.
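The second claim is identical to the first except for its name; a minimal sketch, reconstructed from the output below:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc-2
  namespace: my-podvm-ns
spec:
  storageClassName: wcpglobal-storage-profile
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi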
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pvc -n my-podvm-ns
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
demo-pvc Bound pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO wcpglobal-storage-profile 38m
demo-pvc-2 Pending wcpglobal-storage-profile 3m47s
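You can check why the claim is stuck by looking at the provisioning events on it; the Events section should complain that the referenced StorageClass cannot be found:

kubectl describe pvc demo-pvc-2 -n my-podvm-ns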
If you re-create the StorageClass you just deleted, the PVC status will change to Bound after a while.
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl apply -f wcpglobal-storage-profile-storageclass.yaml
storageclass.storage.k8s.io/wcpglobal-storage-profile created
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get storageclass
NAME PROVISIONER AGE
0-wcp-test-storage-policyname-1jxtwc1tm3 csi.vsphere.vmware.com 22d
demo-storageclass csi.vsphere.vmware.com 46m
wcp-profile-1jxtwc1tm3 csi.vsphere.vmware.com 66m
wcpglobal-storage-profile csi.vsphere.vmware.com 6s
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pvc -n my-podvm-ns
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
demo-pvc Bound pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO wcpglobal-storage-profile 42m
demo-pvc-2 Pending wcpglobal-storage-profile 7m49s
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pvc -n my-podvm-ns
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
demo-pvc Bound pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO wcpglobal-storage-profile 44m
demo-pvc-2 Bound pvc-0066d14e-8abf-4cab-aaa6-28b0ade16b66 5Gi RWO wcpglobal-storage-profile 9m21s
Conclusion: Deleting a StorageClass does not affect your running Pod, PV, or PVC; they all remain up and running. But you will not be able to provision a new PVC using the StorageClass name you just deleted until that StorageClass is re-created.
What happens if I delete the PVC while a Pod is still referencing it?
When I try to delete the PVC, the command appears to hang (the terminal never returns):
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl delete -f pvc.yaml
persistentvolumeclaim "demo-pvc" deleted
^C
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pods,pvc -n my-podvm-ns
NAME READY STATUS RESTARTS AGE
pod/demo-pod 1/1 Running 0 69m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/demo-pvc Terminating pvc-bbc40450-e8f9-4699-95dc-af3a09710b5f 5Gi RWO wcpglobal-storage-profile 73m
persistentvolumeclaim/demo-pvc-2 Bound pvc-0066d14e-8abf-4cab-aaa6-28b0ade16b66 5Gi RWO wcpglobal-storage-profile 38m
Now I delete the Pod, which succeeds, and the PVC is deleted as well:
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl delete -f pod.yaml
pod "demo-pod" deleted
root@4201c9358d96fda3a30840a3e7cc9795 [ ~ ]# kubectl get pods,pvc -n my-podvm-ns
Conclusion: If a Pod is still using a PVC, the PVC cannot be deleted; it stays in Terminating status until the Pod is deleted. This is Kubernetes' storage object in use protection at work: the kubernetes.io/pvc-protection finalizer you saw on the PVC earlier blocks the actual deletion for as long as any Pod references the claim.
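You can inspect this finalizer on any PVC, for example on the surviving demo-pvc-2:

kubectl get pvc demo-pvc-2 -n my-podvm-ns -o jsonpath='{.metadata.finalizers}'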