Design: Storage in Cluster-API architecture
Create Volume
Dynamic provisioning in Guest Cluster

1. `kubectl apply -f pvc.yaml` is initiated by the user, and the request is sent to the `kube-api-server`.
2. `kube-controller` creates the `PVC`, but its status will be pending.
3. `gcCSI` watches the creation of the `PVC`, then sends a request to the `kube-api-server` in the `Management Cluster` to create the volume.
4. `kube-controller` in the `Management Cluster` creates the `PVC`.
5. `mgmtCSI` watches the creation of the `PVC` and in turn calls the Storage Infra API to create the volume, looping on the status checking.
   5.1. When the volume is provisioned, `mgmtCSI` will create the `PV` in the `Management Cluster`. The `PV` creation is actually done by the `external-provisioner`, which runs as a sidecar container within the `controller-plugin` of `mgmtCSI`.
6. `kube-controller` listens for events from the `PV`. Once the `PV`'s status is bound, it binds the `PVC` to the `PV`.
7. `gcCSI` waits on the status of the `PVC` creation in the `Management Cluster` and creates the `PV` in the `Guest Cluster` accordingly.
8. `kube-controller` in the `Guest Cluster` binds the `PVC` to the `PV` eventually.
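The `pvc.yaml` that kicks off this flow could be any ordinary claim. A minimal sketch is shown below; the claim name, StorageClass name, and size are illustrative assumptions, not part of this design:

```yaml
# pvc.yaml — hypothetical PVC a Guest Cluster user might apply.
# "guest-storage" is an assumed StorageClass name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: guest-storage
  resources:
    requests:
      storage: 5Gi
```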
Static provisioning in Guest Cluster
The static provisioning in the `Guest Cluster` is a little bit tricky. If we only have one K8S cluster, the user needs to create the volume manually first, and then create a `PV` to bind to the manually created volume (refer to Volume Provisioning for more details).

However, things are different in the Cluster-API picture. First, we have to have the volume provisioned (no matter whether using dynamic or static provisioning) under the `Management Cluster`, so it can be used by one or multiple `Guest Clusters`. "Volume provisioned" means that we have the `PV` and `PVC` ready in the `Management Cluster`. Second, the user needs to create the `PV` in the `Guest Cluster` which refers to the `PVC` in the `Management Cluster`.

The differences between static provisioning in a `Guest Cluster` and static provisioning in a single cluster are:
- A single cluster is just one layer: the `PV` refers to the volume the user manually created.
- A `Guest Cluster` has multiple layers, and its `PV` refers to the `PVC` in the `Management Cluster` instead of the volume.

TBA
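To illustrate the second point, a statically provisioned `PV` in the `Guest Cluster` might look like the sketch below. The CSI driver name and the convention of encoding the `Management Cluster` `PVC` reference in `volumeHandle` are assumptions for illustration, not defined by this design:

```yaml
# Hypothetical static PV created by the user in the Guest Cluster.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: gc.csi.example.com                # assumed gcCSI driver name
    volumeHandle: mgmt-namespace/mgmt-pvc     # assumed: points at the PVC in the Management Cluster
```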
Open questions
How to deal with ReclaimPolicy
You probably noticed that we might have two ReclaimPolicies: one in the `Management Cluster` and one in the `Guest Cluster`.

In the dynamic provisioning case, they should be the same in the `StorageClass` spec.

In the static provisioning case, the ReclaimPolicy in the `PV` could be different between the `Management Cluster` and the `Guest Cluster`. If the `Guest Cluster` user wants the ReclaimPolicy to be `delete`, the ReclaimPolicy in the `Management Cluster` needs to be `delete` as well, because the user wants the volume to be deleted if the `PV` gets deleted. If the `Guest Cluster` user wants the ReclaimPolicy to be `retain`, the ReclaimPolicy in the `Management Cluster` could be `delete` (it does not necessarily need to be `delete`; things could be different based on business logic). That is because the user wants the volume to be retained if the `PV` gets deleted, so `gcCSI` should not invoke any API calls to `mgmtCSI` to remove the volume.
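The `retain`-in-guest case above can be sketched as the following pair of PV fragments, one per cluster; all names are illustrative assumptions:

```yaml
# Guest Cluster PV fragment: the user wants the volume kept
# even when this PV is deleted.
spec:
  persistentVolumeReclaimPolicy: Retain
---
# Management Cluster PV fragment: may still be Delete, since gcCSI
# never asks mgmtCSI to remove the volume in this case.
spec:
  persistentVolumeReclaimPolicy: Delete
```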
Attach Volume

When a volume gets created by `mgmtCSI` in Create Volume Step 5 above, it needs to be attached to the node where the `Pod` is running.

1. `gcCSI` knows where the `Pod` is scheduled and updates `VirtualMachine.Spec.Volumes`.
2. `VM-Operator` running in the `Management Cluster` watches `VirtualMachine.Spec.Volumes`; if new volumes are added, `VM-Operator` creates `VolumeAttachment` instances accordingly with the `NodeUUID` and the `VolumeName`. Those two are the pieces of information `mgmtCSI` needs to attach volumes to the node.
3. Once volumes are attached (no matter whether it succeeded or failed), `mgmtCSI` will update the `VolumeAttachmentStatus`.
4. `VM-Operator` watches the changes of `VolumeAttachmentStatus` and updates `VirtualMachine.Status.Volumes` accordingly.
5. `gcCSI` watches the changes of `VirtualMachine.Status.Volumes` and updates the `PVC` accordingly.
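The `VirtualMachine` fields involved in this flow might look like the sketch below. The API group/version and exact field layout are assumptions inferred from the field names used above, not a definitive schema:

```yaml
# Sketch of a VirtualMachine object as VM-Operator might see it.
apiVersion: vmoperator.example.com/v1alpha1   # assumed API group/version
kind: VirtualMachine
metadata:
  name: guest-node-1
spec:
  volumes:
    - name: my-volume        # added by gcCSI once the Pod is scheduled
status:
  volumes:
    - name: my-volume
      attached: true         # updated by VM-Operator from VolumeAttachmentStatus
```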