Design: Storage in Cluster-API architecture


Last updated 5 years ago


Create Volume

Dynamic provisioning in Guest Cluster

  1. The user runs kubectl apply -f pvc.yaml, and the request is sent to the kube-api-server.

  2. kube-controller creates the PVC, but its status stays Pending.

  3. gcCSI watches the creation of the PVC, then sends a request to the kube-api-server in the Management Cluster to create the volume.

  4. kube-controller in the Management Cluster creates the PVC.

  5. mgmtCSI watches the creation of the PVC and in turn calls the Storage Infra API to create the volume, looping on the status check.

    5.1 When the volume is provisioned, mgmtCSI creates the PV in the Management Cluster. The PV creation is actually done by the external-provisioner, which runs as a side-car container within the controller-plugin of mgmtCSI.

  6. kube-controller in the Management Cluster listens for events from the PV. Once the PV's status is Bound, it binds the PVC to the PV.

  7. gcCSI waits on the status of the PVC creation in the Management Cluster and creates the PV in the Guest Cluster accordingly.

  8. kube-controller in the Guest Cluster binds the PVC to the PV eventually.
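As a concrete illustration of step 1, a pvc.yaml like the following could kick off the whole flow. The claim and StorageClass names here are hypothetical examples, not taken from the design above:

```yaml
# pvc.yaml -- applied by the user in the Guest Cluster (step 1).
# "my-pvc" and "guest-storage-class" are illustrative names.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: guest-storage-class
```

After kubectl apply -f pvc.yaml, kubectl get pvc my-pvc would show Pending (step 2) until steps 3-8 complete and the claim becomes Bound.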

Static provisioning in Guest Cluster

The differences between static provisioning in a Guest Cluster and static provisioning in a single cluster are:

  • A single cluster has just one layer: the PV refers to the volume the user manually created. A Guest Cluster has multiple layers, and its PV refers to the PVC in the Management Cluster instead of the volume.

  • TBA
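A sketch of what a statically provisioned Guest Cluster PV might look like, assuming gcCSI is registered as a CSI driver and encodes the Management Cluster PVC in the volumeHandle. The driver name and the volumeHandle convention are assumptions for illustration, not part of the design above:

```yaml
# Hypothetical Guest Cluster PV for static provisioning.
# The driver name and the volumeHandle convention (pointing at the
# Management Cluster PVC rather than the underlying volume) are assumed.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: gccsi.example.com               # hypothetical gcCSI driver name
    volumeHandle: mgmt-cluster-ns/mgmt-pvc  # PVC in the Management Cluster
```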

Open questions

How to deal with ReclaimPolicy

You probably noticed that there can be two ReclaimPolicy settings: one in the Management Cluster and one in the Guest Cluster.

  • In the dynamic provisioning case, they should be the same in the StorageClass spec.

  • In the static provisioning case, the ReclaimPolicy in the PV can differ between the Management Cluster and the Guest Cluster. If the Guest Cluster user wants the ReclaimPolicy to be Delete, the ReclaimPolicy in the Management Cluster needs to be Delete as well, because the user wants the volume deleted when the PV gets deleted. If the Guest Cluster user wants the ReclaimPolicy to be Retain, the ReclaimPolicy in the Management Cluster could be Delete (it does not necessarily need to be Delete; things could differ based on business logic), because the user wants the volume retained when the PV gets deleted, so gcCSI should not invoke any API calls to mgmtCSI to remove the volume.
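In manifests, the dynamic-case policy lives on the StorageClass while the static-case policy lives on each PV, which is why the two layers can diverge as described. A minimal sketch with illustrative names:

```yaml
# Dynamic case: reclaimPolicy is set on the StorageClass and should match
# between Management Cluster and Guest Cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: guest-storage-class     # illustrative name
provisioner: gccsi.example.com  # hypothetical gcCSI driver name
reclaimPolicy: Delete
---
# Static case: each PV carries its own policy, so a Guest Cluster PV can
# be Retain while the corresponding Management Cluster PV is Delete.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: gccsi.example.com
    volumeHandle: mgmt-cluster-ns/mgmt-pvc
```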

Attach Volume

When a volume gets created by mgmtCSI in Create Volume step 5 above, it needs to be attached to the node where the Pod is running.

  • gcCSI knows where the Pod is scheduled and updates VirtualMachine.Spec.Volumes.

  • VM-Operator running on the Management Cluster watches VirtualMachine.Spec.Volumes; if new volumes are added, VM-Operator creates VolumeAttachment instances accordingly with the NodeUUID and VolumeName. These two are the pieces of information mgmtCSI needs to attach volumes to the node.

  • Once volumes are attached (no matter whether the attach succeeded or failed), mgmtCSI updates the VolumeAttachment status.

  • VM-Operator watches the changes to the VolumeAttachment status and updates VirtualMachine.Status.Volumes accordingly.

  • gcCSI watches the changes to VirtualMachine.Status.Volumes and updates the PVC accordingly.
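The attach flow above might involve resources shaped roughly like the following. These are assumed sketches of the VM-Operator CRDs (group, version, and field names are guesses for illustration, not the actual schemas):

```yaml
# Assumed sketch of the VM-Operator resources involved in attach.
apiVersion: vmoperator.example.com/v1alpha1  # hypothetical group/version
kind: VirtualMachine
metadata:
  name: guest-node-1
spec:
  volumes:                 # updated by gcCSI once the Pod is scheduled
    - name: my-volume
      persistentVolumeClaim:
        claimName: mgmt-pvc
status:
  volumes:                 # updated by VM-Operator from the attach status
    - name: my-volume
      attached: true
---
apiVersion: vmoperator.example.com/v1alpha1
kind: VolumeAttachment     # created by VM-Operator; consumed by mgmtCSI
metadata:
  name: guest-node-1-my-volume
spec:
  nodeUUID: <node-uuid>    # placeholder
  volumeName: my-volume
```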

Static provisioning in a Guest Cluster is a little bit tricky. If we only have one K8S cluster, the user needs to create the volume manually first, and then create a PV that binds to the manually created volume (refer to community/volume-provisioning.md for more details).

However, things are different in the Cluster API picture. First, we have to have the volume provisioned (no matter using dynamic provisioning or static provisioning) under the Management Cluster, so that it can be used by one or multiple Guest Clusters. "Volume provisioned" here means that we have the PV and PVC ready in the Management Cluster. Second, the user needs to create the PV in the Guest Cluster which refers to the PVC in the Management Cluster.

Reference

  • community/volume-provisioning.md at master · kubernetes/community