K8S Resource CRUD Workflow

What is reconciliation?

Reconciliation is the core loop of every controller: it reads the desired state declared in an object's spec, compares it with the actual state observed in the cluster, and takes whatever actions are needed to make the two converge. The Deployment-to-Pod workflow below is just several such loops running against different objects.
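
A minimal, self-contained sketch of that pattern in Go follows. The clusterState type and its fields are made up purely for illustration; a real controller reads the desired state from the object's spec on the API server and observes the actual state through informer caches.

package main

import "fmt"

// clusterState is a toy stand-in for "the world": desiredReplicas plays the
// role of the spec, actualReplicas the role of the observed state.
type clusterState struct {
	desiredReplicas int
	actualReplicas  int
}

// reconcile compares desired vs. actual state and acts to converge them.
// It is idempotent: calling it again once the states match does nothing.
func reconcile(c *clusterState) {
	switch {
	case c.actualReplicas < c.desiredReplicas:
		fmt.Printf("creating %d replica(s)\n", c.desiredReplicas-c.actualReplicas)
		c.actualReplicas = c.desiredReplicas
	case c.actualReplicas > c.desiredReplicas:
		fmt.Printf("deleting %d replica(s)\n", c.actualReplicas-c.desiredReplicas)
		c.actualReplicas = c.desiredReplicas
	default:
		fmt.Println("already converged, nothing to do")
	}
}

func main() {
	state := &clusterState{desiredReplicas: 3, actualReplicas: 1}
	reconcile(state) // creating 2 replica(s)
	reconcile(state) // already converged, nothing to do
}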

How is a Pod created via a Deployment?

  1. Deployment controller (inside kube-controller-manager)

    • Notices (through a Deployment informer) that a user has created a Deployment

    • Creates a ReplicaSet

  2. ReplicaSet controller (inside kube-controller-manager)

    • Notices (through a ReplicaSet informer) the newly created ReplicaSet

    • Creates Pod objects

  3. Scheduler, which is also a controller (inside the kube-scheduler binary)

    • Notices (through a Pod informer) a Pod object with an empty spec.nodeName

    • Puts the Pod object into its scheduling queue

  4. Meanwhile, the kubelet (also a controller)

    • Notices (through a Pod informer) that the Pod object's spec.nodeName (still empty) does not match its node name

    • Ignores the Pod object and goes back to sleep

  5. Scheduler

    • Takes the Pod object out of its work queue

    • Schedules it to a node that has enough resources by setting its spec.nodeName

    • Writes the updated Pod object back to the API server

  6. The kubelet is woken up by the Pod object's update event

    • Compares spec.nodeName with its own node name (in this case, we assume they match)

    • Starts the containers of the Pod object

    • Updates the Pod object's status to indicate that the containers have been started

    • Reports back to the API server

  7. The ReplicaSet controller notices the change to the Pod object but has nothing to do

  8. If the Pod object terminates, the kubelet notices the change

    • Gets the Pod object from the API server

    • Changes its status to "Terminated"

    • Writes it back to the API server

  9. The ReplicaSet controller notices the terminated Pod and decides that it must be replaced

    • It deletes the terminated pod on the API server and creates a new one

  10. And so on. Every controller in this chain follows the same watch-and-act pattern; a minimal client-go sketch of it follows this list.
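
The sketch below uses client-go's shared informers to mirror steps 3 and 4: an event handler inspects spec.nodeName to decide whether a Pod is unscheduled (interesting to a scheduler) or already bound to a node (interesting only to that node's kubelet). It is not code from any Kubernetes component; the kubeconfig location and the print statements are assumptions made purely for illustration.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// One shared informer factory, resyncing every 30 seconds.
	factory := informers.NewSharedInformerFactory(client, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()

	// Watch-and-act, like the scheduler (step 3) and the kubelet (step 4):
	// look at spec.nodeName and decide whether this Pod is ours to handle.
	podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			pod := obj.(*corev1.Pod)
			if pod.Spec.NodeName == "" {
				fmt.Printf("pod %s/%s is unscheduled; a scheduler would enqueue it\n", pod.Namespace, pod.Name)
				return
			}
			fmt.Printf("pod %s/%s is bound to %s; only that node's kubelet acts on it\n", pod.Namespace, pod.Name, pod.Spec.NodeName)
		},
	})

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)                                 // runs the Reflector and informer controller
	cache.WaitForCacheSync(stopCh, podInformer.HasSynced) // wait for the initial LIST to fill the cache
	select {}                                             // block; a real controller would run workers here
}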

How does the client-go library interact with a controller?

Reflector & Informer & Indexer

They are instantiated by the custom controller code. The following is the main.go of the sample-controller.

package main

import (
	"flag"
	"time"

	kubeinformers "k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog/v2"

	clientset "k8s.io/sample-controller/pkg/generated/clientset/versioned"
	informers "k8s.io/sample-controller/pkg/generated/informers/externalversions"
	"k8s.io/sample-controller/pkg/signals"
)

var (
	masterURL  string
	kubeconfig string
)

func main() {
	klog.InitFlags(nil)
	flag.Parse()

	// set up signals so we handle the first shutdown signal gracefully
	stopCh := signals.SetupSignalHandler()

	cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
	if err != nil {
		klog.Fatalf("Error building kubeconfig: %s", err.Error())
	}

	kubeClient, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		klog.Fatalf("Error building kubernetes clientset: %s", err.Error())
	}

	exampleClient, err := clientset.NewForConfig(cfg)
	if err != nil {
		klog.Fatalf("Error building example clientset: %s", err.Error())
	}

	kubeInformerFactory := kubeinformers.NewSharedInformerFactory(kubeClient, time.Second*30)
	exampleInformerFactory := informers.NewSharedInformerFactory(exampleClient, time.Second*30)

	controller := NewController(kubeClient, exampleClient,
		kubeInformerFactory.Apps().V1().Deployments(),
		exampleInformerFactory.Samplecontroller().V1alpha1().Foos())

	// notice that there is no need to run the Start methods in a separate goroutine (i.e. go kubeInformerFactory.Start(stopCh)):
	// Start is non-blocking and runs all registered informers in a dedicated goroutine.
	kubeInformerFactory.Start(stopCh)
	exampleInformerFactory.Start(stopCh)

	if err = controller.Run(2, stopCh); err != nil {
		klog.Fatalf("Error running controller: %s", err.Error())
	}
}

func init() {
	flag.StringVar(&kubeconfig, "kubeconfig", "", "Path to a kubeconfig. Only required if out-of-cluster.")
	flag.StringVar(&masterURL, "master", "", "The address of the Kubernetes API server. Overrides any value in kubeconfig. Only required if out-of-cluster.")
}

When the informer factory hands out an informer (Deployments() and Foos() above), the corresponding Reflector, Informer, and Indexer are instantiated accordingly. When InformerFactory.Start() is called, the informer's internal controller and its Reflector start running. The Indexer is wrapped by the FooLister (obtained via Lister()) in the sample-controller example.
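
As a rough, standalone sketch (not part of sample-controller) of how that Indexer-backed cache is read through a Lister once the factory has started: the kubeconfig path, the default namespace, and the my-deployment name below are assumptions for illustration only.

package main

import (
	"time"

	kubeinformers "k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/klog/v2"
)

func main() {
	// Assumption: kubeconfig at the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		klog.Fatalf("Error building kubeconfig: %s", err.Error())
	}
	kubeClient := kubernetes.NewForConfigOrDie(cfg)

	stopCh := make(chan struct{})
	defer close(stopCh)

	kubeInformerFactory := kubeinformers.NewSharedInformerFactory(kubeClient, time.Second*30)

	// The Lister is a read-only view over the Indexer (the local cache).
	// Requesting it also registers the Deployment informer with the factory.
	deploymentsLister := kubeInformerFactory.Apps().V1().Deployments().Lister()

	// Start runs the informer's controller and Reflector in the background;
	// WaitForCacheSync blocks until the initial LIST has filled the Indexer.
	kubeInformerFactory.Start(stopCh)
	kubeInformerFactory.WaitForCacheSync(stopCh)

	// This read is served from the Indexer, not from the API server.
	d, err := deploymentsLister.Deployments("default").Get("my-deployment")
	if err != nil {
		klog.Fatalf("Error getting deployment from cache: %s", err.Error())
	}
	klog.Infof("Deployment %s currently requests %d replicas", d.Name, *d.Spec.Replicas)
}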
