
Enhance NodeIPAM to support multiple ClusterCIDRs #109090

Merged
merged 7 commits into from Aug 7, 2022

Conversation

sarveshr7
Contributor

@sarveshr7 sarveshr7 commented Mar 29, 2022

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR implements kubernetes/enhancements#2593

Adds the following components:

  • MultiCIDRRangeAllocator: a new CIDR allocator type within kube-controller-manager that can use multiple ClusterCIDRs to allocate Pod CIDRs to a node. It also reconciles ClusterCIDR objects, which are used to configure discontiguous cluster CIDR ranges.

Which issue(s) this PR fixes:

NONE

Special notes for your reviewer:

Please note that this PR is rebased on top of the open API PR #111123; please review the commits from "Add cidrset to support multiple CIDRs" onwards for this PR.

Does this PR introduce a user-facing change?

NodeIPAM support for multiple ClusterCIDRs (https://github.com/kubernetes/enhancements/issues/2593) introduced as an alpha feature.

The MultiCIDRRangeAllocator=true feature gate determines whether the MultiCIDRRangeAllocator controller can be used, while the kube-controller-manager flag below picks the active controller.

Enable the MultiCIDRRangeAllocator by setting the --cidr-allocator-type=MultiCIDRRangeAllocator flag in kube-controller-manager.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:

- [KEP]: https://github.com/kubernetes/enhancements/issues/2593
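For illustration, a ClusterCIDR object for this allocator might look like the sketch below. This manifest is an assumption-level example (the object name, CIDR values, and selector are made up; field names follow the KEP), not output from this PR:

```yaml
apiVersion: networking.k8s.io/v1alpha1
kind: ClusterCIDR
metadata:
  name: cidr-set-1            # hypothetical name
spec:
  perNodeHostBits: 8          # each node keeps 2^8 addresses per family
  ipv4: 10.0.0.0/16           # example IPv4 range
  ipv6: fd00:1:1::/64         # example IPv6 range
  nodeSelector:               # restrict this range to matching nodes
    nodeSelectorTerms:
    - matchExpressions:
      - key: node-role.kubernetes.io/worker
        operator: Exists
```

Multiple such objects can exist at once, which is how discontiguous cluster CIDR ranges are configured.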

/sig network

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/feature Categorizes issue or PR as related to a new feature. sig/network Categorizes an issue or PR as relevant to SIG Network. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Mar 29, 2022
@k8s-ci-robot
Contributor

k8s-ci-robot commented Mar 29, 2022

Hi @sarveshr7. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added area/apiserver area/code-generation area/kubectl area/test kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. sig/testing Categorizes an issue or PR as relevant to SIG Testing. labels Mar 29, 2022
@k8s-triage-robot

k8s-triage-robot commented Mar 29, 2022

This PR may require API review.

If so, when the changes are ready, complete the pre-review checklist and request an API review.

Status of requested reviews is tracked in the API Review project.

@sarveshr7
Contributor Author

sarveshr7 commented Mar 29, 2022

/cc thockin

@k8s-ci-robot k8s-ci-robot requested a review from thockin Mar 29, 2022
@aojea
Member

aojea commented Mar 29, 2022

So, we have two API objects to reconcile: networking/v1alpha1.ClusterCIDRConfig and v1.Node.

The key of the reconciliation is the ClusterCIDRConfig, but we should also process events for Nodes.

The operations we have to perform are:

  • add/remove finalizers on ClusterCIDRConfig
  • allocate/deallocate a subnet to a node

In addition, we have to deal with a bootstrap process that depends on the apiserver.

I think all the code is valid, but we can structure it as a level-based controller like this:

type Controller struct {
	client clientset.Interface

	// informers for nodes and clusterCIDRConfig
	nodeLister              corelisters.NodeLister
	nodesSynced             cache.InformerSynced
	clusterCIDRConfigLister networkinglisters.ClusterCIDRConfigLister
	clusterCIDRConfigSynced cache.InformerSynced

	// internal structures
	pq      PriorityQueue
	CIDRMap map[string][]*cidrset.ClusterCIDR

	// queue is where incoming work is placed to de-dup and to allow "easy"
	// rate limited requeues on errors
	queue workqueue.RateLimitingInterface
}

func NewController(client, informers, ....) *Controller {
	c := &Controller{}

	// register event handlers to fill the queue with clusterCIDRConfig
	// creations, updates and deletions
	clusterCIDRConfigInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			key, err := cache.MetaNamespaceKeyFunc(obj)
			if err == nil {
				c.queue.Add(key)
			}
		},
		UpdateFunc: func(old interface{}, new interface{}) {
			key, err := cache.MetaNamespaceKeyFunc(new)
			if err == nil {
				c.queue.Add(key)
			}
		},
		DeleteFunc: func(obj interface{}) {
			// IndexerInformer uses a delta queue, therefore for deletes we
			// have to use this key function.
			key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
			if err == nil {
				c.queue.Add(key)
			}
		},
	})

	// register event handlers to fill the queue with node creations, updates
	// and deletions; the handlers map the Node object to the corresponding
	// ClusterCIDRConfig key
	nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			// function that returns the ClusterCIDRConfig key given a node object
			key, err := getClusterCIDRForNode(obj)
			if err == nil {
				c.queue.Add(key)
			}
		},
		UpdateFunc: func(old interface{}, new interface{}) {
			key, err := getClusterCIDRForNode(new)
			if err == nil {
				c.queue.Add(key)
			}
		},
		DeleteFunc: func(obj interface{}) {
			// deleted objects may arrive as tombstones, so a dedicated helper
			// is needed here instead of DeletionHandlingMetaNamespaceKeyFunc
			key, err := getClusterCIDRForDeletedNode(obj)
			if err == nil {
				c.queue.Add(key)
			}
		},
	})
	return c
}

func (c *Controller) Run(threadiness int, stopCh chan struct{}) {
	// don't let panics crash the process
	defer utilruntime.HandleCrash()
	// make sure the work queue is shut down, which will trigger workers to end
	defer c.queue.ShutDown()

	klog.Infof("Starting CIDR allocator controller")

	// wait for the caches to fill before starting the work
	if !cache.WaitForCacheSync(stopCh, c.nodesSynced, c.clusterCIDRConfigSynced) {
		return
	}

	// BOOTSTRAP: the caches are synced, so the information required is
	// available in the informer cache
	bootStrap()

	// start up worker threads based on threadiness.  Some controllers
	// have multiple kinds of workers
	for i := 0; i < threadiness; i++ {
		// runWorker will loop until "something bad" happens.  The .Until will
		// then rekick the worker after one second
		go wait.Until(c.runWorker, time.Second, stopCh)
	}

	// wait until we're told to stop
	<-stopCh
	klog.Infof("Shutting down CIDR allocator controller")
}

func (c *Controller) runWorker() {
	// hot loop until we're told to stop.  processNextWorkItem will
	// automatically wait until there's work available, so we don't worry
	// about secondary waits
	for c.processNextWorkItem() {
	}
}

// processNextWorkItem deals with one key off the queue.  It returns false
// when it's time to quit.
func (c *Controller) processNextWorkItem() bool {
	// pull the next work item from the queue.  It should be a key we use to
	// look something up in a cache
	key, quit := c.queue.Get()
	if quit {
		return false
	}
	// you always have to indicate to the queue that you've completed a piece
	// of work
	defer c.queue.Done(key)

	// do your work on the key.  This method contains your "do stuff" logic
	err := c.syncHandler(key.(string))
	if err == nil {
		// if you had no error, tell the queue to stop tracking history for
		// your key.  This will reset things like failure counts for per-item
		// rate limiting
		c.queue.Forget(key)
		return true
	}

	// there was a failure, so be sure to report it.  This method allows for
	// pluggable error handling, which can be used for things like
	// cluster monitoring
	utilruntime.HandleError(fmt.Errorf("%v failed with: %v", key, err))

	// since we failed, we should requeue the item to work on later.  This
	// method will add a backoff to avoid hot-looping on particular items
	// (they're probably still not going to work right away) and overall
	// controller protection (everything I've done is broken, this controller
	// needs to calm down or it can starve other useful work).
	c.queue.AddRateLimited(key)

	return true
}

func (c *Controller) syncHandler(key string) error {
	// ClusterCIDRConfig is cluster scoped, so the key is the object name
	obj, err := c.clusterCIDRConfigLister.Get(key)
	if apierrors.IsNotFound(err) {
		// delete process: release the allocated ranges, remove finalizers
		return nil
	}
	if err != nil {
		klog.Errorf("Fetching object with key %s from store failed with %v", key, err)
		return err
	}

	// create/update process: add finalizers, allocate Pod CIDRs to the
	// matching nodes
	_ = obj
	return nil
}
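To make the sketch above more concrete, here is a minimal, self-contained illustration of what a getClusterCIDRForNode helper could do. The types and the map-based selector are simplified stand-ins (assumptions for illustration only); the real controller matches v1.NodeSelector terms against node labels:

```go
package main

import "fmt"

// ClusterCIDR is a simplified stand-in for the real API object.
// A nil Selector plays the role of the catch-all default range.
type ClusterCIDR struct {
	Name     string
	Selector map[string]string
}

// Node is a simplified stand-in for v1.Node.
type Node struct {
	Name   string
	Labels map[string]string
}

// clusterCIDRForNode returns the key of the first ClusterCIDR whose selector
// is satisfied by the node's labels, falling back to the catch-all default.
func clusterCIDRForNode(node Node, cidrs []ClusterCIDR) string {
	var fallback string
	for _, c := range cidrs {
		if c.Selector == nil {
			fallback = c.Name
			continue
		}
		matched := true
		for k, v := range c.Selector {
			if node.Labels[k] != v {
				matched = false
				break
			}
		}
		if matched {
			return c.Name
		}
	}
	return fallback
}

func main() {
	cidrs := []ClusterCIDR{
		{Name: "default"},
		{Name: "gpu-pool", Selector: map[string]string{"pool": "gpu"}},
	}
	fmt.Println(clusterCIDRForNode(Node{Name: "n1", Labels: map[string]string{"pool": "gpu"}}, cidrs)) // gpu-pool
	fmt.Println(clusterCIDRForNode(Node{Name: "n2"}, cidrs))                                           // default
}
```

The returned name is what the node event handlers would enqueue, so node churn and ClusterCIDRConfig changes converge on the same per-key sync.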

@enj enj added this to Needs Triage in SIG Auth Mar 29, 2022
@cici37
Contributor

cici37 commented Mar 29, 2022

/remove-sig api-machinery

}

nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: controllerutil.CreateAddNodeHandler(ra.AllocateOrOccupyCIDR),
Member

@aojea aojea Aug 5, 2022


I think that this is taking the lock during the whole operation, which means we are serializing all the events

Contributor Author

@sarveshr7 sarveshr7 Aug 5, 2022


It would be the same behavior here:

nodeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
AddFunc: controllerutil.CreateAddNodeHandler(ra.AllocateOrOccupyCIDR),

Member

@aojea aojea Aug 5, 2022


Yeah, I see now that we are carrying over some legacy behaviour that I can't really have an opinion on, since I never used it; maybe @bowei has more context about the stability of that controller

return err
}(data)

r.removeNodeFromProcessing(data.nodeName)
Member

@aojea aojea Aug 5, 2022


is this being used?

return nil
}

if !r.insertNodeToProcessing(node.Name) {
Member

@aojea aojea Aug 5, 2022


this makes sense if we use the channel

@sarveshr7 sarveshr7 force-pushed the multicidr-rangeallocator branch 5 times, most recently from b11db08 to edf75e0 on Aug 5, 2022
@bowei
Member

bowei commented Aug 5, 2022

/lgtm -- I am ok with the flag disablement for alpha.

@bowei
Member

bowei commented Aug 5, 2022

/remove-hold

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 5, 2022
@bowei
Member

bowei commented Aug 5, 2022

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Aug 5, 2022
sarveshr7 added 2 commits Aug 6, 2022
MultiCIDRRangeAllocator is a new Range Allocator which makes using
multiple ClusterCIDRs possible. It consists of two controllers, one for
reconciling the ClusterCIDR API objects and the other for allocating
Pod CIDRs to the nodes.

The allocation is based on the rules defined in
https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/2593-multiple-cluster-cidrs
@sarveshr7 sarveshr7 force-pushed the multicidr-rangeallocator branch from edf75e0 to 1473e13 on Aug 6, 2022
@k8s-ci-robot k8s-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Aug 6, 2022
@sarveshr7
Contributor Author

sarveshr7 commented Aug 6, 2022

/test pull-kubernetes-e2e-kind

@sarveshr7
Contributor Author

sarveshr7 commented Aug 6, 2022

One of the verify tests was failing, so I had to push a minor fix, which removed the lgtm from the PR. @aojea / @bowei Can you please LGTM again? Also, if possible, merge the PR? Thanks!

@aojea
Member

aojea commented Aug 7, 2022

One of the verify tests was failing, so I had to push a minor fix, which removed the lgtm from the PR. @aojea / @bowei Can you please LGTM again? Also, if possible, merge the PR? Thanks!

/lgtm

This lgtm for alpha ... but there are still a lot of things to do for beta :)
https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/2593-multiple-cluster-cidrs#test-plan

@k8s-ci-robot k8s-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Aug 7, 2022
@k8s-ci-robot
Contributor

k8s-ci-robot commented Aug 7, 2022

@sarveshr7: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-kubernetes-local-e2e (commit de6ef00, required: false). Rerun command: /test pull-kubernetes-local-e2e

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


@aojea
Member

aojea commented Aug 7, 2022

/test pull-kubernetes-conformance-kind-ga-only-parallel

unrelated

@k8s-ci-robot k8s-ci-robot merged commit 759785e into kubernetes:master Aug 7, 2022
20 checks passed
SIG Auth automation moved this from Needs Triage to Closed / Done Aug 7, 2022
@fedebongio
Contributor

fedebongio commented Aug 9, 2022

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Aug 9, 2022