This repository has been archived by the owner on Mar 9, 2022. It is now read-only.

Change to keep in sync with latest cni config #825

Merged
merged 2 commits on Jun 21, 2018

Conversation

@abhi (Member) commented Jun 20, 2018

This commit contains a change to pick up the latest CNI config from the configured CNIConfDir. With this change, any edits to the CNI config files will be picked up on the kubelet's runtime status check call.

Of course, this would lead to undefined behavior if the CNI config is changed in parallel with pod creation. However, it is reasonable to assume that the operator knows to drain pods from a node before changing its CNI configuration. The behavior is currently not defined in Kubernetes, but I see a similar approach being adopted upstream with dockershim, so I am keeping the behavior consistent for now.

Signed-off-by: Abhinandan Prativadi abhi@docker.com

// Load the latest cni configuration to be in sync with the latest network configuration
if err := c.netPlugin.Load(cni.WithLoNetwork(), cni.WithDefaultConf()); err != nil {
logrus.WithError(err).Errorf("Failed to load cni configuration")
}
// Check the status of the cni initialization
if err := c.netPlugin.Status(); err != nil {
@abhi (Member, Author) commented:

@mikebrow regarding Load and Status:
Load loads all the CNI config files, and Status checks whether the required number of network configurations has been loaded. So I believe the status check needs to happen after Load.
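
To make the ordering concrete, here is a minimal, self-contained sketch with hypothetical types (not the actual go-cni API): Load refreshes the in-memory network list from whatever is on disk, and only then does Status have anything meaningful to check against the required minimum.

package main

import (
	"errors"
	"fmt"
)

// netPlugin is a hypothetical stand-in for the plugin wrapper discussed
// above; the real go-cni types differ. It only illustrates why a status
// check is meaningful only after a load.
type netPlugin struct {
	minNetworks int // e.g. loopback plus one default network
	loaded      int // number of network configs currently loaded
}

// Load re-reads every conf file found under the conf dir (simulated here by
// a list of file names) and replaces the previously loaded networks.
func (p *netPlugin) Load(confFiles ...string) error {
	p.loaded = len(confFiles)
	return nil
}

// Status reports whether enough networks are loaded to serve sandboxes.
func (p *netPlugin) Status() error {
	if p.loaded < p.minNetworks {
		return errors.New("cni plugin not initialized: required networks not loaded")
	}
	return nil
}

func main() {
	p := &netPlugin{minNetworks: 2}

	// Status before Load: nothing is loaded yet, so the check fails.
	fmt.Println("before load:", p.Status())

	// Load first, then Status: the ordering argued for above. Because the
	// load happens on every status call, edits to the conf dir are picked
	// up on the next check.
	_ = p.Load("99-loopback.conf", "10-mynet.conflist")
	fmt.Println("after load:", p.Status())
}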

@Random-Liu Random-Liu self-assigned this Jun 20, 2018
@Random-Liu Random-Liu added this to the v1.11 milestone Jun 20, 2018
@Random-Liu (Member) commented:

LGTM. Let's get the dependency in first.

@mikebrow (Member) left a comment:

Did we get the fix into the go-cni side so we don't actually load on every status check if there are no changes? Would not want to get a perf regression here.

@abhi (Member, Author) commented Jun 21, 2018

@mikebrow at this point I am keeping the behavior the same as upstream Kubernetes. I have not yet added the go-cni change to reload only when a config file is dirty; I wanted to understand all the possible config-change scenarios before we skip the reload. Also, AFAIK, generating a checksum would still require an io.Copy over each file, and Load is pretty much doing that already plus unmarshalling the config. We can revisit this if there is a measurable perf difference, with a suitable fix on the go-cni side or even on the cri side. WDYT?
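
For reference, a rough sketch of the checksum idea being weighed here (not part of this PR; the helper name and approach are illustrative): hash every file under the conf dir and skip the reload when the digest is unchanged. As noted above, computing the digest still reads each file via io.Copy, so it mostly saves the JSON unmarshalling rather than the disk I/O.

package cniconf

import (
	"crypto/sha256"
	"encoding/hex"
	"io"
	"os"
	"path/filepath"
	"sort"
)

// confDirDigest returns a digest over the names and contents of all regular
// files in dir, visited in a stable order, so a caller can compare it with
// the digest from the previous status check and reload only on change.
func confDirDigest(dir string) (string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return "", err
	}
	names := make([]string, 0, len(entries))
	for _, e := range entries {
		if !e.IsDir() {
			names = append(names, e.Name())
		}
	}
	sort.Strings(names)

	h := sha256.New()
	for _, name := range names {
		f, err := os.Open(filepath.Join(dir, name))
		if err != nil {
			return "", err
		}
		h.Write([]byte(name))
		_, err = io.Copy(h, f) // still reads every file from disk
		f.Close()
		if err != nil {
			return "", err
		}
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}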

@mikebrow (Member) commented:

@abhi sure ok.. maybe a todo for fsnotify.. cheers!
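
For completeness, a rough sketch of what that fsnotify TODO could look like (illustrative only, not part of this PR): watch CNIConfDir and invoke a reload callback only when a file under it actually changes, instead of reloading on every status call.

package cniwatch

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// watchConfDir invokes reload whenever a file under dir is created, written,
// removed, or renamed. The reload callback is a placeholder for re-running
// the plugin's Load.
func watchConfDir(dir string, reload func()) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	if err := watcher.Add(dir); err != nil {
		watcher.Close()
		return err
	}
	go func() {
		defer watcher.Close()
		for {
			select {
			case event, ok := <-watcher.Events:
				if !ok {
					return
				}
				if event.Op&(fsnotify.Create|fsnotify.Write|fsnotify.Remove|fsnotify.Rename) != 0 {
					reload()
				}
			case err, ok := <-watcher.Errors:
				if !ok {
					return
				}
				log.Printf("cni conf dir watch error: %v", err)
			}
		}
	}()
	return nil
}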

abhi added 2 commits June 21, 2018 20:43
This commit contains a change to pick up the latest CNI config
from the configured CNIConfDir.
With this change, any edits to the CNI config files will
be picked up on the kubelet's runtime status check call.
Of course, this would lead to undefined behavior if the CNI
config is changed in parallel with pod creation. However, it is
reasonable to assume that the operator knows to drain pods from
a node before changing its CNI configuration.
The behavior is currently not defined in Kubernetes, but I see
a similar approach being adopted upstream with dockershim,
so I am keeping the behavior consistent for now.

Signed-off-by: Abhinandan Prativadi <abhi@docker.com>
Signed-off-by: Abhinandan Prativadi <abhi@docker.com>
@Random-Liu (Member) commented:

/lgtm
