Change to keep in sync with latest cni config #825
Conversation
pkg/server/status.go
// Load the latest cni configuration to be in sync with the latest network configuration
if err := c.netPlugin.Load(cni.WithLoNetwork(), cni.WithDefaultConf()); err != nil {
	logrus.WithError(err).Errorf("Failed to load cni configuration")
}
// Check the status of the cni initialization
if err := c.netPlugin.Status(); err != nil {
@mikebrow regarding Load and Status:
Load will load all the CNI config files. Status will check whether the required number of network configurations has been loaded, so I believe Status would need to be called after Load.
LGTM. Let's get the dependency in first.
Did we get the fix into the go-cni side so we don't actually load on every status check if there are no changes? Would not want to get a perf regression here.
@mikebrow at this point I am keeping the behavior the same as upstream Kubernetes. I have not added the go-cni side change to load only on a dirty file yet; I just wanted to understand all the possible config changes before we ignore them. Also, AFAIK, to generate a checksum you would still do an io.Copy, and Load is pretty much doing that and then unmarshalling the config file. We can revisit if there is a measurable perf difference, with a suitable fix on the go-cni side or even on the CRI side. WDYT?
@abhi sure ok.. maybe a TODO for fsnotify.. cheers!
This commit contains a change to pick up the latest CNI config from the configured CNIConfDir. With this change, any changes made to the CNI config file will be picked up on the kubelet's runtime status check call. Of course, this could lead to undefined behavior when the CNI config is changed in parallel with pod creation; however, it is reasonable to assume that the operator knows to drain the node of pods before making a CNI configuration change. The behavior is currently not defined in Kubernetes, but a similar approach is adopted in upstream Kubernetes with dockershim, so this keeps the behavior consistent for now. Signed-off-by: Abhinandan Prativadi <abhi@docker.com>
/lgtm