Implement DeepCopy for CiliumNetworkPolicy objects #1885
Labels
kind/enhancement
This would improve or streamline existing functionality.
priority/low
This is considered nice to have.
sig/k8s
Impacts the kubernetes API, or kubernetes -> cilium internals translation layers.
Comments
ianvernon added the sig/k8s and kind/enhancement labels on Oct 27, 2017
ianvernon pushed a commit that referenced this issue on Oct 27, 2017:
An issue was observed where CiliumNetworkPolicy objects were repeatedly created and deleted when a CiliumNetworkPolicy containing a Kafka rule was imported via Kubernetes. This kept occurring because, when a CiliumNetworkPolicy object is updated in Kubernetes, `daemon/k8s_watcher.go:updateCiliumNetworkPolicy` checks whether the provided old and new CiliumNetworkPolicy objects are equal. The old and new objects were not equal because parsing is performed on CiliumNetworkPolicy pointers: the rules being pointed to are modified when they are parsed, and the pointers to the rules are cached locally by the Kubernetes watcher. Thus the rules provided to `daemon/k8s_watcher.go:updateCiliumNetworkPolicy` on future updates are the modified copies of the rules that were parsed previously. The provided old rule had been modified by parsing, while the new rule had not. Since the objects were not truly equal, the early return in `daemon/k8s_watcher.go:updateCiliumNetworkPolicy` was never taken:

```
// Since we are updating the status map from all nodes we need to prevent
// deletion and addition of all rules in cilium.
if oldRule.SpecEquals(newRules) {
	return
}
d.deleteCiliumNetworkPolicy(oldObj)
d.addCiliumNetworkPolicy(newObj)
```

and deletions and additions of CiliumNetworkPolicies kept being triggered instead.

This commit parses both rules before checking whether they are equal, normalizing them. This is a workaround; the correct fix is to DeepCopy the old / new objects before they are modified, which #1885 covers.

Signed-off-by: Ian Vernon <ian@cilium.io>
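The DeepCopy fix proposed above can be sketched as follows. This is a minimal, hypothetical illustration, not the actual Cilium types: `Rule`, `parse`, and the simplified `CiliumNetworkPolicy` struct here are stand-ins for the real definitions. The point is that parsing a deep copy leaves the watcher's cached object untouched, so later equality checks behave as expected.

```go
package main

import (
	"fmt"
	"reflect"
)

// Simplified stand-ins for the real CiliumNetworkPolicy types
// (illustrative only; not the actual Cilium API definitions).
type Rule struct {
	Labels []string
}

type CiliumNetworkPolicy struct {
	Spec *Rule
}

// DeepCopy returns a copy that shares no pointers with the original,
// so in-place normalization cannot mutate objects cached by the watcher.
func (cnp *CiliumNetworkPolicy) DeepCopy() *CiliumNetworkPolicy {
	if cnp == nil {
		return nil
	}
	out := &CiliumNetworkPolicy{}
	if cnp.Spec != nil {
		// Copy the slice into a fresh backing array.
		out.Spec = &Rule{Labels: append([]string(nil), cnp.Spec.Labels...)}
	}
	return out
}

// parse mutates the rule in place, mimicking the normalization that
// made cached watcher objects drift from freshly received ones.
func parse(r *Rule) {
	r.Labels = append(r.Labels, "normalized")
}

func main() {
	cached := &CiliumNetworkPolicy{Spec: &Rule{Labels: []string{"k8s"}}}

	// Parse a deep copy instead of the cached object.
	cp := cached.DeepCopy()
	parse(cp.Spec)

	// The cached original is untouched, so a future update comparing
	// old and new objects sees them as equal when nothing changed.
	fmt.Println(reflect.DeepEqual(cached.Spec.Labels, []string{"k8s"})) // true
	fmt.Println(reflect.DeepEqual(cp.Spec.Labels, []string{"k8s", "normalized"})) // true
}
```

In the real codebase, Kubernetes code generation (`k8s.io/code-generator`) is typically used to produce `DeepCopy` methods for API types rather than writing them by hand.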
tgraf pushed a commit that referenced this issue on Oct 28, 2017
tgraf pushed a commit that referenced this issue on Oct 30, 2017:
(Same commit message as above.) Backports: cda7d47. Signed-off-by: Ian Vernon <ian@cilium.io>
tgraf pushed a commit that referenced this issue on Oct 31, 2017:
(Same commit message as above.) Backports: cda7d47. Signed-off-by: Ian Vernon <ian@cilium.io>
TODO