[WIP] Egress policy #1987
Conversation
Codecov Report
@@ Coverage Diff @@
## main #1987 +/- ##
==========================================
- Coverage 65.39% 58.03% -7.37%
==========================================
Files 197 203 +6
Lines 17217 18069 +852
==========================================
- Hits 11259 10486 -773
- Misses 4785 6472 +1687
+ Partials 1173 1111 -62
Flags with carried forward coverage won't be shown.
Force-pushed from 23393d3 to ba64e3c
	}
	c.ReplaceEgressGroups(policies)
	return nil
}
The function variables above seem unnecessary. This could just be inline code like:
func (c *Controller) watch() {
klog.Info("Starting watch for EgressGroup")
antreaClient, err := c.antreaClientProvider.GetAntreaClient()
...
watcher, err := antreaClient.ControlplaneV1beta1().EgressGroups().Watch(context.TODO(), options)
...
for {
select {
case event, ok := <-watcher.ResultChan():
if !ok {
return
}
switch event.Type {
case watch.Added:
klog.V(2).Infof("Added EgressGroup (%#v)", event.Object)
c.addEgressGroup(...)
The abstract class in networkpolicy exists to share code between 3 watchers, which doesn't make sense here, unless you extract the abstract class from the networkpolicy controller into a common package; but I would defer that refactoring until the other essential changes are finished.
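The shape of the inline watch loop suggested above can be sketched with a self-contained stand-in. The `event` type and channel here are hypothetical placeholders for client-go's `watch.Interface` result channel, and `handleEvents` is an illustrative name, not the actual Antrea API:

```go
package main

import "fmt"

// eventType and event are local stand-ins for k8s.io/apimachinery/pkg/watch types.
type eventType string

const (
	added   eventType = "ADDED"
	deleted eventType = "DELETED"
)

type event struct {
	Type   eventType
	Object string // placeholder for the EgressGroup object
}

// handleEvents consumes events until the channel is closed, mirroring the
// for/select loop proposed in the review comment. It returns a log of what
// it processed so the behavior is easy to inspect.
func handleEvents(ch <-chan event) []string {
	var log []string
	for {
		select {
		case ev, ok := <-ch:
			if !ok {
				// Watcher closed: the real controller would restart the watch.
				return log
			}
			switch ev.Type {
			case added:
				log = append(log, "add:"+ev.Object)
			case deleted:
				log = append(log, "del:"+ev.Object)
			}
		}
	}
}

func main() {
	ch := make(chan event, 2)
	ch <- event{Type: added, Object: "group-a"}
	ch <- event{Type: deleted, Object: "group-a"}
	close(ch)
	fmt.Println(handleEvents(ch))
}
```

The point of inlining is that with a single watcher there is nothing to share, so the indirection through function variables only obscures the control flow.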
this is not resolved?
) *Controller {
	c := &Controller{
		kubeClient: kubeClient,
		nodeIP:     nodeIP,
The SNAT IP can be any IP configured on the Node, not only the K8s Node IP. If you are not sure how to get it, I could provide an interface like LocalIPDetector.IsLocalIP(ip string) bool for you to consume.
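A minimal sketch of the suggested interface, assuming it only needs to check addresses currently assigned to the node's interfaces. This one-shot version uses only the standard library; a production detector (like the one Antrea eventually shipped) would also watch for address changes rather than snapshotting once:

```go
package main

import (
	"fmt"
	"net"
)

// LocalIPDetector reports whether an IP address is assigned to this node.
type LocalIPDetector interface {
	IsLocalIP(ip string) bool
}

// snapshotDetector is a one-shot implementation: it lists the addresses of
// all interfaces once and answers membership queries against that set.
type snapshotDetector struct {
	localIPs map[string]bool
}

func NewSnapshotDetector() (*snapshotDetector, error) {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return nil, err
	}
	ips := make(map[string]bool)
	for _, addr := range addrs {
		if ipNet, ok := addr.(*net.IPNet); ok {
			ips[ipNet.IP.String()] = true
		}
	}
	return &snapshotDetector{localIPs: ips}, nil
}

func (d *snapshotDetector) IsLocalIP(ip string) bool {
	return d.localIPs[ip]
}

func main() {
	d, err := NewSnapshotDetector()
	if err != nil {
		panic(err)
	}
	// Loopback is configured on virtually every host.
	fmt.Println(d.IsLocalIP("127.0.0.1"))
}
```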
Thanks, I will try to implement LocalIPDetector.IsLocalIP(ip string) bool by myself first.
I have added IsLocalIP(ip string) in this PR. Please check if it is correct, thanks.
Can you squash your commits into one so that it's easy to review the changes of this PR? I mean the commits except the ones merged from other PRs.
	}
	c.ReplaceEgressGroups(policies)
	return nil
}
this is not resolved?
func (c *Controller) AddEgressGroup(group *v1beta1.EgressGroup) error {
	c.setByGroupLock.Lock()
	defer c.setByGroupLock.Unlock()
	klog.Infof("%#v", group)
Can you polish the logging in this file? Currently the messages are unreadable and unnecessarily verbose.
}
if isLocalIP(newEgressIP) {
	id, errAllocate := c.IPAllocator.allocateForIP(newEgressIP)
The usage of the IPAllocatorLock is not right. What if oldEgressIP is not a local IP? Is allocateForIP thread-safe? Typically the lock should be inside the called function.
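The locking convention the reviewer describes can be sketched as follows. The `ipAllocator`, `allocateForIP`, and `release` names are modeled on the PR but are illustrative, not the actual Antrea code; the key point is that the mutex lives inside the allocator, so every caller is automatically serialized and cannot forget to take the lock:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// ipAllocator hands out integer IDs for SNAT IPs. The mutex is internal,
// so allocateForIP and release are safe to call from multiple goroutines.
type ipAllocator struct {
	mu     sync.Mutex
	nextID int
	ids    map[string]int
}

func newIPAllocator() *ipAllocator {
	return &ipAllocator{nextID: 1, ids: make(map[string]int)}
}

// allocateForIP returns the existing ID for ip, or allocates a new one.
func (a *ipAllocator) allocateForIP(ip string) (int, error) {
	a.mu.Lock()
	defer a.mu.Unlock()
	if ip == "" {
		return 0, errors.New("empty IP")
	}
	if id, ok := a.ids[ip]; ok {
		return id, nil
	}
	id := a.nextID
	a.nextID++
	a.ids[ip] = id
	return id, nil
}

// release frees the ID held by ip, if any.
func (a *ipAllocator) release(ip string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	delete(a.ids, ip)
}

func main() {
	a := newIPAllocator()
	id, _ := a.allocateForIP("10.0.0.1")
	fmt.Println(id)
}
```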
Fixed the logic. It should check if isLocalIP(newEgressIP) and if isLocalIP(oldEgressIP) as well.
klog.Infof("%#v\n", c.IPAllocator)

if errAllocate != nil {
	c.IPAllocator.allocateForIP(oldEgressIP)
Is this some kind of retry? It doesn't seem elegant to me. And a lot of error handling is missing here, as well as in other places in this file.
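One common way to make this kind of undo explicit, sketched below with hypothetical names (`alloc`, `moveEgressIP` are not the actual Antrea code), is to treat re-allocating the old IP as a rollback and surface both errors rather than silently re-calling the allocator:

```go
package main

import (
	"errors"
	"fmt"
)

// alloc is a toy allocator: allocate fails for IPs it was told to reject.
type alloc struct {
	rejected map[string]bool
	held     map[string]bool
}

func (a *alloc) allocate(ip string) error {
	if a.rejected[ip] {
		return fmt.Errorf("cannot allocate %s", ip)
	}
	a.held[ip] = true
	return nil
}

func (a *alloc) release(ip string) { delete(a.held, ip) }

// moveEgressIP releases oldIP, tries newIP, and rolls back to oldIP on
// failure, propagating every error instead of ignoring them.
func moveEgressIP(a *alloc, oldIP, newIP string) error {
	a.release(oldIP)
	if err := a.allocate(newIP); err != nil {
		if rbErr := a.allocate(oldIP); rbErr != nil {
			return errors.Join(err, fmt.Errorf("rollback to %s failed: %w", oldIP, rbErr))
		}
		return fmt.Errorf("allocating %s failed, rolled back to %s: %w", newIP, oldIP, err)
	}
	return nil
}

func main() {
	a := &alloc{
		rejected: map[string]bool{"10.0.0.2": true},
		held:     map[string]bool{"10.0.0.1": true},
	}
	err := moveEgressIP(a, "10.0.0.1", "10.0.0.2")
	fmt.Println(err != nil, a.held["10.0.0.1"])
}
```

Naming the rollback makes the intent obvious and forces the author to decide what happens when the rollback itself fails.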
klog.Infof("%#v", gm.Member)
klog.Infof("%#v", interfaces)
klog.Infof("%t", isInterfaceOnPod(interfaces))
ditto
	Member: group,
}
klog.Infof("%#v", groupMember)
c.queue.Add(groupMember)
This is not the correct usage pattern of workqueue. This is no different from just using a channel. Keys won't be merged, because the key may differ even when the IP and the Pod's namespace and name are the same. And there will be a race condition if one Pod is processed by two workers in parallel.
It should be the "Egress" name that is enqueued to the workqueue, in my mind. All Egress and EgressGroup handlers just enqueue the affected Egresses, and syncHandler, called by the worker, is the only one that does the ID allocation and calls the underlying interfaces based on the eventual state at the moment the function is called.
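The pattern described above, enqueueing only the Egress name and letting a single syncHandler reconcile the eventual state, can be sketched with a simplified stand-in for client-go's workqueue. The `nameQueue` below is a toy that only demonstrates the key-merging behavior; real controllers use client-go's `workqueue` package, which additionally handles rate limiting and concurrent workers:

```go
package main

import "fmt"

// nameQueue is a toy deduplicating queue: adding the same Egress name twice
// before it is processed results in a single item, the way client-go's
// workqueue merges identical keys.
type nameQueue struct {
	order   []string
	present map[string]bool
}

func newNameQueue() *nameQueue {
	return &nameQueue{present: make(map[string]bool)}
}

func (q *nameQueue) Add(name string) {
	if q.present[name] {
		return // key merged: already queued; the worker will read the latest state anyway
	}
	q.present[name] = true
	q.order = append(q.order, name)
}

func (q *nameQueue) Get() (string, bool) {
	if len(q.order) == 0 {
		return "", false
	}
	name := q.order[0]
	q.order = q.order[1:]
	delete(q.present, name)
	return name, true
}

// syncEgress is the single reconciler: it reads the desired state for the
// named Egress at call time and applies it (elided here).
func syncEgress(name string) { fmt.Println("sync", name) }

func main() {
	q := newNameQueue()
	// Egress and EgressGroup event handlers only enqueue affected Egress names.
	q.Add("egress-a")
	q.Add("egress-a") // merged with the pending item
	q.Add("egress-b")
	for name, ok := q.Get(); ok; name, ok = q.Get() {
		syncEgress(name)
	}
}
```

Because the item is a stable name rather than a snapshot of state, duplicate events collapse and no two workers can race on the same Egress.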
Force-pushed from 418051a to 669395e
Add types for egress policy CRDs, including Egress for the egress policy definition, and a common AppliedTo struct for defining the scope to which a policy is applied.
Force-pushed from 47d3c96 to 4c0c149
…s and iptables changes
Force-pushed from 4c0c149 to 45617b7
Squashed. I think there are files I have handled incorrectly in the git history, but it looks better now, and I will reorganize my commits after the other work is merged.
Superseded by #2026, closing it.