Support k8s egress policies #2624

Closed
awh opened this Issue Nov 8, 2016 · 41 comments

@awh
Member

awh commented Nov 8, 2016

From @awh on September 21, 2016 9:06

The k8s network SIG is developing an egress policy specification which we should support.

Copied from original issue: weaveworks-experiments/weave-npc#9

@shaun-willows

shaun-willows commented Feb 21, 2017

Having egress policies would be very beneficial to my organisation. We are trying to restrict access from pods to an ExternalName Service (e.g. an AWS RDS instance), as noted in this StackOverflow post.

Is there any ballpark timescale for when this might be implemented? It does seem as if the discussion regarding egress policies has stalled, and the proposal document has not been updated in a while.

Project Calico provides egress policy via their own policy extensions. Could Weave do anything similar?

For reference, this is the post on the K8S slack community that led me to this issue.

@brb

Contributor

brb commented Feb 21, 2017

@iblocks-shaun Thanks for the info.

We will take egress policies into consideration during the next release planning, which should happen in the second half of March.

@Cryptophobia

Cryptophobia commented Jul 11, 2017

@brb : Any update on this issue/feature?

We are also interested in specifying egress policies; restricting access from pods to ExternalName Services like AWS RDS is a big need for us right now.

@brb

Contributor

brb commented Jul 24, 2017

@Cryptophobia Sorry, but the issue hasn't been prioritized yet. However, contributions are more than welcome.

@Cryptophobia

Cryptophobia commented Jul 27, 2017

@brb Can anyone point me to what sections of the code need to be updated for this to be added as a feature?

@bboreham

Member

bboreham commented Jul 27, 2017

The current Network Policy Controller is under https://github.com/weaveworks/weave/tree/master/npc, with main program at https://github.com/weaveworks/weave/tree/master/prog/weave-npc

Note that there's no way in Kubernetes to define an egress policy, so that would need to be stored in an annotation or something else free-form until the official version is agreed and included.

@brb

Contributor

brb commented Jul 30, 2017

@RRAlex

RRAlex commented Nov 17, 2017

Since 1.8 release, it seems the K8s documentation has egress examples:

kubernetes/enhancements#366
https://kubernetes.io/docs/concepts/services-networking/network-policies/#the-networkpolicy-resource

... so I'm just posting this here, in case it's news to some, to indicate that K8s is eventually moving toward egress support (which seems to be limited to design principles for now?).

@bboreham

Member

bboreham commented Nov 17, 2017

Thanks @RRAlex, the API definitions for egress policies were added as beta in Kubernetes 1.8; see kubernetes/enhancements#367

This remains on the todo list for Weave Net.
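For readers landing here, the v1 egress API shape that went beta in 1.8 looks roughly like this (namespace, name, CIDR and port below are purely illustrative):

```yaml
# Illustrative only: all names, CIDRs and ports here are made up.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: demo
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Egress               # this policy constrains egress only
  egress:
    - to:
        - ipBlock:
            cidr: 10.96.0.0/12   # e.g. a cluster service range
      ports:
        - protocol: UDP
          port: 53
```

With `policyTypes: [Egress]` set, any egress not matched by a rule in the `egress` list is denied for the selected pods.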

@mattgarkusha

mattgarkusha commented Nov 21, 2017

We are very interested in this. Would love to see this done soon! 👍

@xeor

xeor commented Jan 14, 2018

Any status on this? No support for egress policies is a blocker for us :/

@zetaab

zetaab commented Feb 13, 2018

Or could we start from this point: is it currently possible to implement this in Weave, and if so, how (i.e. the theory behind the solution)? I could at least implement something, but I have no idea where to start because I have not contributed anything to Weave before.

@Cryptophobia

Cryptophobia commented Feb 14, 2018

Good question @zetaab

@brb

Contributor

brb commented Feb 14, 2018

@zetaab Yes, it's possible to implement it. You would need to extend weave-npc (src, design doc) to make it manage the appropriate iptables rules in the filter table, along with ipsets. I could guide you through the implementation after you have familiarized yourself with the existing design and concepts of weave-npc.

Also, AFAIK, @bjhaid has been working on this feature. Perhaps you could synchronize with @bjhaid first.

@bjhaid

Contributor

bjhaid commented Feb 14, 2018

Also, AFAIK, @bjhaid has been working on this feature. Perhaps you could synchronize with @bjhaid first.

I stopped working on this when you mentioned Martynas was working on it.

@Cryptophobia

Cryptophobia commented Feb 14, 2018

Can we give this feature some priority? I can try to take a look as well if help is needed.

NetworkPolicies are a feature that everyone wants, and K8s is moving that way. Flannel supports them by integrating with Canal. Not sure if using Canal is possible with Weave, since Weave uses weave-npc.

Calico supports them: https://docs.projectcalico.org/v2.0/getting-started/kubernetes/tutorials/advanced-policy

@zetaab

zetaab commented Feb 14, 2018

I think quite a few enterprise companies want egress NetworkPolicies as well. The common problem in Kubernetes is that if you want to connect to databases outside Kubernetes, you need to open the whole cluster towards those databases. If, say, team1 and team2 use the same cluster, we do not want team2 to even be able to connect to team1's database. Egress NetworkPolicies would help here, and that's why I want them.

@brb

Contributor

brb commented Feb 14, 2018

@bjhaid I guess there is some misunderstanding, as I've been working on the ingress policy and unfortunately I've never intended to work on the egress part (I'm Martynas, btw;-).

@bjhaid

Contributor

bjhaid commented Feb 14, 2018

@brb sorry I mistook you for @bboreham my bad

@mrdima

mrdima commented Mar 21, 2018

We also need egress badly. The lack of this in Weave will force us to move to Calico (or Cilium, but that still seems a bit young) in about a month or so; we don't have time to extend Weave ourselves. So to confirm: at this moment nobody is working on this, and there is no priority from Weaveworks or other devs?

@brb

Contributor

brb commented Mar 21, 2018

@mrdima We're considering adding this to the next milestone. I will update once a decision has been made.

@carpenterm

carpenterm commented May 9, 2018

@brb did you get to a decision on this in the end? We are considering moving off Weave to another network provider in order to get egress support but I'd rather not have to go through the pain of re-provisioning clusters if it is just around the corner.

@brb brb self-assigned this May 13, 2018

@brb brb added this to the 2.4 milestone May 13, 2018

@brb

Contributor

brb commented May 13, 2018

I'm going to work on this (no ETA at the moment, sorry).

To everyone who raised an interest in the feature: what is your use case? Do you want to filter off-cluster traffic (e.g. deny the internet, but allow github.com) or in-cluster traffic (e.g. allow only podA -> podB)? If the latter, do you plan to use ipBlock (CIDR) selector or pod/namespace label selectors?
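To make the question concrete, the two styles could be combined in a single policy like this (a sketch; all names, labels and CIDRs here are assumptions):

```yaml
# Sketch contrasting the two egress styles asked about above:
# an ipBlock (CIDR) rule for off-cluster traffic, and a label-selector
# rule for in-cluster pod-to-pod traffic. All names are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: egress-examples
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: podA            # policy applies to podA
  policyTypes:
    - Egress
  egress:
    - to:                  # off-cluster: allow only a specific CIDR
        - ipBlock:
            cidr: 203.0.113.0/24
    - to:                  # in-cluster: allow only podA -> podB
        - podSelector:
            matchLabels:
              app: podB
```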

@sstarcher

sstarcher commented May 13, 2018

My primary interest is having other things running in AWS in different VPCs and being able to only allow egress to a specific CIDR outside of the cluster in a 10.x.x.x network. For pod->pod traffic I would tend to use ingress to block who can talk to a pod instead of using egress to block pod to pod.

I will echo @carpenterm's statement that I may need to move off Weave in the next month to gain egress network policies.
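The ingress-side alternative mentioned above (restricting who may talk to a pod rather than writing pod-to-pod egress rules) might be sketched like this (labels and names are illustrative):

```yaml
# Sketch: only pods labelled app=frontend may reach app=api pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: only-frontend-to-api
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
```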

@Cryptophobia

Cryptophobia commented May 13, 2018

@brb In our use case we would like to be able to filter in-cluster traffic (podA -> podB). Ideally we would want to do it with pod/namespace label selectors, as that would be easiest. Let's say we have third-party applications on our cluster. We would like for them not to be able to discover other apps running inside the cluster. Does that mean ipBlock (CIDR) selectors can be automatic per namespace?

@brb

Contributor

brb commented May 14, 2018

@Cryptophobia

Let's say we have third-party applications on our cluster. We would like for them not to be able to discover other apps running inside the cluster.

This use-case is already supported by ingress NetworkPolicy:

  1. Put the third-party apps in a separate namespace (ns-foobar).
  2. Create an ingress netpol which selects the other pods and allows traffic to them only from the listed namespaces minus ns-foobar.

Does that mean ipBlock (CIDR) selectors can be automatic per namespace?

Not sure whether I understood your question. ipBlock is specified by user and does not change automatically, see https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#ipblock-v1beta1-extensions.
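One way to express the two-step recipe above, assuming each namespace carries a name label matching its name (that label is an assumption; Kubernetes does not set it automatically at this point):

```yaml
# Sketch: allow ingress to all pods in this namespace only from
# namespaces NOT labelled name=ns-foobar. The "name" label on
# namespaces is an assumption and must be applied by the operator.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-third-party
  namespace: default
spec:
  podSelector: {}          # step 2: select the other pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchExpressions:
              - key: name
                operator: NotIn
                values: ["ns-foobar"]
```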

@Cryptophobia

Cryptophobia commented May 14, 2018

@brb, Yes, most of the use case is supported by ingress NetworkPolicy, but for egress we would want to block CIDR ranges such as specific database ranges (AWS RDS) from discoverability. This is where we would need Weave Net to support it.

@RRAlex

RRAlex commented May 15, 2018

My use case is to block exfiltration of data from pods that have no reason to talk to the outside world beyond their upstream (ingress) pod.

I.e. if someone gets into the backend database, they have to go back out the way they came in, so an executed (exploit) script can't simply push data to the internet at large.

@jeroenjacobs1205

jeroenjacobs1205 commented May 24, 2018

Another use case would be when you have to run untrusted code.

Let's say you have 3 namespaces with trusted code, and 1 namespace where you run untrusted code.

With one egress rule you can block that entire untrusted namespace from making connections to other systems.

Without egress rule support, you need to do it the other way: create ingress rules for each trusted namespace to block incoming traffic from the untrusted namespace.
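The single rule described above could be sketched as a default-deny egress policy in the untrusted namespace (the namespace name is illustrative):

```yaml
# Sketch: with policyTypes including Egress and no egress rules
# listed, all outbound connections from pods in this namespace
# are denied. "untrusted" is a made-up namespace name.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress
  namespace: untrusted
spec:
  podSelector: {}          # every pod in the namespace
  policyTypes:
    - Egress
```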

@Cryptophobia

Cryptophobia commented May 24, 2018

@jeroenjacobs1205 : That is also a good use case and it will be relevant for our environment as well. The egress rule support definitely makes a lot of the "inverse" ingress rules simpler.

@brb

Contributor

brb commented May 26, 2018

Thanks for the use cases.

Short update from my side: I made some progress and most functionality has been implemented. Going to provide images for testing next week.

@brb

Contributor

brb commented Jun 3, 2018

I've published the test images: weaveworks/weave-kube:egress-test and weaveworks/weave-npc:egress-test (amd64 only, no ipblock in ingress, works in the non-legacy netpol mode (default)). To use them, you need to update the weave DaemonSet file.

Please let me know about any bugs you encounter.
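For anyone unsure what "update the weave DaemonSet file" means here, the relevant fragment would look roughly like this (container names follow the stock weave-net manifest and are an assumption):

```yaml
# Hypothetical fragment of the weave-net DaemonSet manifest, with the
# two container images switched to the test tags mentioned above.
spec:
  template:
    spec:
      containers:
        - name: weave
          image: weaveworks/weave-kube:egress-test
        - name: weave-npc
          image: weaveworks/weave-npc:egress-test
```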

@brb

Contributor

brb commented Jun 11, 2018

Any feedback regarding the test images?

The Egress PR is ready for a code review: #3313.

@pgandhipfpt

pgandhipfpt commented Jun 19, 2018

@brb - I tried these test images to enforce egress policies with Weave Net and it did work. I had to apply a network policy.

@zeeZ

zeeZ commented Jun 26, 2018

After rolling out the egress-test images over 2.3.0 I'm seeing a lot of the following:

Jun 26 11:15:21 worker-4 kubelet-wrapper[796]: E0626 11:15:21.233392     796 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "REDACTED" network: netplugin failed but error parsing its diagnostic message "": unexpected end of JSON input
Jun 26 11:15:21 worker-4 kubelet-wrapper[796]: E0626 11:15:21.233586     796 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "REDACTED(2ac293b6-7932-11e8-8362-005056ac7291)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "REDACTED" network: netplugin failed but error parsing its diagnostic message "": unexpected end of JSON input
Jun 26 11:15:21 worker-4 kubelet-wrapper[796]: E0626 11:15:21.233675     796 kuberuntime_manager.go:647] createPodSandbox for pod "REDACTED(2ac293b6-7932-11e8-8362-005056ac7291)" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "REDACTED" network: netplugin failed but error parsing its diagnostic message "": unexpected end of JSON input
Jun 26 11:15:21 worker-4 kubelet-wrapper[796]: E0626 11:15:21.233959     796 pod_workers.go:186] Error syncing pod 2ac293b6-7932-11e8-8362-005056ac7291 ("REDACTED(2ac293b6-7932-11e8-8362-005056ac7291)"), skipping: failed to "CreatePodSandbox" for "REDACTED(2ac293b6-7932-11e8-8362-005056ac7291)" with CreatePodSandboxError: "CreatePodSandbox for pod \"REDACTED(2ac293b6-7932-11e8-8362-005056ac7291)\" failed: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod \"REDACTED\" network: netplugin failed but error parsing its diagnostic message \"\": unexpected end of JSON input"
Jun 26 11:15:22 worker-4 kubelet-wrapper[796]: E0626 11:15:22.378308     796 cni.go:259] Error adding network: netplugin failed but error parsing its diagnostic message "": unexpected end of JSON input
Jun 26 11:15:22 worker-4 kubelet-wrapper[796]: E0626 11:15:22.379410     796 cni.go:227] Error while adding to cni network: netplugin failed but error parsing its diagnostic message "": unexpected end of JSON input

For what seems like new pods.

Rolling back to 2.3.0 causes NPC to fail on all nodes with

ERRO: 2018/06/26 11:33:52.265018 Failed to destroy ipset 'weave-local-pods'
FATA: 2018/06/26 11:33:52.265252 ipset [destroy weave-local-pods] failed: ipset v6.32: Set cannot be destroyed: it is in use by a kernel component
: exit status 1

Kubernetes 1.9.8
CoreOS 1745.7.0
CNI Plugins 0.7.0

No debug logs available right now as I don't have the time.

@brb

Contributor

brb commented Jun 28, 2018

@zeeZ

No debug logs available right now as I don't have the time.

Without the logs I can't help much.

After rolling back to 2.3.0, you need either to restart each machine or to manually remove the iptables rules which involve the weave-local-pods ipset.
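A rough sketch of the manual cleanup, assuming shell access to each node; the exact rule specs vary per cluster, so inspect before deleting:

```shell
# Inspect which rules reference the ipset first.
iptables-save | grep weave-local-pods
# Delete each matching rule by its spec, e.g. (illustrative only):
# iptables -D INPUT -m set --match-set weave-local-pods dst -j ACCEPT
# Once no rules reference the set, it can be destroyed.
ipset destroy weave-local-pods
```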

@zeeZ

zeeZ commented Jun 29, 2018

I found a little bit of time to try again.

It's not a global problem and seems to be limited to a few of my nodes. They should all be identical, from the kubelet configuration all the way down to the hypervisor, and differ only in the workload placed on them.

There's no difference in weave (kube and npc) debug logs between working and failing nodes. I still have a few ideas where I can poke them to see if I can reproduce it, but I can only do that during off-hours since the cluster is quite busy...

@lukaskorte

lukaskorte commented Jul 13, 2018

For me, weave-npc does not even start up with the provided test image. It fails when creating an already-existing ipset:

INFO: 2018/07/13 16:55:51.645144 Starting Weaveworks NPC git-df5b97d862ed; node name "ip-172-29-46-227.eu-central-1.compute.internal"
INFO: 2018/07/13 16:55:51.673792 Serving /metrics on :6781
Fri Jul 13 16:55:51 2018 <5> ulogd.c:843 building new pluginstance stack: 'log1:NFLOG,base1:BASE,pcap1:PCAP'
DEBU: 2018/07/13 16:55:51.776015 Got list of ipsets: [weave-local-pods weave-UMxK?{GO%7OycP~!^^s*L05Sp weave-_LBCcOyuz1AmElI4Pd1F%SsUz weave-Zo/!7V2R{w|%#l*./m|7u6fMN weave-rcinEqHh2sh^C@s0~:OXV|Yvb weave-|/@lloKZP8!AOO#2a}MxZZfoC weave-U5SiJFJN~eirGIBbiP;ioZoH+ weave-R[=C1lRjOBDgUKPgZq.C~?=Dl weave-P.B|!ZhkAr5q=XZ?3}tMBA+0 weave-E1ney4o[ojNrLk.6rOHi;7MPE weave-iuZcey(5DeXbzgRFs8Szo]+@p weave-hWs^#nTNMc3*;Fl5A/)w_@}|u weave-o608|!Z0:jrD1x|cH@@*PM49l weave-Mw4P)/_2XIQh7ovSsfwQt@6QP weave-e/%EK}DzcI:S7d%1(LJJ[?j/R weave-=i5([apPyor5NS@|:sYOqiS:l]
DEBU: 2018/07/13 16:55:51.776038 Flushing ipset 'weave-local-pods'
DEBU: 2018/07/13 16:55:51.776575 Flushing ipset 'weave-UMxK?{GO%7OycP~!^^s*L05Sp'
DEBU: 2018/07/13 16:55:51.777285 Flushing ipset 'weave-_LBCcOyuz1AmElI4Pd1F%SsUz'
DEBU: 2018/07/13 16:55:51.777875 Flushing ipset 'weave-Zo/!7V2R{w|%#l*./m|7u6fMN'
DEBU: 2018/07/13 16:55:51.778507 Flushing ipset 'weave-rcinEqHh2sh^C@s0~:OXV|Yvb'
DEBU: 2018/07/13 16:55:51.779049 Flushing ipset 'weave-|/@lloKZP8!AOO#2a}MxZZfoC'
DEBU: 2018/07/13 16:55:51.779572 Flushing ipset 'weave-U5SiJFJN~eirGIBbiP;ioZoH+'
DEBU: 2018/07/13 16:55:51.780092 Flushing ipset 'weave-R[=C1lRjOBDgUKPgZq.C~?=Dl'
DEBU: 2018/07/13 16:55:51.780646 Flushing ipset 'weave-P.B|!ZhkAr5q=XZ?3}tMBA+0'
DEBU: 2018/07/13 16:55:51.781445 Flushing ipset 'weave-E1ney4o[ojNrLk.6rOHi;7MPE'
DEBU: 2018/07/13 16:55:51.781989 Flushing ipset 'weave-iuZcey(5DeXbzgRFs8Szo]+@p'
DEBU: 2018/07/13 16:55:51.782549 Flushing ipset 'weave-hWs^#nTNMc3*;Fl5A/)w_@}|u'
DEBU: 2018/07/13 16:55:51.783131 Flushing ipset 'weave-o608|!Z0:jrD1x|cH@@*PM49l'
DEBU: 2018/07/13 16:55:51.783605 Flushing ipset 'weave-Mw4P)/_2XIQh7ovSsfwQt@6QP'
DEBU: 2018/07/13 16:55:51.784223 Flushing ipset 'weave-e/%EK}DzcI:S7d%1(LJJ[?j/R'
DEBU: 2018/07/13 16:55:51.784784 Flushing ipset 'weave-=i5([apPyor5NS@|:sYOqiS:l'
DEBU: 2018/07/13 16:55:51.785397 Destroying ipset 'weave-UMxK?{GO%7OycP~!^^s*L05Sp'
DEBU: 2018/07/13 16:55:51.785927 Destroying ipset 'weave-_LBCcOyuz1AmElI4Pd1F%SsUz'
DEBU: 2018/07/13 16:55:51.786414 Destroying ipset 'weave-Zo/!7V2R{w|%#l*./m|7u6fMN'
DEBU: 2018/07/13 16:55:51.787019 Destroying ipset 'weave-rcinEqHh2sh^C@s0~:OXV|Yvb'
DEBU: 2018/07/13 16:55:51.787525 Destroying ipset 'weave-|/@lloKZP8!AOO#2a}MxZZfoC'
DEBU: 2018/07/13 16:55:51.788090 Destroying ipset 'weave-U5SiJFJN~eirGIBbiP;ioZoH+'
DEBU: 2018/07/13 16:55:51.788628 Destroying ipset 'weave-R[=C1lRjOBDgUKPgZq.C~?=Dl'
DEBU: 2018/07/13 16:55:51.789153 Destroying ipset 'weave-P.B|!ZhkAr5q=XZ?3}tMBA+0'
DEBU: 2018/07/13 16:55:51.789647 Destroying ipset 'weave-E1ney4o[ojNrLk.6rOHi;7MPE'
DEBU: 2018/07/13 16:55:51.790124 Destroying ipset 'weave-iuZcey(5DeXbzgRFs8Szo]+@p'
DEBU: 2018/07/13 16:55:51.790651 Destroying ipset 'weave-hWs^#nTNMc3*;Fl5A/)w_@}|u'
DEBU: 2018/07/13 16:55:51.791235 Destroying ipset 'weave-o608|!Z0:jrD1x|cH@@*PM49l'
DEBU: 2018/07/13 16:55:51.791694 Destroying ipset 'weave-Mw4P)/_2XIQh7ovSsfwQt@6QP'
DEBU: 2018/07/13 16:55:51.792205 Destroying ipset 'weave-e/%EK}DzcI:S7d%1(LJJ[?j/R'
DEBU: 2018/07/13 16:55:51.792746 Destroying ipset 'weave-=i5([apPyor5NS@|:sYOqiS:l'
INFO: 2018/07/13 16:55:51.866488 EVENT AddNetworkPolicy {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"NetworkPolicy\",\"metadata\":{\"annotations\":{},\"name\":\"default\",\"namespace\":\"my-namespace-develop\"},\"spec\":{\"egress\":[{\"to\":[{\"ipBlock\":{\"cidr\":\"0.0.0.0/0\",\"except\":[\"172.31.0.0/16\",\"172.29.0.0/16\"]}}]}],\"ingress\":[{\"from\":[{\"podSelector\":{}},{\"namespaceSelector\":{\"matchLabels\":{\"purpose\":\"ingress\"}}},{\"namespaceSelector\":{\"matchLabels\":{\"purpose\":\"monitoring\"}}}]}],\"podSelector\":{},\"policyTypes\":[\"Ingress\",\"Egress\"]}}\n"},"creationTimestamp":"2018-07-13T14:09:30Z","generation":1,"name":"default","namespace":"my-namespace-develop","resourceVersion":"16025576","selfLink":"/apis/networking.k8s.io/v1/namespaces/my-namespace-develop/networkpolicies/default","uid":"5bf4350b-86a6-11e8-9016-060dee68c5ac"},"spec":{"egress":[{"to":[{"ipBlock":{"cidr":"0.0.0.0/0","except":["172.31.0.0/16","172.29.0.0/16"]}}]}],"ingress":[{"from":[{"podSelector":{}},{"namespaceSelector":{"matchLabels":{"purpose":"ingress"}}},{"namespaceSelector":{"matchLabels":{"purpose":"monitoring"}}}]}],"podSelector":{},"policyTypes":["Ingress","Egress"]}}
INFO: 2018/07/13 16:55:51.886563 creating ipset: &npc.selectorSpec{key:"", selector:labels.internalSelector{}, policyTypes:[]npc.policyType(nil), ipsetType:"hash:ip", ipsetName:"weave-U5SiJFJN~eirGIBbiP;ioZoH+", nsName:"my-namespace-develop"}
INFO: 2018/07/13 16:55:51.887355 creating ipset: &npc.selectorSpec{key:"purpose=ingress", selector:labels.internalSelector{labels.Requirement{key:"purpose", operator:"=", strValues:[]string{"ingress"}}}, policyTypes:[]npc.policyType(nil), ipsetType:"list:set", ipsetName:"weave-e/%EK}DzcI:S7d%1(LJJ[?j/R", nsName:""}
INFO: 2018/07/13 16:55:51.887866 creating ipset: &npc.selectorSpec{key:"purpose=monitoring", selector:labels.internalSelector{labels.Requirement{key:"purpose", operator:"=", strValues:[]string{"monitoring"}}}, policyTypes:[]npc.policyType(nil), ipsetType:"list:set", ipsetName:"weave-=i5([apPyor5NS@|:sYOqiS:l", nsName:""}
INFO: 2018/07/13 16:55:51.888406 creating ipset: &npc.ipBlockSpec{key:"172.29.0.0/16 172.31.0.0/16", ipsetName:"weave-R[=C1lRjOBDgUKPgZq.C~?=Dl", ipBlock:(*v1.IPBlock)(0xc42138cf00)}
INFO: 2018/07/13 16:55:51.888939 adding entry 172.29.0.0/16 to weave-R[=C1lRjOBDgUKPgZq.C~?=Dl of 5bf4350b-86a6-11e8-9016-060dee68c5ac
INFO: 2018/07/13 16:55:51.888961 added entry 172.29.0.0/16 to weave-R[=C1lRjOBDgUKPgZq.C~?=Dl of 5bf4350b-86a6-11e8-9016-060dee68c5ac
INFO: 2018/07/13 16:55:51.889490 adding entry 172.31.0.0/16 to weave-R[=C1lRjOBDgUKPgZq.C~?=Dl of 5bf4350b-86a6-11e8-9016-060dee68c5ac
INFO: 2018/07/13 16:55:51.889522 added entry 172.31.0.0/16 to weave-R[=C1lRjOBDgUKPgZq.C~?=Dl of 5bf4350b-86a6-11e8-9016-060dee68c5ac
INFO: 2018/07/13 16:55:51.890290 adding rule [-m set --match-set weave-U5SiJFJN~eirGIBbiP;ioZoH+ src -m set --match-set weave-U5SiJFJN~eirGIBbiP;ioZoH+ dst -m comment --comment pods: namespace: my-namespace-develop, selector:  -> pods: namespace: my-namespace-develop, selector:  (ingress) -j ACCEPT] to "WEAVE-NPC-INGRESS" chain
INFO: 2018/07/13 16:55:51.892693 adding rule [-m set --match-set weave-e/%EK}DzcI:S7d%1(LJJ[?j/R src -m set --match-set weave-U5SiJFJN~eirGIBbiP;ioZoH+ dst -m comment --comment namespaces: selector: purpose=ingress -> pods: namespace: my-namespace-develop, selector:  (ingress) -j ACCEPT] to "WEAVE-NPC-INGRESS" chain
INFO: 2018/07/13 16:55:51.896384 adding rule [-m set --match-set weave-=i5([apPyor5NS@|:sYOqiS:l src -m set --match-set weave-U5SiJFJN~eirGIBbiP;ioZoH+ dst -m comment --comment namespaces: selector: purpose=monitoring -> pods: namespace: my-namespace-develop, selector:  (ingress) -j ACCEPT] to "WEAVE-NPC-INGRESS" chain
INFO: 2018/07/13 16:55:51.898636 adding rule [-m set --match-set weave-U5SiJFJN~eirGIBbiP;ioZoH+ src -d 0.0.0.0/0 -m set ! --match-set weave-R[=C1lRjOBDgUKPgZq.C~?=Dl dst -m comment --comment pods: namespace: my-namespace-develop, selector:  -> cidr: 0.0.0.0/0 except [172.29.0.0/16 172.31.0.0/16] (egress) -j WEAVE-NPC-EGRESS-ACCEPT] to "WEAVE-NPC-EGRESS-CUSTOM" chain
INFO: 2018/07/13 16:55:51.901729 adding rule [-m set --match-set weave-U5SiJFJN~eirGIBbiP;ioZoH+ src -d 0.0.0.0/0 -m set ! --match-set weave-R[=C1lRjOBDgUKPgZq.C~?=Dl dst -m comment --comment pods: namespace: my-namespace-develop, selector:  -> cidr: 0.0.0.0/0 except [172.29.0.0/16 172.31.0.0/16] (egress) -j RETURN] to "WEAVE-NPC-EGRESS-CUSTOM" chain
INFO: 2018/07/13 16:55:51.906520 EVENT AddNetworkPolicy {"metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"NetworkPolicy\",\"metadata\":{\"annotations\":{},\"name\":\"default\",\"namespace\":\"my-namespace-release\"},\"spec\":{\"egress\":[{\"to\":[{\"ipBlock\":{\"cidr\":\"0.0.0.0/0\",\"except\":[\"172.31.0.0/16\",\"172.29.0.0/16\"]}}]}],\"ingress\":[{\"from\":[{\"podSelector\":{}},{\"namespaceSelector\":{\"matchLabels\":{\"purpose\":\"ingress\"}}},{\"namespaceSelector\":{\"matchLabels\":{\"purpose\":\"monitoring\"}}}]}],\"podSelector\":{},\"policyTypes\":[\"Ingress\",\"Egress\"]}}\n"},"creationTimestamp":"2018-07-13T15:14:42Z","generation":1,"name":"default","namespace":"my-namespace-release","resourceVersion":"16032359","selfLink":"/apis/networking.k8s.io/v1/namespaces/my-namespace-release/networkpolicies/default","uid":"77ee4354-86af-11e8-9016-060dee68c5ac"},"spec":{"egress":[{"to":[{"ipBlock":{"cidr":"0.0.0.0/0","except":["172.31.0.0/16","172.29.0.0/16"]}}]}],"ingress":[{"from":[{"podSelector":{}},{"namespaceSelector":{"matchLabels":{"purpose":"ingress"}}},{"namespaceSelector":{"matchLabels":{"purpose":"monitoring"}}}]}],"podSelector":{},"policyTypes":["Ingress","Egress"]}}
INFO: 2018/07/13 16:55:51.907661 creating ipset: &npc.selectorSpec{key:"", selector:labels.internalSelector{}, policyTypes:[]npc.policyType(nil), ipsetType:"hash:ip", ipsetName:"weave-Mw4P)/_2XIQh7ovSsfwQt@6QP", nsName:"my-namespace-release"}
INFO: 2018/07/13 16:55:51.908305 creating ipset: &npc.ipBlockSpec{key:"172.29.0.0/16 172.31.0.0/16", ipsetName:"weave-R[=C1lRjOBDgUKPgZq.C~?=Dl", ipBlock:(*v1.IPBlock)(0xc42138d050)}
FATA: 2018/07/13 16:55:51.908851 add network policy: ipset [create weave-R[=C1lRjOBDgUKPgZq.C~?=Dl hash:net comment] failed: ipset v6.32: Set cannot be created: set with the same name already exists
: exit status 1

The rollback to 2.3.0 worked after restarting the node.

@brb

Contributor

brb commented Jul 15, 2018

@lukaskorte Thanks for testing and providing the logs. I've fixed your issue and published new images: weaveworks/weave-kube:egress-test-v2 and weaveworks/weave-npc:egress-test-v2.

@lukaskorte

lukaskorte commented Jul 16, 2018

@brb, thanks, this one is working great! Egress policy is working as expected. I tested it with an ipBlock like this:

  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 172.31.0.0/16
        - 172.29.0.0/16

bboreham added a commit that referenced this issue Jul 24, 2018

@brb

Contributor

brb commented Jul 25, 2018
