Flanneld with IPSec backend not working #1027
Just out of curiosity: why is there a need for an IPSec backend in the case of Kubernetes, when network traffic between pods can be configured to use TLS? As it stands, it feels to me as if the only thing IPSec provides is double encryption and a greater load on the node CPUs. I might be missing something. Is there a part of the traffic in a Kubernetes cluster that would leave the cluster vulnerable over the public internet without IPSec tunnels/transport? (Please read this as me trying to learn; no implicit meaning intended.)
Okay, I think I understand. VxLAN, which is the recommended backend, is insecure over the public internet. Therefore it would be a good idea to have an alternative method of isolating the network over the internet. Am I right?
@artheus: Sorry for the late response. Yes, you are right. When running on cloud service providers like AWS, it may make sense to encrypt pod-to-pod traffic (trading CPU utilization for security), since instances (EC2) are connected over an insecure network. On-prem this may not be much of an issue, since the network may be well isolated, but in the cloud that cannot be assumed.
Expected Behavior
I have a two-host setup running Kubernetes 1.6.8, with several pods on each host. When flanneld uses the VxLAN backend, there is connectivity between pods running on different hosts: e.g., with the kube-dns pod on one host and the vault pod on the other, "nslookup" resolves pod names, tcpdump captures OTV/VxLAN encapsulation, and in general the setup works.
However, when I switch the backend to IPSec, this no longer works.
Current Behavior
See above
Possible Solution
I am not sure what I am missing in the configuration. I do not see a flannel.1 interface (which existed when the backend was VxLAN); is an equivalent interface needed?
Here are some dumps that may help debug this:
// Configuration prior to moving to IPSec backend
etcdctl get /atomic.io/network/config
{"Network":"172.17.0.0/16", "Backend":{"Type":"vxlan"}}
// Configuration after moving to IPSec backend
etcdctl set /atomic.io/network/config '{"Network":"172.17.0.0/16", "Backend":{"Type":"ipsec", "PSK": "18bb8c151cf37225c276373a25857da26d34e13c1d258bc80872dc9a9b8569d3c22e9927c400dace1029ca4df76c90d4"}}'
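For reference, the PSK in this backend config is plain hex. A sketch of generating a fresh 48-byte key of the same length as the one above and printing the matching config JSON (the `openssl rand` invocation and the `printf` wrapper are my own, not taken from flannel's docs):

```shell
# Sketch: generate a random 48-byte (96 hex character) pre-shared key,
# matching the length of the PSK used above, and print the etcd config
# JSON. Feed the output to `etcdctl set /atomic.io/network/config ...`.
PSK=$(openssl rand -hex 48)
printf '{"Network":"172.17.0.0/16", "Backend":{"Type":"ipsec", "PSK": "%s"}}\n' "$PSK"
```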
// SA @host1
[root@host1]# ip xfrm sa
src 10.193.37.153 dst 10.193.37.216
proto esp spi 0xc41726d2 reqid 11 mode tunnel
replay-window 0 flag af-unspec
aead rfc4106(gcm(aes)) 0x7ed5d2355fbee089c08b1c0db1e99e8c5031d11d 128
anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
src 10.193.37.216 dst 10.193.37.153
proto esp spi 0xcbe7f2ee reqid 11 mode tunnel
replay-window 32 flag af-unspec
aead rfc4106(gcm(aes)) 0xc58d57a9a8cd7188b35d5f34e20810d25d956e40 128
anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[root@host1]#
// SA @host2
[root@host2]# ip xfrm sa
src 10.193.37.216 dst 10.193.37.153
proto esp spi 0xcbe7f2ee reqid 11 mode tunnel
replay-window 0 flag af-unspec
aead rfc4106(gcm(aes)) 0xc58d57a9a8cd7188b35d5f34e20810d25d956e40 128
anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
src 10.193.37.153 dst 10.193.37.216
proto esp spi 0xc41726d2 reqid 11 mode tunnel
replay-window 32 flag af-unspec
aead rfc4106(gcm(aes)) 0x7ed5d2355fbee089c08b1c0db1e99e8c5031d11d 128
anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[root@host2]#
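As a sanity check (my own sketch, not something from the flannel docs), the two SA dumps can be compared mechanically: both hosts should list the same pair of SPIs, just with the in/out roles swapped. Here the SPI lines are copied from the dumps above as here-docs; on a live host you would feed `ip xfrm sa` in instead:

```shell
# Extract SPIs from host1's SA dump (lines copied from above).
spis_h1=$(awk '/spi/ {print $4}' <<'EOF'
proto esp spi 0xc41726d2 reqid 11 mode tunnel
proto esp spi 0xcbe7f2ee reqid 11 mode tunnel
EOF
)
# Extract SPIs from host2's SA dump (lines copied from above).
spis_h2=$(awk '/spi/ {print $4}' <<'EOF'
proto esp spi 0xcbe7f2ee reqid 11 mode tunnel
proto esp spi 0xc41726d2 reqid 11 mode tunnel
EOF
)
# Same SPI set on both hosts => the SAs themselves look consistent.
[ "$(echo "$spis_h1" | sort)" = "$(echo "$spis_h2" | sort)" ] && echo "SAs match"
```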
// Policy on host1
[root@host1]# ip xfrm policy
src 0.0.0.0/0 dst 0.0.0.0/0
socket in priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
socket out priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
socket in priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
socket out priority 0 ptype main
src 172.17.10.0/24 dst 172.17.36.0/24
dir fwd priority 0 ptype main
tmpl src 10.193.37.216 dst 10.193.37.153
proto esp reqid 11 mode tunnel
src 172.17.10.0/24 dst 172.17.36.0/24
dir in priority 0 ptype main
tmpl src 10.193.37.216 dst 10.193.37.153
proto esp reqid 11 mode tunnel
src 172.17.36.0/24 dst 172.17.10.0/24
dir out priority 0 ptype main
tmpl src 10.193.37.153 dst 10.193.37.216
proto esp reqid 11 mode tunnel
// Policy on host2
[root@host2]# ip xfrm policy
src 0.0.0.0/0 dst 0.0.0.0/0
socket in priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
socket out priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
socket in priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
socket out priority 0 ptype main
src 172.17.36.0/24 dst 172.17.10.0/24
dir fwd priority 0 ptype main
tmpl src 10.193.37.153 dst 10.193.37.216
proto esp reqid 11 mode tunnel
src 172.17.36.0/24 dst 172.17.10.0/24
dir in priority 0 ptype main
tmpl src 10.193.37.153 dst 10.193.37.216
proto esp reqid 11 mode tunnel
src 172.17.10.0/24 dst 172.17.36.0/24
dir out priority 0 ptype main
tmpl src 10.193.37.216 dst 10.193.37.153
proto esp reqid 11 mode tunnel
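One quick structural check (again my own sketch): besides the socket policies, each host should carry exactly three policies for the pod subnets, one each for `in`, `fwd`, and `out`. Using the direction lines copied from host1's dump above (live, you would pipe `ip xfrm policy` through the same grep):

```shell
# Stand-in for `ip xfrm policy`, using the subnet policy lines from
# host1's dump above (socket policies and tmpl lines omitted).
ip_xfrm_policy() {
cat <<'EOF'
src 172.17.10.0/24 dst 172.17.36.0/24
dir fwd priority 0 ptype main
src 172.17.10.0/24 dst 172.17.36.0/24
dir in priority 0 ptype main
src 172.17.36.0/24 dst 172.17.10.0/24
dir out priority 0 ptype main
EOF
}
ip_xfrm_policy | grep -c '^dir '   # expect 3: one fwd, one in, one out
```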
// Route table on host1
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.193.37.1 0.0.0.0 UG 100 0 0 eth0
10.193.37.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
172.17.36.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
// Route table on host2
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.193.37.1 0.0.0.0 UG 100 0 0 eth0
10.193.37.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
172.17.10.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
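Note one difference from the VxLAN setup: neither routing table has an entry for the *remote* pod subnet, so that traffic follows the default route and relies on the xfrm `out` policy for encapsulation. A sketch of checking this against host1's table (copied from above; live you would use `route -n` or `ip route`):

```shell
# Stand-in for `route -n` on host1, using the table printed above
# (header rows omitted).
route_n() {
cat <<'EOF'
0.0.0.0 10.193.37.1 0.0.0.0 UG 100 0 0 eth0
10.193.37.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
172.17.36.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
EOF
}
# The remote pod subnet (172.17.10.0/24) has no explicit route:
route_n | grep -q '^172.17.10.0' || echo "remote pod subnet uses default route"
```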
I have referred to https://github.com/coreos/flannel/blob/master/Documentation/backends.md and restarted flanneld after flushing the xfrm policy and state, but that did not help.
I have also gone through issue report #966, but that did not help either.
Steps to Reproduce (for bugs)
Context
I am trying to switch the existing overlay to a secure overlay in a Kubernetes environment.
Your Environment