Flanneld with IPSec backend not working #1027

Closed

bchannak opened this issue Aug 7, 2018 · 4 comments

bchannak commented Aug 7, 2018

Expected Behavior

I have a two-host setup running Kubernetes 1.6.8, with several pods on each host. When flanneld uses the VxLAN backend, there is connectivity between pods running on different hosts. For example, with the kube-dns pod running on one host and the vault pod on the other, running "nslookup" resolves pod names, tcpdump captures OTV/VxLAN encapsulation, and in general the setup works.
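
For reference, the connectivity check I use is roughly the following (a sketch; the pod and service names are placeholders from my setup, and 8472 is the Linux-default VXLAN port flannel uses unless overridden):

// Cross-host DNS resolution from a pod (pod name is a placeholder)
kubectl exec vault-pod -- nslookup kube-dns.kube-system
// VxLAN encapsulation on the wire
tcpdump -ni eth0 udp port 8472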

However, when I switch the backend to IPSec, this functionality does not work.

Current Behavior

See above

Possible Solution

I am not sure what I am missing in the configuration. I do not see a flannel.1 interface (which existed when the backend was VxLAN); is one needed?

Here are some dumps that may be helpful for debugging:

// Configuration prior to moving to IPSec backend
etcdctl get /atomic.io/network/config
{"Network":"172.17.0.0/16", "Backend":{"Type":"vxlan"}}

// Configuration after moving to IPSec backend
etcdctl set /atomic.io/network/config '{"Network":"172.17.0.0/16", "Backend":{"Type":"ipsec", "PSK": "18bb8c151cf37225c276373a25857da26d34e13c1d258bc80872dc9a9b8569d3c22e9927c400dace1029ca4df76c90d4"}}'
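
After restarting flanneld I verify the new config was picked up (a sketch; the etcd key and unit name match my setup, and /run/flannel/subnet.env is flannel's default subnet file location):

// Confirm the backend config and the subnet flannel leased
etcdctl get /atomic.io/network/config
cat /run/flannel/subnet.env
// Watch flanneld logs while the IPSec backend comes up
journalctl -u flanneld -f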

// SA @Host1
[root@host1]# ip xfrm sa
src 10.193.37.153 dst 10.193.37.216
    proto esp spi 0xc41726d2 reqid 11 mode tunnel
    replay-window 0 flag af-unspec
    aead rfc4106(gcm(aes)) 0x7ed5d2355fbee089c08b1c0db1e99e8c5031d11d 128
    anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
src 10.193.37.216 dst 10.193.37.153
    proto esp spi 0xcbe7f2ee reqid 11 mode tunnel
    replay-window 32 flag af-unspec
    aead rfc4106(gcm(aes)) 0xc58d57a9a8cd7188b35d5f34e20810d25d956e40 128
    anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[root@host1]#

// SA @host2
[root@host2]# ip xfrm sa
src 10.193.37.216 dst 10.193.37.153
    proto esp spi 0xcbe7f2ee reqid 11 mode tunnel
    replay-window 0 flag af-unspec
    aead rfc4106(gcm(aes)) 0xc58d57a9a8cd7188b35d5f34e20810d25d956e40 128
    anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
src 10.193.37.153 dst 10.193.37.216
    proto esp spi 0xc41726d2 reqid 11 mode tunnel
    replay-window 32 flag af-unspec
    aead rfc4106(gcm(aes)) 0x7ed5d2355fbee089c08b1c0db1e99e8c5031d11d 128
    anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[root@host2]#
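
To check whether these SAs actually carry traffic, the per-SA counters and raw ESP frames can be watched while pinging between pods (a diagnostic sketch; the host IPs are from the dumps above):

// Per-SA packet/byte counters; they should increase during pod-to-pod pings
ip -s xfrm state
// Raw ESP packets between the two hosts
tcpdump -ni eth0 esp and host 10.193.37.153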

// Policy on host1
[root@host1]# ip xfrm policy
src 0.0.0.0/0 dst 0.0.0.0/0
    socket in priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
    socket out priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
    socket in priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
    socket out priority 0 ptype main
src 172.17.10.0/24 dst 172.17.36.0/24
    dir fwd priority 0 ptype main
    tmpl src 10.193.37.216 dst 10.193.37.153
        proto esp reqid 11 mode tunnel
src 172.17.10.0/24 dst 172.17.36.0/24
    dir in priority 0 ptype main
    tmpl src 10.193.37.216 dst 10.193.37.153
        proto esp reqid 11 mode tunnel
src 172.17.36.0/24 dst 172.17.10.0/24
    dir out priority 0 ptype main
    tmpl src 10.193.37.153 dst 10.193.37.216
        proto esp reqid 11 mode tunnel

// Policy on host2
[root@host2]# ip xfrm policy
src 0.0.0.0/0 dst 0.0.0.0/0
    socket in priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
    socket out priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
    socket in priority 0 ptype main
src 0.0.0.0/0 dst 0.0.0.0/0
    socket out priority 0 ptype main
src 172.17.36.0/24 dst 172.17.10.0/24
    dir fwd priority 0 ptype main
    tmpl src 10.193.37.153 dst 10.193.37.216
        proto esp reqid 11 mode tunnel
src 172.17.36.0/24 dst 172.17.10.0/24
    dir in priority 0 ptype main
    tmpl src 10.193.37.153 dst 10.193.37.216
        proto esp reqid 11 mode tunnel
src 172.17.10.0/24 dst 172.17.36.0/24
    dir out priority 0 ptype main
    tmpl src 10.193.37.216 dst 10.193.37.153
        proto esp reqid 11 mode tunnel
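
The kernel also keeps global xfrm error counters, which show whether packets are hitting these policies but failing (a sketch; this needs xfrm statistics support in the kernel, and exact counter names vary by kernel version):

// Non-zero XfrmInNoStates / XfrmOutNoStates counters point at a policy/state mismatch
cat /proc/net/xfrm_stat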

// Route table on host1
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.193.37.1 0.0.0.0 UG 100 0 0 eth0
10.193.37.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
172.17.36.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0

// Route table on host2
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.193.37.1 0.0.0.0 UG 100 0 0 eth0
10.193.37.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
172.17.10.0 0.0.0.0 255.255.255.0 U 0 0 0 docker0
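
Note that neither host has an explicit route to the remote pod subnet, so that traffic follows the default route via eth0; with tunnel-mode xfrm policies, packets matching the "out" policy should still be encapsulated after the route lookup. This can be checked per destination (a sketch; the pod IP is a placeholder):

// Which route the kernel picks for a remote pod IP (run on host1)
ip route get 172.17.10.5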

I have referred to https://github.com/coreos/flannel/blob/master/Documentation/backends.md and restarted flanneld after flushing the policy and state, but it does not help.

I have also referred to issue #966, but it did not help.

Steps to Reproduce (for bugs)

  1. Deploy/Run kubernetes on a two node setup
  2. Verify functionality (like pod to pod connectivity)
  3. Update flanneld config using etcdctl to use IPSec backend (exact configuration captured above)
  4. Restart flanneld (systemctl restart flanneld)
  5. Verify policy/SA, flush, and restart to see if it works (steps 3–5 are consolidated in the sketch below)
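
The flush/restart cycle in steps 3–5 is roughly the following (a sketch; the etcd key, PSK, and unit name are the ones from my setup above):

// Switch the backend, flush old xfrm state/policy, and restart flannel
etcdctl set /atomic.io/network/config '{"Network":"172.17.0.0/16", "Backend":{"Type":"ipsec", "PSK": "<psk-from-above>"}}'
ip xfrm state flush
ip xfrm policy flush
systemctl restart flanneld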

Context

I am trying to switch the existing overlay to a secure overlay in a Kubernetes environment.

Your Environment

  • Flannel version: Building from master at 8a083a8
  • Backend used (e.g. vxlan or udp): IPSec
  • Etcd version: 3.0.15
  • Kubernetes version (if used): 1.6.8
  • Operating System and version: CentOS Linux release 7.4.1708 (Core), 3.10.0-693.17.1.el7.x86_64
  • Link to your project (optional):

artheus commented Dec 15, 2018

Just out of curiosity: why is there a need for an IPsec backend in the case of K8s, when network traffic between pods can be configured to use TLS? In that case it feels to me as if the only thing IPsec will provide is double encryption and a greater load on the node CPUs.
Authentication to K8s and etcd is two-way SSL, which should provide you with all the security you need.

I might be missing something. Is there a part of the traffic in a K8s cluster that would make the cluster vulnerable over the public internet without IPsec tunnels/transport?

(Please read this as me trying to learn; no implicit meaning intended.)


artheus commented Dec 15, 2018

Okay, I think I understand. VxLAN, which is the recommended backend, is insecure over the public internet. Therefore it would be a good idea to have an alternate method of isolating the network over the internet. Am I right?


bchannak commented Jan 8, 2019

@artheus: Sorry for the late response. Yes, you are right.

When running on cloud service providers like AWS, it may make sense to encrypt pod-to-pod traffic (trading CPU utilization for security), since the instances (EC2) are connected over an insecure network. For on-prem deployments it may not be much of an issue, since the network may be well isolated, but when running in the cloud that cannot be assumed.


stale bot commented Jan 26, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label Jan 26, 2023
stale bot closed this as completed Feb 16, 2023