This repository has been archived by the owner on Nov 30, 2021. It is now read-only.

Add documentation about proxyRealIpCidrs for configuring proxy protocol #529

Open
kmala opened this issue Sep 29, 2016 · 6 comments

@kmala
Contributor

kmala commented Sep 29, 2016

#512 (comment)

@felixbuenemann
Contributor

This is relevant not only for the proxy protocol, but also when using an HTTP-mode load balancer that sets the X-Forwarded-For header.

Btw, on kube-aws the default range of 10.0.0.0/8 is fine, unless the IP range for the VPC has been customised.
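The 10/8 default can be sanity-checked with Python's `ipaddress` module; the VPC and pod CIDRs below are illustrative assumptions, not values from a real cluster:

```python
import ipaddress

default = ipaddress.ip_network("10.0.0.0/8")   # default trusted range on kube-aws
vpc = ipaddress.ip_network("10.0.0.0/16")      # assumed (uncustomised) VPC CIDR
pods = ipaddress.ip_network("10.2.0.0/16")     # assumed flannel pod CIDR

# Both networks fall inside 10/8, so the default range covers them.
print(vpc.subnet_of(default), pods.subnet_of(default))
```

If the VPC was created with a CIDR outside 10.0.0.0/8 (e.g. 172.16.0.0/12), the same check would show why the default range no longer applies.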

@felixbuenemann
Contributor

@kmala I've looked at the access logs in my kube-aws CoreOS Kubernetes cluster with flannel overlay networking. The originating IP for requests from the ELB was 10.2.71.1, which is actually inside the pod CIDR (in my case 10.2.0.0/16) and not the private IP of the load balancer in any AZ (10.0.0.140 / 10.0.1.184), so there seems to be some NATing between the host network and the pod network.

Do you know if the networking set-up by kube-up.sh or kubeadm differs in this regard?

I'm asking because in this case we need to document setting router.deis.io/nginx.proxyRealIpCidrs to the pod CIDR range and not to the private IPs or CIDRs of the load balancers.

@kmala
Contributor Author

kmala commented Sep 30, 2016

@felixbuenemann if you see my comment, I had suggested adding both the pod CIDR and the private-IP CIDR of the nodes: the load balancer sends requests in round-robin fashion, and if a request lands on the node where the pod is scheduled it goes through the kubelet, hence you see the kubelet IP; otherwise you see the private IP of the node.
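The resulting trust check, covering both ranges, can be sketched in Python (the CIDR values are illustrative assumptions based on the IPs mentioned in this thread):

```python
import ipaddress

# Trust both the pod CIDR and the nodes' private-IP CIDR, as suggested above.
trusted = [ipaddress.ip_network(c) for c in ("10.2.0.0/16", "10.0.0.0/16")]

def is_trusted(ip: str) -> bool:
    """Return True if the remote IP falls in any trusted CIDR."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in trusted)

print(is_trusted("10.2.71.1"))    # pod-network source -> True
print(is_trusted("10.0.0.140"))   # node/ELB private IP -> True
print(is_trusted("203.0.113.9"))  # external client -> False
```

This mirrors what nginx's real_ip module does with the configured CIDR list: only when the connecting address is trusted does it take the client address from X-Forwarded-For or the proxy protocol header.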

@felixbuenemann
Contributor

That makes sense, thanks for the clarification.

@felixbuenemann
Contributor

felixbuenemann commented Oct 2, 2016

@kmala I did some more checking and I believe the requests always come from IPs in the pod network, no matter whether they hit the router straight from the ELB or via another node, because requests always go through the kube-proxy.

If we look at the instance ports of the ELB, we will see something like this:

LoadBalancerPort: 2222
InstancePort: 31166

So all traffic that arrives at the ELB is load balanced across all nodes on port 31166, but the actual container for the deis router exposes port 2222. So traffic from the ELB always goes through the kube-proxy, no matter if it hits a node that is running a router instance or if it gets relayed by another node.

I have also verified this by checking the logs. I have the router instance running on the node 10.0.0.200 and another node with no instance in another AZ with the IP 10.0.1.249.

If I set the proxyRealIpCidrs to some value that doesn't include my pod network, I see the remote IP alternating between 10.2.71.1 and 10.2.29.0. If I reboot the node with no router instance, the remaining requests all come from 10.2.71.1.
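That observation can be double-checked with Python's `ipaddress` module; the node subnets below are assumptions inferred from the private IPs mentioned above:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.2.0.0/16")        # flannel pod network from this thread
node_subnets = [ipaddress.ip_network("10.0.0.0/24"),  # assumed per-AZ node subnets
                ipaddress.ip_network("10.0.1.0/24")]

# Both remote IPs seen in the access logs are in the pod network,
# and neither is in a node subnet.
for ip in ("10.2.71.1", "10.2.29.0"):
    addr = ipaddress.ip_address(ip)
    in_pods = addr in pod_cidr
    in_nodes = any(addr in s for s in node_subnets)
    print(ip, in_pods, in_nodes)
```

Both IPs fall inside 10.2.0.0/16 and outside the node subnets, consistent with all traffic being SNATed by the kube-proxy before reaching the router pod.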

@Cryptophobia

This issue was moved to teamhephy/workflow#50
